Thoughts and Predictions

Zagat: Down But Not Out?

This week, Google announced that it had sold its Zagat guide business, for which it paid a stunning $151 million in 2011, for “an undisclosed amount” to a company you’ve likely never heard of, The Infatuation.

It’s an ignominious development for the former household name brand, and true pioneer in the data business. Well before the Internet, Zagat had blazed new trails in the area of user-generated content and consumer reviews. Tim and Nina Zagat, the founders, proved to be creative and talented marketers and self-promoters. For many years, the name Zagat was synonymous with restaurant ratings in the United States.

But the Zagat empire was print-based. Moreover, the Zagat business model depended in large part on selling bulk orders to companies, with their names and logos on the covers, to be given away as gifts. That made it difficult for Zagat to expand its coverage economically beyond the largest cities, so it never became a truly national data provider. And its attempts to expand into other segments of the hospitality industry, where more competition existed, fell flat. But the Zagat brand transcended all these shortcomings.

And it is the brand that Google apparently paid so much to acquire. Everything about the Zagat business was at odds with the Google model and ethos. I trashed the deal at the time.

The irony is that Google couldn’t even find an effective way to leverage the Zagat brand. Fortunately, the $151 million purchase price is a rounding error for Google.

What’s the future of Zagat? I do believe the brand still has some life, and could be resuscitated. There’s a role for well-curated, tightly edited, slightly snarky user-generated content that helps you decide where to eat – especially when coupled with a restaurant booking engine. That’s why I would have been much more excited if Zagat had been acquired by, say, OpenTable.

Looking for New Product Ideas? They're Not All in Your Head

Part Three.

For many information and media companies, the preferred way to develop new product ideas is to brainstorm them internally. Get your best minds in a room and talk about the industry and its needs. You can conduct these sessions in a highly structured way or make them completely freewheeling and open-ended. Good, solid ideas can result.

Brainstorming sessions are both convenient and efficient. And if your staff is deeply engaged in your market, bringing them together to discuss new product concepts can yield powerful, even electrifying results. That’s because your staff is essentially reporting back what it is hearing and seeing in the marketplace. Synthesizing their different inputs, finding themes and conceptualizing solutions to problems is a great group activity, and resulting new product ideas can be very strong indeed.

Contrast that with companies that aren’t close to their markets. Their group brainstorming sessions will yield bigger product concepts (arguably bigger opportunities, but also riskier and harder to execute), incomplete concepts (based on lack of detailed market knowledge), and little certainty about market appetite. Perhaps most significantly, these product concepts, because they tend to be bigger, somewhat amorphous and without clarity as to market need, rarely get developed further.

My bottom line view of new product brainstorming is that it works, but the output can’t ever be better than the input. If your staff knows your market, they can effectively act as customer proxies, and the results can be compelling. If your staff doesn’t, brainstorming results in pipe dreams.

Workflow Elimination

The power of embedding one’s data product into a customer’s workflow is well understood by data publishers. Simply put, once a customer starts depending on your data and associated software functionality, it’s hard to cancel or switch away from you because the customer’s work has become designed around your product. It’s a great place to be, and it’s probably the primary reason that renewal rates for data products can sometimes verge on 100%.

But should workflow embedment be the ultimate objective of data publishers? This may depend on the industry served, because we are starting to see fascinating glimpses of a new type of market disruption that might be called “workflow elimination.”

Here’s a great example of this phenomenon in the insurance industry. A company called Metromile has rolled out an artificial intelligence system called Ava. What Ava does is stunning.

Auto insurers using Ava require their policyholders to attach a device called Metromile Pulse to their cars. As you may know, virtually all cars now have onboard computers that log tremendous amounts of data about the vehicle; in fact, when your local auto mechanic performs a computerized diagnosis of your car, this is where the diagnostic data comes from. Metromile Pulse plugs into this onboard computer.

The device does two things for insurance companies. First, it allows them to charge for insurance by the mile, since the onboard computer records miles driven and the device transmits them wirelessly to the insurer. That’s pretty cool and innovative. But here’s what’s mind-blowing: if a policyholder has an auto accident, he or she can file an online claim, and Ava can use the onboard data to confirm the accident, reconstruct it using artificial intelligence software, and automatically authorize payment on the claim if everything checks out – all within a few seconds. The traditional claims payment workflow hasn’t just been collapsed; it’s effectively been eliminated.
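To make the idea concrete, here is a loose sketch of the two capabilities described above: per-mile billing and automated claim adjudication against device telemetry. Every name, field, and threshold below is my own illustrative assumption, not Metromile’s actual data model or logic.

```python
from dataclasses import dataclass

# Illustrative sketch only: class names, fields, and thresholds are
# assumptions for explanation, not Metromile's actual system.

@dataclass
class TelemetryEvent:
    timestamp: float   # seconds since some epoch, as reported by the device
    speed_mph: float   # vehicle speed at that moment
    impact_g: float    # accelerometer reading (g-force)

@dataclass
class Claim:
    reported_time: float  # when the policyholder says the accident occurred

def per_mile_premium(miles_driven: float, rate_per_mile: float) -> float:
    """Pay-per-mile billing: the device reports miles, the insurer bills them."""
    return miles_driven * rate_per_mile

def auto_adjudicate(claim: Claim, events: list[TelemetryEvent],
                    impact_threshold_g: float = 4.0,
                    time_tolerance_s: float = 120.0) -> bool:
    """Approve the claim only if the onboard data shows a plausible impact
    (a hard g-force spike) close to the reported accident time."""
    return any(
        e.impact_g >= impact_threshold_g
        and abs(e.timestamp - claim.reported_time) <= time_tolerance_s
        for e in events
    )
```

Under these assumptions, a claim reported near a recorded high-g event is paid automatically in seconds; anything that doesn’t check out would presumably be routed to a human adjuster – which is exactly the workflow being eliminated.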

How does a data publisher embed in workflow if there’s no workflow? That’s a problem, but it’s also an opportunity, because data publishers are well positioned to provide the tools to eliminate workflow. If they do this, and do this first, they’ll be even more deeply embedded in the operations of their customers. And doubtless you’re already thinking about all the subsidiary opportunities that would flow out of being in the middle of so much highly granular data on automobile operation.

“Workflow elimination” won’t impact every industry quickly, if at all. But it’s an example of how important it is to stay ahead of the curve on new technology, always seeking to be the disrupter as opposed to the disruptee.


Sharing in Private

While there are many, many B2C ratings and review sites where consumers rate and otherwise report their experiences with businesses, there are relatively few B2B sites where businesses rate other businesses. There are multiple reasons for this, but prime among them is that while businesses tend to have a strong interest in using this kind of information, they typically don’t want to supply this kind of information. In short, they see competitive advantage in keeping their vendor experiences confidential.

One fascinating example of this in the legal market is a company called Courtroom Insight. Originally founded with the simple and reasonable idea of creating a website where lawyers could rate expert witnesses (experts hired by lawyers to testify in court), the company hit this exact wall: lawyers didn’t want to tell other lawyers about which experts they did and didn’t like.

Rather than close up shop, though, Courtroom Insight pivoted, in an interesting way. It discovered that large law firms were very sloppy about keeping records of their own expert witnesses. So, Courtroom Insight built a database of expert witnesses from public sources and licensed data. It then went to large law firms and offered them an expert witness management database. Not only could lawyers search for expert witnesses and verify their credentials, they could also flag the experts they had used, along with private notes that could be shared freely within the law firm, but not externally.

This pivot created a nice business for Courtroom Insight, but it wasn’t done. Since all of its large law firm clients were sharing the same database, but also individually flagging the experts they were using, could Courtroom Insight convince them to share that information among themselves? Recently, it offered this “who’s using whom” data to its clients on a voluntary, opt-in basis. And it worked. While not every client opted in, enough did so that Courtroom Insight could make another level of valuable information available.
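The layered sharing model described above – notes kept private to each firm, usage flags visible only among firms that opt in – can be sketched roughly as follows. All class and method names are illustrative assumptions, not Courtroom Insight’s actual design.

```python
# Minimal sketch of a "closed data pool" with opt-in usage sharing.
# Names and structure are assumptions, not Courtroom Insight's design.

class ExpertPool:
    def __init__(self):
        self.usage = {}        # expert -> set of firms that flagged them
        self.opted_in = set()  # firms that agreed to share usage data
        self.notes = {}        # (firm, expert) -> private note

    def flag(self, firm, expert, note=None):
        """A firm flags an expert it has used, optionally with a private note."""
        self.usage.setdefault(expert, set()).add(firm)
        if note:
            self.notes[(firm, expert)] = note

    def opt_in(self, firm):
        """A firm agrees to share its usage flags with other opted-in firms."""
        self.opted_in.add(firm)

    def who_is_using(self, requesting_firm, expert):
        """'Give to get': usage data is visible only to opted-in firms,
        and only other opted-in firms' flags are shown."""
        if requesting_firm not in self.opted_in:
            return None
        return self.usage.get(expert, set()) & self.opted_in

    def get_note(self, firm, expert):
        """Private notes never leave the firm that wrote them."""
        return self.notes.get((firm, expert))
```

The key design point is that the neutral operator of the pool enforces the visibility rules, so each firm can share sensitive information knowing exactly who will see it.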

While this is just my personal prediction, I think Courtroom Insight will ultimately be able to offer the expert witness ratings it originally tried to provide. How? By using the protected space of its system to let lawyers trade this high-value information with each other. It will probably start small: perhaps lawyers could click a simple “thumbs up/thumbs down” icon next to each expert, a rating that could be shared. But I also suspect that if Courtroom Insight can crack the initial resistance to sharing information, the floodgates will open, because lawyers will realize they are communicating only with other lawyers, and because the benefits of “give to get” information exchange become so compelling.

The Courtroom Insight story provides a fine example of the power of what we call the Closed Data Pool in our Business Information Framework. Sometimes data that nobody will share publicly can in fact be shared among a restricted group of participants – with, of course, a trusted, neutral data publisher making it all happen.

Top Level Domains/Low-Level Trustmarks

If you’re not immediately familiar with the term top level domain (TLD), think of “.com” and “.net” and “.edu” – they are all top-level domains, along with hundreds of others, and by the way, they are not limited to three characters anymore.

In the early days of the Internet, domain names were free for the asking, and I stocked up on quite a few for no other reason than a gut feeling that they had some value. I did ultimately sell a lot of them, including several to Fortune 500 companies that bought their corporate names back from me. By the time I realized there might be a bigger opportunity here, the rules of the game had changed, and big companies that had previously shown up with checkbooks now showed up with lawyers. Ah, well!

But for all my domain name hoarding, I could never get domain names with the “.edu” TLD because they were reserved for schools. Similarly, “.net” was reserved for Internet Service Providers back then, and “.org” was reserved for non-profits. These distinctions were widely understood at the time, and even today I hear people telling me some organization “must” be a non-profit because it has a “.org” domain name. Old naming conventions die hard. More importantly, people are hungry for trustmarks.

But TLDs were never great trustmarks, for two reasons. First, validating an organization’s credentials before handing out a domain name is hard and expensive work. Second, domain names don’t sell for a lot, so you can only make money with volume. The pickier you are, the less money you make.

Despite this, the non-profit sector is now pushing the “.ngo” TLD. Think of it as a do-over of the “.org” TLD, because the operator of the domain is trying to limit sales to non-profit entities with the explicit hope that the TLD will become a trustmark over time. Similarly, the AICPA, the big association of certified public accountants, is in a fierce battle to control the forthcoming “.cpa” TLD, again with the hope it can restrict its use to certified public accountants and build it into a trustmark.

My view is that TLDs make for poor trustmarks. The economics make it hard to enforce standards, and there are too many sleazy operators in the business that drag down the credibility of TLDs across the board. The need for online trustmarks remains high. Who better than data companies to seize the opportunity?