Thinking About Privacy and Data? Good.

We have heard a lot in the past few weeks about the travails of Facebook, as it became widely known that many millions of its user profiles had been, for lack of a better term, hacked. That in turn brought Facebook’s advertising microtargeting capabilities into focus, creating more widespread privacy concerns.

But does the average data publisher have to worry about privacy? The short answer is yes.

Data publishers, including B2B data publishers, often control a wealth of extremely valuable data. Many data publishers don’t fully appreciate what valuable insights they could glean from their own data. Fortunately, data thieves haven’t figured it out either … yet.

The highest-value data in a typical commercial database isn’t the data itself; it’s the information on what users are doing with the data. Knowing, for example, that the head of acquisitions at a public company was doing deep research on another public company could be extremely valuable to certain people. Knowing that an executive suddenly started looking at job openings could be valuable. Knowing that five venture capital firms had looked up information on a particular start-up within three days could be extremely valuable. You get the idea.

We already sell some types of information about how users interact with data, and we do this with very little thought about how it might blow up in our faces. Other data of ours is clearly quite sensitive and we’d never sell it, but what if somebody stole it?

Back in 2013, Bloomberg came in for tough public scrutiny after it was revealed its reporters had used Bloomberg terminal access data to track an individual in order to write a story. That’s pretty tame compared to the recent Facebook revelations, but it shows there is often tremendous inferential data hiding in the intersection between our databases and how our customers interact with them. Monetize where appropriate. Protect where appropriate. But whatever you do, don't ignore it.

Get Me a Lawyer!

When you own the domain “lawyer.com” you inherently have a big opportunity. But trying to be a national B2C online lawyer-finding service means you need both creativity and deep pockets to cut through the competition. That’s why it was fun to see the new owners of Lawyer.com choose someone with extensive experience with the law as their new spokesperson: Lindsay Lohan.

The challenge for lawyer.com is to build and maintain broad, front-of-mind awareness among consumers. The domain name is a great start, and having a memorable spokesperson adds even more marketing firepower. 

Lawyer.com does another smart thing, too, by emphasizing personalized assistance to consumers. While every data provider would prefer that consumers answer their own questions by searching the database, there is a large percentage of the market that can’t or won’t try to find answers themselves. If you are trying to be a “go-to” destination for consumers, you can’t afford to write off this big piece of the market.

Another interesting tactic is a program called LAWPOINTS, a 1-to-100 scoring system that is not based on consumer ratings, as you might expect, but rather on the completeness of the lawyer’s profile. The LAWPOINTS score appears with each lawyer’s profile. While I have some concern that a system like this could be confused for a rating (although the company does clearly explain its function), it does recognize an important key to a buyers’ guide that is so often forgotten: the advertising is the content. LAWPOINTS is an attempt to get lawyers to do the right thing by providing as complete a profile as possible. You can call it hokey, because the LAWPOINTS score really doesn’t mean anything, but detailed profiles can mean the difference between success and failure, so if it works, go for it!
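A completeness-based score like LAWPOINTS can be approximated in a few lines. To be clear, this is a hypothetical reconstruction — the actual LAWPOINTS formula is not public, and the field names here are invented. It simply scales the fraction of filled-in profile fields to a 1–100 range:

```python
# Hypothetical profile fields; Lawyer.com's real schema is not public.
PROFILE_FIELDS = ["name", "firm", "practice_areas", "bio", "photo",
                  "education", "bar_admissions", "phone", "website"]

def completeness_score(profile):
    """Score a profile 1-100 by how many fields are filled in.

    A sketch of a LAWPOINTS-style completeness score, not the
    company's actual formula.
    """
    filled = sum(1 for field in PROFILE_FIELDS if profile.get(field))
    return max(1, round(100 * filled / len(PROFILE_FIELDS)))

sparse = {"name": "Jane Doe", "firm": "Doe & Associates"}
full = {field: "..." for field in PROFILE_FIELDS}

completeness_score(sparse)  # low score nudges the lawyer to add detail
completeness_score(full)    # 100
```

The incentive design is the point: a visibly low number on a sparse profile pushes the advertiser to supply the detailed content that makes the guide useful.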

We’ll see if lawyer.com has more staying power than Lindsay Lohan, but they are off to a promising start.

Zagat: Down But Not Out?

This week, Google announced that it had sold its Zagat guide business, for which it paid a stunning $151 million in 2011, for “an undisclosed amount” to a company you’ve likely never heard of, The Infatuation.

It’s an ignominious development for the former household name brand, and true pioneer in the data business. Well before the Internet, Zagat had blazed new trails in the area of user-generated content and consumer reviews. Tim and Nina Zagat, the founders, proved to be creative and talented marketers and self-promoters. For many years, the name Zagat was synonymous with restaurant ratings in the United States.

But the Zagat empire was print-based. Moreover, the Zagat business model depended in large part on selling bulk orders to companies with their names and logos on the covers, to be given away as gifts. That made it difficult for Zagat to economically expand its coverage beyond the largest cities, so it never became a truly national data provider. And its attempts to expand into other segments of the hospitality industry, where more competition existed, fell flat. But the Zagat brand transcended all these shortcomings.

And it is the brand that Google apparently paid so much to acquire. Everything about the Zagat business was at odds with the Google model and ethos. I trashed the deal at the time.

The irony is that Google couldn’t even find an effective way to leverage the Zagat brand. Fortunately, the $151 million purchase price is a rounding error for Google.

What’s the future of Zagat? I do believe the brand still has some life, and could be resuscitated. There’s a role for well-curated, tightly edited, slightly snarky user-generated content that helps you decide where to eat – especially when coupled with a restaurant booking engine. That’s why I would have been much more excited if Zagat had been acquired by, say, OpenTable.

When Algorithms and Advertising Collide

You may remember when real estate listings firm Zillow first burst on the scene back in 2006. While there are many online real estate listings sites, Zillow distinguished itself with its “Zestimates,” an algorithmically derived valuation for every house in the United States. Many Americans amused themselves throughout 2006 checking Zestimates for their own homes, as well as the homes of neighbors and friends.

Zestimates were never intended to be appraisals. After all, Zillow has no idea what is on the inside of any home. But the Zestimate algorithm does use many of the same approaches as appraisers use, including comparisons of recent sale prices of similar houses and historical sales trends. To the average consumer, they sure looked and felt like appraisals, and in a sense, that’s what really matters.
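The comparable-sales approach described above can be sketched in a few lines. This is a deliberately simplified illustration, not Zillow’s actual algorithm: it takes the median price per square foot of recent nearby sales and scales it by the subject home’s size, ignoring the recency weighting and trend adjustments a real model would apply.

```python
from statistics import median

def comparable_sales_estimate(subject_sqft, recent_sales):
    """Estimate a home's value from recent sales of similar homes.

    recent_sales: list of (sale_price, sqft) tuples for nearby,
    recently sold homes. A real automated valuation model would also
    weight comps by recency and distance, and adjust for historical
    price trends.
    """
    if not recent_sales:
        raise ValueError("no comparable sales available")
    price_per_sqft = median(price / sqft for price, sqft in recent_sales)
    return subject_sqft * price_per_sqft

# A 2,000 sq ft home, given three recent comparable sales:
comps = [(400_000, 1_800), (450_000, 2_100), (380_000, 1_900)]
estimate = comparable_sales_estimate(2_000, comps)
```

Even this toy version shows why the output looks like an appraisal to a consumer: it is a single, authoritative-seeming dollar figure derived from real market data.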

While Zestimates were unquestionably a brilliant way to launch a new website in a crowded vertical (Zillow became one of the highest-traffic websites virtually overnight), Zestimates have always been an awkward fit with the Zillow business model. That’s because Zillow is an advertising-based business.

Think about it from the perspective of the real estate agent – the advertising buyer. The agent is attracted by Zillow’s huge traffic numbers and pays for an enhanced listing to get even more prominence. But Zillow automatically (and prominently) displays its Zestimate right near the asking price. Imagine asking $1 million for a home when the seemingly authoritative Zestimate pronounces the value of the home to be $700,000. As an agent, you’re not going to be happy.

Zillow’s stance is basically, “hey, it’s just an objective data point.” But advertisers don’t want to hear it. And that’s the essence of several recent lawsuits. In one lawsuit, the plaintiff argues that Zillow damaged her selling prospects by posting a lower Zestimate near her asking price and doing so without her permission. Another lawsuit goes further, saying that Zillow agreed with certain real estate agents to “de-emphasize” (read: hide) the Zestimate within the listing, meaning that some agents were getting a more attractive listing presentation, and those that didn’t pay an advertising fee were being disadvantaged.

This may sound like a problem peculiar to Zillow but it’s not. Yelp has dealt with a similar issue for years. In short, Yelp is finding it hard to sell advertising to customers whose listings are chock full of negative reviews. Yelp has been repeatedly accused of “de-emphasizing” (read: hiding) these negative reviews to satisfy advertisers.

The simple lesson here is that objective data and advertising don’t always mix, and that creates complexity and legal exposure unless you are aware of the issue and identify a solution that works for everybody. Those solutions can be hard to find.

Search is Easy; Data is Hard

The New York Times Magazine has just published a fascinating article about Google, discussing whether Google has become an aggressive monopolist in the area of search and, if so, whether it needs to be broken up under antitrust law. The article, which is well worth a read, cites case after case where Google ostensibly derailed other companies that had seemingly developed better search tools than Google.

Better search tools than Google? Is that even possible? That’s where I take some slight exception to the article. Possibly in order to make this topic more accessible to a mass audience, it labels all these competitive search providers as “vertical search” companies.

Those of us with some history in the business remember back to around 2006 when “vertical search” was a thing, a thing that has long since faded. At the time, the concept of vertical search was a full-text search engine, much like Google, but one that was focused on a single vertical market or specific topic area. The thinking was that if publishers curated the content that was being indexed, the search results would be stronger, more accurate, more contextual and more precise. A prime example at the time was the word “pumps.” As a search term, it’s a tough one – the user could be looking for a device that moves fluids … or shoes. A vertical search engine, which would be oriented towards either equipment or fashion, would reliably return more relevant responses. Vertical search failed as a business not because it was a bad idea, but because (and let’s be honest here) most of the publishers rushing to get into it were too lazy to do the up-front curation work. And without quality, up-front curation, vertical search quickly becomes just plain bad search.
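The “pumps” example can be made concrete with a toy sketch (the documents and corpora here are invented for illustration). The same naive keyword query returns relevant results only because an editor did the up-front work of curating each vertical corpus:

```python
def keyword_search(query, corpus):
    """Naive full-text search: return documents containing the query term."""
    return [doc for doc in corpus if query.lower() in doc.lower()]

# Two curated vertical corpora -- the up-front editorial work that
# made (or, when skipped, broke) vertical search as a business.
equipment_corpus = [
    "Centrifugal pumps for moving industrial fluids",
    "Hydraulic valves and fittings catalog",
]
fashion_corpus = [
    "Patent leather pumps with a two-inch heel",
    "Spring sandals and espadrilles lookbook",
]

# The same ambiguous query against two different verticals:
keyword_search("pumps", equipment_corpus)  # fluid-handling gear
keyword_search("pumps", fashion_corpus)    # shoes
```

The search code is trivial; the disambiguation comes entirely from the curation. Skip the curation and you are back to plain bad search.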

Vertical search as used in the article really refers to vertical databases. The difference is important because the article also states that parametric search is hard. That statement is simply more proof that vertical search as used in this article means databases. Parametric search is not hard: collecting and normalizing data so it can be searched parametrically is hard. Put another way, searching a database is easy, providing there is a database to search.
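The point that parametric search is easy once the data exists can be shown in a few lines (the records and field names here are invented for illustration). Filtering normalized records on structured fields is nearly a one-liner; the expensive part was producing the clean, consistent records in the first place:

```python
# Hypothetical, already-normalized records: the hard, expensive part.
laptops = [
    {"brand": "Acme", "ram_gb": 16, "price_usd": 899},
    {"brand": "Globex", "ram_gb": 8, "price_usd": 649},
    {"brand": "Initech", "ram_gb": 16, "price_usd": 1199},
]

def parametric_search(records, **criteria):
    """Return records where every field satisfies its (op, value) criterion.

    Searching is the easy part -- this works only because the records
    share normalized field names and comparable units.
    """
    ops = {"eq": lambda a, b: a == b,
           "lte": lambda a, b: a <= b,
           "gte": lambda a, b: a >= b}
    def matches(record):
        return all(ops[op](record[field], value)
                   for field, (op, value) in criteria.items())
    return [record for record in records if matches(record)]

# Laptops with at least 16 GB of RAM under $1,000:
results = parametric_search(laptops, ram_gb=("gte", 16), price_usd=("lte", 1000))
```

If the prices were strings in mixed currencies or the RAM field were spelled three different ways, this query would be impossible — which is exactly the collection-and-normalization problem the article mislabels as “parametric search is hard.”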

Google never wanted to do the work of building databases. It sometimes bought them (example: a $700 million acquisition of airline data company ITA) or “borrowed” them (pulled results from third-party databases into its own search results, effectively depriving the database owner of much of its traffic – think Yelp). What Google did instead was devote unfathomable resources to develop software code to try to make unstructured data as searchable as structured data. While it made some impressive strides in this area, overall Google failed.

With this context, you can clearly see why data products are so important and valuable. Data collection is hard. Data normalization is hard. But there’s still no substitute for it, something Google has learned the hard way. It may be disheartening to see survey after survey where we learn that users turn to Google first for information. But this is the result of habituation, not superior results. For those who need to search precisely, and for those who really depend on the information they get back from a search, data products win almost every time … provided that users can find them. Read this article and judge for yourself just how evil Google may be…