I am constantly reviewing new online data products looking for fresh ideas and innovative business models. I do see both, but increasingly I am seeing lazy and uninspired products.

They are also formulaic. Here's the recipe: grab some public domain database, add some modest value to it (sometimes little more than simple parametric searching), build a user interface and ... well actually, that's about it. Business models vary: there might be a paywall, there might be ads from an ad network. It's actually not uncommon for some of these sites to have no business model at all. It is as if the owners want somebody else to identify the hidden value in what they've done and bring it to them on a silver platter.

This formula play has become so common that there isn't much opportunity left. The number of public domain databases that anyone really wants to access is small, and too many players have piled on. That's why a new class of online data sites has emerged: those based on licensed, but generally low-value, data. You see them everywhere: the "locate anyone" sites generally derived from credit report header data, and the business directories based on compiled yellow pages listings.

The harsh reality is that it's cheap and easy to throw together a website to access a database, especially when the data are low-cost or free. And with the cost so low, there's little incentive to spend time and money to confirm a market need, and sometimes there is little urgency to even generate any revenue.

Professional data publishers point out that these products add little value and tend to be poorly produced and indifferently marketed. Yet there is a segment of the market that is content with "good enough" content, especially when that content is low-priced or even free. It's rare that data products of this type will steal large amounts of revenue from professional data publishers with similar content, but they still have a corrosive effect over time by devaluing content in the eyes of users, creating downward pressure on prices.

What's a publisher to do? Those who rely largely on public domain data need to understand that this is a precarious place to be these days. Yes, normalization, cleaning and even verification add value, but these important steps aren't the compelling value proposition they used to be, in part because, with some smarts and programming skill, much of this value can be replicated by a low-end competitor on an automated basis. The only real long-term answer is differentiated, proprietary data. Even better, publishers need to be thinking about offering insight, analysis, commentary and context. And amid all the new technology that can do such wonders scraping websites to build databases and turning data into polished text, let's not forget that these can only be process steps, not a destination. All these clever tools are available to your competitors as well, and they get cheaper and more powerful every day.

Want long-term success? Think different. It works for content just like it works for computer products!
