The recent dust-up between Google and Microsoft (in short, Google is accusing Microsoft of copying its search results) is more entertaining than informative. What particularly interests me is that the web cognoscenti have largely come down on the side of Microsoft, many going so far as to proclaim Microsoft clever and creative for trying to use Google search results to improve its own.

This latest controversy -- building a search engine, in part, by taking results from another search engine -- reminded me of a larger issue I have been pondering for a while: the seeming tilt in favor of aggregating data over creating it.

Consider "Web 2.0." There are thousands of competing definitions for this term, but to me it stands for a period when we celebrated the power of programmers who aggregated large and complex datasets in clean and powerful user interfaces. These Web 2.0 sites were exciting, useful, powerful and innovative, but their existence depended on OPD: Other People's Data. Sometimes that data was licensed, but just as often it was public domain data or simply scraped and re-formatted. To the extent all this data was legally obtained, I take no issue with it. But it does seem to have created a mindset, even among publishers. As we discuss new products with publishers of all different shapes and sizes, it's not uncommon that one of the first questions asked is, "where will we get the data?"

I jump from that thought to some interesting insights from the Media Dealmakers Summit I attended yesterday. A number of speakers brought up the topic of curation, usually with near-glee in their voices. That's because curation looks to be the next big thing, and who better than publishers to offer content made more valuable through curation? But aggregation is curation-lite. By that I mean you add relatively little value simply by deciding which sources to aggregate. Real curation, and hence real value, comes from getting under the hood and selecting, standardizing, normalizing and editing individual units of content. Arguably, the highest form of curation is compilation, where you not only create a unique dataset to meet a specific need, but also make highly granular decisions about what to include or exclude.
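To make the distinction concrete, here is a minimal sketch in Python. The sources, field names and records are entirely hypothetical: aggregation just concatenates feeds as-is, while curation normalizes each record to one schema, cleans its values, and excludes what fails the standard.

```python
# Illustrative sketch only: aggregation vs. curation, with hypothetical data.
from datetime import datetime

# Two made-up source feeds with inconsistent field names and formats.
source_a = [{"title": "WIDGET PRICING REPORT ", "date": "02/07/2011", "price": "$4.99"}]
source_b = [{"headline": "Gadget price survey", "published": "2011-02-07", "price": "4.99 USD"}]

def aggregate(*sources):
    """Aggregation: concatenate sources as-is. Little value added."""
    combined = []
    for source in sources:
        combined.extend(source)
    return combined

def curate(*sources):
    """Curation: select, standardize, normalize and edit individual records."""
    curated = []
    for record in aggregate(*sources):
        title = (record.get("title") or record.get("headline") or "").strip().title()
        raw_date = record.get("date") or record.get("published")
        # Standardize dates to ISO 8601 regardless of source format.
        for fmt in ("%m/%d/%Y", "%Y-%m-%d"):
            try:
                date = datetime.strptime(raw_date, fmt).date().isoformat()
                break
            except ValueError:
                continue
        else:
            continue  # Exclude records whose dates can't be normalized.
        price = float(record["price"].replace("$", "").replace("USD", "").strip())
        curated.append({"title": title, "date": date, "price_usd": price})
    return curated

print(aggregate(source_a, source_b))  # raw, inconsistent records
print(curate(source_a, source_b))     # one clean, standardized dataset
```

The point of the sketch is the asymmetry: `aggregate` is three lines of plumbing, while nearly all the value-adding judgment lives in `curate`.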

At the Dealmakers Summit, Dan Lagani, president of Reader's Digest, reminded us that Reader's Digest was created in 1922 specifically to address information overload. What resulted was one of the most successful magazines in the history of publishing. If content curation was that valuable back then, imagine its value today! But again, simple aggregation is the lowest form of curation and compilation is the highest. And if we want high-value, differentiated products, we must never let our content creation skills atrophy. Aim high.