
Data Democratization: A Timely Trend That Empowers Users

“Democratization” is the latest trend in data. While it is rapidly acquiring multiple definitions, the one I find most useful suggests that there is a growing opportunity to open up complex datasets to people who could benefit from them, but haven’t traditionally used them.

With this definition, data democratization usually involves some combination of pricing and user interface design. Reduced pricing is meant to make a data product more broadly accessible, and user interface design is about making the data incredibly easy to use. Putting these two together, those employing a data democratization strategy believe they can significantly expand their markets. In addition, a powerfully simple user interface should result in reduced support costs by enabling less sophisticated data users to start getting the answers they need directly, by themselves.

The best opportunities for data democratization? Look for data silos. The data provider combines several datasets, doing all the complex normalization and matching that is required. The user interface then lets users painlessly do what amounts to cross-tabulation and filtering, with all the complexity carefully hidden. Results are usually in the form of highly visual data presentations.
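To make this concrete, here’s a minimal sketch in Python (using pandas, assumed here simply because it’s a common choice for this kind of work) of the two halves of the job: the provider’s behind-the-scenes normalization and matching, and the simple filtering the end user actually sees. The hospital datasets and field names are entirely hypothetical, invented for illustration.

```python
import pandas as pd

# Two hypothetical source datasets covering the same hospitals,
# each with its own naming conventions for the join key.
costs = pd.DataFrame({
    "hospital": ["St. Mary's Hospital", "GENERAL HOSP."],
    "avg_cost": [12400, 9800],
})
outcomes = pd.DataFrame({
    "facility_name": ["st marys hospital", "general hosp"],
    "readmit_rate": [0.14, 0.21],
})

def normalize(name: str) -> str:
    """Crude name normalization so records from different sources can match."""
    return "".join(ch for ch in name.lower() if ch.isalnum() or ch == " ").strip()

# The provider does the hard matching work once...
costs["key"] = costs["hospital"].map(normalize)
outcomes["key"] = outcomes["facility_name"].map(normalize)
merged = costs.merge(outcomes, on="key")

# ...so the end user just filters and cross-tabs, complexity hidden.
print(merged.loc[merged["avg_cost"] < 10000, ["hospital", "avg_cost", "readmit_rate"]])
```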

Data democratization is not “dumbing down” data. Indeed, a democratized data product often has all the power of much more complex and expensive business intelligence (BI) software. The nuance is making the user interface more accessible and less scary, and reducing the price point so that the product isn’t a major purchase decision.

You can see an analogy of sorts with what happened with computers, moving from centralized, expensive installations operated by a few with specialized skills to the amazing desktop computing capabilities we all enjoy today. Whether data democratization is an opportunity of the same scale and profundity as the computer revolution is unclear, but it certainly bears close watching because this is a strategy with a powerful first-mover advantage.

To see a great example of data democratization, check out one of this year’s Models of Excellence, Franklin Trust Ratings.

Better yet, meet the founder behind it, John Morrow, at this year’s Business Information and Media Summit, Nov. 13 – 15 in Ft. Lauderdale. There will be lots of other data trendsetters there too!

The Next Data Gold Rush

A recent article in the Harvard Business Review, entitled “To Get More Value from Your Data, Sell It,” jumps on the data bandwagon, arguing that many companies own valuable data that can be monetized through sale to third parties. The authors do a good job pointing out that many companies don’t realize that the data they generate as a by-product of other activities has value. Even more usefully, they point out that some companies automatically treat all their data as top secret, and lose revenue opportunities as a result of this lack of discernment.

But where the article goes a bit off the path is its implicit view that almost all data are valuable. They’re not. As I pointed out in a post just a few weeks ago, there are a lot of reasons any given dataset may have little or no commercial value. And sometimes company data really is too sensitive to be resold.

Later in the article, the authors laud Cargill for building and selling a database of seed information. While they correctly note that getting into the data business is a risky move for most companies, they felt this was a low-risk move for Cargill because Cargill had “already developed a database to support seed development.” As every data publisher knows, having a database and being in the data business are two very different things. Creating a saleable product takes a lot of investment and work, starting with the user interface. Then there’s the challenge of bringing a data product to market. Cargill knows farmers and Cargill knows seeds, but Cargill knows very little about subscription data products. It’s extremely rare for a non-data company, however good its data, to suddenly decide it wants to be in the data business and find success.

We can expect the idea of companies monetizing internal data to become mainstream. But once the hype settles down and reality kicks in, these companies will be very receptive to working with data publishers to optimize the value of their data, because they’ll see both the opportunities and complexities involved in monetizing it.

The flipside is that data publishers should start to look to non-data companies as potentially rich information sources. Many companies do indeed have valuable datasets built as a by-product of other activities. Finding and licensing these datasets could be a quick way to market for a data publisher, and can also yield that rarest of things: a high value dataset with none of the traditional compilation hassles and the possibility of licensing on an exclusive basis.

Finding internal company datasets isn’t easy, but as the concept of turning internal data into dollars gets more visibility, companies will start to actively look for potential licensees. Stay alert for these opportunities, and be prepared to move quickly because licensed internal company data could be the next data gold rush!

Do You Rate?

An article in the New York Times today discusses the proliferation of college rankings as the focus shifts toward evaluating colleges on their economic value.

Traditionally, rankings of colleges have tended to focus on their selectivity/exclusivity, but now the focus has shifted to what are politely called “outcomes,” in particular, how many graduates of a particular college get jobs in their chosen fields, and how well they are paid. Interestingly, many of the existing college rankings, such as the well-known one produced by U.S. News, have been slow to adapt to this new area of interest, creating opportunities for new entrants. For example, PayScale (an InfoCommerce Model of Excellence winner) has produced earnings-driven college rankings since 2008. Much more recently, both the Economist and the Wall Street Journal have entered the fray with outcomes-driven college rankings. And let’s not forget still another college ranking system, this one from the U.S. Department of Education.

At first blush, the tendency is to say, “enough is enough.” Indeed, one professor quoted in the Times article somewhat humorously noted that there are so many college rankings that, “We’ll soon be ranking the rankings.”

However, there is almost always room for another useful ranking. The key is utility. Every ranking system is inherently an alchemic blend of input data and weightings. What data are used and how they are weighted depend on what the ratings service thinks is important. For some, it is exclusivity. For others, it is value. There are even the well-known (though somewhat tongue-in-cheek) rankings of top college party schools.
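To see how much the weightings drive the outcome, here’s a minimal sketch in Python. The colleges, metrics and weights are all hypothetical; no real ranking service publishes a model this simple.

```python
# Two hypothetical colleges with made-up metrics.
colleges = {
    "College A": {"selectivity": 0.95, "median_salary": 55000, "grad_rate": 0.90},
    "College B": {"selectivity": 0.60, "median_salary": 72000, "grad_rate": 0.85},
}

def rank(colleges, weights):
    """Score each college as a weighted blend of min-max normalized metrics."""
    scores = {}
    for name, metrics in colleges.items():
        score = 0.0
        for metric, weight in weights.items():
            values = [c[metric] for c in colleges.values()]
            lo, hi = min(values), max(values)
            normalized = (metrics[metric] - lo) / (hi - lo) if hi > lo else 1.0
            score += weight * normalized
        scores[name] = score
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# An exclusivity-driven blend vs. an outcomes-driven blend.
print(rank(colleges, {"selectivity": 0.7, "median_salary": 0.2, "grad_rate": 0.1}))
print(rank(colleges, {"selectivity": 0.1, "median_salary": 0.6, "grad_rate": 0.3}))
```

Run it, and the two weightings crown different winners from identical input data, which is the whole point.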

And since concepts like “quality” and “value” are in the eye of the beholder with results often a function of available data, two rating systems can produce wildly varying results. That’s why when multiple rating systems exist, most experts suggest considering several of them to get the most rounded picture and most informative result.

It’s this lack of a single right way to create a perfect ranking that means that in almost every market, multiple competing rating systems can exist and thrive. Having a strong brand that can credential your results always helps, but in many cases, you can be competitive just with a strong and transparent methodology. It helps too when your rankings aren’t too far out of whack with general expectations. Totally unintuitive ranking results are great for a few days of publicity and buzz, but longer term they struggle with credibility issues.

A take-away for publishers: even if you weren’t first to market with rankings for your industry, there may still be a solid opportunity for you if you have better data, a better methodology and solid credibility as a neutral information provider.

Supercharge Your Audience Database

Time, Inc. recently announced a brilliantly simple and smart deal with a company called Audience Partners, just in time for the 2016 election season. Simply stated, Time is overlaying its massive consumer database with voting data from Audience Partners, which has built a National Online Voter File database.

Yes, just in time for the elections, Time can now offer political campaigns access to its audience, with the ability to target not only by Time’s demographic data, but also by political party affiliation. And lest I leave you with the impression that Audience Partners only offers simple party registration data, let me be clear: there’s much more. The National Online Voter File allows targeting by voting frequency, donor history, the types of elections the voter typically participates in and more.

Hopefully, your head is already spinning with the possibilities for your own audience database. In the B2B world, a voter overlay wouldn’t be my first choice because that’s a volume game: you need a huge audience to be attractive. But let your mind wander to other possibilities a bit closer to home. Is there public domain or even licensed data that you could overlay on your own audience database to create new high-value marketing opportunities? All this added intelligence about your audience is also something you can leverage internally, getting smarter about who you are talking to and what content they engage with.

None of this is a new concept. I remember back to the good old days of pressure-sensitive labels and postal mail. Even back then, mailing list compilers were slamming together lists of mail order buyers to identify coveted “multi-buyers.” And “master files” of names, sometimes with hundreds of possible demographic and behavioral selections, were readily available.

As a B2B example, I often offer up Randall-Reilly, which has publications serving the trucking industry. It overlaid its basic audience data with a rich public domain database that allowed it to append truck ownership data. Suddenly, it went from offering modestly valuable truck driver contact information to hugely valuable market intelligence and targeting capabilities, as it now knows the exact make, model and year of the trucks its subscribers operate.
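Mechanically, an overlay like this is often little more than a join on a shared key. Here’s a minimal sketch in Python with pandas; the subscriber records, overlay fields and email-as-key choice are all hypothetical, not a description of how Randall-Reilly actually did it.

```python
import pandas as pd

# A hypothetical audience file and a hypothetical licensed overlay.
audience = pd.DataFrame({
    "subscriber_id": [101, 102],
    "email": ["driver1@example.com", "driver2@example.com"],
})
overlay = pd.DataFrame({
    "email": ["driver1@example.com", "driver2@example.com"],
    "truck_make": ["Peterbilt", "Freightliner"],
    "truck_year": [2014, 2011],
})

# One left join turns a contact list into targetable market intelligence.
enriched = audience.merge(overlay, on="email", how="left")

# Example campaign selection: subscribers driving older trucks.
print(enriched[enriched["truck_year"] < 2013])
```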

This is perhaps the simplest and fastest way to supercharge your audience database. Think of what data you and your advertisers would kill for, then rather than trying to acquire it yourself, see how you might acquire it through one or more overlays. This has been a good business for 30 years; it’s even better today.     

Monetizing Information Flows

StreetContxt is a hot, Canadian-based start-up that just raised $8 million from A-list investors, including a number of big banks and brokerage houses. Its mission is simple: to maximize the value of the mountain of investment research that gets generated each year. But what really makes StreetContxt stand out to me is that it offers a very compelling business proposition to both those who create the research and those who use it.

For the sell-side (those who create the content), it’s currently difficult to measure the impact, much less the ROI, of the huge volume of research they create annually. They send it out to presumably interested and qualified recipients, with no way of knowing if it is acted on, or even viewed.

For the buy-side (those who receive and use the content), it’s impossible to keep up with the blizzard of information being pushed out to them. Even more significantly, some of this research is very good, but a lot of it isn’t. How do you identify the good stuff?

StreetContxt offers the sell-side a powerful intelligence platform. By distributing research through StreetContxt, research producers can learn exactly who viewed their research and whether it was forwarded to others (multiple forwards are taken as a signal of a timely and important research report). What naturally falls out of this is the ability to assess which research is having the most market impact. But StreetContxt also helps research producers correlate research with trading activity, to help make sure that their research insights are being rewarded with adequate commission revenue. Even better, StreetContxt helps the sell-side by providing insight into who is reading research on what topics, and with what level of engagement, to help power sales conversations. In short, StreetContxt tracks “who’s reading what” at a very granular level, both to measure impact and to inform selling activity.

On the buy-side, StreetContxt helps those who use research with a recommendation engine. Research users can specify topical areas of interest that get tuned by StreetContxt based on who is reading and forwarding which research reports. In other words, StreetContxt has found an approach to automatically surface the best and most important research. StreetContxt also helps research users by monitoring relevant research from sources to which the research user may not currently subscribe. And since much research is provided in exchange for trading commissions, StreetContxt can help research users get the most value from these credits.
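StreetContxt hasn’t published its algorithms, but the core signal described above, forwards counting for more than views, is easy to sketch. Here’s a hypothetical, minimal version in Python; the event log, weights and scoring scheme are all invented for illustration.

```python
from collections import Counter

# Hypothetical engagement log: (report_id, action) pairs.
events = [
    ("r1", "view"), ("r1", "forward"), ("r1", "forward"),
    ("r2", "view"), ("r2", "view"),
    ("r3", "view"), ("r3", "forward"),
]

# Forwards weigh more than views, mirroring the idea that
# multiple forwards signal a timely, important report.
WEIGHTS = {"view": 1, "forward": 3}

scores = Counter()
for report_id, action in events:
    scores[report_id] += WEIGHTS[action]

# Surface the most engaging research first.
for report_id, score in scores.most_common():
    print(report_id, score)
```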

The magic described here happens because the content creators post to the central StreetContxt portal, and research users access content from the same portal. This allows StreetContxt to monitor exactly who is using what research.

Why would research users allow their every click to be tracked and turned into sales leads? Because StreetContxt offers them a powerful incentive in the form of curated research recommendations, a better way to manage research instead of having it flood their in-boxes as it does now, and most important of all, a way to ferret out the best and most important research.

The big lesson for me is that with a sufficiently compelling value proposition on both sides, smart companies can position themselves in the middle of an information flow and monetize the resulting data in powerful and profitable ways.