In my discussion of the Internet of Things (IoT) a few weeks back, I mentioned that there was a big push underway to put sensors in farm fields to collect and monitor soil conditions as a way to optimize fertilizer application, planting dates, etc. But who would be the owner of this information, which everyone in agriculture believes to be exceedingly valuable? Apparently, this is far from decided. An association of farmers, The Farm Bureau, recently testified in Congress that it believes that farmers should have control over this data, and indeed should be paid for providing access to it.
We’ve heard this notion advanced in many different contexts over the past few years. Many consumer advocates maintain that consumers should be compensated by third parties who are accessing their data and generating revenue from it.
Generally, this push for compensation centers on the notion of fairness, but others have suggested it could have motivational value as well: if you offer to pay consumers to voluntarily supply data, more consumers will supply data.
The notion of paying for data certainly makes logical sense, but does it work in practice? Usually not.
The first problem with paying to collect data on any scale is that it is expensive. More often than not, it's simply not an economical approach for the data publisher. And while the aggregate cost is large, the amount an individual typically receives is somewhere between small and tiny, which largely removes its motivational value.
The other issue (and I’ve seen this first-hand) is the perception of value. Offer someone $1 for their data, and they immediately assume it is worth $10. True, the data is valuable, but only once aggregated. Individual data points in fact aren’t worth very much at all. But try arguing this nuance to the marketplace. It’s hard.
I still get postal mail surveys with the famous "guilt dollar" enclosed. This is a form of paying for data, but it trades, as noted, on guilt, which means undependable results. Further, these payments are made to assure an adequate aggregate response: whether or not you in particular respond to the survey really doesn't matter. It's a different situation for, say, a data publisher trying to collect retail store sales data. Not having data from Wal-Mart really does matter.
Outside of the research world, I just haven’t seen many successful examples of data publishers paying to collect primary source data. When a data publisher does feel a need to provide an incentive, it’s almost always in the form of some limited access to the aggregated data. That makes sense because that’s when the data becomes most valuable: once aggregated. And supplying users with a taste of your valuable data often results in them purchasing more of it from you.
Last week, I discussed how the Internet of Things creates all sorts of potential opportunities to create highly valuable, highly granular data. The Billion Prices Project, which is based at MIT, provides another route to the same result. Summarized very simply, two MIT professors, Alberto Cavallo and Roberto Rigobon, collect data from hundreds of online retailers all over the world to build a massive database of product-level pricing data, updated daily. It’s an analytical goldmine that can be applied to solve a broad range of problems.
One obvious example is the measurement of inflation. Currently, the U.S. Government develops its Consumer Price Index inflation data the old-fashioned way: mail, phone and field surveys. And inherently, this process is slow. Contrast that with the Billion Prices Project, which can measure inflation on a daily basis, and do so for a large number of countries.
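To make the arithmetic behind daily inflation measurement concrete, here is a minimal sketch of how a price index might be computed from product-level prices collected on two days. The products, prices and the simple unweighted averaging are invented for illustration; a real index like the Billion Prices Project's uses millions of prices and careful category weighting.

```python
# Minimal sketch of a daily price index. Product names, prices and
# the unweighted averaging are illustrative assumptions, not the
# actual Billion Prices Project methodology.

def price_index(base_prices, current_prices):
    """Average price relative across products seen on both days,
    scaled so the base day equals 100."""
    common = base_prices.keys() & current_prices.keys()
    relatives = [current_prices[p] / base_prices[p] for p in common]
    return 100 * sum(relatives) / len(relatives)

# Prices collected on two consecutive days (hypothetical)
monday  = {"milk": 2.50, "bread": 1.80, "coffee": 6.00}
tuesday = {"milk": 2.55, "bread": 1.80, "coffee": 6.30}

index = price_index(monday, tuesday)
print(f"Index: {index:.2f}")  # a value above 100 signals inflation
```

Because the underlying prices refresh daily, the same calculation can be rerun every day, which is exactly what makes this approach so much faster than survey-based methods.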
But measuring inflation is just the beginning. The Billion Prices Project is exploring a range of intriguing questions, such as the premiums that are charged for organic foods and the impact of exchange rates on pricing. You’re really only limited by your specific business information needs – and your imagination.
The Billion Prices Project also offers some useful insights for data publishers. First, the underlying data is scraped from websites. The Billion Prices Project didn't ask for it or pay for it. That means you can build huge datasets quickly and economically. Second, the dataset is significantly incomplete. For example, it entirely ignores the huge service sector of the economy. But it's better than the existing dataset in many ways, and that's what really matters.
When considering building a database, new web extraction technology gives you the ability to build massive, useful and high-quality datasets quickly and economically. And as we have seen time after time, the old aphorism, "don't let the perfect be the enemy of the good" still holds true. If you can do better than what's currently available, you generally have an opportunity. Don't focus on what you can't get. Instead, focus on whether what you can get meaningfully advances the ball.
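As a rough illustration of what web extraction involves, here is a minimal sketch using only Python's standard library. The sample HTML, the CSS class names and the product data are all invented; a real scraper would fetch live pages (e.g. with urllib) and adapt the parsing to each retailer's markup.

```python
# Minimal sketch of extracting product prices from HTML using only
# the standard library. The markup and class names are hypothetical.
from html.parser import HTMLParser

SAMPLE_PAGE = """
<div class="product"><span class="name">Widget</span>
<span class="price">$19.99</span></div>
<div class="product"><span class="name">Gadget</span>
<span class="price">$4.50</span></div>
"""

class PriceScraper(HTMLParser):
    """Collects (name, price) pairs from spans tagged 'name'/'price'."""
    def __init__(self):
        super().__init__()
        self.field = None      # which tagged span we're inside, if any
        self.current = {}
        self.products = []

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class")
        if tag == "span" and cls in ("name", "price"):
            self.field = cls

    def handle_data(self, data):
        if self.field:
            self.current[self.field] = data.strip()
            # once both fields are captured, record the product
            if "name" in self.current and "price" in self.current:
                self.products.append(
                    (self.current["name"],
                     float(self.current["price"].lstrip("$"))))
                self.current = {}
            self.field = None

scraper = PriceScraper()
scraper.feed(SAMPLE_PAGE)
print(scraper.products)  # [('Widget', 19.99), ('Gadget', 4.5)]
```

The point is not the particular parsing code but the economics: once written, a scraper like this runs daily at essentially zero marginal cost, which is what makes "good enough" datasets so cheap to assemble.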
The Internet of Things (IoT) is, as buzzwords go, pretty easy to understand: it describes the concept of connecting things (other than computers) to the Internet. You may have heard one popular example of IoT in the not-too-distant future, when your Internet-connected refrigerator determines your orange juice is running low, and automatically places an online order to have more delivered to your doorstep. We’re a bit away from this scenario, but inching closer every day. The automobile companies in particular have been actively exploring ways for your car to alert you via email or text when it needs service or other attention. This is a clear, obvious and powerful example that you’ll soon see in dealer showrooms.
But is IoT strictly a consumer phenomenon? I think not. There are potentially huge opportunities to bring the concept of IoT to the world of business. And I think the data that can be collected by these devices will in many cases be organized and sold by data publishers.
As I have said repeatedly, data publishers are natural organizers of data for vertical markets because they're neutral, trusted players in their markets and, importantly, they're already doing it. Moving from tracking a company and its people to tracking the location of a company's equipment really isn't that big a stretch. Consider Lloyd's, which tracks the exact position of all cargo ships at sea, or Drilling Information, which tracks the location of drilling rigs. There's a lot of value in knowing where things are if someone needs fast access to them. Extend that thinking a little bit, and you can start to see the opportunities. And some data is even more valuable when it is centralized and organized. That's the traditional role and strength of data publishers.
Another tantalizing example can be found in the 2010 Model of Excellence company Spiceworks. This company offers software that helps companies manage their computer networks – and everything connected to them. Spiceworks not only knows the make and model of every printer owned by hundreds of thousands of companies, it even knows when they're running low on toner, and all in real time. Think of how many different ways you could monetize data like this! And as just one more example, there's a big push in agriculture right now to use sensors that monitor moisture and other conditions in farmers' fields. We've moved rapidly from first collecting information about farms, to collecting information about the crops produced at these farms, to collecting information about the soil that produces the crops at these farms. It's about as granular as you can get, and best of all, it's collected by devices and sensors, meaning low cost and high accuracy.
Of course, not every shipping company or farmer will want to have the intimate details of their businesses tracked and reported to others. But here again, a central repository can return valuable data to those who contribute, including performance benchmarks or other useful trend data. Indeed, that’s the big goal driving the push for electronic health records – the ability to tap into large pools of data to find patterns that will make healthcare providers smarter and more productive.
The Internet of Things really is as big as our imaginations and it’s happening now. And like so many things on the Internet, the biggest opportunities go to those who move fast and early. That too is an Internet thing.
If there’s a trend in the world of journalism, it’s that quality is rapidly and powerfully asserting itself. In a growing number of cases, those who have strong, well-articulated opinions, those who can spot trends, and those who can analyze data are outgrowing the media platforms that launched them. This creates new opportunities for them, not the least of which is being able to charge money for their valuable knowledge. It’s an encouraging trend.

But while developing talented trend-spotters and opinion leaders is a hit-and-miss process, journalism based on data is a much more dependable route to building quality. That’s because the data confers authority: your journalism is not only based on facts, it’s derived from facts. Data journalism is also valuable because the underlying data is often proprietary, and even if not, the analysis of the data is proprietary. Numerous studies have shown that data is often more popular than straight news, and venture capitalists are noticing this as well. In short, there’s a lot to commend the marriage of journalism and data, and that’s good news for many B2B publishers.
Yes, many B2B publishers have both data and journalism businesses. But even when they’re under the same roof, for the most part they might as well be in different worlds. The news folks and the data folks aren’t working together. In many publishing companies, they don’t even regularly talk to each other. And this is a huge missed opportunity.
Your data group knows how to collect, maintain and analyze data. These are skills sorely lacking in the journalism world today. And your news group knows what the burning issues are in the marketplace, and how to turn often mind-numbing tables of data into lively, understandable prose. It’s a great match-up of skills, but one that rarely seems to happen organically.
Using your proprietary data in news stories is the best possible kind of promotion for your paid data products because it shows clearly how valuable and useful your datasets are. And introducing proprietary data into your news content sets you apart in the marketplace as a source of evidence-based insight.
So in my view, there’s a powerful case to be made to get your data and news groups working together. Once you do, you’ve set the stage to move to the next level, what’s being called analytical journalism, where you not only present the facts, but explain their implications, which starts you down the road to being able to offer data, trend-spotting and opinion leadership. That’s an editorial package your audience will respect, and pay for.
In a speech at the D2 Digital Dialogue conference yesterday, a top Macy's marketing executive, in a true "I'm mad as hell and I'm not going to take it anymore" moment, made the following statement: "Consumers are worried about our use of data, but they're pissed if I don't deliver relevance. … How am I supposed to deliver relevance and magically deliver what they want if I don't look at the data?"
This question speaks directly to the larger issues facing the publishing industry today: how to make money in a world where today’s consumer wants everything … and nothing. Consumers want their content free of charge, free of advertising and free of tracking. And what do content providers get in return for all this freedom? Well, freedom from revenue.
All this stems from the dot-com mania, when it became both fashionable and conventional wisdom that success online depended on free content. In the process, we’ve trained an entire generation to expect everything for free, to the extent that any modest attempt at monetization offends their delicate sensibilities.
The content industry in large part created this mess by enabling this unsustainable state of affairs. Ironically, those information companies that stuck to their paid subscription models are the ones in the best shape right now. And therein lies the answer to this problem: let’s move past increasingly intrusive, contorted and ultimately futile efforts to monetize our visitors, and start turning our visitors into subscribers. No, it won’t be easy or painless, but is there a real alternative?
We can look to the newspaper industry for inspiration. The entire industry, to paraphrase Churchill, is finally doing the right thing after having explored every other option. Yes, newspapers are charging for their content. That’s all the more remarkable because newspapers are burdened with a severe commoditization issue. And just this week, People magazine announced a news subscription bundle priced at $100 per year. Yes, People magazine. If that doesn’t embolden you, what will?
So if you are still mired in the dismal world of free content where consumers want to get everything for nothing and advertisers want to pay next to nothing to reach them, there is an option. And if you honestly don’t think you can make the shift, you need to take a hard look at your content. As Sharon Rowlands, former CEO of Penton once said to me, “If our content is as valuable as we say it is, why do we all spend so much money begging people to take it?” Answer that question and your business direction becomes clear.