Is It App Time?
In a move that further signals the remarkable growth and increasing importance of mobile devices,
Google has announced changes to its search algorithms to prioritize apps.
Starting April 21, for all searches on mobile devices, Google will present search results that identify and prioritize mobile-friendly sites. That means those with mobile-friendly sites should rank higher in search results conducted from mobile devices. Further, if you wish, Google will also begin to index content in mobile apps that may not appear on your website. And to close the loop, Google will allow app developers to use mobile search results to guide users to either the website or the app (provided the app is installed on the mobile device, something Google will check), wherever you, the content owner, think they’ll have the best experience.
These are cool if not world-changing new features from Google, but they indicate clearly the rapid evolution of the mobile ecosystem, one where the quality of the displayed information is becoming nearly as important as the information itself. This means that mobile-friendly websites are important, but the future lies with apps that will become an increasingly seamless part of the search experience. Think about it. Google will now check (on Android devices) what apps you have installed, index the content of those apps, present this content in regular Google search results, and allow you to seamlessly view that content in the installed app.
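The routing behavior described above can be reduced to a simple decision. Here is a minimal sketch of that logic (not Google's actual implementation; app names, link schemes, and URLs are invented for illustration):

```python
# A toy sketch of the search-result routing described above: send the
# searcher to the app's deep link when the app is installed, otherwise
# fall back to the mobile website. All identifiers here are hypothetical.

def choose_destination(installed_apps, app_id, deep_link, web_url):
    """Return the deep link if the content owner's app is installed,
    otherwise the ordinary web URL."""
    if app_id in installed_apps:
        return deep_link  # open the content inside the installed app
    return web_url        # fall back to the mobile website

# Hypothetical example: the 'example.recipes' app is installed.
dest = choose_destination(
    installed_apps={"example.recipes", "example.mail"},
    app_id="example.recipes",
    deep_link="example-recipes://dish/42",
    web_url="https://example.com/dish/42",
)
print(dest)  # example-recipes://dish/42
```

The point of the sketch is simply that the content owner supplies both destinations and the platform picks one per device, which is what makes the experience feel seamless to the searcher.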
If your website isn’t already mobile-friendly (and it should be, just as a best practice), Google’s giving you a big incentive to make it so by pushing mobile-friendly sites up in mobile search results.
And if you’re wondering about the value of apps for your products, consider how quickly they are moving from handy appendages to the mobile experience to becoming central to that experience. If your data makes sense on a mobile device (and it doesn’t always), it’s probably time to stop thinking and start coding!
Buying Guides That Do Stuff
It’s been very interesting to watch the transition of buying guides from print to online. Print buying guides were a pretty good business, although in fact few of them were very good products. That’s because most buying guides were what I call shallow information products: they would typically list a product and the names and addresses of companies that (hopefully) made or sold the product. After that, users were on their own.

This stripped-down format was in part practical, because even this limited information was hard to obtain. It was in part by design, because it encouraged companies to buy advertising next to their listings to provide additional information.

There’s no room on the web for shallow information products anymore. Search engines have gotten good enough that you can find at least a few manufacturers or sellers of just about anything with very little effort. And company websites now typically contain a wealth of product information, in part because it is so cheap and efficient to publish it there. Overall, this leaves little room for buying guides to add value, at least in their traditional format.
So is the buying guide model dead? If you are talking about the traditional shallow information model, the answer is yes (something that the big yellow page publishers, incredibly, have still not figured out). But what is emerging in its place are a number of exciting new products that mix and match such features as:
- User ratings and reviews (and some now validate users and even confirm that they have purchased the product they are reviewing)
- Links to third-party professional reviews
- Downloadable CAD drawings
- Photo portfolios showing product applications and/or the product in use
- Strong parametric search
- Side-by-side comparison of selected products
- Guided search, where users answer a questionnaire instead of running traditional searches
- Shared online areas where users can post products for review by co-workers
- Ability to request product samples from the manufacturer
- Integrated ordering capabilities
- Warehousing and shipping of product on behalf of manufacturers
- Product specification data, warranty data, installation instructions, manuals
- Real-time inventory information
- Real-time pricing information
In short, the list is long. And what results is a true destination purchasing research site and, increasingly, a central marketplace. Find exactly what you need and order it. That’s been the holy grail of buying guides for decades, and it’s finally becoming a reality.
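Several of the features above, parametric search in particular, are easy to picture in code. Here is a minimal sketch that filters a catalog by attribute ranges; the products, field names, and values are invented for illustration:

```python
# A toy parametric search: keep only the products whose attributes fall
# within every requested (min, max) range. Catalog data is hypothetical.

PRODUCTS = [
    {"name": "Pump A", "flow_gpm": 50, "voltage": 120, "price": 410},
    {"name": "Pump B", "flow_gpm": 80, "voltage": 240, "price": 520},
    {"name": "Pump C", "flow_gpm": 65, "voltage": 120, "price": 470},
]

def parametric_search(products, **constraints):
    """Each keyword argument names an attribute and gives a (min, max)
    range; a product matches only if it satisfies every range."""
    def matches(product):
        return all(lo <= product[field] <= hi
                   for field, (lo, hi) in constraints.items())
    return [p for p in products if matches(p)]

# Find 120V pumps that move at least 60 gallons per minute.
hits = parametric_search(PRODUCTS, flow_gpm=(60, 100), voltage=(120, 120))
print([p["name"] for p in hits])  # ['Pump C']
```

Side-by-side comparison is then just a matter of displaying the matching records next to one another, which is why these features tend to ship together.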
The other piece of the puzzle is advertising. Because publishers are now building these true destination sites, they can also develop substantial traffic simply because they are offering utility and value. And advertisers respect these highly qualified and often quite large audiences because they are truly “in the market,” and what advertiser doesn’t want visibility when the buying decision is being made? It is, as we like to say, “data that does stuff.”
So while the approach is different, what we see with buying guides is exactly the same as what we see with other forms of data, and exemplifies infocommerce: creating a high value proposition with better, deeper data and tools to act on it.
Monetizing Your Unfair Advantage
In the news today was the announcement that BusinessWire, a press release distribution company owned by Warren Buffett’s Berkshire Hathaway, had decided to stop offering direct access to its press releases to high frequency traders. This follows on the heels of a decision by Thomson Reuters not to sell advance access to market-moving economic data that it publishes. I find myself concerned about these decisions. That’s in part because what these two companies were doing was actually quite different. And as you dive into the details, you start to see issues that a broader range of data publishers may ultimately have to confront.
The Thomson Reuters situation involves two indexes: Consumer Confidence and the ISM Manufacturing Index. These are both major indexes that can and do influence the stock market broadly. In both cases, Thomson Reuters had licensed the rights to publish them. Nobody argues that Thomson Reuters should have the right to monetize these indexes. But it’s one particular aspect of this monetization that raised concerns. Thomson Reuters openly offered to sell access to these indexes either a few seconds or a few minutes before they were released to the public. That’s more than enough time for computerized trading systems to analyze the news and place buy or sell orders accordingly. And by the way, it’s all legal, and Thomson Reuters wasn’t hiding any of these arrangements. But is it fair?
The BusinessWire case is even more innocuous. BusinessWire is in the business of pushing out press releases far and wide. To that end it offers direct electronic access to anyone who might benefit from it. Some smart traders figured out how to take that innocent feed, process it, and make buy and sell decisions on it very quickly. BusinessWire was just going about its business. Third parties figured out how to profit from its activities, with no help or encouragement from BusinessWire. And while press releases don’t sound that interesting, keep in mind it’s the way many public companies first announce big events such as acquisitions.
I’m not a lawyer, so there may be nuances to this I am missing, but I understand that public policy recognizes the value of a level playing field when it comes to the stock markets, in part to build confidence. And as an individual investor, providing advance peeks to savvy stock traders doesn’t feel right to me. But as an information professional, my view is why not? The entire B2B information industry largely exists to provide unfair advantage. In fact, I know data publishers who have seriously considered variants of “Your Unfair Advantage” as corporate tag lines.
Given the murkiness of the legal issues, I think it’s fair to conclude that both companies stopped these activities primarily for reputational reasons. And that’s important to think about. These two events are very different, but you’d never know that from a quick scan of the headlines they generate. Our products are complex, sophisticated and nuanced. Typically, they are used by a range of users in a range of ways. You can’t – and shouldn’t – police what users do with your data. But you should put some thought into how you position your data and its uses, especially if there is potential to use your data for stock trading. It’s too easy to get painted as the bad guy even if you’ve done nothing wrong.
The bottom line is that as data becomes more powerful and important, we’re all going to receive more scrutiny. And the complexity of our products works against us in the media. That’s why sensitivity to how we present our data products is going to become increasingly important. And if yours is one of the companies considering a tag line that includes the words “unfair advantage,” may I politely suggest a re-think?
Source Data’s True Worth
In my discussion of the Internet of Things (IoT) a few weeks back, I mentioned that there was a big push underway to put sensors in farm fields to collect and monitor soil conditions as a way to optimize fertilizer application, planting dates, etc. But who would be the owner of this information, which everyone in agriculture believes to be exceedingly valuable? Apparently, this is far from decided. An association of farmers, The Farm Bureau, recently testified in Congress that it believes that farmers should have control over this data, and indeed should be paid for providing access to it.
We’ve heard this notion advanced in many different contexts over the past few years. Many consumer advocates maintain that consumers should be compensated by third parties who are accessing their data and generating revenue from it.
Generally, this push for compensation centers on the notion of fairness, but others have suggested it could have motivational value as well: if you offer to pay consumers to voluntarily supply data, more consumers will supply data.
The notion of paying for data certainly makes logical sense, but does it work in practice? Usually not.
The first problem with paying to collect data on any scale is that it is expensive. More often than not, it’s just not an economical approach for the data publisher. And while the aggregate cost is large, the amount an individual typically receives is somewhere between small and tiny, which all but eliminates its motivational value.
The other issue (and I’ve seen this first-hand) is the perception of value. Offer someone $1 for their data, and they immediately assume it is worth $10. True, the data is valuable, but only once aggregated. Individual data points in fact aren’t worth very much at all. But try arguing this nuance to the marketplace. It’s hard.
I still get postal mail surveys with the famous “guilt dollar” enclosed. This is a form of paying for data, but it trades, as the name suggests, on guilt, which makes for undependable results. Further, these payments are made to assure an adequate aggregate response: whether or not you in particular respond to the survey really doesn’t matter. It’s a different situation for, say, a data publisher trying to collect retail store sales data. Not having data from Wal-Mart really does matter.
Outside of the research world, I just haven’t seen many successful examples of data publishers paying to collect primary source data. When a data publisher does feel a need to provide an incentive, it’s almost always in the form of some limited access to the aggregated data. That makes sense because that’s when the data becomes most valuable: once aggregated. And supplying users with a taste of your valuable data often results in them purchasing more of it from you.
Edmunds.com Yields Multi-Million Dollar Revenue Opportunity from its Free API
APIs (Application Programming Interfaces) are all the rage these days. An API can be described as an online back door into your database: it lets programmers seamlessly integrate your data into their own products, particularly, but not exclusively, mobile apps. Increasingly, customers of subscription data products are asking for “API access” so they can feed commercial datasets directly into their own internal software. So you’ve got application developers seeking API access to your data in order to build it into software products for resale, and you’ve also got companies that want API access to power their own internal systems. Provided you charge a price that reflects the convenience and power of API access, as well as the expanded audiences your data will reach, APIs are nothing but great news for data publishers.
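For readers who haven’t worked with one, “API access” usually amounts to composing an authenticated request and parsing a structured (typically JSON) response. The sketch below illustrates the shape of that exchange; the endpoint, key, and fields are entirely hypothetical, and a canned JSON body stands in for a live HTTP response so the example is self-contained:

```python
import json
from urllib.parse import urlencode

# Hypothetical endpoint; a real data API documents its own URL and key scheme.
BASE_URL = "https://api.example-data.com/v1/companies"

def build_request_url(api_key, **params):
    """Compose an authenticated query URL, the usual shape of API access.
    Parameters are sorted so the URL is deterministic."""
    query = dict(params, api_key=api_key)
    return f"{BASE_URL}?{urlencode(sorted(query.items()))}"

# In practice this JSON would arrive over HTTP in response to the request.
raw = '{"results": [{"name": "Acme Corp", "employees": 1200}]}'
companies = json.loads(raw)["results"]

print(build_request_url("SECRET", state="NJ"))
# https://api.example-data.com/v1/companies?api_key=SECRET&state=NJ
print(companies[0]["name"])  # Acme Corp
```

The appeal to the customer is exactly this mechanical simplicity: their software asks your database a question and gets structured data back, with no scraping or manual export in between.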
But can you also make money giving away API access to your data for free? A growing number of companies think so. We recently spoke with Ismail Elshareef, Senior Director, Open Platform Initiatives for Edmunds.com. Edmunds makes its data available via API for free, and can directly attribute millions of dollars in recurring revenue to this initiative.
According to Ismail, Edmunds.com launched its API about two years ago, primarily as a way to get more exposure for the Edmunds.com brand. The second objective was one we often hear from those with open APIs: a desire to encourage innovation. As Ismail puts it, “We can’t hire all the smart people out there.” The goal is to put Edmunds data in the hands of a broad array of talented developers and see what they can do with it – whether it’s new applications software to leverage the data, or even entirely new and unintuitive uses for the data itself.
The additional brand exposure for Edmunds worked exactly as planned, according to Ismail, who said it has become “a huge differentiator.” Edmunds displaced a number of competitors who were charging money for equivalent data, and with the “powered by Edmunds” attribution on so many different products, Edmunds saw immediate brand benefit, not the least of which was more advertisers specifically acknowledging the reach of Edmunds in sales meetings.
Overall, Edmunds has found a number of partner deals came together more quickly as well, “because using the API, they can get comfortable with our data first.” A great example of this is a major deal Edmunds put together with eBay. Ismail emphasized the growing popularity of this “try before you buy” approach to data content, and that publishers need to respond to this growing preference among data buyers.
Ismail is careful to note that Edmunds wasn’t seeking to actively disrupt paid data providers in its vertical; the free data it offers simply reflects lower barriers to entry and, to an extent, the increasing commoditization of much of the data it offers for free.
And while additional market exposure is clearly beneficial, as Edmunds saw it, the big upside opportunity was to see what dozens or even hundreds of talented, motivated independent developers would do with the data. And that’s exactly where Edmunds found gold. Acknowledging that of the apps developed around its data, “only 1 in a 100 is really interesting,” Ismail noted that one really interesting application emerged after only seven months of offering the free API. An independent software provider in the Northeast built a cutting-edge application for automobile dealerships. But while they had a great solution, they didn’t have a sales force to market it to dealers. Edmunds contacted the CEO of the software company, struck a partnership deal, and already the product generates millions of dollars in annual revenues.
One of the keys to Edmunds’ success is that while its data is free, it isn’t free for the taking. Every developer who wants to use Edmunds data has to adhere to a terms of service agreement, which specifies the attribution Edmunds is to receive and reserves Edmunds’ right to cut off data delivery to anyone who acts irresponsibly, though Ismail notes that most developers are very responsible and “know what’s cool and what’s not.” Also important to the Edmunds model is that it initially provides developers with only enough free data for testing purposes. Before raising a developer’s API quota, Edmunds reviews each application to make sure attribution and back-links are correct, that the application is using the data correctly (no mislabeled data elements or incorrect calculations), and that the application overall is a quality product that Edmunds is comfortable being associated with.
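The staged-access idea, a small testing allowance that is raised only after review, is a standard quota pattern. Here is a minimal sketch of it; the class, limits, and key names are invented for illustration, not Edmunds’ actual system:

```python
# A toy per-key quota manager illustrating staged API access: every new
# key starts with a small testing allowance, raised only after the
# publisher reviews the application. All numbers here are hypothetical.

class QuotaManager:
    TESTING_LIMIT = 100  # calls per day while an application is under review

    def __init__(self):
        self.limits = {}  # api_key -> daily call limit
        self.used = {}    # api_key -> calls made today

    def register(self, api_key):
        """New developers start at the small testing quota."""
        self.limits[api_key] = self.TESTING_LIMIT
        self.used[api_key] = 0

    def approve(self, api_key, new_limit):
        """Raise the quota once attribution and data use check out."""
        self.limits[api_key] = new_limit

    def allow_call(self, api_key):
        """Permit a call only while the key is under its limit."""
        if self.used[api_key] >= self.limits[api_key]:
            return False  # quota exhausted; developer must request a raise
        self.used[api_key] += 1
        return True

qm = QuotaManager()
qm.register("dev-123")
print(qm.allow_call("dev-123"))  # True
qm.approve("dev-123", 10_000)    # quota raised after the app passes review
```

The design choice worth noting is that the review gate sits on the quota raise, not on initial access, so developers can start experimenting immediately while the publisher keeps control of any meaningful data volume.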
As guidance to other data publishers interested in pursuing an open API, Ismail feels it is essential to use a service that provides an API management layer. After extensive research, Edmunds went with Mashery, which stood out to Ismail in particular because “Mashery already works with major publishers like the New York Times and USA Today, so they know the issues that are specific to publishers. They also have a huge developer outreach program, now over 100,000+ developers, which made it easy for us to get the word out in the developer community.”
Internally, Ismail notes that the Edmunds API was initially a tough sell. Not everyone believed in the concept, so executive support was a huge factor. It was only because the company’s chairman was such a believer that the API became a reality. As Ismail notes, “ultimately a free API is a leap of faith.” Ismail also noted the difficulties in getting the concept cleared by the company’s lawyers, who “simply weren’t initially comfortable with exposing our data to everyone.” Executive sponsorship was key to clearing these legal impediments as well.
Launching the API involved “a lot of small steps in the beginning.” Initially, Ismail worked by himself on the API program. Now, his team consists of four engineers and a designer. And just yesterday, the Edmunds API was certified for “Best Developer Experience” by Mashery – more evidence of how far Edmunds has come so quickly.