

Monetizing Your Unfair Advantage

In the news today was the announcement that BusinessWire, a press release distribution company owned by Warren Buffett’s Berkshire Hathaway, had decided to stop offering direct access to its press releases to high frequency traders. This follows on the heels of a decision by Thomson Reuters not to sell advance access to market-moving economic data that it publishes. I find myself concerned about these decisions. That’s in part because what these two companies were doing was actually quite different. And as you dive into the details, you start to see issues that a broader range of data publishers may ultimately have to confront.

The Thomson Reuters situation involves two indexes: Consumer Confidence and the ISM Manufacturing Index. These are both major indexes that can and do influence the stock market broadly. In both cases, Thomson Reuters had licensed the rights to publish them. Nobody disputes Thomson Reuters' right to monetize these indexes. But it's one particular aspect of this monetization that raised concerns. Thomson Reuters openly offered to sell access to these indexes either a few seconds or a few minutes before they were released to the public. That's more than enough time for computerized trading systems to analyze the news and place buy or sell orders accordingly. And by the way, it's all legal, and Thomson Reuters wasn't hiding any of these arrangements. But is it fair?

The BusinessWire case is even more innocuous. BusinessWire is in the business of pushing out press releases far and wide. To that end it offers direct electronic access to anyone who might benefit from it. Some smart traders figured out how to take that innocent feed, process it, and make buy and sell decisions on it very quickly. BusinessWire was just going about its business. Third parties figured out how to profit from its feed, with no help or encouragement from BusinessWire. And while press releases don't sound that interesting, keep in mind it's the way many public companies first announce big events such as acquisitions.

I’m not a lawyer, so there may be nuances to this I am missing, but I understand that public policy recognizes the value of a level playing field when it comes to the stock markets, in part to build confidence. And as an individual investor, providing advance peeks to savvy stock traders doesn’t feel right to me. But as an information professional, my view is why not? The entire B2B information industry largely exists to provide unfair advantage. In fact, I know data publishers who have seriously considered variants of “Your Unfair Advantage” as corporate tag lines.

Given the murkiness of the legal issues, I think it’s fair to conclude that both companies stopped these activities primarily for reputational reasons. And that’s important to think about. These two events are very different, but you’d never know that from a quick scan of the headlines they generate. Our products are complex, sophisticated and nuanced. Typically, they are used by a range of users in a range of ways. You can’t – and shouldn’t – police what users do with your data. But you should put some thought into how you position your data and its uses, especially if there is potential to use your data for stock trading. It’s too easy to get painted as the bad guy even if you’ve done nothing wrong.

The bottom line is that as data becomes more powerful and important, we’re all going to receive more scrutiny. And the complexity of our products works against us in the media. That’s why sensitivity to how we present our data products is going to become increasingly important. And if yours is one of the companies considering a tag line that includes the words “unfair advantage,” may I politely suggest a re-think?


Source Data’s True Worth

In my discussion of the Internet of Things (IoT) a few weeks back, I mentioned that there was a big push underway to put sensors in farm fields to collect and monitor soil conditions as a way to optimize fertilizer application, planting dates, etc. But who would be the owner of this information, which everyone in agriculture believes to be exceedingly valuable? Apparently, this is far from decided. An association of farmers, The Farm Bureau, recently testified in Congress that it believes that farmers should have control over this data, and indeed should be paid for providing access to it.

We’ve heard this notion advanced in many different contexts over the past few years. Many consumer advocates maintain that consumers should be compensated by third parties who are accessing their data and generating revenue from it.

Generally, this push for compensation centers on the notion of fairness, but others have suggested it could have motivational value as well: if you offer to pay consumers to voluntarily supply data, more consumers will supply data.

The notion of paying for data certainly makes logical sense, but does it work in practice? Usually not.

The first problem with paying to collect data on any scale is that it is expensive. More often than not, it's just not an economical approach for the data publisher. And while the aggregate cost is large, the amount an individual typically receives is somewhere between small and tiny, which largely removes its motivational value.

The other issue (and I’ve seen this first-hand) is the perception of value. Offer someone $1 for their data, and they immediately assume it is worth $10. True, the data is valuable, but only once aggregated. Individual data points in fact aren’t worth very much at all. But try arguing this nuance to the marketplace. It’s hard.

I still get postal mail surveys with the famous "guilt dollar" enclosed. This is a form of paying for data, but it is driven, as noted, by guilt, which makes for undependable results. Further, these payments are made to assure an adequate aggregate response: whether or not you in particular respond to the survey really doesn't matter. It's a different situation for, say, a data publisher trying to collect retail store sales data. Not having data from Wal-Mart really does matter.

Outside of the research world, I just haven’t seen many successful examples of data publishers paying to collect primary source data. When a data publisher does feel a need to provide an incentive, it’s almost always in the form of some limited access to the aggregated data. That makes sense because that’s when the data becomes most valuable: once aggregated. And supplying users with a taste of your valuable data often results in them purchasing more of it from you.


Edmunds.com Yields Multi-Million Dollar Revenue Opportunity from its Free API

APIs, or Application Programming Interfaces, are all the rage these days. APIs, which can be described as online back doors into your database, allow programmers to seamlessly integrate your data into their products, particularly, but not exclusively, mobile apps. Increasingly, customers are asking companies selling subscription data products for "API access" to their data, because they want to integrate commercial datasets into their own internal software applications. So you've got application developers looking for API access to your data in order to build it into software products for resale, and you've also got companies that want API access to your data to power their own internal software. If you are charging a price that reflects the convenience and power of API access, as well as the expanded audiences your data will reach, APIs are nothing but great news for data publishers. But can you also make money giving away API access to your data for free? A growing number of companies think so. We recently spoke with Ismail Elshareef, Senior Director, Open Platform Initiatives for Edmunds.com. Edmunds makes its data available via API for free, and can directly attribute millions of dollars in recurring revenue to this initiative.
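
To make the "back door into your database" idea concrete, here is a minimal sketch of what API access typically looks like from the developer's side. The endpoint, parameters, and response shape below are invented for illustration; they are not Edmunds' actual API.

```python
import json
from urllib.parse import urlencode

# Hypothetical data publisher endpoint (illustrative only, not a real service).
BASE_URL = "https://api.example-datapublisher.com/v1/vehicles"

def build_request_url(make, model, api_key):
    """Construct the query URL a developer's application would call
    to pull a single record from the publisher's database."""
    return BASE_URL + "?" + urlencode({"make": make, "model": model, "api_key": api_key})

# A typical JSON payload such an endpoint might return, which the
# developer's app can then render however it likes:
sample_response = json.loads('{"make": "Honda", "model": "Civic", "avg_price": 23500}')

url = build_request_url("Honda", "Civic", "DEMO_KEY")
print(url)
print(sample_response["avg_price"])
```

The key point for publishers: the developer never sees your database directly, only structured responses to well-defined requests, which is what makes the access both convenient for them and controllable for you.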

According to Ismail, Edmunds.com launched its API about two years ago, primarily as a way to get more exposure for the Edmunds.com brand. The second objective was one we often hear from those with open APIs: a desire to encourage innovation. As Ismail puts it, “We can’t hire all the smart people out there.” The goal is to put Edmunds data in the hands of a broad array of talented developers and see what they can do with it – whether it’s new applications software to leverage the data, or even entirely new and unintuitive uses for the data itself.

The additional brand exposure for Edmunds worked exactly as planned, according to Ismail, who said it has become “a huge differentiator.” Edmunds displaced a number of competitors who were charging money for equivalent data, and with the “powered by Edmunds” attribution on so many different products, Edmunds saw immediate brand benefit, not the least of which was more advertisers specifically acknowledging the reach of Edmunds in sales meetings.

Overall, Edmunds has found a number of partner deals came together more quickly as well, “because using the API, they can get comfortable with our data first.” A great example of this is a major deal Edmunds put together with eBay. Ismail emphasized the growing popularity of this “try before you buy” approach to data content, and that publishers need to respond to this growing preference among data buyers.

Ismail is careful to note that Edmunds wasn't seeking to actively disrupt paid data providers in its vertical; the free data it offers simply reflects lower barriers to entry, and to an extent, the increasing commoditization of much of the data it offers for free.

And while additional market exposure is clearly beneficial, as Edmunds saw it, the big upside opportunity was to see what dozens or even hundreds of talented, motivated independent developers would do with the data. And that's exactly where Edmunds found gold. Acknowledging that of the apps developed around its data, "only 1 in 100 is really interesting," Ismail noted that one really interesting application emerged after only seven months of offering the free API. An independent software provider in the Northeast built a cutting-edge application for automobile dealerships. But while they had a great solution, they didn't have a sales force to market it to dealers. Edmunds contacted the CEO of the software company, struck a partnership deal, and already the product generates millions of dollars in annual revenues.

One of the keys to Edmunds' success is that while its data is free, it isn't free for the taking. Every developer who wants to use Edmunds data has to adhere to a terms of service agreement, which specifies the attribution that Edmunds is to receive, as well as reserving the right for Edmunds to cut off data delivery to anyone who acts irresponsibly, though Ismail notes that most developers are very responsible and "know what's cool and what's not." Also important to the Edmunds model is that it initially only provides enough free data to developers for testing purposes. Before raising a developer's API quota, Edmunds looks at each application to make sure attribution and back-links are correct, that the application overall is using the data correctly (no mislabeled data elements or incorrect calculations), and that the application is a quality product that Edmunds is comfortable being associated with.
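
The two-tier access model described above can be sketched in a few lines. This is a hypothetical simplification, assuming a small default quota for testing and a larger one unlocked after review; the quota numbers and class names are invented, not Edmunds' actual implementation.

```python
# Invented quota figures for illustration only.
DEFAULT_DAILY_QUOTA = 100      # enough for development and testing
APPROVED_DAILY_QUOTA = 50000   # granted only after the application is reviewed

class ApiKey:
    """One developer's key: starts restricted, gets more after review."""

    def __init__(self, developer):
        self.developer = developer
        self.approved = False      # flipped once attribution and usage check out
        self.calls_today = 0

    @property
    def quota(self):
        return APPROVED_DAILY_QUOTA if self.approved else DEFAULT_DAILY_QUOTA

    def allow_request(self):
        """Serve the request only if the key is within its daily quota."""
        if self.calls_today >= self.quota:
            return False
        self.calls_today += 1
        return True

key = ApiKey("example-dev")
print(key.quota)       # small testing quota until the app passes review
key.approved = True    # attribution, back-links, and data usage verified
print(key.quota)       # full production quota
```

The design point is that "free" and "controlled" coexist: the gate isn't payment, it's a review step that protects the brand before an application reaches real scale.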

As guidance to other data publishers interested in pursuing an open API, Ismail feels it is essential to use a service that provides an API management layer. After extensive research, Edmunds went with Mashery, which stood out to Ismail in particular because “Mashery already works with major publishers like the New York Times and USA Today, so they know the issues that are specific to publishers. They also have a huge developer outreach program, now over 100,000+ developers, which made it easy for us to get the word out in the developer community.”

Internally, Ismail notes that the Edmunds API was initially a tough sell. Not everyone believed in the concept, so executive support was a huge factor. It was only because the company's chairman was such a believer that the API became a reality. As Ismail notes, "ultimately a free API is a leap of faith." Ismail also noted the difficulties in getting the concept cleared by the company's lawyers, who "simply weren't initially comfortable with exposing our data to everyone." Executive sponsorship was key to ultimately clearing these legal impediments as well.

Launching the API involved "a lot of small steps in the beginning." Initially, Ismail worked by himself on the API program. Now, his team consists of four engineers and a designer. And just yesterday, the Edmunds API was certified for "Best Developer Experience" by Mashery – more evidence of how far Edmunds has come so quickly.


Don’t Be Embarrassed

“If you’re not at least a little embarrassed by something you just launched, you probably waited too long to start it.” So says Alexis Ohanian, founder of Reddit and a number of other high profile web media products. This statement, provocative as it is, actually is little more than a smart synthesis of the current state of play in the world of online product development. You no doubt hear variants of this theme regularly, sometimes expressed as “minimum viable product,” “rapid iteration,” and even “fast fail.” They all embody the philosophy that it’s more important to launch a new product quickly than launch a really good new product. I credit Google for raising this practice to high art by teaching users that the word “beta” appended to any product name excused the product from delivering much value, or even working properly, for an often extended period of time.

I certainly agree that there is an imperative for speed in the world of online content. We’re surrounded by hordes of competitive start-ups, many of them explicitly attempting to disrupt market incumbents. But before we decide to emulate these companies, it’s important to note their typically distinctive business models.

First, most of these not-ready-for-primetime products are thrown onto the market for free. The companies behind them are betting that if they can build an audience and usage, everything will work out fine in the end, even if they don’t generate any revenue in the short-run. Further, most of these products come from start-ups, so there’s not much in the way of reputation risk. Thirdly, these companies are staffed and funded to iterate rapidly – some actually push out updates and fixes on an almost daily basis. If you’re going to play in this world, you can’t just talk a good line about evolving the product – you have to deliver, and often at a blistering pace.

That’s why I would argue that for most established data publishers with subscription businesses, applying this approach of pushing out half-baked new releases can be very dangerous. When working with an existing customer base, be clear that your subscribers don’t value speed for its own sake – they want clear product improvement. And frankly, this shouldn’t be that hard. You’ve got the advantage of a successful existing product you want to evolve, so you’re not starting from scratch. You’ve got loyal customers who will test with you and offer input. You have a deep understanding of the market you serve, so there’s no need to guess about what might be useful or compelling. Perhaps most importantly, your subscribers often need your data and your tools to conduct business. That’s a world away from a nifty new pizza delivery app. You have an implicit obligation to move prudently and get it “as right as possible” right out of the gate.

So yes, constant evolution of your product is essential. Speed is important. It’s also important to understand that nothing will be perfect the first time around, and that’s okay if you fix problems as quickly as they are identified.

What about new products from established publishers? First off, if you plan to charge for the product from the start, this comes with a much higher level of subscriber expectations. If it’s free (at least initially), expectations are lower, but be aware that those who go away unimpressed probably won’t come back. Finally, being embarrassed may well be an issue for an established publisher known for quality data and solid applications.

So before speeding up, slow down and think it through. The people advocating speed often are in a very different place from you.


TripAdvisor and the Ignorance of Crowds?

We all know the many benefits of user-generated data, including ratings and recommendations. Underlying all of it is a general belief in "truth in numbers": if enough people say it is so, then it must be so. But what happens when the crowd returns a result that is obviously and intuitively wrong? A small but perfect example of this occurred recently when TripAdvisor published its list of "America's Top Cities for Pizza."

I’ll spare you the suspense: the top-rated city for pizza in the United States is (drumroll, please) ... San Diego, California. Are you already checking flights to San Diego or are you shaking your head in disbelief? I am guessing the latter, and that’s exactly my point.

Something went wrong in this tally by TripAdvisor. Most likely the underlying methodology wasn’t sound. TripAdvisor merely rolled up ratings for individual pizzerias and restaurants around the country and ranked them by city. There’s a little problem with that: context. What TripAdvisor converted to a national ranking were individual ratings made in the context of specific geographies. That’s why you see comments for top-rated San Diego along the lines of “it’s just as good as New York.” These people clearly didn’t see themselves voting in a national poll.
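
The methodological flaw is easy to demonstrate with a toy example. The ratings below are invented numbers, but they capture the problem: reviewers score against local expectations, so naively averaging and ranking across cities rewards the city with the most generous raters, not the best pizza.

```python
# Invented ratings, scored against each city's own local baseline.
ratings = {
    "San Diego": [5, 5, 4, 5],   # rated against modest local expectations
    "New York":  [4, 3, 5, 3],   # rated against a famously demanding baseline
}

def naive_ranking(ratings_by_city):
    """Average each city's ratings and sort descending,
    ignoring the local context each rating was made in."""
    averages = {city: sum(r) / len(r) for city, r in ratings_by_city.items()}
    return sorted(averages, key=averages.get, reverse=True)

print(naive_ranking(ratings))  # San Diego comes out on top
```

A sounder methodology would need some way to normalize for each city's rating baseline before comparing across geographies; the rollup alone can't supply the context the individual ratings never contained.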

Also odd is that TripAdvisor decided to go ahead and publicize the results of this analysis. Clearly, TripAdvisor saw the potential for buzz with its surprising findings. Yet the flip side of this is the creation of a credibility issue: if TripAdvisor thinks the nation’s best pizza is in San Diego, how can I trust the rest of its information?

A few lessons for the rest of us can be found here. First, Big Data is only valuable if properly analyzed. Second, evaluating data supplied by a large and disparate crowd can be tricky. Third, always balance the potential for buzz in the marketplace against reputation risk. Say enough dumb things online and you will hurt your credibility.

TripAdvisor will no doubt say in its defense that it is simply reporting what its users are saying. But if your user base is saying something dumb, it's probably not in your interest to amplify it. And in fact, the TripAdvisor user base isn't dumb; its users' individually smart comments were aggregated in such a way that they yielded a dumb result. But most people aren't going to dig that deep. They'll simply say that's one less reason to rely on TripAdvisor.
