
Data Marketplaces: Almost There

There has been much excitement about the recent launch of the Salesforce Data Studio, a new data-sharing platform within the Salesforce Marketing Cloud.

The idea of the Data Studio is simple: marketers can, on a fully automated basis, identify, order and integrate datasets that others are offering for sale. In its early implementation, the Data Studio seems mostly like a cool way for marketers to buy email lists. But the vision is much bigger and more interesting: to allow marketers to augment and overlay existing email lists with more data so that they become smarter about their lists, target their efforts more effectively, and get better results.

Data Studio at time of launch is heavy on audience data, mostly from larger publishers, but there’s no reason any data publisher couldn’t participate as well, especially if the Data Studio is to exploit its full potential.

Interestingly, Salesforce is not the only big player that has an interest in data marketplaces. The Amazon Web Services Marketplace sells software through its marketplace – again, a totally automated buying experience – but it also offers a selection of public domain datasets for free. It’s a small jump then for Amazon to start selling databases on behalf of others.

As you can see, neither of these two marketplaces is quite ready for prime time as far as becoming a meaningful sales channel for data publishers, but they’re tantalizingly close. Keep an eye on these marketplaces: they could become very important to data publishers very quickly.

Inferring Intent

Today’s Gartner blogpost points to some interesting limitations and opportunities surrounding intent data. Let’s start at the beginning by defining what it is.

Simply put, intent data is an indication that an individual or organization is actively interested in purchasing a specific product or service. You may already be familiar with sales triggers. One classic sales trigger is so-called “new move” data: it’s valuable to know when a company moves offices because the company is then highly likely to make lots of new purchases, such as office furniture and the like. Think of intent data as a more sophisticated cousin of the sales trigger.

Media companies are in a great position to generate sales intent data, because much intent data is generated by watching what a person reads and does online. If a reader looks at five articles on 3-D printers in a short period of time, those actions can be viewed as indicating an intention to purchase a 3-D printer. Intent data can get a lot more sophisticated than that, but this gives you the general idea.
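The reading-behavior signal described above can be sketched in a few lines of code. This is a minimal illustration, not any vendor's actual scoring model; the threshold, window, and event format are all assumptions made for the example.

```python
from datetime import datetime, timedelta

def detect_intent(events, topic, threshold=5, window=timedelta(days=7)):
    """Flag purchase intent when a reader views `threshold` or more
    articles on `topic` within a sliding time `window`.

    `events` is a list of (timestamp, topic) page-view records.
    """
    # Keep only views of the topic of interest, in time order
    times = sorted(t for t, subject in events if subject == topic)
    # Slide the window across the views; enough views close together
    # is treated as an intent signal
    for start in times:
        hits = sum(1 for t in times if start <= t <= start + window)
        if hits >= threshold:
            return True
    return False
```

Real intent models weight many more signals (searches, downloads, whitepaper requests), but the core idea is the same: repeated, topically focused activity in a short period.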

You might think that if a sales organization has intent data available to it, that’s probably all the data it needs. After all, intent data is like mind-reading: it’s identifying people who are likely to be purchasing a product before they purchase it. What could be better?

Well, as the Gartner blogpost points out, many companies are filtering sales leads based on intent data with something called “fit analysis.” This is an automated attempt to evaluate whether a company is a likely buyer. If your company typically sells to larger, multi-office organizations, a fit analysis will filter out smaller, single-location companies because they represent lower-grade prospects.
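A fit analysis like the one just described boils down to filtering leads against an ideal customer profile. Here is a minimal sketch; the field names and thresholds are assumptions chosen for illustration.

```python
def fit_analysis(leads, min_employees=250):
    """Keep only leads matching the seller's ideal customer profile:
    in this example, larger, multi-office organizations. Smaller,
    single-location companies are filtered out as lower-grade prospects.

    `leads` is a list of dicts with "employees" and "offices" fields.
    """
    return [
        lead for lead in leads
        if lead["employees"] >= min_employees and lead["offices"] > 1
    ]
```

In practice the profile would be scored rather than binary, but even this crude filter shows why firmographic data (size, locations, industry) is the raw material of fit analytics.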

Further, the Gartner blogpost notes that companies selling highly specialized products or brand-new technologies often can’t get enough intent-based sales leads or they get leads that are weak because the intent indicators aren’t sufficiently granular. Finally, some sales departments don’t like intent-based sales leads because they identify prospects too early in the sales process. As you can see, sales leads based on intention are still fairly rudimentary, and there is lots of opportunity to refine them.

But what’s most worthy of note is that Gartner believes that most intent-based sales lead data is focused on the technology industry. But there is no reason it should be. Technology sellers just happen to be free-spending early adopters. I have long preached the virtues of what I call “inferential data,” a term that includes both intent and sales trigger data. I firmly believe that many data publishers have opportunities in this area, and if they happen to be part of larger media companies, the opportunities are even greater. In fact, data publishers are natural providers of fit analytics as well. If you look at your data creatively and read between the lines, you can make some very lucrative connections.


Adwords Now Promotes Tele-Measurability

Google has just launched a new Adwords feature that offers lots of useful applications for online marketers. It’s so slick, I am a little surprised it hasn’t received more press. In a nutshell, Google Adwords can now track and report not only how many people click on an ad, but how many people call based on an ad.

Yes, you’ve doubtless heard claims like this before, but this one seems pretty solid and pretty powerful as well. Here’s how it works.

Google Adwords dynamically inserts a tracking telephone number (supplied by Google) into all the places you specify on your landing pages, or even across your full website. Put another way, anyone who gets to your website via Adwords will see a Google-supplied phone number, and everyone who gets to your website any other way will see your regular phone number. You can format the Google numbers to match your website look and feel, and the phone numbers will be dynamically supplied for up to 90 days.
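The number-swapping logic can be illustrated server-side in a few lines. This is a simplified sketch of the idea, not Google's actual implementation (which works via a JavaScript snippet); the phone numbers are placeholders, though `gclid` is the real query parameter Google appends to ad click-throughs.

```python
def phone_for_visitor(query_params,
                      tracking_number="855-555-0123",   # placeholder Google-supplied number
                      regular_number="800-555-0199"):   # placeholder house number
    """Show the tracking number to visitors arriving via an ad click
    (identified here by the `gclid` query parameter appended to
    AdWords click-throughs), and the regular number to everyone else.
    Calls to the tracking number can then be attributed to the ad.
    """
    return tracking_number if "gclid" in query_params else regular_number
```

Because only ad-referred visitors ever see the tracking number, every call to it can be credited to the campaign.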

While Google doesn’t mention B2B applications specifically, you can immediately see the benefits. Few B2B buyers click through from an ad and immediately place an order. More likely than not, there’s a phone call involved, and even then, the phone call might not happen immediately. With this new feature, it’s possible to measure and track this unique aspect of B2B buying which is well understood, but has been devilishly difficult to measure.


Edmunds Yields Multi-Million Dollar Revenue Opportunity from its Free API

APIs, which stands for Application Programming Interfaces, are all the rage these days. APIs, which can be described as online back doors into your database, allow programmers to seamlessly integrate your data into their products – particularly, but not necessarily, mobile apps. Increasingly, customers are asking companies selling subscription data products for “API access” to their data, because they want to integrate commercial datasets into their own internal software applications. So you’ve got application developers looking for API access to your data in order to build it into software products for resale, and you’ve also got companies that want API access to your data to power their own internal software. If you are charging a high enough price for your data that reflects the convenience and power of API access, as well as the expanded audiences your data will reach, APIs are nothing but great news for data publishers.

But can you also make money giving away API access to your data for free? A growing number of companies think so. We recently spoke with Ismail Elshareef, Senior Director, Open Platform Initiatives for Edmunds, which makes its data available via API for free and can directly attribute millions of dollars in recurring revenue to this initiative.
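To make the “API access” idea concrete, here is a minimal sketch of what a developer-facing request looks like from the publisher's side. The endpoint URL and key are hypothetical; the point is that every request carries a developer-specific API key.

```python
from urllib.parse import urlencode

# Hypothetical publisher endpoint, for illustration only
BASE_URL = "https://api.example-publisher.com/v1"

def build_request_url(resource, api_key, **filters):
    """Compose an authenticated request to a data publisher's API.

    The API key identifies the developer, which is what lets the
    publisher meter usage, enforce quotas, and cut off anyone who
    violates the terms of service.
    """
    query = urlencode({"api_key": api_key, **filters})
    return f"{BASE_URL}/{resource}?{query}"
```

A developer would then fetch, say, `build_request_url("vehicles", key, make="Honda")` and get back structured data to embed in their own application.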

According to Ismail, Edmunds launched its API about two years ago, primarily as a way to get more exposure for the brand. The second objective was one we often hear from those with open APIs: a desire to encourage innovation. As Ismail puts it, “We can’t hire all the smart people out there.” The goal is to put Edmunds data in the hands of a broad array of talented developers and see what they can do with it – whether it’s new applications software to leverage the data, or even entirely new and unintuitive uses for the data itself.

The additional brand exposure for Edmunds worked exactly as planned, according to Ismail, who said it has become “a huge differentiator.” Edmunds displaced a number of competitors who were charging money for equivalent data, and with the “powered by Edmunds” attribution on so many different products, Edmunds saw immediate brand benefit, not the least of which was more advertisers specifically acknowledging the reach of Edmunds in sales meetings.

Overall, Edmunds has found a number of partner deals came together more quickly as well, “because using the API, they can get comfortable with our data first.” A great example of this is a major deal Edmunds put together with eBay. Ismail emphasized the growing popularity of this “try before you buy” approach to data content, and that publishers need to respond to this growing preference among data buyers.

Ismail is careful to note that Edmunds wasn’t seeking to actively disrupt paid data providers in its vertical; the free data it offers simply reflects lower barriers to entry and, to an extent, the increasing commoditization of much of the data it offers for free.

And while additional market exposure is clearly beneficial, as Edmunds saw it, the big upside opportunity was to see what dozens or even hundreds of talented, motivated independent developers would do with the data. And that’s exactly where Edmunds found gold. Acknowledging that of the apps developed around its data, “only 1 in a 100 is really interesting,” Ismail noted that one really interesting application emerged after only seven months of offering the free API. An independent software provider in the Northeast built a cutting-edge application for automobile dealerships. But while they had a great solution, they didn’t have a sales force to market it to dealers. Edmunds contacted the CEO of the software company, struck a partnership deal, and already the product generates millions of dollars in annual revenues.

One of the keys to Edmunds’ success is that while its data is free, it isn’t free for the taking. Every developer who wants to use Edmunds data has to adhere to a terms of service agreement, which specifies the attribution that Edmunds is to receive, as well as reserving the right for Edmunds to cut off data delivery to anyone who acts irresponsibly, though Ismail notes that most developers are very responsible and “know what’s cool and what’s not.” Also important to the Edmunds model is that it initially only provides enough free data to developers for testing purposes. Before raising a developer’s API quota, Edmunds looks at each application to make sure attribution and back-links are correct, that the application overall is using the data correctly (no mislabeled data elements or incorrect calculations), and that the application is a quality product that Edmunds is comfortable being associated with.
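The “testing allowance first, full quota after review” model described above is easy to sketch. This is a toy illustration of the pattern, not Edmunds’ or Mashery’s actual implementation; the limits are invented for the example.

```python
class ApiQuota:
    """Per-developer usage meter.

    New developers start with a small free allowance, enough for
    testing. Raising the limit is a deliberate manual step, taken
    only after the publisher has reviewed the application for
    correct attribution, back-links, and data use.
    """

    def __init__(self, daily_limit=100):
        self.daily_limit = daily_limit
        self.used = 0

    def allow_request(self):
        # Admit the call only while the developer is under quota
        if self.used >= self.daily_limit:
            return False
        self.used += 1
        return True

    def raise_limit(self, new_limit):
        # Called after a human review approves the application
        self.daily_limit = new_limit
```

In production this gatekeeping is usually delegated to an API management layer (Edmunds uses Mashery, as noted below), but the business logic is the same: free access, metered and revocable.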

As guidance to other data publishers interested in pursuing an open API, Ismail feels it is essential to use a service that provides an API management layer. After extensive research, Edmunds went with Mashery, which stood out to Ismail in particular because “Mashery already works with major publishers like the New York Times and USA Today, so they know the issues that are specific to publishers. They also have a huge developer outreach program, now over 100,000 developers, which made it easy for us to get the word out in the developer community.”

Internally, Ismail notes that the Edmunds API was initially a tough sell. Not everyone believed in the concept, so executive support was a huge factor. It was only because the company’s chairman was such a believer that the API became a reality. As Ismail notes, “ultimately a free API is a leap of faith.” Ismail also noted the difficulties in getting the concept cleared by the company’s lawyers, who “simply weren’t initially comfortable with exposing our data to everyone.” Executive sponsorship was key to ultimately clearing these legal impediments as well.

Launching the API involved “a lot of small steps in the beginning.” Initially, Ismail worked by himself on the API program. Now, his team consists of four engineers and a designer. And just yesterday, the Edmunds API was certified for “Best Developer Experience” by Mashery – more evidence of how far Edmunds has come so quickly.



People Power

There was news this week about the formation of Bloomberg Beta, a new venture capital fund sponsored by data company Bloomberg LP. One of Bloomberg Beta’s early investments is a company called Newsle, which will alert you whenever someone you specify – a friend or colleague – is in the news. This is a tough nut to crack. Searching thousands of news sources and trying to determine if the John Smith mentioned in an article is the same John Smith of interest to you is a complex undertaking.

But what really intrigued me is that those who have written about Newsle see another major problem that the company faces: lack of activity. Think about it. If you import your list of Facebook friends (something Newsle encourages you to do), the chances of any of them appearing in news stories is pretty low. That means most people will sign up for Newsle and nothing will happen, not because Newsle isn’t working, but because there is no news to report. It’s hard to establish the value of your service if you’re not delivering at least a little something every now and then.


That’s why in addition to your Facebook friends, Newsle also encourages you to import your LinkedIn contacts, and while you are at it, your address book as well. Somewhat incongruously, Newsle also encourages you to follow politicians and celebrities. The hope is that the more people you track, the more likely you are to get hits.

But what if Newsle flipped its model? Instead of serving individuals who for the most part have small lists of mostly boring contacts, why not hook up with commercial data publishers, many of whom have tens and even hundreds of thousands of contacts in their databases? Publishers could then send real-time alerts out to their subscribers who are interested in specific people or any activity relating to executives at a given company. In addition to sales intelligence, these news alerts could also provide a basis for making contact with a prospect. Plus, publishers could database these news events to build deep profiles on company executives that would have evergreen value.
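The publisher-side alerting model described above reduces to matching the people mentioned in each new article against subscribers’ watchlists. Here is a minimal sketch (it assumes name disambiguation, the genuinely hard part noted earlier, has already been done upstream):

```python
def route_alerts(people_in_article, watchlists):
    """Return, for each subscriber, the watched names that appear
    in a newly published article. Subscribers with no matches are
    omitted, so only real hits generate alerts.

    `watchlists` maps subscriber IDs to lists of watched names.
    """
    mentioned = set(people_in_article)
    return {
        subscriber: sorted(mentioned & set(watched))
        for subscriber, watched in watchlists.items()
        if mentioned & set(watched)
    }
```

With hundreds of thousands of names in a commercial database behind the watchlists, even a low per-name hit rate produces a steady flow of alerts – which is exactly the volume problem a consumer-only model struggles with.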

This could be a great opportunity for Newsle to crack its volume problem, and for data publishers to add in high-value alerting services and historical data all in one fell swoop.

That’s powerful, people!