Content, Technology or Something Better?
There has been a recent flurry of head-scratching (here, here and here) on the topic of when a media company should better be thought of as a technology company. It’s a good question, but it’s a question muddied by the slippery, umbrella term "media." I am of the belief that if you create and publish articles, you are a media company, even if you happen to be running on a proprietary content management platform.
But when it comes to data publishers, things aren't so clear. Data is a form of content that plays very well with software. In fact, most data products would be a lot less valuable if they couldn't be used effectively by software. The real question is: whose software?

Those data publishers whose roots were in print directories had the business mentality of most print publishers, which was to ship out big fat books filled with information and let the customer figure out how to extract value from them. When these publishers first began to offer electronic versions, they followed the same approach, shipping out Excel sheets and letting the customers once again figure out what to do with them. Those who did wrap software around their data were known mostly for creating really bad software, and found that their customers were asking, sometimes begging, just to get the raw data without the software.

This led to the conventional wisdom that publishers couldn't and shouldn't create software, and technology companies couldn't and shouldn't create content. In theory, the two camps were supposed to partner, thus marrying great content and great software. But it never seemed to work. There were too many issues around revenue splits and who owned the customer, not to mention a bevy of marketing, sales and operational issues.

It's only fairly recently that most data publishers woke up to the fact that selling raw data was not only leaving serious money on the table, it was eroding their perceived value as well. Thus the smart ones began to invest to bring in the talent and tools they needed to create top-notch software customized not only to their data, but to the needs of their customers. The results have been uniformly brilliant. Data wrapped in (good) software means higher price points, more customer engagement and better renewal rates. It's also forced publishers to get a lot closer to their customers, because you can't build good software unless you fully understand its use cases.

As I see it, data publishers can fairly lay claim to being technology companies. Indeed, many now report spending more on software development than on content. But when you think about it, why would a data publisher want to be considered a tech company? In a way, that's slumming. After all, what's more valuable: a salesforce productivity tool, or a salesforce productivity tool pre-populated with high quality and regularly updated sales leads?
AdWords Now Promotes Tele-Measurability
Google has just launched a new AdWords feature that offers lots of useful applications for online marketers. It's so slick, I am a little surprised it hasn't received more press. In a nutshell, Google AdWords can now track and report not only how many people click on an ad, but how many people call based on an ad.
Yes, you’ve doubtless heard claims like this before, but this one seems pretty solid and pretty powerful as well. Here’s how it works.
Google AdWords dynamically inserts a unique tracking telephone number (supplied by Google) into any places you specify on your landing pages, or even across your full website. Put another way, anyone who gets to your website via AdWords will see a Google-supplied phone number, and everyone who gets to your website any other way will see your regular phone number. You can format the Google numbers to match your website's look and feel, and the numbers will be dynamically supplied for up to 90 days.
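Google doesn't spell out the plumbing behind the feature, but the swap is easy to picture as a small piece of client-side logic. The sketch below is purely illustrative and rests on my own assumptions, not Google's actual implementation: it keys off the gclid parameter that AdWords appends to ad-click URLs, and the .phone-number class and forwarding number are invented for the example.

```typescript
// Illustrative sketch only: not Google's actual snippet or API.
const GOOGLE_FORWARDING_NUMBER = "1-855-555-0123"; // hypothetical tracking number

function cameFromAdWords(): boolean {
  // AdWords appends a click identifier ("gclid") to landing-page URLs.
  return new URLSearchParams(window.location.search).has("gclid");
}

function swapPhoneNumbers(forwardingNumber: string): void {
  // Replace the displayed number wherever the site has marked it.
  document.querySelectorAll<HTMLElement>(".phone-number").forEach((el) => {
    el.textContent = forwardingNumber;
  });
}

if (cameFromAdWords()) {
  // Ad-driven visitors see the Google-supplied number, so calls to it can be
  // attributed to the ad; everyone else keeps seeing the regular number.
  swapPhoneNumbers(GOOGLE_FORWARDING_NUMBER);
}
```

Calls to the forwarding number can then be routed to the regular line while being counted against the ad that produced them, which is what turns a phone call into a measurable conversion.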
While Google doesn’t mention B2B applications specifically, you can immediately see the benefits. Few B2B buyers click through from an ad and immediately place an order. More likely than not, there’s a phone call involved, and even then, the phone call might not happen immediately. With this new feature, it’s possible to measure and track this unique aspect of B2B buying which is well understood, but has been devilishly difficult to measure.
The Value of Volume
A fascinating story in the New York Times takes us inside a planning session between Facebook and a major consumer products marketer, Reckitt Benckiser (RB). RB wants to market its fish oil nutritional supplement via Facebook. After listening to Facebook executives pitch a large and broad marketing campaign, the RB marketing manager stops the conversation to note that he’s interested in using Facebook because of its targeting capabilities, not because of its massive reach. He suggests that perhaps RB would be better served by targeting known fish oil buyers as well as buyers of other products that are suggestive of the consumer having an interest in heart health – a key benefit of taking fish oil.
The representatives from Facebook, rather than embracing a targeted approach, instead pushed back. It wasn’t that Facebook couldn’t deliver this highly targeted audience – it could. The primary objection of the Facebook team was that a highly targeted marketing campaign like this would be “too expensive.” And keep in mind that RB is a major global marketer, with annual sales pushing $15 billion.
If you find it odd that Facebook is pushing marketers toward broad-based marketing campaigns, the underlying logic is even more intriguing. The Facebook "advertising strategist" in the meeting explains the company's view by saying that advertising on Facebook was like firing a shotgun. My early education in the world of direct marketing taught me that the goal was to use information to move away from sloppy, imprecise shotgun marketing and toward precise, rifle-like marketing. Perhaps this strategist simply butchered a well-known metaphor. But as you read deeper into the article, you start to see that Facebook really wants to sell big, volume-driven campaigns with only minimal targeting.
You probably see where I am going with this. Facebook has built a truly massive and engaged audience. On top of this, it has unbelievably detailed and accurate information about that audience, and can target against these data. Yet when a large and sophisticated consumer marketer wants to take full advantage of this deep targeting capability, Facebook proposes a mass media solution instead. Sure, Facebook is happy to tweak this huge campaign once it has rolled out, but Facebook clearly wants you to start big and whittle your campaign down over time.
Why is Facebook seemingly disavowing the precision targeting that is ostensibly its greatest asset and differentiator? While the article doesn’t provide any numbers, the likely answer is this: money. You make more money selling volume than precision.
I saw this many years ago in the direct marketing world. Everyone there preached the gospel of targeting – rifle precision over shotgun sloppiness. I listened to endless presentations by clever people showing how you could use data to identify and reach the exact best prospects for your product. But try to actually purchase these highly targeted names, and you got the exact same tap dance now being performed by Facebook. The reality is that nobody with a ten million name database wanted to sell you just 500 of those names – your absolutely best prospects. Direct marketing was a volume-based business: you made your money selling big lists, not precisely targeted lists (which truth be told were kind of a hassle to produce anyway). It was a business where you talked targeting and sold saturation.
And lest you think I am pointing a finger only at media companies, be assured that marketers need to sign up for their fair share of the blame. For example, I see B2B media companies working so hard to deliver fresh, hot, highly qualified sales leads to marketers who in many cases won't buy them unless the media company can guarantee a minimum volume each month. Quantity trumps quality – again.
Like it or not, there's an important lesson here for those of us who run media and data companies. Precision and targeting are great – but only to a point. Marketing and sales are inherently sloppy and imprecise activities, and that's why they depend on volume. And we must also acknowledge that in a digital world, mass marketing is cheap and can often yield powerful market research insights. That's why, despite all the advances that have been made in increasing precision, there will be continuing value in volume.
Deals of Excellence
It’s been a banner few weeks for deal making for companies honored as Models of Excellence by InfoCommerce Group.
In the mega-deal category, we have 2006 honoree and real estate data powerhouse Zillow entering into a $3.5 billion deal to acquire arch-rival Trulia. This will of course put pressure on market leader Move, Inc., which operates the Realtor.com website. The whole real estate vertical has been one to watch from a data perspective. Zillow was not only an early innovator in map-based user interfaces, it also blew more than a few minds by aggregating property data on almost every home in the country and creating a price estimate for every one of those homes. If this merger goes through, expect even more extreme innovation as these two giants battle it out for audience and advertising.
In the smaller (but hardly small) category, we have the $175 million acquisition of 2009 Model of Excellence honoree Bizo by 2004 Model of Excellence honoree LinkedIn. From a strategic standpoint, I'd rate this acquisition as nothing short of brilliant. At a high level, you are putting together the "who" (the LinkedIn database) with the "where" (the Bizo B2B ad network). The potential opportunities are endless.
And while we’re in the world of high finance, a shout-out to 2010 Model of Excellence honoree SmartZip also seem in order, as they’ve just closed on a new $12 million financing round.
Where do these and other Models of Excellence companies meet each year to get their deals on? InfoCommerce Group’s DataContent gathering, now part of an even bigger show, the Business Information & Media Summit. See you in Miami!
Transforming Data the Humin Way
Imagine launching a start-up that is touted as a pioneering "social operating system," a key player in the burgeoning area of "contextual computing," and even a "digital butler." Let's go even further, and imagine the burden of having to live up to the goal of "organizing the world" and, most intriguing of all, of building "a master contacts database for pretty much the entire world." Well, if you can in fact imagine living up to expectations like these, you'll probably want to apply for a job at a company called Humin.
On a more practical level, Humin (at least for now) is an app that grabs your contact list, calendar entries and social networks to build a master list. It then automatically contacts everyone on the list and asks them to confirm their details and provide additional information. Once all these data are confirmed and unduplicated, you get a contact list that can be searched by location, by connections (who knows who) and a lot of other ways that go far beyond the typical address book.
To live up to its contextual computing hype, Humin wants to move into push mode. Fly into Cincinnati, for example, and it will present you with a list of your contacts there. Humin will of course get smarter as it begins to find deeper meaning in both the data itself and how you use it. Privacy concerns? Not to worry. Humin hangs onto only the minimum amount of data needed to do its magic – all the most valuable data stays right on your phone.
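Humin hasn't published how its matching actually works, but the core idea of merging contacts pulled from several sources, collapsing duplicates, and then answering context-driven questions like "who do I know in Cincinnati?" can be sketched roughly as follows. The field names and the naive matching rule here are illustrative assumptions of mine, not Humin's real data model or logic.

```typescript
// Illustrative sketch only: not Humin's actual data model or matching logic.
interface Contact {
  name: string;
  email?: string;
  phone?: string;
  city?: string;
  source: "addressBook" | "calendar" | "socialNetwork";
}

// Naive dedupe: treat contacts sharing an email or phone as the same person,
// filling in whichever details the already-merged record is missing.
function mergeContacts(contacts: Contact[]): Contact[] {
  const merged: Contact[] = [];
  for (const c of contacts) {
    const match = merged.find(
      (m) => (c.email && m.email === c.email) || (c.phone && m.phone === c.phone)
    );
    if (match) {
      match.email = match.email ?? c.email;
      match.phone = match.phone ?? c.phone;
      match.city = match.city ?? c.city;
    } else {
      merged.push({ ...c });
    }
  }
  return merged;
}

// "Push mode": land in a city and surface the contacts you have there.
function contactsIn(city: string, contacts: Contact[]): Contact[] {
  return contacts.filter((c) => c.city?.toLowerCase() === city.toLowerCase());
}

const masterList = mergeContacts([
  { name: "Ann Lee", email: "ann@example.com", city: "Cincinnati", source: "addressBook" },
  { name: "Ann Lee", email: "ann@example.com", phone: "513-555-0147", source: "socialNetwork" },
]);
console.log(contactsIn("Cincinnati", masterList)); // one merged record, not two
```

The real product obviously layers on confirmation emails, far smarter matching and on-device storage, but even this toy version shows the difference between a pile of address books and a single list that can answer a contextual question.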
Those of you who are students of data may see shocking similarities to an earlier service called Plaxo. In its original incarnation, Plaxo grabbed your address book and would periodically query everyone in it via automated emails to confirm that their details were current. Even cooler, if you updated your own information, Plaxo pushed it out to all your contacts automatically. It was the original globally synchronized contact list. Ultimately, Plaxo went astray, jumping on the social media bandwagon in a failed attempt to challenge Facebook.
The lesson of Humin (beyond possible confirmation that all great data publishing ideas are derivative) is that while Humin may be loosely based on the Plaxo concept, it is moving aggressively to surround data with tools. Humin isn't just organizing and tidying a giant pile of data and then asking the user to find value in it – it is innovating in multiple ways to do that thinking for the user, and to deliver the right data in the right format at the right time to offer maximum value. We at InfoCommerce Group call it "data that does stuff." Surround good data with good tools, and you, too, can become master of the data publishing universe.