Good Ideas Any Publisher Can Use

A recent article in Forbes offers a very thoughtful interview with Marvin Shanken, founder of the eponymous M. Shanken Publications, a company best known for titles such as Wine Spectator and Cigar Aficionado.

Marvin Shanken is more than a successful publishing entrepreneur. He’s also a true industry innovator. He has launched publications that were mocked at the outset because nobody thought they had a chance, only to watch them go on to achieve remarkable success. He blends B2B and B2C publishing strategies in ways that few have tried. He’s stayed focused on print more than his peers and continues to profit handsomely from doing so.

Shanken attributes his success to the quality of his content, and there is no doubt he produces smart, passionate content for smart, passionate audiences. But as the article notes, that alone is not enough these days. So what’s his secret? I think it’s a series of things. Interestingly, many are concepts we’ve held out to data publishers over the years. Let’s review just a few:

First and foremost, Shanken makes his publications central to their markets. His primary technique: rankings and ratings. By offering trusted, independent ratings on a huge number of wines, Wine Spectator in particular began to drive sales because its audience relied on it so heavily. This in turn caused retailers to promote the ratings to drive more sales. That in turn forced wine producers to highlight the ratings, and in many cases, to advertise as well. By making itself a central player, Wine Spectator became a real force in the wine business, driving both readership and advertising.

Second, Shanken gets data the way few B2C publishers do. You can’t spend much time on the Wine Spectator website without getting multiple offers to subscribe to the Wine Spectator database – reviews and ratings on a remarkable 378,000 wines. Content never ends up on the cutting-room floor at M. Shanken Publications – it’s systematically re-used to create not the typical, mediocre searchable archive offered by most publishers, but rather a high-value searchable database. It’s more work, but it’s work that yields a lot of revenue opportunity.

Third, Shanken believes in premium pricing because it reinforces the quality of his content. There is something of a universal truth here, provided you don’t go crazy. I can think of few data publishers who charge for their content “by the pound” and are at the same time market leaders.

Finally, Shanken sees the power of what I call crossover markets, where there is an opportunity for a B2B publisher to repurpose its content as B2C.  Indeed, Shanken got into many of his current titles by creating glossy B2C magazines from modest B2B titles.  But he hasn’t exited B2B: he successfully publishes for both business and consumer audiences.

There’s more, much more, but you get the idea. Some of the key success strategies in data publishing work just as well in other forms of publishing because they are so powerful and so fundamental.

Crossbeam’s Mission Impossible

I write often about the opportunity for data companies to operate as central information exchanges because they have a central position in their markets, and this neutral market position makes them trustworthy.

Lots of sensitive market information gets exchanged through central data hubs. Companies routinely exchange credit data, pricing data, business metrics and much more. They do this because they know the data they submit will only be released in aggregate or anonymized form. As importantly, they do this because they need the answers that only data exchanges can provide.

This is why I got excited when I heard about a stealthy start-up called Crossbeam. Crossbeam wants to build a database that consists of company customer lists. Yes, they are asking companies to upload their entire customer files to the Crossbeam database!

Mission impossible? Not at all. Consider when companies discuss merging. One big, burning question is always how much customer overlap there is between the two companies. Even in merger situations, companies are reluctant to hand over their crown jewels to what often is a direct competitor. Crossbeam is offering to compare those two customer files on a confidential basis and report out the results, something that demands a neutral market position, and the trust that goes along with it.

You might think that this idea, while interesting, isn’t all that big. Think again. Crossbeam aims to be a business development tool for those in charge of partnering and strategic alliances. Using Crossbeam, a partnership manager can easily search out companies with a large overlap in customers – almost always the key to a successful partnership or business alliance. It’s an efficient, quantitative way to take the guesswork out of developing alliances, affiliates and business partnerships, because you know in advance you are selling to the same customers they are.

Crossbeam never releases customer data of course. It simply flags companies where there is a large overlap between your customer file and theirs. This is a wonderful example of the distilled magic of the central information exchange: companies contribute data that they would ordinarily not share because it provides back information they cannot otherwise get.
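Crossbeam hasn’t published the mechanics of its matching, but the core idea, reporting overlap without exposing either raw customer list, can be sketched with salted hashing and set intersection. Everything below, from the function names to the shared salt, is illustrative rather than Crossbeam’s actual implementation:

```python
import hashlib

def hash_customers(customers, salt):
    """Hash each customer identifier with a shared salt, so the
    exchange works with opaque digests rather than raw names."""
    return {hashlib.sha256((salt + c.lower().strip()).encode()).hexdigest()
            for c in customers}

def overlap_report(list_a, list_b, salt="shared-secret"):
    """Compare two hashed customer sets and report only aggregate
    overlap figures, never the underlying records."""
    a, b = hash_customers(list_a, salt), hash_customers(list_b, salt)
    common = a & b
    return {
        "overlap_count": len(common),
        "pct_of_a": round(100 * len(common) / len(a), 1),
        "pct_of_b": round(100 * len(common) / len(b), 1),
    }

report = overlap_report(
    ["Acme Corp", "Globex", "Initech", "Umbrella"],
    ["globex", "initech", "Stark Industries"],
)
print(report)  # {'overlap_count': 2, 'pct_of_a': 50.0, 'pct_of_b': 66.7}
```

A simple shared salt like this would be vulnerable to dictionary attacks on small identifier spaces; a real exchange would use a stronger private set intersection protocol. But the aggregate-only reporting is the essential point.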

In the course of helping to accelerate business partnering, the other data and business insights that Crossbeam will be able to access are potentially staggering. Of course, Crossbeam also has the challenge of protecting all this sensitive data, making sure it can’t be used in unintended ways, and making sure it doesn’t kill the golden goose by mining all the data in its possession too aggressively. Still, those are manageable issues, and all part of the mission Crossbeam has chosen to accept!

Not All Platforms Are Created Equal

In the data business, the prize positioning that everyone seeks is to become integrated into client workflow. Having achieved this enviable goal, publishers know that extraordinarily high renewal rates are certain and profits are assured, because clients in effect are dependent on these workflow products to do their jobs and sometimes to run their entire businesses.

Workflow integration is assumed to be a B2B thing. After all, consumers don’t have workflow. Or do they?

I got to thinking about this after reading several articles suggesting that Amazon may be considering getting involved in the sale of financial products such as mutual funds, perhaps even offering a robo-advisor service that would use software to manage the investment portfolios of its customers. This is a big, scary thought for online brokers and investment managers. And while Amazon hasn’t yet made any concrete moves in the financial services area, it’s a big, juicy target for Amazon, a company not known for its timidity or lack of ambition. As several industry observers point out, Amazon has already made moves into the massive and regulation-heavy pharmaceutical industry, seeking to become the nation’s pharmacist, with potentially even grander plans beyond that.

What allows Amazon to even consider entering the financial services market? It’s the fact that Amazon has a massive consumer platform. Many people consider Facebook a platform too, yet Facebook isn’t launching online pharmacies and the like. What makes the Amazon platform different is that it is a commerce platform.

Of course, Amazon is no ordinary commerce platform. It wants to sell you everything you need and deliver it to your door. It will even automatically ship consumable products to its customers on a regular schedule. Amazon has also built a strong brand based on fast shipping and low prices. And because Amazon has so deeply embedded itself into the lives of its customers, delivering remarkable product breadth along with remarkable convenience, Amazon has achieved -- wait for it -- consumer workflow integration.

This takes me full circle. Does Amazon’s success with B2C workflow integration suggest big opportunities for those with B2B data products that have deep workflow integration to become commerce platforms? I am not convinced. The Amazon journey to success was long and expensive. It also started by delivering something unique and valuable: a universal bookstore. My guess is that most B2B data products, even if deeply embedded, can’t really transition to becoming commerce platforms. Their usage is too specialized, as are their audiences.

Deeply integrated B2B workflow products driven by data may look like platform opportunities if you squint enough. But if you squint too hard or too long, you’ll end up needing glasses, and you can find a great selection of them … on Amazon. 

Fresh Data Sold Here! 

While many successful data publishers obsess about continually adding new features and functionality to their data products, there are lots of good reasons to regularly evaluate your data as well.

Don’t get me wrong: new features and functionality are critically important, particularly if you have a data product that offers a workflow solution.

But adding new, well-selected data elements can add significant value and appeal as well. Here are a few examples:

Morningstar just enhanced its suite of investment analysis tools by introducing a single new data element: a Carbon Risk Score. This score assesses how vulnerable a company is financially to the transition away from a fossil-fuel-based economy to a lower-carbon economy. Not only does the score hold significant value in its own right, but as an individual and consistently presented data element, it can be used for discovery and filtering by investment analysts. Moreover, as a proprietary piece of information, it gives Morningstar additional differentiation and strengthens its competitive edge.

Data-driven real estate listings sites such as Realtor.com, Zillow and Trulia have moved away from tussling over who has the most complete listings to trying to outdo each other with deeper datasets. Various combinations of these three sites now give detailed information and ratings on local schools, crime data, traffic data, neighborhood data, walkability data … even data on whether or not a particular home is likely to be a good candidate for solar panels! And in a move I particularly admire, they have gotten major cable companies to pay to indicate whether a particular house is eligible for their services. In the hotly competitive world of real estate data sites, it’s a relentless battle at the data element level, all with the goal of providing the most attractive one-stop shop for prospective homebuyers.

Consider too the intensely competitive market of hotel booking databases. Think of services such as Expedia, TripAdvisor, Oyster and Hotels.com. Having exhausted themselves by all claiming to offer the lowest rates, they’re now seeking to differentiate themselves at the data element level. Using filters, site visitors can draw on specific data elements to locate hotels with free wi-fi, that accept pets, that have handicapped access, that are green or sustainable, that are LGBT-welcoming and even hotels that have a party atmosphere.

Features and functionality matter, but a single new and well-chosen data element can add tremendous value, while simultaneously providing competitive advantage and product differentiation. Keep your data fresh of course, but always be on the lookout for fresh new data elements as well.

Data Flipping

One of the best things about government databases is that even when the government agency makes a database available on its website for free, it usually isn’t very useful. That’s because government agencies put these databases online for regulatory or compliance reasons. They’re designed for searches on known entities, because the expectation is that you are checking the license status of a company, or perhaps its compliance history.

Occasionally, a government agency will get ambitious and permit geographic searches, but in these cases, there are real limitations. That’s because the underlying data were collected for regulatory, not marketing purposes. So, for example, a manufacturer with 30 plants around the country may only appear in one ZIP code because the government agency wants filings only from headquarters locations.

Taking a regulatory database and turning it into, say, a marketing database is something I call “flipping the file,” because while the underlying data remains the same, the way the database is accessed is different. Sometimes this is as simple as offering more search options; sometimes it involves normalizing or re-structuring the data to make it more useful and accessible. As just one example, a company called Labworks built a product called the RIA Database. It started with an investment advisor database that the SEC maintains for regulatory purposes, and then flipped the file to make the same database useful to companies that wanted to market to investment advisors. There are hundreds of data publishers doing this in different markets, and as you might expect, it’s a very attractive model since the underlying data can be obtained for free.
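As a toy illustration of what flipping a file can mean in practice, here is a minimal sketch: the same records, re-indexed so they answer marketing questions instead of license-number lookups. The field names and figures are invented for illustration; the real SEC file is far richer.

```python
from collections import defaultdict

# A regulatory file is typically keyed for known-entity lookup:
# you supply a license number, you get one record back.
regulatory_file = {
    "RIA-001": {"name": "Alpha Advisors", "state": "NY", "aum_mm": 950},
    "RIA-002": {"name": "Beta Wealth",    "state": "CA", "aum_mm": 120},
    "RIA-003": {"name": "Gamma Capital",  "state": "NY", "aum_mm": 45},
}

# "Flipping the file": identical data, re-indexed by an attribute
# a marketer cares about rather than by license number.
by_state = defaultdict(list)
for record in regulatory_file.values():
    by_state[record["state"]].append(record)

def prospect_search(state, min_aum_mm=0):
    """Find advisors in a state above a minimum size: a marketing
    query the original regulatory interface never supported."""
    return sorted(r["name"] for r in by_state[state]
                  if r["aum_mm"] >= min_aum_mm)

print(prospect_search("NY", min_aum_mm=100))  # ['Alpha Advisors']
```

The data never changed; only the access path did, which is the whole trick.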

In addition to simply flipping a file, you can also enhance a database. The shortcoming of many government databases is that they focus on companies, not people, so while there may be a wealth of information on the company, data buyers typically want to know the names of contacts at those companies. Companies such as D&B and ZoomInfo do a brisk business licensing their contact information to be appended onto government databases of company information.
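The append itself is conceptually just a join on a shared company identifier. A minimal sketch, with invented records and a made-up key:

```python
# Base file: company records from a hypothetical government database.
companies = [
    {"company_id": "123", "company": "Acme Corp"},
    {"company_id": "456", "company": "Globex"},
]

# Licensed contact file from a data provider, keyed on the same identifier.
licensed_contacts = {
    "123": {"contact": "J. Smith", "title": "VP, Purchasing"},
}

# Append: merge contact fields onto each company record where a match exists.
enhanced = [{**c, **licensed_contacts.get(c["company_id"], {})}
            for c in companies]

print(enhanced[0]["contact"])  # J. Smith
```

Real-world appending rarely enjoys a clean shared key; providers match on normalized company names and addresses, which is where much of the value (and the labor) lies.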

This is one of the truly magical aspects of the data business. Databases built for one reason can often be re-purposed for an entirely different use. And re-purposing can involve something as little as a new user interface. This magic isn’t limited to government data of course. Another great place to look for flipping opportunities is so-called “data exhaust,” data created in the course of some other activity, and thus not considered valuable by the entity creating it. You can even license data from other data providers and re-purpose it. There are a number of mapping products, for example, that take licensed company data and essentially create a new user interface by displaying data in a map context.

Increasingly, identifying the data need is as important as identifying the data source. With data, it’s all in how you look at it.