
Regulating by the Numbers

While so many large financial institutions were teetering during the Great Recession, regulators trying to bring stability to the global financial system quickly learned a startling fact: there was really no way to net out how much money one financial institution owed to another.

The reason for this is that the complex financial trades that banks were engaged in weren’t straightforward bank-to-bank deals. JP Morgan didn’t just do trades with Citibank, for example. Rather, they were done through a web of subsidiaries, many of them set up specifically to be opaque and obscure. And that’s just the banks. Add in hedge funds and other investors, and their offshore companies and subsidiaries that also were designed to be opaque, and you quickly get to mind-numbing complexity. 

With an eye to better regulation and better information during a future financial crisis, an idea was proposed during a 2011 meeting of the G-20 countries to create a numbering system called the Legal Entity Identifier (LEI). The simple idea was that if every legal entity engaged in financial transactions had a unique number, and the record of that legal entity also contained the number for its parent company, it would be easy to roll up these records to see the total financial exposure of any institution.
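The roll-up idea is simple enough to sketch in a few lines of code. This is only an illustration of the concept: the record layout, the LEI values, and the exposure figures below are all invented, and real LEI relationship data is far richer than a single parent pointer.

```python
# Sketch of the LEI "roll-up" idea: each entity record carries its own
# identifier plus the LEI of its direct parent, so exposures booked
# against subsidiaries can be aggregated up to the ultimate parent.
# All records and amounts here are invented for illustration.

from collections import defaultdict

# entity LEI -> (name, LEI of direct parent, or None if top of the chain)
entities = {
    "LEI-001": ("MegaBank Holdings", None),
    "LEI-002": ("MegaBank Securities", "LEI-001"),
    "LEI-003": ("MegaBank Cayman SPV", "LEI-002"),
    "LEI-004": ("Acme Capital", None),
}

# amount owed by each entity (entity LEI -> exposure)
exposures = {
    "LEI-003": 50_000_000,
    "LEI-002": 20_000_000,
    "LEI-004": 5_000_000,
}

def ultimate_parent(lei: str) -> str:
    """Walk the parent chain until an entity with no parent is reached."""
    while entities[lei][1] is not None:
        lei = entities[lei][1]
    return lei

def rollup(exposures: dict) -> dict:
    """Aggregate every subsidiary's exposure under its ultimate parent."""
    totals = defaultdict(int)
    for lei, amount in exposures.items():
        totals[ultimate_parent(lei)] += amount
    return dict(totals)

print(rollup(exposures))
# MegaBank's group-wide exposure (70,000,000) only becomes visible
# once the subsidiaries are rolled up to the parent.
```

Without the parent pointers, a regulator would see three unrelated entities owing modest sums; with them, the group-wide exposure falls out of a trivial aggregation.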

While you may never have heard of it, the LEI system actually exists, and most financial institutions now have LEI numbers. There is a push in some countries (in the United States, the Treasury Department is leading the charge) to require all companies to obtain an LEI number, but it's been slow going so far.

If this discussion has you wondering about the DUNS number from D&B, not to worry: it's alive and well. It's also far more evolved and comprehensive than the LEI system. However, as a privately maintained identifier system, D&B not unreasonably wants to be paid for its use. This rankles some government agencies that are paying substantial sums to D&B for access to the DUNS system, and more than a few are pushing for broad expansion of the LEI system as a replacement for the DUNS system. Suffice it to say there is a lot going on behind the scenes.

There are a number of free lookup services for LEI records, and the information is in the public domain. Some data publishers may find immediate uses for LEI data, but its fundamental weakness at this point is that it's hit and miss as to which companies have registered. Still, it's a database to know about and watch, particularly if you have an interest in company relationships. Over time, it's likely its coverage and importance will grow.



Just-in-Time Data

Databases are tricky beasts because their content is both fluid and volatile. There are likely no databases that are 100% comprehensive and 100% accurate at the same time. This problem has only been exacerbated by increasingly ambitious data products that continue to push the envelope in terms of both the breadth and depth of their coverage.

Data publishers have long had to deal with this issue. The most widely adopted approach has been what might be called “data triage.” This is when the publisher quickly updates a data record in response to a subscriber request.

I first encountered this approach with long-time data pioneer D&B. If you requested a background report on a company for which D&B had either a skeleton record or out-of-date information, D&B adroitly turned this potential problem into a show of commitment to its data quality. The D&B approach was to send you whatever stale or skimpy information it had on file, so the subscriber at least had something useful to work with. But D&B would also indicate in bold type words to the effect of, "this background report contains information that may be outdated. To maintain our data quality standards, a D&B investigator will update this report and an updated report will be sent to you within 48 hours."

Data triage would begin immediately. D&B would have one of its more experienced researchers call the company and extract as much information as possible. The record was updated, the new information was sent to the subscriber, and anyone else requesting that background report would benefit from the updated information as well.

A variation on this approach is to offer not updates to existing records, but rather to create entirely new records on request. Not in our database? Just let us know, and we'll do the needed research for you pronto. Boardroom Insiders, a company that sells in-depth profiles of C-suite executives, does this very successfully, as does The Red Flag Group.

The key to succeeding with data triage? First, you have to set yourself up to respond quickly. Your customers will appreciate the custom work you are doing for them, but they still want the information quickly. Secondly, use this technique to supplement your database, not substitute for it. If you are not satisfying most of your subscribers most of the time with the data you have already collected, you're really not a data publisher, you're a custom research shop, and that's a far less attractive business. Finally, learn from these research requests. Why didn't you already have the company or individual in question in your database? Are the information needs of your subscribers shifting? Are there new segments of the market you need to cover? There's a lot you can learn from custom requests, especially if you can find patterns in these requests.

Data triage is a smart tactic that many data publishers can use. But always remember, no matter how impressive the service, the subscriber still has to wait for data. Ultimately, this nice courtesy becomes a real inconvenience if the subscriber encounters it too often. What you need to do is both satisfy your customers most of the time, and be there for them when you fall short.


LinkedIn: A D&B For People?

I joined LinkedIn in 2004. I didn’t discover LinkedIn on my own; like many of you, I received an invitation to connect with someone already on LinkedIn, and this required me to create a profile. I did, and became part of what I still believe is one of the most remarkable contributory databases ever created.

Those of you who remember LinkedIn in its early days (it was one of our Models of Excellence in 2004), remember its original premise: making connections – the concept of “six degrees of separation” brought to life. With LinkedIn, you would be able to contact anyone by leveraging “friend of a friend” connections.

It was an original idea, and a nifty piece of programming, but it proved hard to monetize. The key problem is that the people most interested in the idea of contacting someone three hops removed from them were salespeople. People proved remarkably resistant to helping strangers access their friends to make sales pitches. LinkedIn tried all sorts of clever tweaks, but there clearly wasn’t a business opportunity in this approach.

What saved LinkedIn in this early phase was a pivot to selling database access to recruiters. A database this big, deep and current was an obvious winner and it generated significant revenue. But there are ultimately only so many recruiters and large employers to sell to, and that was a problem for LinkedIn, whose ambitions had always been huge.

Where things went off the rails for LinkedIn was the rise of Facebook, Twitter and the other social networks. Superficially, LinkedIn looked like a B2B social network, and LinkedIn was under tremendous pressure to accept this characterization, because it did wonders for both its profile and its valuation. LinkedIn created a Twitter-like newsfeed (albeit one without character limits), and invested massive resources to promote it. Did it work? My sense is that it didn't. I never go into LinkedIn with the goal of reading my news feed, and I have the same complaint about it as I have about Twitter: it's a massive, relentless stream of unorganized content, very little of which is original, and very little of which is useful.

Today, LinkedIn to me is an endless stream of connection requests from strangers who want to sell me something. LinkedIn today is regular emails reminding me of birthdays of people I barely know because I, like everyone else, have been remarkably undisciplined about accepting new connection requests over the years. LinkedIn is also just one more content dump that I barely glance at, and it’s less and less useful as a database as both its data and search tools are increasingly restricted in order to incent me to become a paid subscriber.

Am I predicting the demise of LinkedIn? Absolutely not! What LinkedIn needs now is another pivot, back to its database roots. It needs to back away from its social media framing, and think of itself more like a Dun & Bradstreet for people. LinkedIn has to use its proven creativity and the resources of its parent to embed itself so deeply into the fabric of business that one’s career is dependent on a current LinkedIn profile. LinkedIn should create tools for HR departments to access and leverage all the structured content in the LinkedIn database so that they will in turn insist on a LinkedIn profile from all candidates and employees. Resurrect the idea of serving as the internal company directory for companies (and deeply integrate it into Microsoft network management tools). Most exciting of all to me is the opportunity to leverage LinkedIn data within Outlook for filtering and prioritizing email – big opportunities that go far beyond the baby steps we’ve seen so far.

I think LinkedIn’s future is bright indeed, but it depends on management focusing on its remarkable data trove, rather than being a Facebook for business. 

Good Ideas Any Publisher Can Use

A recent article in Forbes offers a very thoughtful interview with Marvin Shanken, founder of the eponymous M. Shanken Publications, a company best known for its titles such as Wine Spectator and Cigar Aficionado.

Marvin Shanken is more than a successful publishing entrepreneur. He’s also a true industry innovator. He has started publications that were mocked at launch because nobody thought they had a chance, before they went on to achieve remarkable success. He blends B2B and B2C publishing strategies in ways that few have tried. He’s stayed focused on print more than his peers and continues to profit handsomely from doing so. 

Shanken attributes his success to the quality of his content, and there is no doubt he produces smart, passionate content for smart, passionate audiences. But as the article notes, that alone is not enough these days. So what’s his secret? I think it’s a series of things. Interestingly, many are concepts we’ve held out to data publishers over the years. Let’s review just a few:

First and foremost, Shanken makes his publications central to their markets. His primary technique: rankings and ratings. By offering trusted, independent ratings on a huge number of wines, Wine Spectator in particular began to drive sales because its audience relied on it so heavily. This in turn caused retailers to promote the ratings to drive more sales. That in turn forced wine producers to highlight the ratings, and in many cases, to advertise as well. Wine Spectator made itself a central player and a real force in the wine business. This drives both readership and advertising.

Secondly, Shanken gets data the way few B2C publishers do. You can’t spend much time on the Wine Spectator website without getting multiple offers to subscribe to the Wine Spectator database – reviews and ratings on a remarkable 378,000 wines. Content never ends up on the floor at M. Shanken Publications – it’s systematically re-used to create not the typical, mediocre searchable archive offered by most publishers, but rather a high-value searchable database. It’s more work but it’s work that yields a lot of revenue opportunity.

Third, Shanken believes in premium pricing because it reinforces the quality of his content. There is something of a universal truth here, provided you don’t go crazy. I can think of few data publishers who charge for their content “by the pound” and are at the same time market leaders.

Finally, Shanken sees the power of what I call crossover markets, where there is an opportunity for a B2B publisher to repurpose its content as B2C. Indeed, Shanken got into many of his current titles by creating glossy B2C magazines from modest B2B titles. But he hasn't exited B2B: he successfully publishes for both business and consumer audiences.

There’s more, much more, but you get the idea. Some of the key success strategies in data publishing work just as well in other forms of publishing because they are so powerful and so fundamental.

 

Crossbeam’s Mission Impossible

I write often about the opportunity for data companies to operate as central information exchanges: they occupy a central position in their markets, and that neutral position makes them trustworthy.

Lots of sensitive market information gets exchanged through central data hubs. Companies routinely exchange credit data, pricing data, business metrics and much more. They do this because they know the data they submit will only be released in aggregate or anonymized form. As importantly, they do this because they need the answers that only data exchanges can provide.

This is why I got excited when I heard about a stealthy start-up called Crossbeam. Crossbeam wants to build a database that consists of company customer lists. Yes, they are asking companies to upload their entire customer files to the Crossbeam database!

Mission impossible? Not at all. Consider when companies discuss merging. One big, burning question is always how much customer overlap there is between the two companies. Even in merger situations, companies are reluctant to hand over their crown jewels to what often is a direct competitor. Crossbeam is offering to compare those two customer files on a confidential basis and report out the results, something that demands a neutral market position, and the trust that goes along with it.

You might think that this idea, while interesting, isn’t all that big. Think again. Crossbeam aims to be a business development tool for those in charge of partnering and strategic alliances. Using Crossbeam, a partnership manager can easily search out companies with a large overlap in customers – almost always the key to a successful partnership or business alliance. It’s an efficient, quantitative way to take the guesswork out of developing alliances, affiliates and business partnerships, because you know in advance you are selling to the same customers they are.

Crossbeam never releases customer data, of course. It simply flags companies where there is a large overlap between your customer file and theirs. This is a wonderful example of the distilled magic of the central information exchange: companies contribute data that they would ordinarily not share because it provides back information they cannot otherwise get.
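The core mechanism here can be sketched simply. A minimal version of this kind of confidential comparison, assuming each party submits only salted hashes of normalized customer names and the neutral exchange reports back nothing but the size of the intersection: note that the normalization rule, the salt handling and the sample customer lists below are all my own assumptions for illustration, not Crossbeam's actual protocol (production systems typically use stronger techniques such as private set intersection).

```python
# Sketch of a confidential customer-overlap check: neither party ever
# sees the other's customer list; the exchange compares salted hashes
# and reports only the overlap size. Details here are illustrative
# assumptions, not a description of Crossbeam's real protocol.

import hashlib

SHARED_SALT = b"exchange-issued-salt"  # hypothetical salt issued by the exchange

def fingerprint(customer: str) -> str:
    """Hash a normalized customer name so raw identities are never exchanged."""
    normalized = customer.strip().lower()
    return hashlib.sha256(SHARED_SALT + normalized.encode()).hexdigest()

def overlap_count(customers_a, customers_b) -> int:
    """Report only the size of the overlap, never which customers match."""
    hashes_a = {fingerprint(c) for c in customers_a}
    hashes_b = {fingerprint(c) for c in customers_b}
    return len(hashes_a & hashes_b)

# Invented sample data: two companies with two customers in common
company_a = ["Acme Corp", "Globex", "Initech"]
company_b = ["globex", "Initech ", "Umbrella"]

print(overlap_count(company_a, company_b))  # 2
```

The design point is that the sensitive inputs never leave the hub in usable form: each side learns a single number, which is exactly the "report out the results, not the data" posture the exchange model depends on.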

In the course of helping to accelerate business partnering, the other data and business insights that Crossbeam will be able to access are potentially staggering. Of course, Crossbeam also has the challenge of protecting all this sensitive data, making sure it can’t be used in unintended ways, and making sure it doesn’t kill the golden goose by mining all the data in its possession too aggressively. Still, those are manageable issues, and all part of the mission Crossbeam has chosen to accept!