Companies to Watch

AI in Action

Two well-known and highly successful data producers, Morningstar and Spiceworks, have both just announced new capabilities built on artificial intelligence (AI) technology. 

Artificial Intelligence is a much-abused umbrella term for a number of distinctive technologies. Speaking very generally, the power of AI initially came from sheer computer processing power. Consider how early AI was applied to the game of chess. The “AI advantage” came from the ability to quickly assess every possible combination of moves and likely responses, as well as having access to a library of all the best moves of the world’s best chess players. It was a brute force approach, and it worked.
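To make the brute-force idea concrete, here is a toy sketch of my own (not how any real chess engine is written): it exhaustively evaluates every line of play in the simple game of Nim, where players alternately take one to three stones and whoever takes the last stone wins. Chess engines apply the same exhaustive-search idea at vastly larger scale.

```python
# Brute-force game search: recursively try every legal move and every
# reply until the game ends, then play the move that forces a win.

def best_move(stones):
    """Return (move, wins): wins is True if the player to move can
    force a win by taking `move` stones."""
    for take in (1, 2, 3):
        if take > stones:
            break
        if take == stones:            # taking the last stone wins outright
            return take, True
        _, opponent_wins = best_move(stones - take)
        if not opponent_wins:         # leave the opponent in a losing position
            return take, True
    return 1, False                   # every reply loses; play on anyway

print(best_move(10))  # → (2, True): take 2, leaving a losing 8 stones
```

Even this toy version examines every possible position; the "AI advantage" is nothing more than the patience to check them all.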

Machine learning is a more nuanced approach to AI where the system is fed both large amounts of raw data and examples of desirable outcomes. The software actually learns from these examples and is able to generate successful outcomes of its own using the raw data it is supplied. 
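As a toy illustration of learning from examples (my own sketch, far simpler than any production system), here is a one-nearest-neighbor classifier: it memorizes labeled examples and labels new raw data by analogy to the closest example it has seen. The feature names are invented for illustration.

```python
# Minimal "learning from examples": a 1-nearest-neighbor classifier.
# Given labeled examples of desirable outcomes, it labels new raw
# data by finding the most similar example it was trained on.

def train(examples):
    # "Training" here is simply memorizing the labeled examples.
    return list(examples)

def predict(model, point):
    # Label a new point with the label of its closest training example.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(model, key=lambda ex: dist(ex[0], point))
    return label

# Hypothetical labeled examples: ((articles read, minutes on site), label)
examples = [((1, 2), "casual"), ((2, 1), "casual"),
            ((8, 9), "engaged"), ((9, 7), "engaged")]
model = train(examples)
print(predict(model, (7, 8)))  # → "engaged"
print(predict(model, (2, 2)))  # → "casual"
```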

There’s more, much more, to AI, but the power and potential is clear.

So how are data producers using AI? Morningstar has partnered with a company called Mercer to create a huge pool of quantitative and qualitative data to help investment advisors make smarter decisions for their clients. The application of AI here is to create what is essentially a next-generation search engine, one that moves far beyond keyword searching to make powerful connections between disparate collections of data, identifying not only the most relevant results but pulling meaning out of those results as well.

At Spiceworks (a 2010 Model of Excellence), AI powers two applications. The first is also a supercharged search function, designed to help IT buyers access relevant buying information more quickly, something particularly important in an industry with so much volatility and change.

Spiceworks is also using AI to power a sell-side application that ingests the billions of data signals created on the Spiceworks platform each day to help marketers better target in-market buyers of specific products and services.

As the data business has evolved from offering fast access to the most data to fast access to the most relevant data, AI looks to play an increasingly important and central role. These two industry innovators, both past Models of Excellence, are blazing the trail for the rest of us, and they are well worth watching to see how their integration of AI into their businesses evolves over time.

For reference:

Spiceworks Model of Excellence profile
Morningstar Model of Excellence profile


LinkedIn: A D&B For People?

I joined LinkedIn in 2004. I didn’t discover LinkedIn on my own; like many of you, I received an invitation to connect with someone already on LinkedIn, and this required me to create a profile. I did, and became part of what I still believe is one of the most remarkable contributory databases ever created.

Those of you who remember LinkedIn in its early days (it was one of our Models of Excellence in 2004), remember its original premise: making connections – the concept of “six degrees of separation” brought to life. With LinkedIn, you would be able to contact anyone by leveraging “friend of a friend” connections.

It was an original idea, and a nifty piece of programming, but it proved hard to monetize. The key problem is that the people most interested in the idea of contacting someone three hops removed from them were salespeople. People proved remarkably resistant to helping strangers access their friends to make sales pitches. LinkedIn tried all sorts of clever tweaks, but there clearly wasn’t a business opportunity in this approach.

What saved LinkedIn in this early phase was a pivot to selling database access to recruiters. A database this big, deep and current was an obvious winner and it generated significant revenue. But there are ultimately only so many recruiters and large employers to sell to, and that was a problem for LinkedIn, whose ambitions had always been huge.

Where things went off the rails for LinkedIn was the rise of Facebook, Twitter and the other social networks. Superficially, LinkedIn looked like a B2B social network, and LinkedIn was under tremendous pressure to accept this characterization, because it did wonders for both its profile and its valuation. LinkedIn created a Twitter-like newsfeed (albeit one without character limits) and invested massive resources to promote it. Did it work? My sense is that it didn’t. I never go into LinkedIn with the goal of reading my news feed, and I have the same complaint about it as I have about Twitter: it’s a massive, relentless stream of unorganized content, very little of which is original, and very little of which is useful.

Today, LinkedIn to me is an endless stream of connection requests from strangers who want to sell me something. LinkedIn today is regular emails reminding me of birthdays of people I barely know because I, like everyone else, have been remarkably undisciplined about accepting new connection requests over the years. LinkedIn is also just one more content dump that I barely glance at, and it’s less and less useful as a database as both its data and search tools are increasingly restricted in order to incent me to become a paid subscriber.

Am I predicting the demise of LinkedIn? Absolutely not! What LinkedIn needs now is another pivot, back to its database roots. It needs to back away from its social media framing and think of itself more like a Dun & Bradstreet for people. LinkedIn has to use its proven creativity and the resources of its parent to embed itself so deeply into the fabric of business that one’s career depends on a current LinkedIn profile. LinkedIn should create tools for HR departments to access and leverage all the structured content in the LinkedIn database, so that they in turn insist on a LinkedIn profile from all candidates and employees. It should resurrect the idea of serving as the internal company directory (and deeply integrate it into Microsoft network management tools). Most exciting of all to me is the opportunity to leverage LinkedIn data within Outlook for filtering and prioritizing email – a big opportunity that goes far beyond the baby steps we’ve seen so far.

I think LinkedIn’s future is bright indeed, but it depends on management focusing on its remarkable data trove, rather than being a Facebook for business. 

Good Ideas Any Publisher Can Use

A recent article in Forbes offers a very thoughtful interview with Marvin Shanken, founder of the eponymous M. Shanken Publications, a company best known for its titles such as Wine Spectator and Cigar Aficionado.

Marvin Shanken is more than a successful publishing entrepreneur. He’s also a true industry innovator. He has started publications that were mocked at launch because nobody thought they had a chance, before they went on to achieve remarkable success. He blends B2B and B2C publishing strategies in ways that few have tried. He’s stayed focused on print more than his peers and continues to profit handsomely from doing so. 

Shanken attributes his success to the quality of his content, and there is no doubt he produces smart, passionate content for smart, passionate audiences. But as the article notes, that alone is not enough these days. So what’s his secret? I think it’s a series of things. Interestingly, many are concepts we’ve held out to data publishers over the years. Let’s review just a few:

First and foremost, Shanken makes his publications central to their markets. His primary technique: rankings and ratings. By offering trusted, independent ratings on a huge number of wines, Wine Spectator in particular began to drive sales because its audience relied on it so heavily. This in turn caused retailers to promote the ratings to drive more sales, which in turn forced wine producers to highlight the ratings and, in many cases, to advertise as well. Wine Spectator made itself a central player and a real force in the wine business, driving both readership and advertising.

Secondly, Shanken gets data the way few B2C publishers do. You can’t spend much time on the Wine Spectator website without getting multiple offers to subscribe to the Wine Spectator database – reviews and ratings on a remarkable 378,000 wines. Content never ends up on the floor at M. Shanken Publications – it’s systematically re-used to create not the typical, mediocre searchable archive offered by most publishers, but rather a high-value searchable database. It’s more work but it’s work that yields a lot of revenue opportunity.

Third, Shanken believes in premium pricing because it reinforces the quality of his content. There is something of a universal truth here, provided you don’t go crazy. I can think of few data publishers who charge for their content “by the pound” and are at the same time market leaders.

Finally, Shanken sees the power of what I call crossover markets, where there is an opportunity for a B2B publisher to repurpose its content as B2C. Indeed, Shanken got into many of his current titles by creating glossy B2C magazines from modest B2B titles. But he hasn’t exited B2B: he successfully publishes for both business and consumer audiences.

There’s more, much more, but you get the idea. Some of the key success strategies in data publishing work just as well in other forms of publishing because they are so powerful and so fundamental.


Being in the Middle of a New Data Product

I’ve written before about the application model called the “Closed Data Pool.” In this model, companies (and many times they are competitors) contribute proprietary data to a central, neutral data company. The data company aggregates the data and sells aggregate views of the data back to the very companies that contributed it. Madness you say? Not really, because these companies get great benefit from those aggregated views (think market share, average pricing and other vital business metrics). It’s the neutral, trusted data provider in the middle who makes it possible. 
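A minimal sketch of the closed data pool mechanics, with invented company names and fields: each contributor submits its own unit sales and revenue, and the neutral aggregator in the middle returns only pooled views such as market share and average price, never any single contributor's raw numbers.

```python
# Closed data pool sketch: competitors contribute proprietary data;
# the neutral data company sells back only aggregated views.

def aggregate(submissions):
    """submissions: {company: {"units": int, "revenue": float}}"""
    total_units = sum(s["units"] for s in submissions.values())
    total_revenue = sum(s["revenue"] for s in submissions.values())
    return {
        "market_share": {co: s["units"] / total_units
                         for co, s in submissions.items()},
        "average_price": total_revenue / total_units,
    }

# Hypothetical contributions from three competitors
pool = {
    "Acme":    {"units": 600, "revenue": 90_000.0},
    "BetaCo":  {"units": 300, "revenue": 39_000.0},
    "GammaCo": {"units": 100, "revenue": 16_000.0},
}
views = aggregate(pool)
print(views["market_share"]["Acme"])  # → 0.6
print(views["average_price"])        # → 145.0
```

The value to each contributor is the pooled view (its own share of the market, the market's average price) that no single company could compute alone.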

But there is another twist on the closed data pool that represents an even more profitable business for the data provider in the middle. Consider a company called The Work Number.

The Work Number came into being because a lot of credit grantors need to be able to quickly verify employment status and income. At the same time, companies hated getting an endless stream of calls from creditors seeking to verify employment data. The Work Number came up with an ingenious solution. It went to big companies and said that they could outsource all these nuisance calls to The Work Number. All the company had to do was supply a feed of its payroll data. 

The Work Number then went to major credit grantors such as banks and said that instead of those painful verification calls they were making, credit grantors could just do a lookup on The Work Number website and instantaneously get the exact data they needed.

The best part? The Work Number was able to charge credit grantors for access to the database because of the big productivity gains it offered. But The Work Number was also able to charge the companies supplying the data, because it increased their productivity as well by eliminating all those annoying verification calls. Yes, The Work Number charges both to collect the data and to provide access to it!

If this sounds like an interesting but one-off opportunity to you, it’s not. Opportunities exist in vertical markets as well. Consider National Student Clearinghouse, which does the same thing as The Work Number, only with college transcripts.

Is there an opportunity in your market? Look for areas where relatively important or high-value information is being exchanged by phone or one-off emails or even by fax. If the information exchange constitutes a serious pain point or productivity drag for either or both parties, you’ve probably got a new data product. 

Workflow Elimination

The power of embedding one’s data product into a customer’s workflow is well understood by data publishers. Simply put, once a customer starts depending on your data and associated software functionality, it’s hard to cancel or switch away from you because the customer’s work has become designed around your product. It’s a great place to be, and it’s probably the primary reason that renewal rates for data products can sometimes verge on 100%.

But should workflow embedment be the ultimate objective of data publishers? This may depend on the industry served, because we are starting to see fascinating glimpses of a new type of market disruption that might be called “workflow elimination.”

Here’s a great example of this phenomenon in the insurance industry. A company called Metromile has rolled out an artificial intelligence system called Ava. What Ava does is stunning.

Auto insurers using Ava require their policyholders to attach a device called Metromile Pulse to their cars. As you may know, virtually all cars now have onboard computers that log tremendous amounts of data about the vehicle; in fact, when your local auto mechanic performs a computerized diagnosis of your car, this is where the diagnostic data comes from. Metromile Pulse plugs into this onboard computer. The device does two things for insurance companies. First, it allows them to charge for insurance by the mile, since the onboard computer records miles driven and the device transmits them wirelessly to the insurer. That’s pretty cool and innovative.

But here’s what’s mind-blowing: if a policyholder has an auto accident, he or she can file an online claim, and Ava can use the onboard data to confirm the accident, reconstruct it using artificial intelligence software, and automatically authorize payment on the claim if everything checks out – all within a few seconds. The traditional claims payment workflow hasn’t just been collapsed, it’s effectively been eliminated.
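To illustrate the general idea – this is not Metromile’s actual system; the field names and thresholds are invented – an automated claim check might look something like this:

```python
# Hypothetical sketch of "workflow elimination" in claims handling:
# telematics data lets software confirm an incident and authorize
# payment with no human workflow in between.

def process_claim(claim, telemetry):
    # 1. Confirm the reported incident against onboard data: look for
    #    a hard deceleration recorded near the claimed time.
    confirmed = any(
        abs(e["time"] - claim["reported_time"]) < 300 and e["g_force"] > 2.5
        for e in telemetry
    )
    if not confirmed:
        return "refer to human adjuster"
    # 2. Everything checks out: authorize payment automatically.
    return f"approved: ${claim['amount']:.2f}"

# Invented telemetry: a hard impact recorded at time 1200
telemetry = [{"time": 1000, "g_force": 0.3},
             {"time": 1200, "g_force": 3.1}]
claim = {"reported_time": 1250, "amount": 1800.0}
print(process_claim(claim, telemetry))  # → approved: $1800.00
```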

How does a data publisher embed in workflow if there’s no workflow? That’s a problem, but it’s also an opportunity, because data publishers are well positioned to provide the tools to eliminate workflow. If they do this, and do this first, they’ll be even more deeply embedded in the operations of their customers. And doubtless you’re already thinking about all the subsidiary opportunities that would flow out of being in the middle of so much highly granular data on automobile operation.

“Workflow elimination” won’t impact every industry quickly, if at all. But it’s an example of how important it is to stay ahead of the curve on new technology, always seeking to be the disrupter rather than the disrupted.