Viewing entries in
Companies to Watch

Big Data: The Power of Plug and Play

InsideSales.com recently announced the launch of a new service called The Predictive Cloud that provides API access to its powerful predictive engine. InsideSales made its name, not surprisingly, by adding predictive capabilities to sales prospecting. By aggregating very granular prospect data from its customer base (over 100 billion sales interactions), InsideSales can predict not only who is a top sales prospect, but even which day of the week is best to make contact. The Predictive Cloud throws this impressive analytical and predictive capability open to anyone who wants to use it, including those who simply want to identify their own top prospects. What really excites InsideSales, though, is the belief that other companies with lots of data will find non-sales applications for its predictive engine, in areas such as logistics, marketing and even human resources.

While The Predictive Cloud has obvious applicability to many commercial data products, it’s representative of an important trend: the ability of data providers to tap into cloud-based plug-and-play datasets and analytical tools to enhance their own products. It’s a hugely positive development for data companies, because these new tools and datasets allow us to access Big Data in a useful and powerful way without having to become Big Data experts. Similarly, we can now start to tap into analytical toolkits without the expense and complexity of having to build them … or even run them.

I’ve been saying it for several years now: Big Data doesn’t imperil commercial data producers, most of which produce what can be called “Little Data.” Indeed, Big Data can be used to enhance Little Data and make it deeper and more powerful. The analytic tools and capabilities that have come out of the rush to Big Data can now be profitably employed by Little Data producers as well.

There’s a lot going on out there, but everything I see tells me that those who control a valuable dataset are still in the driver’s seat, especially if they take full advantage of these plug-and-play opportunities to make their datasets smarter, deeper and ever more useful to their customers.

Evolving From Data Providers to Market Makers

Trucker Path is a young company, founded only in 2013. Yet its mobile app, which provides truckers with basic directory information such as the location of rest stops, parking, diesel fuel stations, weigh stations and more, has already attracted over 250,000 users. Its formula for success is a familiar one to data publishers: collect information that is really needed by a specific niche market but not readily available in one place elsewhere.


Another mobile app success story to be sure. And Trucker Path could have rested on its laurels. But just a few days ago, it signaled a much more ambitious vision with the launch of a new product called Trucker Marketplace. It is exactly what the name implies: a marketplace where truckers can find and connect with those who need to ship freight, either regionally or nationally.

It’s a simple concept, and it’s also not a new concept. Many companies have sought an intermediary role in this inefficient marketplace, particularly in the area of backhauls, where trucks often return home empty after delivering a load. And the opportunity is huge: more than 75% of all freight in the U.S. is delivered by truck.

Obviously, Trucker Path has a natural point of leverage in that it can offer this service to its existing base of satisfied directory users. But in another twist I find both significant and smart, Trucker Path is embracing freight brokers, not trying to disintermediate them. Rather than following the standard tech playbook of trying to blow up an inefficient industry in order to carve out a position, Trucker Path is simply trying to graft a new layer of efficiency onto an existing market. I would argue they’re trading a bit of potential upside for a radically increased chance of success.

Trucker Path does some other tried-and-true things as well, such as providing credit, insurance and license data to its marketplace participants, a tested way to increase both value and trust.

Trucker Path is a case study for my long-held view that B2B data publishers in market verticals are well positioned to consider the marketplace model. They’ve got a brand, they’ve got the audience, and they know how to use data (e.g., license and credit information) to create the trusted environment that is essential to driving transaction volume. And despite their noisy collapse after the dot-com bust (too much, too soon), I am very optimistic about the future of B2B exchanges. We all now recognize the value of workflow integration: if you’re enabling the flow of work for an entire industry, you’re obviously in a very good place.

Everyone into the (data) Pool

There’s a quiet revolution going on in agriculture, much of it riding under the label of “precision agriculture.” What this means is that farms are finding they can use data to increase both their productivity and their crop yields.

To provide just one vivid example, unmanned tractors now routinely plow fields, guided by GPS and information on how deep to dig in which sections of the field for optimal results. Seeds are being planted variably as well. Instead of just dumping seeds in the earth and hoping for the best, precision machinery, guided by soil data, now determines what seeds are planted and where, almost on an inch-by-inch basis.
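The variable-seeding idea above can be sketched in a few lines of code. This is a toy illustration only: the soil thresholds and seeding rates below are invented for the example and are not real agronomic values.

```python
# A toy sketch of variable-rate seeding: pick a seeds-per-acre rate for each
# grid cell based on its soil readings. Thresholds and rates are invented
# for illustration, not real agronomic values.
def seeding_rate(soil_moisture: float, organic_matter: float) -> int:
    """Return a per-cell seeding rate (seeds/acre) from two soil readings."""
    if organic_matter > 3.0 and soil_moisture > 0.25:
        return 36000   # rich, moist soil supports a dense planting
    if organic_matter > 2.0:
        return 32000   # average soil gets the standard rate
    return 28000       # thin soil: plant lighter to avoid wasted seed

def prescription_map(grid):
    """Turn a grid of (moisture, organic_matter) readings into a rate map."""
    return [[seeding_rate(m, om) for (m, om) in row] for row in grid]
```

The point is the shape of the logic, not the numbers: machinery consumes a per-cell "prescription map" derived from soil data instead of one flat rate for the whole field.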

It’s a big opportunity, with big dollars attached to it, and everyone is jockeying to collect and own this data. The seed companies want to own it. The farm equipment companies want to own it. Even farm supply stores – the folks who sell farmers their fertilizer and other supplies – want to own it. In fact, everyone is clamoring to own the data, except perhaps the farmer.

Why not? Because a farmer’s own soil data is effectively a sample size of one. Not too valuable. Value is added when it is aggregated with data from other farmers to find patterns and establish benchmarks. It’s a natural opportunity for someone to enable farmers to share their data to mutual benefit. This is a content model we call the “closed data pool,” where a carefully selected group agrees to contribute its data, and pays to receive back the insights gleaned from the aggregated dataset.
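The mechanics of a closed data pool are simple enough to sketch. In this minimal example (my own illustration, not any pool operator's actual method), each member contributes one number and gets back both the pool average and its own standing within the pool — insight no single member could compute alone.

```python
from statistics import mean

def closed_pool_benchmarks(yields_by_farm):
    """Each member contributes its per-acre yield; everyone receives the pool
    average plus their own percentile rank within the pool. A minimal sketch
    of the closed-data-pool model, not any real provider's methodology."""
    values = sorted(yields_by_farm.values())
    pool_avg = mean(values)
    n = len(values)
    report = {}
    for farm, y in yields_by_farm.items():
        below = sum(1 for v in values if v < y)  # members this farm outperforms
        report[farm] = {"pool_average": pool_avg, "percentile": 100 * below / n}
    return report
```

Note the defining property: the raw contributions stay inside the pool, and what flows back out is only the aggregate benchmark each member paid to see.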

One great example of this model is Farmers Business Network. Farmers pool their data and pay $500 per year to access the benchmarks and insights it generates. Farmers Business Network is staffed with data scientists to make sense of the data. Very importantly, Farmers Business Network is a neutral player: it doesn’t sell seeds or tractors. Its business model is transparent, and farmers can get data insights without being tied to a particular vendor. Farmers Business Network makes its case brilliantly in its promotional video, which is well worth watching: https://www.youtube.com/watch?v=IS4KIrcRMMU

Market neutrality and a high level of trust are essential to building content using the closed data pool model. But it’s a powerful, sticky model that benefits every player involved. Many data publishers and other media companies are well positioned to create products using this model because they already have the neutral market position and market trust. Closed data pools are worth a closer look. Google certainly agrees: it just invested $15 million in Farmers Business Network.

Is Your Data "Datanyzed"?

A new product by a cool young company called Datanyze is capitalizing on some well-established infocommerce best practices. Here’s how they did it.

The core business of Datanyze is identifying the SaaS software that companies are using (sometimes called a company’s “technology stack”). To do this, Datanyze interrogates millions of company websites on a daily basis, looking for telltale clues as to the specific software they are employing online; apparently a lot of categories of software can be divined this way. Datanyze aggregates and normalizes these data, then overlays company firmographic data (Alexa website rank, contact information, revenue estimates) to create a complete company profile.
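The "telltale clues" approach can be sketched simply: many SaaS products leave a recognizable script URL or tag in the pages of every site that uses them. The fingerprint table below is purely illustrative — these mappings are my assumption about the general technique, not Datanyze's actual detection rules.

```python
# Illustrative fingerprints: a script-URL substring that betrays a SaaS product.
# These mappings are examples of the technique, NOT Datanyze's actual rules.
FINGERPRINTS = {
    "Marketo": "munchkin.marketo.net",
    "HubSpot": "js.hs-scripts.com",
    "Google Analytics": "google-analytics.com/analytics.js",
}

def detect_stack(html: str) -> list[str]:
    """Return the SaaS products whose telltale script URLs appear in the page."""
    return sorted(product for product, clue in FINGERPRINTS.items() if clue in html)
```

Run against a fetched page, a table like this yields a partial technology stack per company; the hard part at Datanyze's scale is crawling millions of sites daily and normalizing the results, not the matching itself.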

Datanyze links directly to the Salesforce accounts of its customers, so it can add and update prospects on a real-time basis. At a basic level, the use case for this product is straightforward: a marketing automation platform like Eloqua could use it to find companies using a competitor or no marketing automation at all. But wait, there’s more!

Datanyze’s new product essentially flips this service. Now, Datanyze clients can have Datanyze analyze their existing best customers, and Datanyze will build a profile of these customers that can be used to predictively rank all their prospects, current and future. Here are the best practices to note:

  • The transition of Datanyze from a data provider to an analytics provider, something that’s happening industry-wide
  • The shift from passive (we supply the data, you figure out what to do with it), to active (here are top-rated prospects we’ve identified for you), and the associated increase in value being delivered by the data provider
  • The tight integration with Salesforce means that Datanyze customers just need to say “yes” and Datanyze can get to work – no IT involvement, no data manipulation, no delays
  • Datanyze is pouring leads into critical, core systems of its customers, a strong example of workflow integration
  • The use of inferential data. Boil down all the analytical nuance, and what Datanyze has discovered is that companies that buy expensive SaaS software are better prospects for other kinds of expensive SaaS software. Datanyze doesn’t know these companies have big budgets, but it does know that these companies use software that implies they have big budgets
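The "flipped" service can be sketched as a simple two-step: build a profile from the technologies your best customers share, then score every prospect by how closely it matches that profile. This is my own minimal illustration of the idea, not Datanyze's actual model, which is surely far more sophisticated.

```python
from collections import Counter

def build_profile(best_customer_stacks):
    """Tally how often each technology appears across the best customers' stacks."""
    profile = Counter()
    for stack in best_customer_stacks:
        profile.update(set(stack))  # count each tech once per customer
    return profile

def rank_prospects(profile, prospects, n_best):
    """Score each prospect's stack against the best-customer profile and
    return prospects ordered from most to least promising. Illustrative only."""
    def score(stack):
        # A tech shared by more best customers contributes more to the score.
        return sum(profile[tech] / n_best for tech in set(stack))
    return sorted(prospects.items(), key=lambda kv: score(kv[1]), reverse=True)
```

Even this crude version captures the inferential leap described above: nothing here measures budgets directly; the prospect is ranked purely on the software its website betrays.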

Datanyze offers a concrete example of how data companies are evolving from generating mountains of moderate value data to much more precise, filtered and valuable answers. Are you still selling data dumps or analytics and answers?

The Award for Outstanding Performance Goes to Internet Movie Database

We awarded the Internet Movie Database a Model of Excellence in 2003, and it is still a standout in terms of innovation and best practices.

The Internet Movie Database (often called by its acronym IMDB) originally started in the UK as a non-profit undertaking, and it may well be the earliest and most successful example of crowdsourcing – well over a decade before the term was even coined. Very simply, the IMDB was a site for movie buffs worldwide to build an enormously detailed database of every movie ever made. And we are talking about a serious level of detail. Want to know who the hairstylist was for the co-star of an obscure French drama from the 1950s? Well, IMDB was the go-to source. What also made IMDB interesting was that from its inception it was a true database, and despite the inherently unruly nature of crowdsourcing, there were enough committed volunteers to take on the unsexy work of removing duplicate entries and normalizing the data.

In 1998, IMDB was quietly acquired by Amazon and turned into a for-profit company. There are some great best practices to be observed here. Taking over and commercializing a site built by tens of thousands of unpaid, die-hard movie fans was a risky proposition. The backlash could have killed the business in short order. But Amazon left IMDB alone, infusing it with editorial resources so the database got bigger and better every year. Better data, less work and all free. Not much here to get upset about!

But Amazon (surprise!) wasn’t in this to be charitable. First, it started marketing to the substantial audience of IMDB users with links to its site. Like the movie? Great. Amazon can sell you a copy.

Amazon’s next move was to sell sponsorships to movie studios eager to promote upcoming releases. From there, Amazon launched a subscription-based Pro version of the database that offered enhanced searching and even deeper content to movie industry professionals for research purposes. The core site remained free, meaning Amazon was a pioneer of the freemium model, well before that term had become popular.

Is Amazon now resting on its laurels? Absolutely not. To support both its Kindle and Amazon Prime offerings, Amazon has launched a service called X-Ray, powered by IMDB. Amazon also selectively licenses this new data capability. What X-Ray does is link movies to the IMDB database, so users can visually identify actors in the film, find movie trivia, explore the movie soundtrack and much more, right while watching the movie. It’s not all software magic, by the way. Amazon is doing a lot of the necessary linkages manually, but it already has thousands of movies coded. Also of interest, it’s touting its “X-Ray Enabled” badge, which, if Amazon plays its cards right, could someday become a differentiator for new movie releases.

Endless innovation. Strong support of its core e-commerce platform. Deft handling of an often prickly enthusiast community. Endless monetization. This is where data is going!