InfoCommerce Group Blog


The Great e-Book Conspiracy

It’s a standard plotline for a whole category of potboiler novels: a shadowy, global group of evil corporations conspires to bend the marketplace in such a way that it delivers them massive profits (and sometimes global domination as well).

Imagine my surprise, then, to read a recent article suggesting that not only is such a conspiracy real and operating as we speak, but that it is operating in the world of book publishing. Yes, the people who publish conspiracy novels are engaged in a conspiracy themselves!

Here’s the diabolical plan: newly freed to set the price of e-books, publishers are pricing them purposely high to discourage their sale. This in turn will push consumers to reject the e-book format. With nobody buying e-books, the e-book sellers will go out of business, and the publishers will have achieved their objective. And what might this objective be, you ask? Well, to force consumers to buy books in print again! According to this article, print books have better economics than e-books, so once e-books are destroyed, the glory days of book publishing will return.

Now, there are more than a few prominent blogs read by publishing professionals that are relentless in promoting the belief that a resurgence in print books and magazines is right around the corner. There is no shortage of people in the publishing industry who want to believe. The fact that these blogs derive their advertising support largely from printing companies is itself telling. But to take it to the level of a plan in which the major book publishers are collaborating to crush digital distribution in favor of print is odd, worrisome and more than a little sad.

And that’s just one more reason, by the way, that the data business is such a good business. There’s no longing for print among data publishers because the print format always constrained data. Remember the days of the print giants such as Sweet’s and Thomas Register? Even publishing their 30-volume print products with tens of thousands of pages, these publishers were only scratching the surface of what they might have published. Even the advent of CD-ROM did little for these publishers, who were forced to publish in inconvenient multi-disc sets. Online is the natural home for data. When you’re online, your economics improve, size doesn’t matter and you can update your content faster and more efficiently than ever before.

Best of all, there’s no need to create print-centric global conspiracies, though there’s probably a great novel in there somewhere…



Ad Blocking in Perspective

There has been tremendous anxiety in the media world around Apple’s move to allow ad blocking software on iPhones and iPads. After all, eliminate ads from mobile devices and you take a big bite out of most publishers’ ad revenue. Publishers are describing this move by Apple in near-apocalyptic terms. But let’s get a grip.

First, we need to be clear that this ad blocking capability applies to the mobile web, not to apps. In that respect, this move by Apple is really just a big kick in the pants to build an app and get your audience onto it as quickly as possible.

Second, this move makes a lot more sense when you consider what’s driving it. Apple doesn’t make money from mobile search advertising; Google does. Apple doesn’t like Google for a variety of reasons, hence this aggressive move, which cuts into Google’s main source of revenue. We’re all just collateral damage in this war of the titans. But this perspective also helps you understand why apps are (and will likely remain) protected from ad blocking technology. The Apple ecosystem depends on apps, and Apple makes a lot of money from apps. Apple is not really against all mobile advertising; it’s against mobile advertising that benefits Google.

Third, some of these new mobile ad blockers will reportedly strip out some content as well as advertising (not text, but things such as bloated masthead graphics). Indeed, the new breed of ad blockers is really less focused on eliminating advertising than on improving the mobile user experience by speeding up page loads as much as possible.
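For the technically inclined, it’s worth seeing how simple these blockers are under the hood. Apple’s content blockers are driven by a declarative JSON list of trigger/action rules, which is why hiding a bloated masthead graphic is no harder than blocking an ad request. Here’s a minimal sketch in Python that generates such a rule list; the ad network domain and CSS selector are invented for illustration.

```python
import json

# Minimal sketch of an iOS 9-style content blocker rule list.
# The ad network domain and CSS selector below are invented examples.
rules = [
    {
        # Block all requests to a (hypothetical) ad network outright.
        "trigger": {"url-filter": "ads\\.example-network\\.com"},
        "action": {"type": "block"},
    },
    {
        # Hide heavy page furniture rather than block a request: this is
        # how a blocker can strip a bloated masthead graphic, not text.
        "trigger": {"url-filter": ".*"},
        "action": {"type": "css-display-none", "selector": ".masthead-hero"},
    },
]

# Content blocker extensions hand this JSON to Safari, which compiles it
# into a filter applied before page resources ever load.
with open("blockerList.json", "w") as f:
    json.dump(rules, f, indent=2)
```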

Fourth, once again, publishers are feeling the pain of a self-inflicted wound. By junking up our websites (and by extension our mobile websites) with all manner of trackers, ad networks, auto-play video, re-targeting ads, overlays and, perhaps most ironic of all, ads urging the user to download the publisher’s own app, we’ve junked up the mobile experience quite thoroughly. When was the last time you recall having a satisfactory (as in fast and easy) mobile web session?

I certainly agree that a lot of people are using ad blocking software out of a sense of entitlement – they truly believe they should have limitless access to content, free of charge and free of ads. Of course, that’s another self-inflicted wound (a topic I’ve discussed many times over the years). But the more important reason that users are flocking to ad blocking software is that it actually improves their online experience. That’s a sad statement, but the resolution of the problem is firmly under our control.


Big Data: The Power of Plug and Play

InsideSales recently announced the launch of a new service called The Predictive Cloud, which provides API access to its powerful predictive engine. InsideSales made its name, not surprisingly, by adding predictive capabilities to sales prospecting. By aggregating very granular prospect data from its customer base (over 100 billion sales interactions), InsideSales can predict not only who is a top sales prospect, but even what day of the week is best to make contact. The Predictive Cloud throws this impressive analytical and predictive capability open to anyone who wants to use it, even if they simply want to predict their own top prospects. What really excites InsideSales, though, is the belief that other companies with lots of data will find non-sales applications for its predictive engine in areas such as logistics, marketing and even human resources.
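To make the plug-and-play idea concrete, here’s a hypothetical sketch of what calling such a cloud predictive-scoring API might look like. The endpoint, payload fields and response shape are invented for illustration; they are not InsideSales’ actual API.

```python
import json
import urllib.request

# Hypothetical illustration only: the endpoint and field names are invented,
# not InsideSales' actual Predictive Cloud API.
API_URL = "https://api.example-predictive-service.com/v1/score"
API_KEY = "your-api-key"

# Records the caller wants scored; a cloud service like this lets you bring
# your own data without building the predictive model yourself.
prospects = [
    {"company": "Acme Corp", "industry": "manufacturing", "employees": 250},
    {"company": "Globex", "industry": "software", "employees": 40},
]

request = urllib.request.Request(
    API_URL,
    data=json.dumps({"records": prospects}).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
)

# Imagined response: a score and a suggested contact day per record, e.g.
# {"company": "Acme Corp", "score": 0.87, "best_contact_day": "Tuesday"}
with urllib.request.urlopen(request) as response:
    for record in json.load(response).get("records", []):
        print(record)
```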

While The Predictive Cloud has obvious applicability to many commercial data products, it’s representative of an important trend: the ability for data providers to tap into cloud-based, plug-and-play datasets and analytical tools to enhance their own products. It’s a hugely positive development for data companies, because these new tools and datasets allow us to access Big Data in a useful and powerful way without having to become Big Data experts. Similarly, we can now start to tap into analytical toolkits without the expense and complexity of having to build them … or even run them.

I’ve been saying it for several years now: Big Data doesn’t imperil commercial data producers, most of which produce what can be called “Little Data.” Indeed, Big Data can be used to enhance Little Data and make it deeper and more powerful. The analytic tools and capabilities that have come out of the rush to Big Data can now be profitably employed by Little Data producers as well.

There’s a lot going on out there, but everything I see still tells me that those who control a valuable dataset are still in the driver’s seat, especially if they take full advantage of these plug-and-play opportunities to make their datasets smarter, deeper and ever more useful to their customers.

Another Kind of Data Harvesting

I have written before about the data-driven revolution taking place in agriculture today that will allow farms to radically increase their productivity and crop yields. Data collected from farm equipment and soil sensors allows farms to plant exactly the right seeds at exactly the right depth to maximize yields, all handled automatically by GPS-guided, high-tech farm equipment that can operate autonomously. It’s an exciting future.

One of the key points of my earlier article is that a farmer’s data, by itself, isn’t that valuable. Knowledge comes from building a large enough sample of planting data from other similar farms in similar geographies in order to find benchmarks and best practices. Thus, if you want your own farm’s data to benefit your own farm, you need to pool that data with others’.
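A toy example makes the point. With pooled records from many farms, you can compute benchmarks (say, median yield by seed variety and soil type) that no single farm could ever derive from its own records alone. The farm names, seed varieties and yields below are all invented for illustration.

```python
from collections import defaultdict
from statistics import median

# Toy illustration: pooled planting records from several farms.
# Farm names, varieties and yields are all invented.
records = [
    {"farm": "A", "variety": "hybrid-101", "soil": "loam", "yield_bu_acre": 182},
    {"farm": "B", "variety": "hybrid-101", "soil": "loam", "yield_bu_acre": 175},
    {"farm": "C", "variety": "hybrid-101", "soil": "loam", "yield_bu_acre": 190},
    {"farm": "A", "variety": "hybrid-202", "soil": "clay", "yield_bu_acre": 151},
    {"farm": "D", "variety": "hybrid-202", "soil": "clay", "yield_bu_acre": 166},
]

# Benchmark: median yield for each (variety, soil) combination across the pool.
pools = defaultdict(list)
for r in records:
    pools[(r["variety"], r["soil"])].append(r["yield_bu_acre"])

benchmarks = {key: median(values) for key, values in pools.items()}

# Farm A can now see how its hybrid-101 planting compares to the pooled
# benchmark, something its own records alone could never tell it.
print("pooled benchmark:", benchmarks[("hybrid-101", "loam")])  # 182
print("farm A yield:", 182)
```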

But what if a farmer doesn’t want to make the needed investment to benefit from data-driven agriculture? Are there other markets for the data?

Well, it turns out that there are. As an article in the Wall Street Journal makes clear, field-level data doesn’t just benefit the farmer; there are others who will happily pay for it. For example, seed companies can get extremely detailed insights into what’s being planted and what’s growing best and where. They can use such data to inform both their R&D and their marketing and forecasting activities. There’s a Wall Street angle as well, with commodities traders looking for an edge by trying to get an early insight into what the forthcoming growing season will bring.

But even here, there’s a need for aggregation. The experience of one farm doesn’t help seed companies or traders very much. But the more farm data you can aggregate, the more valuable your dataset. The race is already on, with companies such as Grower Information Services Cooperative, Farmobile and Granular Inc. duking it out to sign up the most farmers as quickly as possible.

The simple lesson here is that even though the same farm data can be monetized in multiple ways, there is a valid, indeed critical, role for an aggregator. We also see that first-mover advantage is critical in data plays like this. And as always, market neutrality is an important advantage: you’ll have a much harder time collecting this kind of data if you are a seed company as opposed to an independent information company.

Is Faster Better?

When it comes to information, is faster always better? As information users, we all want the freshest information possible on which to base our decisions. But, as many data publishers have learned the hard way, while everyone wants up-to-the-second accuracy and currency in their data, not everyone is willing to pay for it. Indeed, we’ve noted with concern the growing trend towards “good enough data,” where users are willing to sacrifice some amount of accuracy and currency in favor of a significantly reduced price. So, on a practical basis, a data publisher could be excused for concluding that the most accurate and current data shouldn’t be a top priority.

Things, however, are a bit more complicated than that. The speed of information updates does matter, a lot, in specific applications and markets, and people in those markets will happily pay a stiff premium to get hold of such data. The obvious place to look for proof of this is the world of finance. If you have information that can move the price of a stock or the entire market, speed matters. Consider that Thomson Reuters used to charge a premium to customers who wanted access to an important consumer sentiment survey just two seconds before everyone else.

There are more mundane examples of this in non-financial markets. Consider sales leads. While every second may not matter when it comes to sales leads, there is added value in delivering them quickly, particularly if they are based on a real-time assessment of a prospect’s online browsing patterns.

Given all this, it would seem that a new service called Now-Cast Data has a winner on its hands. That’s because the company, run by economists, is preparing to offer a real-time economic forecasting service. Real-time delivery is actually something of a breakthrough in the world of economic forecasting, which is accustomed to monthly, quarterly and even annual reporting. Clearly, by accelerating forecasting, the financial types will gain an information advantage for which they will pay handsomely. Or so it would seem.
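To give a flavor of what real-time forecasting means in practice, here’s a toy sketch of a nowcast that revises itself each time a new high-frequency indicator arrives. To be clear, this is not Now-Cast Data’s methodology (real nowcasting typically uses dynamic factor models); the indicators, weights and readings below are invented for illustration.

```python
# Toy nowcast: a running estimate of quarterly GDP growth that updates the
# moment each indicator reading arrives, rather than waiting for an official
# monthly or quarterly release. Weights and readings are invented.
weights = {
    "retail_sales": 0.40,
    "industrial_production": 0.35,
    "job_postings": 0.25,
}

arrived = {}  # indicator -> growth rate implied by its latest reading

def update_nowcast(indicator: str, implied_growth: float) -> float:
    """Fold one new reading into the estimate, reweighting over what we have."""
    arrived[indicator] = implied_growth
    total = sum(weights[k] for k in arrived)
    return sum(weights[k] * v for k, v in arrived.items()) / total

# Indicators trickle in at different times during the quarter; the nowcast
# revises immediately with each arrival. It is still a forecast, though:
# nothing here anticipates a shock that isn't in the indicators yet.
print(update_nowcast("retail_sales", 2.1))           # 2.1
print(update_nowcast("industrial_production", 1.7))  # ~1.91
print(update_nowcast("job_postings", 2.4))           # ~2.03
```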

But as an article in the Wall Street Journal notes, Now-Cast Data has some convincing to do. The core issue is that while Now-Cast Data is certainly accelerating forecasting, at the end of the day it is still offering forecasts. It can’t be sure what will or won’t happen, or whether specific events (e.g., inflation) will persist. As an economist in the article notes, “When a big outside event disrupts the economy, those are hard things to forecast. By definition you can’t build them into your forecasting model because they haven’t happened yet.” In short, we’re guessing faster, but we’re still guessing.

So where do I come out on speed? Is faster data always better? At least for now, I don’t think it is. Right now, it’s only really valuable in a specific, limited set of applications. Keep in mind too that we’re already drowning most of our customers in data. Getting the fire hose to pump faster just makes things more unmanageable for them. And speed is a relative concept as well. If a company changes its address, that’s a valuable, time-sensitive piece of actionable information. But if you already pass that information to your customers the same day you learn about it – say 8 hours at most – accelerating that to 8 minutes won’t improve either customer sales results or your bottom line. As data publishers, we want to be continually looking for ways to obtain and move information faster, but speed is something that’s ultimately defined by your customers and your competitors.