Is Time Up for 230?

In 1996, several Internet lifetimes ago, Congress passed a bill called the Communications Decency Act (officially, it is Title V of the Telecommunications Act of 1996). The law was a somewhat ham-handed attempt at prohibiting the posting of indecent material online (big chunks of the law were ultimately ruled unconstitutional by the Supreme Court). But one of the sections of the law that remained in force was Section 230. In many ways, Section 230 is the basis for the modern Internet.

The core of Section 230 is short – just 26 words – but those 26 words are so important that an entire book has been written about their implications. Section 230 says the following:

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

The impetus for Section 230 was a string of court decisions in which website owners were held liable for material their users had posted. Section 230 stepped in to provide near-absolute immunity for website owners. Think of it as the “don’t shoot the messenger” defense. Without Section 230, websites like Facebook, Twitter and YouTube probably wouldn’t exist. And most newspapers and online publications probably wouldn’t let users post comments. Without Section 230, the Internet would look very different. Some might argue we’d be better off without it. But the protections of Section 230 extend to many information companies as well.

That’s because Section 230 also provides strong legal protection for online ratings and reviews. Without it, sites as varied as Yelp, TripAdvisor and even Wikipedia might find it difficult to operate. Indeed, every crowdsourced data site would instantly become far riskier to run.

The reason that Section 230 is in the news right now is that it also provides strong protection to sites that traffic in hateful and violent speech. That’s why there are moves afoot to change or even repeal Section 230. Some of these actions are well intentioned. Others are blatantly political. But regardless of intent, these are actions that publishers need to watch, because if it becomes too risky to publish third-party content, the unintended consequences will be huge indeed.

 

 

Just Do It for Me

The app economy, as many have noted, is primarily based around creating convenience, not delivering true innovation. Despite this, its impact has been profound and pervasive. Consumers have come to expect that they can manage almost every task and life activity via a smartphone. 

The competitive pressures of the app economy lead inevitably to apps trying to one-up each other. Ready availability of risk capital encourages this trend.

Consider something as basic as food. We’ve seemingly solved all the issues that used to make home delivery of groceries such a daunting challenge. We then moved to meal kits, where your food ingredients come pre-measured, cleaned and chopped. From there, the move was to fully-cooked meals delivered to you via a subscription meal plan. 

There is a clear trend towards task automation, often in the extreme. And this trend is migrating from the consumer world to the B2B world. 

You can see the trend at work in the wonderful world of sales leads. First came software to let users better manage their prospects. Then came services to let users add or augment their prospects by seamlessly importing new names. When users discovered that their prospect names needed to be maintained and updated, services emerged for this task. When users discovered they had too many prospects to manage effectively, services emerged to rank and score these prospects. Then came “purchase intent” services that tried to turn cold leads into hot leads using automation tools. And now we see a raft of services that offer to do actual appointment setting.

For data publishers, the implication is clear: your customers are finding the idea of purchasing data alone less and less compelling. Providing them with tools to act on your data was the next obvious evolutionary step, and this has worked out well for most data providers. But there is a new evolutionary phase underway: task automation services that do more and more of the customers’ work for them. It’s well underway in the lead gen world, but it’s coming to your data neighborhood soon. How this plays out will vary by market and product, but the general direction is that customers will pay more to offload some of their work. And that means opportunity for those who can figure out how to take it on.

AI in Action

Two well-known and highly successful data producers, Morningstar and Spiceworks, have both just announced new capabilities built on artificial intelligence (AI) technology. 

Artificial intelligence is a much-abused umbrella term for a number of distinct technologies. Speaking very generally, the power of AI initially came from sheer computer processing power. Consider how early AI was applied to the game of chess. The “AI advantage” came from the ability to quickly assess every possible combination of moves and likely responses, as well as from access to a library of the best moves of the world’s best chess players. It was a brute force approach, and it worked.
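To see what brute force means in practice, here is a minimal sketch using the toy game of Nim rather than chess – entirely invented for illustration, and nothing like a real chess engine – in which the program simply evaluates every possible line of play to the end before choosing a move.

```python
def best_move(pile, maximizing=True):
    """Exhaustively search single-pile Nim (take 1-3 objects per turn;
    whoever takes the last object wins) and return (score, move)."""
    if pile == 0:
        # The previous player took the last object, so the side to move has lost.
        return (-1 if maximizing else 1), None
    best_score, best = (-2, None) if maximizing else (2, None)
    for take in (1, 2, 3):
        if take > pile:
            break
        score, _ = best_move(pile - take, not maximizing)
        if maximizing and score > best_score:
            best_score, best = score, take
        if not maximizing and score < best_score:
            best_score, best = score, take
    return best_score, best

# With 10 objects on the table, exhaustive search finds the winning opening move.
score, move = best_move(10)
print(f"take {move}" if score > 0 else "no winning line")
```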

Machine learning is a more nuanced approach to AI where the system is fed both large amounts of raw data and examples of desirable outcomes. The software actually learns from these examples and is able to generate successful outcomes of its own using the raw data it is supplied. 
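To make that pattern concrete, here is a minimal, hypothetical sketch in plain Python – the example data and the nearest-neighbor approach are both invented purely for illustration, not a description of any vendor’s system. It shows the cycle the paragraph describes: feed the system labeled examples of desirable outcomes, then let it produce outcomes of its own for raw data it has never seen.

```python
import math

# Hypothetical training data: each example pairs raw inputs (features)
# with a desirable outcome we already know (the label).
training_examples = [
    ((120, 0.9), "hot lead"),   # (company size in hundreds, engagement score)
    ((110, 0.8), "hot lead"),
    ((15, 0.2), "cold lead"),
    ((10, 0.1), "cold lead"),
]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def predict(features):
    """Label a new, unseen record by finding the most similar training example.

    This is nearest-neighbor learning, the simplest possible way to show how
    a system generalizes from the labeled outcomes it has already been shown.
    """
    nearest = min(training_examples, key=lambda example: distance(example[0], features))
    return nearest[1]

# Records the system has never seen get labels of their own.
print(predict((100, 0.7)))   # -> hot lead
print(predict((12, 0.15)))   # -> cold lead
```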

There’s more, much more, to AI, but the power and potential is clear.

So how are data producers using AI? Morningstar has partnered with a company called Mercer to create a huge pool of quantitative and qualitative data to help investment advisors make smarter decisions for their clients. The application of AI here is essentially a next-generation search engine: one that moves far beyond keyword searching, making powerful connections between disparate collections of data to identify not only the most relevant results but also to pull meaning out of those results.

At Spiceworks (a 2010 Model of Excellence), AI is powering two uses. The first is also a supercharged search function, designed to help IT buyers more quickly access relevant buying information, something that is particularly important in an industry with so much volatility and change.

Spiceworks is also using AI to power a sell-side application that ingests the billions of data signals created on the Spiceworks platform each day to help marketers better target in-market buyers of specific products and services.

As the data business has evolved from offering fast access to the most data to fast access to the most relevant data, AI looks set to play an increasingly important and central role. These two industry innovators, both past Models of Excellence, are blazing the trail for the rest of us, and they are well worth watching to see how their integration of AI into their businesses evolves over time.

For reference:

Spiceworks Model of Excellence profile
Morningstar Model of Excellence profile

 

 

Standard Stuff Is Actually Cool

In the not-too-distant past, there was something close to an agreed-upon standard for the user interface for software applications. Promoted by Microsoft, it is the reason that so much software still adheres to conventions such as a “file” menu in the upper left corner of the screen.

The reason Microsoft promoted this open standard is that it saw clear benefit in bringing order out of chaos. If most software functioned in largely the same way, users could become comfortable with new software faster, meaning greater productivity, reduced training time and associated cost, and greater overall levels of satisfaction.

Back up a bit more and you can see that the World Wide Web itself represented a standard – one path to access all websites, all of which function the same way in every critical respect. Before that, companies with online offerings had varying login conventions, different communications networks, and totally proprietary software that looked like nobody else’s software. Costs were high, learning curves were steep and user satisfaction was low.

There are clear benefits to adhering to high-level user interface standards, even ones that bubble up out of nowhere to become de facto standards. Consider the term “grayed out.” By virtue of this de facto standard, users learned that website features and functions that were “grayed out” were inaccessible to them, either because the user hadn’t paid for them, or because they weren’t relevant to what the user was currently doing within the application. Having a common understanding of what “grayed out” meant was important to many data publishers because it was a key part of the upsell strategy.
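As a small aside – an invented snippet, not any particular product – even standard desktop toolkits still bake this convention in: mark a control as disabled and it is automatically drawn grayed out, which is exactly the signal the upsell strategy relied on.

```python
import tkinter as tk

root = tk.Tk()

# The long-standing convention: a disabled control is rendered grayed out,
# signalling "present, but not available to you right now."
tk.Button(root, text="Export results", state=tk.NORMAL).pack(padx=20, pady=5)
tk.Button(root, text="Export results (paid plans only)", state=tk.DISABLED).pack(padx=20, pady=5)

root.mainloop()
```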

That’s why I am so disappointed to see the erosion of these standards. On many websites and mobile apps, a “grayed out” tab now represents the active tab the user is working in, not an unavailable one. And virtually all other standards have evaporated as designers have been allowed to favor “pretty” and “cool” over functional and intuitive. I could go on for days about software developers who similarly run amok, employing all kinds of functionality mostly because it is new and with absolutely no consideration for the user experience. What we are doing is reverting to the balkanized state of applications software before the World Wide Web.

And while I call out designers and developers, the fault really lies with the product managers who favor speed above all, or who themselves start to believe that “cutting edge” somehow confers prestige or competitive advantage. Who’s getting left out of the conversation? The end-user customer. What does the customer want? At a basic level the answer is simple: a clean, intuitive interface that allows them to access data and get answers as quickly and painlessly as possible. Standard stuff, and the best reason that being different for the sake of being different isn’t in your best interest.

Search is Easy; Data is Hard

The New York Times Magazine has just published a fascinating article about Google, discussing whether Google has become an aggressive monopolist in the area of search and, if so, whether it needs to be broken up under antitrust law. The article, which is well worth a read, cites case after case where Google ostensibly derailed other companies that had seemingly developed better search tools than Google.

Better search tools than Google? Is that even possible? That’s where I take some slight exception to the article. Possibly in order to make this topic more accessible to a mass audience, it labels all these competitive search providers as “vertical search” companies.

Those of us with some history in the business remember back to around 2006 when “vertical search” was a thing, a thing that has long since faded. At the time, the concept of vertical search was a full-text search engine, much like Google, but one that was focused on a single vertical market or specific topic area. The thinking was that if publishers curated the content that was being indexed, the search results would be stronger, more accurate, more contextual and more precise. A prime example at the time was the word “pumps.” As a search term, it’s a tough one – the user could be looking for a device that moves fluids … or shoes. A vertical search engine, which would be oriented towards either equipment or fashion, would reliably return more relevant responses. Vertical search failed as a business not because it was a bad idea, but because (and let’s be honest here) most of the publishers rushing to get into it were too lazy to do the up-front curation work. And without quality, up-front curation, vertical search quickly becomes just plain bad search.

Vertical search as used in the article really refers to vertical databases. The difference is important because the article also states that parametric search is hard. That statement is simply more proof that vertical search as used in this article means databases. Parametric search is not hard: collecting and normalizing data so it can be searched parametrically is hard. Put another way, searching a database is easy, provided there is a database to search.
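A minimal sketch makes the point – the records below are invented, and the code is just plain Python – that once the hard collection and normalization work has produced clean, fielded data, the parametric search itself is a trivial filter.

```python
# A tiny, invented "database." The hard part is collecting and normalizing
# records like these so that every field is consistently populated.
products = [
    {"name": "Centrifugal pump CX-200", "category": "equipment",
     "flow_gpm": 200, "price": 1450.00},
    {"name": "Diaphragm pump D-50", "category": "equipment",
     "flow_gpm": 50, "price": 620.00},
    {"name": "Classic leather pumps", "category": "footwear",
     "heel_in": 2.5, "price": 89.00},
]

def parametric_search(records, **criteria):
    """Return the records whose fields match every supplied parameter.

    Once the data is structured, search is a simple filter: the intelligence
    lives in the normalized fields, not in the query code.
    """
    return [
        rec for rec in records
        if all(rec.get(field) == value for field, value in criteria.items())
    ]

# "pumps" as a keyword is ambiguous; as parameters, it is not.
for hit in parametric_search(products, category="equipment", flow_gpm=200):
    print(hit["name"])   # -> Centrifugal pump CX-200
```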

Google never wanted to do the work of building databases. It sometimes bought them (example: a $700 million acquisition of airline data company ITA) or “borrowed” them (pulling results from third-party databases into its own search results, effectively depriving the database owner of much of its traffic – think Yelp). What Google did instead was devote unfathomable resources to developing software that tries to make unstructured data as searchable as structured data. While it made some impressive strides in this area, overall Google failed.

With this context, you can clearly see why data products are so important and valuable. Data collection is hard. Data normalization is hard. But there’s still no substitute for it, something Google has learned the hard way. It may be disheartening to see survey after survey where we learn that users turn to Google first for information. But this is the result of habituation, not superior results. For those who need to search precisely, and for those who really depend on the information they get back from a search, data products win almost every time … provided that users can find them. Read this article and judge for yourself just how evil Google may be…