
Blockchain: The Next Big Thing

We all lived through the heights of the social media craze when every new product needed a social aspect in order to succeed (success is defined as getting funding). My personal favorite was the backyard grill thermometer that posted the temperatures of what you were cooking to Facebook and Twitter. (Okay, there was a little more to it than that, but not much).

But as an Internet fad, social is starting to cycle down, meaning that another Internet fad needs to take its place. My nomination: blockchain.

You have doubtless heard of blockchain, although the odds are you don’t know exactly what it is or what it does. Most people don’t. My understanding of it is sketchy. But when it comes to the Internet, complexity is a benefit because everyone salutes when they hear about a new service using blockchain, without being able to ask any tough questions about how or why.

A great example of this is a restaurant review site called Munchee. Munchee plans to disrupt sites such as Yelp and Zagat in part by using blockchain technology. Think about that for a while. Or better yet, don’t think about it. You’ll get a headache.

Munchee has a few interesting twists to it. First, it’s meant to be more granular than sites like Yelp, by focusing on the individual dishes a restaurant serves, based on the belief that all dishes served by a particular restaurant are unlikely to be of equal quality. You might doubt the need, but it’s a plausible idea.

Munchee also wants to correct for sample bias in reviews. It’s well understood that people are more likely to post a review when they are dissatisfied. Munchee wants to get around this problem by rewarding all reviews with tokens that can be redeemed at restaurants or even sold to other Munchee participants for cash. If you are getting paid for every review, the reasoning goes, you’re as likely to create a positive review as a negative one. Again, an interesting idea.

To get even more accuracy, Munchee wants all reviews to be peer-reviewed by other Munchee users. Munchee intends to recruit peer reviewers by using (buzzword alert) machine learning to find the other Munchee users best qualified to pass judgment on the review. Still again, the notion of peer review is an interesting one.

So where exactly does blockchain come in? Does it, for example, somehow definitively tie the reviewer to the restaurant, in order to eliminate false reviews? Well, no. Instead, those award tokens that Munchee offers are actually crypto-tokens that are tied to the Ethereum blockchain. That’s it.

Munchee actually brings some fresh approaches to review platforms, but it apparently couldn’t resist the temptation to bolt on a tenuous blockchain application to sound even cooler and more cutting-edge. Unfortunately, that only obscures the more basic ideas that are likely to be where the real value is created. We all need to be careful not to fall into the trap of rushing to adopt new technologies just because there’s a buzz around them. You’ll only end up confusing your customers … and yourself … about the true ways you offer value.

 

Data Marketplaces: Almost There

There has been much excitement about the recent launch of the Salesforce Data Studio, a new data-sharing platform within the Salesforce Marketing Cloud.

The idea of the Data Studio is simple: marketers can, on a fully automated basis, identify, order and integrate datasets that others are offering for sale. In its early implementation, the Data Studio seems mostly like a cool way for marketers to buy email lists. But the vision is much bigger and more interesting: to allow marketers to augment and overlay existing email lists with more data so that they become smarter about their lists, target their efforts more effectively, and get better results.
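To make the “augment and overlay” idea concrete, here’s a minimal sketch of what that enrichment step looks like once a purchased dataset is in hand. The column names, merge key, and targeting rule are hypothetical stand-ins for illustration; the real Data Studio performs this kind of matching inside Salesforce rather than in your own code.

```python
# Illustrative only: hypothetical columns and merge key, not the Data Studio API.
import pandas as pd

# The house email list a marketer already owns
house_list = pd.DataFrame({
    "email": ["a@example.com", "b@example.com"],
    "company": ["Acme Corp", "Globex"],
})

# A purchased dataset keyed on the same field, carrying extra attributes
purchased = pd.DataFrame({
    "email": ["a@example.com", "b@example.com"],
    "job_title": ["CIO", "Marketing Director"],
    "company_size": [5200, 180],
})

# Overlay the purchased attributes onto the house list
enriched = house_list.merge(purchased, on="email", how="left")

# Smarter targeting: for example, mail only contacts at larger companies
segment = enriched[enriched["company_size"] > 1000]
print(segment)
```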

At the time of launch, the Data Studio is heavy on audience data, mostly from larger publishers, but there’s no reason any data publisher couldn’t participate as well, especially if the Data Studio wants to reach its full potential.

Interestingly, Salesforce is not the only big player with an interest in data marketplaces. Amazon sells software through its Amazon Web Services Marketplace – again, a totally automated buying experience – and it also offers a selection of public domain datasets for free. It’s a small jump from there for Amazon to start selling datasets on behalf of others.

As you can see, neither of these two marketplaces is quite ready for prime time as far as becoming a meaningful sales channel for data publishers, but they’re tantalizingly close. Keep an eye on these marketplaces: they could become very important to data publishers very quickly.

Top-Level Domains/Low-Level Trustmarks

If you’re not immediately familiar with the term top-level domain (TLD), think of “.com” and “.net” and “.edu” – they are all top-level domains, along with hundreds of others, and by the way, they are not limited to three characters anymore.

In the early days of the Internet, domain names were free for the asking, and I stocked up on quite a few for no other reason than a gut feeling they had some value. I did ultimately sell a lot of them, including several to Fortune 500 companies that bought their corporate names back from me. By the time I realized there might be a bigger opportunity here, the rules of the game had changed, and big companies that had previously shown up with checkbooks now showed up with lawyers. Ah, well!

But for all my domain name hoarding, I could never get domain names with the “.edu” TLD because they were reserved for schools. Similarly, “.net” was reserved for Internet Service Providers back then, and “.org” was reserved for non-profits. These distinctions were widely understood, and even today I hear people telling me some organization “must” be a non-profit because it has a “.org” domain name. Old naming conventions die hard. More importantly, people are hungry for trustmarks.

But TLDs were never great trustmarks, for two reasons. First, validating an organization’s credentials before handing out a domain name is hard and expensive work. Second, domain names don’t sell for a lot, so you can only make money with volume. The pickier you are, the less money you make.

Despite this, the non-profit sector is now pushing the “.ngo” TLD. Think of it as a do-over of the “.org” TLD, because the operator of the domain is trying to limit sales to non-profit entities with the explicit hope that the TLD will become a trustmark over time. Similarly, the AICPA, the big association of certified public accountants, is in a fierce battle to control the forthcoming “.cpa” TLD, again with the hope it can restrict its use to certified public accountants and build it into a trustmark.

My view is that TLDs make for poor trustmarks. The economics make it hard to enforce standards, and there are too many sleazy operators in the business that drag down the credibility of TLDs across the board. The need for online trustmarks remains high. Who better than data companies to seize the opportunity?

 

Bigger Is Not Always Better

One key dynamic of the data business is that the strongest businesses serve a single, tightly defined market, typically a vertical market. The result is that the market opportunity tends to be smaller, but it is much easier to stay close to and defend.

The problem for data publishers attempting to build products with horizontal coverage across multiple markets, or who want to play in large consumer markets, comes down to a very simple reality: it’s hard to be everything to everybody.

It’s instructive to look at some of the reasons why it’s so hard to achieve long-term success with broad-based data products:

Lowest common denominator: In order to operate efficiently, broad-based data publishers typically have to collect fairly standardized and fairly shallow data across multiple vertical markets. This creates an opportunity for other data publishers to “slice and dice” these publishers, peeling off the largest and most profitable vertical sub-markets, and serving the same need with deeper and more tailored data.

Greater incentive for competitors: If you achieve any level of success with a horizontal, broad-based data product, you’ve not only identified a big market need, you’ve identified a big market opportunity as well. That means it may well be worth it for a competitor to invest significantly to steal market share or push you out entirely. Contrast this with successful vertical market data publishers, where the small scale of the market is one of their best protections. Competitors typically can’t financially justify trying to push their way into small vertical markets.

Turning an ocean liner: In addition to being a juicy competitive target, an established broad-based data publisher typically succeeds because it has built an operation that, over time, becomes very difficult to change for technical and business reasons. That means it will be at the mercy of such forces as new technology, shifts in user preferences and new business models, and just a few competitive successes can break the momentum and market dominance of the incumbent. Moreover, the incumbent data publisher is only able to react slowly, if it can react at all.

Too cool for school: While some broad-based data publishers become exposed because they can’t react quickly, others expose themselves by innovating so aggressively that they get ahead of their markets and their customers. In a relentless quest to stay relevant and ahead of the competition, these publishers roll out features and functionality that their customers often don’t understand or even want, adding complexity to the user experience while muddying the core value proposition.

Platform envy: Perhaps encouraged by the spectacular success of Amazon, it’s easy to take the view that your data product can become a data platform, a way to distribute all kinds of data, products, whatever. That’s a big leap technologically, and while platforms are enticing to publishers, they almost inherently mean diffused focus, thus opening opportunities for competitors to enter the market with more focused products.

The most successful data publishers and products I see these days tend to serve one market and serve it extremely well. As long as these businesses stay close to their customers, evolve their products regularly and prudently, and offer good customer support and fair pricing, they can be enormously profitable while remaining largely immune to competition. That’s why in the data business at least, bigger isn’t always better.

Inferring Intent

Today’s Gartner blogpost points to some interesting limitations and opportunities surrounding intent data. Let’s start at the beginning by defining what it is.

Simply put, intent data is an indication that an individual or organization is actively interested in purchasing a specific product or service. You may already be familiar with sales triggers. One classic sales trigger is so-called “new move” data. It’s valuable to know when a company moves offices because the company will likely make lots of new purchases, such as office furniture and the like. Think of intent data as a more sophisticated cousin of the sales trigger.

Media companies are in a great position to generate sales intent data, because much intent data is generated by watching what a person reads and does online. If a reader looks at five articles on 3-D printers in a short period of time, those actions can be viewed as indicating an intention to purchase a 3-D printer. Intent data can get a lot more sophisticated than that, but this gives you the general idea.
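For the programmatically inclined, here is a minimal sketch of how a publisher might turn that reading behavior into an intent signal. The topic tags, seven-day window, and five-article threshold are illustrative assumptions, not anyone’s production scoring model.

```python
# Flag a reader as "in market" for a topic when they view enough articles
# on it within a short window. All parameters below are illustrative.
from collections import Counter
from datetime import datetime, timedelta

# (reader_id, article_topic, timestamp) pageview events from a content site
pageviews = [
    ("reader42", "3d-printers", datetime(2017, 11, 1, 9, 0)),
    ("reader42", "3d-printers", datetime(2017, 11, 1, 9, 20)),
    ("reader42", "3d-printers", datetime(2017, 11, 2, 14, 5)),
    ("reader42", "3d-printers", datetime(2017, 11, 3, 8, 45)),
    ("reader42", "3d-printers", datetime(2017, 11, 3, 9, 10)),
    ("reader99", "3d-printers", datetime(2017, 11, 1, 10, 0)),
]

def intent_signals(events, now, window=timedelta(days=7), threshold=5):
    """Return (reader, topic) pairs whose recent view count crosses the threshold."""
    recent = Counter(
        (reader, topic) for reader, topic, ts in events if now - ts <= window
    )
    return [pair for pair, views in recent.items() if views >= threshold]

print(intent_signals(pageviews, now=datetime(2017, 11, 3, 12, 0)))
# [('reader42', '3d-printers')]
```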

You might think that if a sales organization has intent data available to it, that’s probably all the data it needs. After all, intent data is like mind-reading: it’s identifying people who are likely to be purchasing a product before they purchase it. What could be better?

Well, as the Gartner blogpost points out, many companies are filtering sales leads based on intent data with something called “fit analysis.” This is an automated attempt to evaluate whether a company is a likely buyer. If your company typically sells to larger, multi-office organizations, a fit analysis will filter out smaller, single-location companies because they represent lower-grade prospects.
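In its simplest form, a fit analysis is just a firmographic filter applied alongside the intent score. Here is a minimal sketch under that assumption; the fields and cutoffs are made up for illustration.

```python
# Illustrative "fit analysis" pass over intent-based leads.
from dataclasses import dataclass

@dataclass
class Lead:
    company: str
    employees: int
    locations: int
    intent_score: float  # supplied by an intent-data provider

def fits(lead: Lead, min_employees: int = 250, min_locations: int = 2) -> bool:
    """Keep leads that look like the companies we actually sell to."""
    return lead.employees >= min_employees and lead.locations >= min_locations

leads = [
    Lead("Acme Corp", employees=5200, locations=14, intent_score=0.91),
    Lead("Corner Bakery LLC", employees=12, locations=1, intent_score=0.88),
]

qualified = [lead for lead in leads if fits(lead)]
print([lead.company for lead in qualified])  # ['Acme Corp']
```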

Further, the Gartner blogpost notes that companies selling highly specialized products or brand-new technologies often can’t get enough intent-based sales leads or they get leads that are weak because the intent indicators aren’t sufficiently granular. Finally, some sales departments don’t like intent-based sales leads because they identify prospects too early in the sales process. As you can see, sales leads based on intention are still fairly rudimentary, and there is lots of opportunity to refine them.

But what’s most worthy of note is that Gartner believes that most intent-based sales lead data is focused on the technology industry. There is no reason it should be; technology sellers just happen to be free-spending early adopters. I have long preached the virtues of what I call “inferential data,” a term that includes both intent and sales trigger data. I firmly believe that many data publishers have opportunities in this area, and if they happen to be part of larger media companies, the opportunities are even greater. In fact, data publishers are natural providers of fit analytics as well. If you look at your data creatively and read between the lines, you can make some very lucrative connections.