
The New Privacy Laws: Be Prepared

First came GDPR (the General Data Protection Regulation). More recently came CCPA (the California Consumer Privacy Act). According to experts, twelve more states are currently considering privacy laws of their own. Given the current political environment, there is little hope for a single federal privacy law. Short summary: it’s a big mess that is going to get even messier.

For data producers, it remains unclear what the impact of these laws will be. After all, most of us are B2B companies, and most of us hold relatively little information on individuals. Does that mean that as an industry we are safe? It’s hard to say: most of the emerging privacy regulations are oriented to consumer companies that collect individual data incidental to their primary business activity. To date, no privacy legislation has specifically addressed B2B companies that collect data as their primary business, so it’s unclear whether we’ve slipped past the regulators entirely or whether we may end up as unintended roadkill.

An interesting set of short videos from the law firm Baker McKenzie is well worth watching, if only to illustrate how far-reaching and potentially disruptive this new wave of legislation is likely to be.

It starts with employee data: are you prepared to show an employee, on request, all the information you maintain on that person? It moves into marketing, where the contents of your CRM system are likely to be open to inspection by any individual who requests it. Are you ready to share call notes and other third-party data you’ve collected? It probably reaches into your datasets as well: the more information you collect on individuals, even from public sources, the more you need to prepare. You will likely also need to re-think the terms under which you license data. In this new world, lawyers are suggesting that B2B companies not hold onto data any longer than needed – not great news in an industry where building deep historical data remains a big opportunity area.

In short, in one way or another, every data publisher is likely to be impacted by the emerging data privacy legislation. Worst of all, many states are giving teeth to their privacy laws by allowing private lawsuits. Yes, anyone with a privacy beef will be able to haul you into court and seek damages.

We are moving into a brave new world in this area, and it’s going to be disruptive and painful. Like it or not, you’ve got to stay on top of developments here, because preparedness is the best form of protection.

You Will NEVER Replace This

Elon Musk is probably best known as the CEO of Tesla. When Elon isn’t re-inventing the automobile, he’s running SpaceX, a company that builds and launches rockets and spacecraft. To keep busy, he also runs The Boring Company, which plans to tunnel highways under major cities to relieve traffic congestion (a company that also generated a reported $10 million selling flamethrowers to consumers – yes, you read that correctly!). On the more esoteric end of the scale, he also founded Neuralink, a company focused on developing brain-computer interfaces. Love him or hate him, you can’t deny he’s brilliantly innovative.

Many people know that Elon Musk got rich as one of the founders of PayPal. Far fewer know that his initial business success came as the creator of an online yellow pages company called Zip2 way back in 1996. Seeking to partner with print yellow pages publishers, he and his brother visited a top executive at the largest yellow pages publisher in Canada. After they pitched their vision, the executive responded by picking up one of the thickest directories on his desk, throwing it at them and saying, “You ever think you’re going to replace this?”

Well, 25 years later, we know the answer to that one. Not only did the Internet replace the print yellow pages product, it largely destroyed the legacy yellow pages industry as well. Not surprisingly, the Musk brothers did well when Zip2 was ultimately sold for $300 million.

But what caused the death of the huge and fabulously profitable yellow pages industry? At the time, a lot of people (including me) thought the Internet would herald a new era of growth for the industry. Instead, the industry collapsed, and the answer, in large part, was hubris.

Almost without exception, the big yellow pages publishers decided the fastest path to online riches was to take their regional products and go national. Overnight, these companies bought national business databases to roll out national yellow pages products. In doing so, they moved from having deep information on all the companies in their region, to having nothing more than name, address and telephone for all companies nationally. They vastly degraded the information value of their products in the belief that advertisers would flock to their doors. That’s critically important, because with yellow pages and buying guides, the advertising is the content.

That leads to the second miscalculation: these publishers all had regional rather than national salesforces. Good as these salespeople were, these publishers didn’t have the capability to sell nationally. This led to the third big miscalculation: the publishers all had regional brands and couldn’t come to grips with the fact that nobody had heard of them outside their regions. Without strong national brands, prospective advertisers yawned at these new national products that seemingly emerged out of nowhere.

Of course, the other big shift is that search engines got better. While still imperfect, they’ve reached the point where you can usually find a plumber in your area with a simple search. And businesses flock to advertise on the search engines because, with pay-per-click pricing, their advertising spend is now (at least in theory) more efficient.

The key takeaway lessons for data publishers? First, a database that is a mile wide and an inch deep isn’t an effective product strategy these days. Far better to know a lot about a specific group than to know a little about everyone. Second, advertising-driven online data businesses are tougher than ever to pull off. Third, when you start believing your own press releases, things never end well. Fourth, when Elon Musk calls, listen before you throw something!

Variable Pricing, Data-Style

Variable pricing is a well-known pricing strategy that changes the price for the same product or service based on factors such as time, date, sale location and level of demand. Implemented properly, variable pricing is a powerful tool to optimize revenue.

The downside to variable pricing is that it has a bad reputation. For example, when prices go up at times of peak demand (which often translates into times of peak need), that’s variable pricing. Generally speaking, when you notice variable pricing, it’s because you’re on the wrong end of the variance.

Variable pricing lends itself nicely to data products. But rather than thinking about a traditional variable pricing strategy, consider pricing based on intensity of usage.

Intensity of usage means tying the price of your data product to how intensely a customer uses it – the greater the use, the greater the price. Intensity pricing is not an attempt to support multiple prices for the same product, but rather an attempt to tie pricing to the value derived from the product, with intensity of usage serving as a proxy for that value.

For data producers, intensity-based pricing can take many forms. Here are just a few examples to fuel your thinking:

1. Multi-user pricing. Yes, licensing multiple users and seats to large organizations is hardly a new idea. But it’s still a complex, mysterious thing to many data producers, who shy away from it, leaving money on the table and probably encouraging widespread password sharing at the same time. The key to multi-user pricing is not to try to extract more from larger organizations simply because “they can afford it” (a contentious and unsustainable approach), but to tie pricing to actual levels of usage as much as possible (a short pricing sketch follows this list).

2. Modularize data product functionality. Not every user makes use of all your features and functionality. Think about identifying those usage patterns and then re-casting your data product into modules: the more modules you use, the more you pay. We all know the selling power of those grayed-out, extra-cost items on the main dashboard!

3. Limit or meter exports. Many sales-oriented data products command high prices in part because of the contact information they offer, such as email addresses. Unfortunately, many subscribers still view data products like these as glorified mailing lists to be used for giant email blasts. This is a high-intensity use that should be priced at a premium. A growing number of data producers limit the number of records that can be downloaded in list format, charging a premium for additional records to reflect this high-intensity type of usage. It’s similarly possible to limit and then up-charge certain types of high-value reports and other results that provide value beyond the raw data itself.

4. Modularize the dataset. Just as few users will use all the features available to them in a data product, many will not use all the data made available to them. For example, it’s not uncommon for data producers to charge more for access to historical data because not everyone will use it, and those who do use it value it highly. Consider whether you have a similar opportunity to segment your dataset.
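
To make the mechanics concrete, here is a minimal sketch of how an intensity-based price might be computed, combining per-seat tiers (example 1) with metered exports (example 3). It’s written in Python, and every rate and quota in it is hypothetical, invented purely for illustration rather than drawn from any real product.

```python
# A toy model of intensity-based pricing: the subscription covers a set
# number of seats and exported records, and usage beyond those allowances
# is billed at per-seat and per-record rates. All figures are hypothetical.

from dataclasses import dataclass

@dataclass
class Plan:
    base_price: float        # annual subscription price
    included_seats: int      # seats covered by the base price
    per_extra_seat: float    # charge per seat beyond the included count
    included_exports: int    # records exportable per year at no extra charge
    per_extra_record: float  # premium per exported record beyond the allowance

def annual_price(plan: Plan, seats: int, records_exported: int) -> float:
    """Price scales with intensity of usage: more seats and more
    exported records mean a higher price."""
    extra_seats = max(0, seats - plan.included_seats)
    extra_records = max(0, records_exported - plan.included_exports)
    return (plan.base_price
            + extra_seats * plan.per_extra_seat
            + extra_records * plan.per_extra_record)

# Example: a hypothetical plan with 5 seats and 10,000 exports included.
plan = Plan(base_price=12_000, included_seats=5, per_extra_seat=1_500,
            included_exports=10_000, per_extra_record=0.25)
print(annual_price(plan, seats=8, records_exported=25_000))
# 12,000 + (3 seats x 1,500) + (15,000 records x 0.25) = 20,250.0
```

The design point is simply that price grows with use: a light user gets in at a low entry price, while a heavy exporter pays in rough proportion to the value being extracted.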

While your first consideration should be revenue enhancement, also keep in mind that an intensity-based pricing approach helps protect your data from abuse, permits lower entry-level price points, creates up-sell opportunities, and properly positions your data as valuable and important.

There are competitive considerations as well. When you are selling an overstuffed data product in order to justify a high price, the easiest strategy for a competitor is to build a slimmed-down version of your product at a much lower price – Disruption 101. You simply don’t want to be selling a prix fixe product in an increasingly à la carte world (look at the cable companies and their inability to sustain bundled pricing even with near-monopoly positions).

When Data Is Smarter Than Its Users

In my review of the decade past and my predictions for our new decade, the common thread is that the quality of commercial data products has advanced immeasurably, as have their insight and predictive capabilities. As an industry, we’ve accomplished some truly remarkable things in the past ten years by making data more powerful, more useful and more current.

This said, data buyers remain far less sophisticated than the datasets they are buying. While buyers of data used for research and planning purposes seem to both appreciate and use powerful new data capabilities, marketers – generally speaking – do not. Even worse, this problem is ages old.

Earlier in my career, I spent several years in the direct marketing business. Even back in the 1980s we were doing file overlays, assessing purchase propensity and building out detailed prospect profiles based on hundreds of individual data elements. It was slower and sloppier and harder back then, but we were doing it. We even had artificial intelligence software, though I recall one project in particular, involving a million customer records, that required us to rent exclusive use of a mainframe computer for two weeks! And not only did we have the capability, we had the buy-in of the marketing world. There was a fever pitch of interest in the incredible potential of super-targeted marketing.

But what we quickly learned as mailing list providers was that while sales and marketing types talked quality, what they bought was quantity. If you went to any organization of any size and said, “we have identified the 5,000 absolute best prospects in the country for you, all ready, willing and able to buy,” you would get interest but few if any takers. At best, you’d have marketers say that they’d throw these prospects in the pot with all the others – as long as they weren’t too expensive. 

From this experience came my epiphany: marketers had no experience with high-quality prospects. They were so used to crappy data that they had built processes and organizations optimized to churn through vast quantities of poor-quality prospects. As for our 5,000 perfect prospects, we heard things like, “we’d chew through them in a week.” Note the operative word “chew.”

We have new and better buzzwords now, but the broad problem is the same. Nowadays, when it comes to sales leads, companies are feeding the beast in the form of their marketing automation platforms. And everything has to flow through the platform, because otherwise reports would be inaccurate and KPIs would be wrong.

Companies today will pay handsomely for qualified sales leads – sometimes up to several hundred dollars per lead. But these top quality leads won’t get treated any better than the mediocre ones. How do I know? Because the marketers spending all these big bucks will insist the leads be formatted for easy loading into their marketing platforms, and I’ve also been told, “we’re not interested unless you can guarantee at least 100 leads per week.” And that’s how far we have progressed in 30 years: marketers have solved the tension between quality and quantity by simply insisting on both. And the pressure to deliver both will necessarily come at the expense of quality. This essential disconnect won’t be solved easily, but when it is, a new golden age of data will arrive.

Is Time Up for 230?

In 1996, several Internet lifetimes ago, Congress passed a bill called the Communications Decency Act (officially, it is Title V of the Telecommunications Act of 1996). The law was a somewhat ham-handed attempt at prohibiting the posting of indecent material online (big chunks of the law were ultimately ruled unconstitutional by the Supreme Court). But one of the sections of the law that remained in force was Section 230. In many ways, Section 230 is the basis for the modern Internet.

Section 230 is short – only 26 words – but those 26 words are so important there has even been an entire book written about their implications. Section 230 says the following: 

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

The impetus for Section 230 was a string of court decisions in which the owners of websites were held liable for things posted by users of their websites. Section 230 stepped in to provide near-absolute immunity for website owners. Think of it as the “don’t shoot the messenger” defense. Without Section 230, websites like Facebook, Twitter and YouTube probably wouldn’t exist. And most newspapers and online publications probably wouldn’t let users post comments. Without Section 230, the Internet would look very different. Some might argue we’d be better off that way. But the protections of Section 230 extend to many information companies as well.

That’s because Section 230 also provides strong legal protection for online ratings and reviews. Without Section 230, sites as varied as Yelp, TripAdvisor and even Wikipedia might find it difficult to operate. Indeed, all crowdsourced data sites would instantly become very risky to operate.

The reason Section 230 is in the news right now is that it also provides strong protection to sites that traffic in hateful and violent speech. That’s why there are moves afoot to change or even repeal Section 230. Some of these actions are well intentioned. Others are blatantly political. But regardless of intent, these are actions that publishers need to watch, because if it becomes too risky to publish third-party content, the unintended consequences will be huge indeed.