
The Power to Destroy

In 1819, Supreme Court Chief Justice John Marshall penned the famous phrase, “the power to tax involves the power to destroy.” This insightful commentary came as part of the Court’s ruling in the case of McCulloch v. Maryland. The case involved a move by the State of Maryland to favor in-state banks by taxing the bank notes of the federally chartered Bank of the United States. In a unanimous decision, the Court ruled that Maryland couldn’t try to run the federal bank out of town through clever tax schemes.

This famous phrase pops into my head every time I see data and software companies get too arrogant or too greedy and start to abuse their market dominance. This is because software and data companies that dominate their markets have some of the same coercive power as governments with their ability to make rules and set prices.

I got a direct taste of this earlier in the week when an email arrived from QuickBooks to tell me that they were more or less doubling my annual subscription fee. The rationale for this massive increase? The folks at Intuit (parent company of QuickBooks) feel they work very hard and deserve more money. You may recall that Intuit has recently been hard at work with its fleet of lobbyists trying to get legislation passed to prohibit the IRS from offering an online tax filing service. In its annual report, Intuit specifically calls out the threat of federal and state “encroachment” on its business. A touch of entitlement, perhaps?

My email from QuickBooks was followed by an email from Dropbox announcing a 20% price increase. At least Dropbox doubled my online storage in exchange, not that I really needed it.

It’s not just in the software industry where market power is being abused. As just one example, StreetEasy, the dominant real estate listing platform in New York City, stopped accepting automated listings feeds from several major real estate brokers in a fit of arrogance and competitive gamesmanship. Try not to laugh when you read StreetEasy’s justification for suspending automated feeds:

“Sending a feed sounds simple and seamless. It’s not. Continuing to receive listings in such an inefficient way wasn’t doing anyone — agents or consumers — any favors. So, we innovated.”

StreetEasy’s innovation? Data entry screens that require brokers to re-enter all their listings … manually. You can’t make this stuff up.

Often what damages or even kills great data and software companies with dominant market positions is the abuse of that market power. They forget why they exist and who the customer is. In many cases they get lazy, finding it easier to raise prices than to keep innovating. Sometimes these companies impose big price increases, as in the case of QuickBooks, simply because they can.

Market dominance creates coercive power that can destroy. With taxation, the party that can be destroyed is the taxpayer. But with private companies, coercive power comes with the ability to destroy … themselves.

Data.Gone

It’s official: on May 4, 2019, data.com connect is shutting down. You may remember data.com connect in its original incarnation as Jigsaw.com. Salesforce.com acquired Jigsaw in 2010, paying huge dollars to kick-start an ambitious plan not only to be a software platform for managing sales activities, but also to help companies maintain and grow their sales leads.

Lest you think Salesforce was lacking in ambition, it then acquired the data.com domain name for $1.5 million. Jigsaw moved over to data.com, and Salesforce began to execute on its vision of a data marketplace, where its software users could discover, purchase and seamlessly import third-party data into Salesforce. It was a big, slick and arguably brilliant idea.

But an idea falls far short of a successful strategy, and data.com never appeared to be much more than an idea, or more accurately, a series of ideas. And Salesforce, for all its success, never figured out decisively what it wanted data.com to be when it grew up. Add in competing corporate strategies, office politics, a high-growth core business and a go-go culture, and it’s perhaps not surprising that data.com quickly became a corporate orphan.

More fundamentally, though, we see once again that software companies – despite lots of brave talk – just don’t “get” data. In particular, a good database needs care and feeding using processes and techniques that are messy, imperfect, never-ending and, perhaps most importantly of all, impossible to simply automate and forget.

Jigsaw probably looked like a light lift to Salesforce. After all, the brilliance of Jigsaw was that its data was crowdsourced. The people using the data committed to correcting it and adding to it. On the surface, it probably looked like a perpetual motion machine to Salesforce. But that perception couldn’t be further from the truth. Crowdsourcing is an intensely human activity, because you have to motivate and incent users to keep working on the database. You have to construct a structure that rewards top producers and pushes out bad actors. You have to relentlessly monitor quality and comprehensiveness. It’s endless fine-tuning, lots of trial and error, and a deep understanding of how to motivate people.
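
Jigsaw’s actual mechanics were never public in this detail, so purely as an illustration, here is a toy sketch of the kind of reputation structure a crowdsourced database needs: reward contributors whose submissions hold up, and flag high-volume bad actors for removal. All names and thresholds are invented.

```python
from dataclasses import dataclass

@dataclass
class Contributor:
    accepted: int  # submissions verified as correct
    rejected: int  # submissions flagged as wrong or junk

def reputation(c: Contributor) -> float:
    """Toy score: accuracy rate, discounted until a track record builds."""
    total = c.accepted + c.rejected
    if total == 0:
        return 0.0
    accuracy = c.accepted / total
    track_record = min(1.0, total / 100)  # full weight only after 100 submissions
    return accuracy * track_record

def should_remove(c: Contributor) -> bool:
    """Push out bad actors: enough volume to judge, persistently low accuracy."""
    return (c.accepted + c.rejected) >= 20 and reputation(c) < 0.25

# A diligent contributor vs. a spammer dumping junk records.
print(reputation(Contributor(accepted=90, rejected=10)))    # 0.9
print(should_remove(Contributor(accepted=5, rejected=45)))  # True
```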

This is where Salesforce failed. It either didn’t understand the commitment required or didn’t want to do the work. And just as a crowdsourced database can grow quickly, it can also decline quickly.

I’ve said it before: I see more success among data providers that develop software around their data than among software companies trying to develop their own databases.


Does B+C=E?

When Steve Lucas talks, people listen. After all, Steve was the CEO of Marketo, which was sold to Adobe last year for $4.75 billion. Now the SVP of Digital Experience at Adobe, Steve gave a talk at the recent Adobe Summit in Las Vegas describing his vision of the future, which revolves around B2B and B2C merging to become B2E – Business to Everyone.

Steve acknowledged that B2C is all about individuals in their personal buying capacities, while B2B is about accounts – typically groups of buyers and influencers within an organization – so he understands where and how B2B and B2C differ. But he then cites Amazon as a company that is both a B2C seller (think Whole Foods) and a B2B seller (think Amazon Web Services). Steve thinks there are lots of B2C/B2B sellers out there, and that there is a great unmet need for a single, integrated marketing platform. May I politely disagree?

Sure, there are always a few hubristic companies, flush with cash and feeling their oats, that will be willing to chase this inchoate vision of the future. But are there many of these companies? Most companies are still trying to get their B2B or B2C marketing straight. B2B marketers are only a few years into their embrace of Account-Based Marketing (a new name for a very old concept, which simply reinforces my point), and B2C marketers are warily picking their way through a growing minefield of privacy and regulatory rules that actually creates a bias against being too cutting-edge.

As to the information need, a simple question: why? In a B2E world, Amazon would indeed be able to know that, for example, the VP of Purchasing at Boeing likes organic lemons. But how does this knowledge advance the sale of either Amazon cloud services or Whole Foods produce? Indeed, B2E creates the opportunity to build such detailed dossiers that the end results will toggle between scary and silly.

There are already B2B data companies that build detailed personal profiles of top corporate executives to help with high-level selling. That makes sense. Trying to do the same thing with even more granular detail and at scale doesn’t make sense.

B2E reeks of “wouldn’t it be cool if…” syndrome. Making databases bigger and more complex doesn’t inherently make them more powerful or useful. I remember a marketing database at AT&T many years back. It was state of the art for its time, centrally tracking every purchase of AT&T equipment, services and merchandise for 20 million customers. Then one day, an AT&T executive had an epiphany: not only should the database track what customers bought, it should also track everything they were offered but didn’t buy. Everyone thought the concept was brilliant, though at the same time nobody could articulate how all this new information might be used. The initiative involved building a huge new data center, and the sheer volume of data being collected and stored brought the system to its knees. The system became so complex and so slow that the marketing organizations stopped using it … for anything. The whole database was quietly abandoned a few years later. 

I can understand why Adobe would find B2E appealing, but it’s a mystery to me why anyone else would.


Sell Your Smarts

In the bygone days of print directories, one sales challenge stood out above all others: convincing buyers to regularly buy the new edition of the directory. The issue was that directory buyers weren’t easily convinced that enough had changed in the course of a year to justify purchasing the new edition. Everyone in the business back then knew about the nemesis that was the every-other-year buying pattern.

The marketing remedy for this problem (a problem that sounds positively quaint today) was to “quantify change.” This typically meant using actual database counts to prove that enough had changed to justify purchase of a new edition. Rather than saying “thousands of updated addresses,” a vague and not very compelling statement, publishers would say, “23,418 updated addresses.” By quantifying change, publishers offered specificity to describe their updating efforts, something that proved both credible and compelling to data buyers.
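
To make “quantifying change” concrete, here is a minimal sketch of how a publisher might produce such counts by comparing two editions of a directory. The database, table and column names are hypothetical.

```python
import sqlite3

# Hypothetical setup: each annual edition lives in its own table,
# keyed by a stable listing_id.
conn = sqlite3.connect("directory.db")

updated_addresses = conn.execute(
    """
    SELECT COUNT(*)
    FROM edition_new cur
    JOIN edition_old prev USING (listing_id)
    WHERE cur.address <> prev.address
    """
).fetchone()[0]

new_listings = conn.execute(
    """
    SELECT COUNT(*)
    FROM edition_new cur
    LEFT JOIN edition_old prev USING (listing_id)
    WHERE prev.listing_id IS NULL
    """
).fetchone()[0]

print(f"{updated_addresses:,} updated addresses")  # e.g. "23,418 updated addresses"
print(f"{new_listings:,} new listings")
```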

Fast forward to today, when the marketing challenge has been almost completely reversed. Data buyers have radically increased their expectations, based on the belief that all online databases are updated in real time and are completely current. It’s a belief based on wishful thinking and a fundamental misunderstanding of what the word “online” means. Of course, many data providers encouraged that thinking by saying that their databases were “updated daily.” What they meant was that they were making changes every day. What many data buyers heard was that the entire database was refreshed daily. It’s a disconnect, and it’s a big one.
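
The size of that disconnect is easy to show with back-of-the-envelope arithmetic. The figures below are invented purely for illustration:

```python
# "Updated daily" is not "refreshed daily": assume a database of one million
# records where the publisher genuinely touches 1,000 records every day.
total_records = 1_000_000
updates_per_day = 1_000

# If updates land roughly at random across the file, the chance a given
# record goes a full year without being touched is:
p_untouched = (1 - updates_per_day / total_records) ** 365
print(f"Records untouched after a year: {p_untouched:.0%}")  # ~69%
```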

These raised expectations and technological misunderstandings have complicated the marketing strategies of most data publishers. After all, it’s hard to sell your database on the basis of it being complete and current when that’s a basic expectation of your prospects. For that reason, many data publishers now sell on the basis of the quality of their data. But that’s a tough slog, because as I have said many times, quality is easy to claim and hard to prove. In a crazy way, hedge funds have it right: they buy databases like candy. They try them for a year or two, and if they can’t make money off them, they move on to the next database. That’s easy for hedge funds to do, but not for the average business. Selling on quality is further complicated by the fact that some publishers don’t want to talk about the source of their data (for reasons good and bad), and some publishers who license data from reputable sources are contractually prohibited from disclosing this information.

What’s a publisher to do? I think marketing strategy today has to be based on deep market knowledge. Too many data purveyors these days (especially the disruptive variety) are aggregators or packagers. Aggregators tend to focus on building the biggest datasets, and usually emphasize quantity over everything else. Packagers take a dataset (typically public domain data) and create a fancy user interface to add value. What’s lacking in both cases is the market knowledge that would allow them, for example, to confidently drop out unqualified records as opposed to blindly selling whatever they can get their hands on. It’s also about building user interfaces that meet real industry-specific needs as opposed to generic searching and reporting.
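
As a hedged illustration of what that market knowledge looks like in practice, here is a toy qualification filter of the sort a domain expert might encode and a generic aggregator never would. The industry, rules and field names are all invented:

```python
# Invented example: a restaurant-industry database, where domain knowledge
# says certain records are noise even though an aggregator would keep them.
def is_qualified(record: dict) -> bool:
    if record.get("status") in {"closed", "inactive"}:
        return False
    # Industry insight: a "restaurant" with no health permit and no seating
    # is almost always a ghost listing or a mail drop.
    if not record.get("health_permit_id") and record.get("seats", 0) == 0:
        return False
    return True

listings = [
    {"name": "Main St Diner", "status": "open", "health_permit_id": "H-101", "seats": 40},
    {"name": "Ghost Kitchen LLC", "status": "open", "health_permit_id": None, "seats": 0},
]
qualified = [r for r in listings if is_qualified(r)]
print([r["name"] for r in qualified])  # ['Main St Diner']
```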

Business is getting more complex and specialized all the time. And I see time and again that the most successful data publishers come out of the industries they serve and build databases to solve real and painful business problems that they experienced themselves. Artificial intelligence and fancy algorithms are all well and good, but their answers can’t be any better than the underlying data on which they operate. Think of data companies such as PitchBook and CompStak.

To succeed in the data business today, you need to sell your smarts. That means you have to demonstrate you know your market and the needs of your market better than anyone, and that you have built a dataset uniquely capable of addressing those needs. Fancy sales triggers are wonderful, but if they are monitoring the wrong companies or the wrong events, they’ll produce more noise than signal, defeating their purpose.

Going forward, it won’t be about having the most companies. It will be about having the right companies. It won’t be about having the most data elements. It will be about having the right data elements. And you determine both by deeply understanding how your market works and what business problems need to be solved. Determine how to demonstrate your market knowledge and you’ll have a winning marketing strategy that will be effective for years to come.


Open Data Opens New Competitive Front

Recently signed into law, the Foundations for Evidence-Based Policymaking Act is going to have a big impact on the data business. It contains provisions to open up all non-sensitive federal databases and make them easily available in machine-readable, non-proprietary formats. Moreover, every federal agency is now obliged to publish a master catalog of all its datasets in order to make them more readily accessible.
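
To see what “machine-readable” means in practice, data.gov already exposes its catalog of federal datasets through a CKAN API, and the new agency catalogs will presumably work along similar lines. A minimal sketch (the search term is just an example):

```python
import json
import urllib.request

# Search the data.gov catalog (a CKAN instance) for matching datasets.
url = ("https://catalog.data.gov/api/3/action/package_search"
       "?q=business+licenses&rows=5")
with urllib.request.urlopen(url) as resp:
    payload = json.load(resp)

for dataset in payload["result"]["results"]:
    print(dataset["title"])
    for res in dataset.get("resources", []):
        # Each resource advertises a format (CSV, JSON, ...) and a download URL.
        print("   ", res.get("format"), res.get("url"))
```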

Federal government databases are the gift that keeps on giving. Because they are generally the result of regulatory/compliance activity by the government, they are quite complete, and the data quite trustworthy. Moreover, the great shift online has made it easier for government agencies to require more frequent data updates. And with more data coming to these agencies electronically, the notoriously bad government data entry of years past has largely disappeared. Best of all, you can obtain these databases at little or no charge to use as you please.

However, this new push for open formats and easy availability is a two-edged sword. Many of the great data companies that have been built in whole or in part on government data got significant advantage from the complexity and obscurity of that data. Indeed, government data has been open for decades now – you just needed to know it existed, what it was called and who to talk to in order to get your hands on it. This was actually a meaningful barrier to entry for many years.

While it won’t happen overnight, increased data transparency and availability is likely to create a new wave of industry disruption. These government datasets are catnip to Silicon Valley start-ups because these companies develop software and don’t have the skills or interest to compile data. “Plug and play” data will assuredly attract these new players, and they will cause havoc with many established data providers.

How do you fight back against this coming onslaught? The key is to understand the Achilles’ heel of these companies. Not only do these companies tend not to understand data; most of them actively dislike it. That means you can find competitive advantage by augmenting your data with proprietary data elements, or even with other public data that might need to be cleaned and normalized. Think about emphasizing historical data, which is often harder for new entrants to obtain. These disruptive players will win every time if the battlefield is the user interface or fancy reports. Change the battlefield to the data itself, and the advantage shifts back to you.
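
One hedged sketch of what “changing the battlefield” can look like: take the public file every entrant can download, clean it, and join it to proprietary fields that only your market knowledge lets you maintain. The filenames and columns here are hypothetical.

```python
import pandas as pd

# The public-domain file anyone can download (hypothetical name and columns).
public = pd.read_csv("gov_registrations.csv")        # company_id, name, state, ...

# Proprietary fields only you maintain: verified contacts, revenue estimates,
# historical values a new entrant can't reconstruct.
proprietary = pd.read_csv("our_field_research.csv")  # company_id, key_contact, est_revenue

# Clean and normalize the messy public data before joining.
public["name"] = public["name"].str.strip().str.title()
public = public.drop_duplicates(subset="company_id")

# The augmented file is the product a plug-and-play competitor can't replicate.
augmented = public.merge(proprietary, on="company_id", how="left")
augmented.to_csv("augmented_file.csv", index=False)
```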