
Does B+C=E?

When Steve Lucas talks, people listen. After all, Steve was the founder of Marketo, which was sold to Adobe last year for $4.8 billion. Now the SVP of Digital Experience at Adobe, Steve gave a talk at the recent Adobe Summit in Las Vegas to describe his vision of the future, which revolves around B2B and B2C merging to become B2E – Business to Everyone.

 Steve acknowledged that B2C is all about individuals in their personal buying capacities, and that B2B is about accounts – typically groups of buyers and influencers within an organization, so he understands where and how B2B and B2C differ. But he then cites Amazon as a company that is both a B2C seller (think Whole Foods) and a B2B seller (think Amazon Web Services). Steve thinks there are lots of B2C/B2B sellers out there, and that there is a great unmet need for a single, integrated marketing platform. May I politely disagree?

 Sure, there are always a few hubristic companies that are flush with cash and feeling their oats that will be willing to chase this inchoate vision of the future. But are there many of these companies? Most companies are still trying to get their B2B or B2C marketing straight. B2B marketers are only a few years into their embrace of Account-Based Marketing (a new name for a very old concept, but that simply reinforces my point) and B2C marketers are warily picking their way through a growing minefield of privacy and regulatory rules that actually creates a bias against being too cutting edge.

As to the information need, a simple question: why? In a B2E world, Amazon would indeed be able to know that, for example, the VP of Purchasing at Boeing likes organic lemons. But how does this knowledge advance the sale of either Amazon cloud services or Whole Foods produce? Indeed, B2E creates the opportunity to build such detailed dossiers that the end results will toggle between scary and silly.

There are already B2B data companies that build detailed personal profiles of top corporate executives to help with high-level selling. That makes sense. Trying to do the same thing with even more granular detail and at scale doesn’t make sense.

B2E reeks of “wouldn’t it be cool if…” syndrome. Making databases bigger and more complex doesn’t inherently make them more powerful or useful. I remember a marketing database at AT&T many years back. It was state of the art for its time, centrally tracking every purchase of AT&T equipment, services and merchandise for 20 million customers. Then one day, an AT&T executive had an epiphany: not only should the database track what customers bought, it should also track everything they were offered but didn’t buy. Everyone thought the concept was brilliant, though at the same time nobody could articulate how all this new information might be used. The initiative involved building a huge new data center, and the sheer volume of data being collected and stored brought the system to its knees. The system became so complex and so slow that the marketing organizations stopped using it … for anything. The whole database was quietly abandoned a few years later. 

I can understand why Adobe would find B2E appealing, but it’s a mystery to me why anyone else would.


Sell Your Smarts

In the bygone days of print directories, one sales challenge stood out above all others: convincing buyers to regularly buy the new edition of the directory. The issue was that directory buyers weren't easily convinced that enough had changed in the course of a year to justify purchasing the new edition. Everyone in the business back then knew the nemesis that was the every-other-year buying pattern.

 The marketing remedy for this problem (a problem that sounds positively quaint today) was to “quantify change.” This typically meant using actual database counts to prove that enough had changed to justify purchase of a new edition. Rather than saying “thousands of updated addresses,” a vague and not very compelling statement, publishers would say, “23,418 updated addresses.” By quantifying change, publishers offered specificity to describe their updating efforts, something that proved both credible and compelling to data buyers.

 Fast forward to today when the marketing challenge has almost been completely reversed. Data buyers have radically increased their expectations, based on the belief that all online databases are updated in real-time and are completely current. It’s a belief based on wishful thinking and a fundamental misunderstanding of what the word “online” means. Of course, many data providers encouraged that thinking by saying that their databases were “updated daily.” That meant that they were making changes every day. To many data buyers, it meant that the entire database was refreshed and updated daily. It’s a disconnect, and it’s a big one.

These raised expectations and technological misunderstandings have complicated the marketing strategies of most data publishers. After all, it's hard to sell your database on the basis of it being complete and current when that's a basic expectation of your prospects. For that reason, many data publishers now sell on the basis of the quality of their data. But that's a tough slog, because as I have said many times, quality is easy to claim and hard to prove. In a crazy way, hedge funds have it right: they buy databases like candy. They try them for a year or two, and if they can't make money off them, they move on to the next database. That's easy for hedge funds to do, but not for the average business. Selling on quality is further complicated by the fact that some publishers don't want to talk about the source of their data (for reasons good and bad), and some publishers who license data from reputable sources are contractually prohibited from disclosing this information.

What’s a publisher to do? I think marketing strategy today has to be based on deep market knowledge. Too many data purveyors these days (especially the disruptive variety) are aggregators or packagers. Aggregators tend to focus on building the biggest datasets, and usually emphasize quantity over everything else. Packagers take a dataset (typically public domain data) and create a fancy user interface to add value. What’s lacking in both cases is the market knowledge that would allow them, for example, to confidently drop out unqualified records as opposed to blindly selling whatever they can get their hands on. It’s also about building user interfaces that meet real industry-specific needs as opposed to generic searching and reporting.

Business is getting more complex and specialized all the time. And I see time and again that the most successful data publishers come out of the industries they serve and build databases to solve real and painful business problems that they experienced themselves. Artificial intelligence and fancy algorithms are all well and good, but their answers can’t be any better than the underlying data on which they operate. Think of data companies such as PitchBook and CompStak.

To succeed in the data business today, you need to sell your smarts. That means you have to demonstrate you know your market and the needs of your market better than anyone, and that you have built a dataset uniquely capable of addressing those needs. Fancy sales triggers are wonderful, but if they are monitoring the wrong companies or the wrong events, they’ll produce more noise than signal, defeating their purpose.

Going forward, it won't be about having the most companies. It will be about having the right companies. It won't be about having the most data elements. It will be about having the right data elements. And you determine both by deeply understanding how your market works and what business problems need to be solved. Determine how to demonstrate your market knowledge and you'll have a winning marketing strategy that will be effective for years to come.

 

Open Data Opens New Competitive Front

Recently signed into law, the Foundations for Evidence-Based Policymaking Act is going to have a big impact on the data business. It contains provisions to open up all non-sensitive federal databases and make them easily available in machine-readable, non-proprietary formats. Moreover, every federal agency is now obliged to publish a master catalog of all its datasets in order to make them more readily accessible.

Federal government databases are the gift that keeps on giving. Because they are generally the result of regulatory/compliance activity by the government, they are quite complete, and the data quite trustworthy. Moreover, the great shift online has made it easier for government agencies to require more frequent data updates. And with more data coming to these agencies electronically, the notoriously bad government data entry of years past has largely disappeared. Best of all, you can obtain these databases at little or no charge to use as you please.

However, this new push for open formats and easy availability is a double-edged sword. Many of the great data companies that have been built in whole or in part on government data got significant advantage from the complexity and obscurity of that data. Indeed, government data has been open for decades now – you just needed to know it existed, what it was called and who to talk to in order to get your hands on it. This was actually a meaningful barrier to entry for many years.

While it won’t happen overnight, increased data transparency and availability is likely to create a new wave of industry disruption. These government datasets are catnip to Silicon Valley start-ups because these companies develop software and don’t have the skills or interest to compile data. “Plug and play” data will assuredly attract these new players, and they will cause havoc with many established data providers.

How do you fight back against this coming onslaught? The key is to understand the Achilles heel of these companies. Not only do these companies tend not to understand data; most of them actively dislike it. That means that you can find competitive advantage by augmenting your data with proprietary data elements or even other public data that might need to be cleaned and normalized. Think about emphasizing historical data, which is often harder for new entrants to obtain. These disruptive players will win every time if the battlefield is the user interface or fancy reports. Change the battlefield to the data itself, and the advantage shifts back to you.

Marketing for Dummies

The composition of my email inbox has changed dramatically over the last several months, and it’s given me fresh insight into how data is being used by marketers. Apparently, contact data has found increased importance as the raw material needed to power marketing automation software.

Every day now, I am accosted not with simple email solicitations, but email campaigns, all relentlessly determined either to trick me into a conversation with a salesperson, or turn me into a customer by grinding me into submission through endless messaging. Marketing automation technology is widely being used as a “fire and forget” weapon. Load in a series of messages, load in a mailing list, and watch the leads roll in.

Marketing automation platforms do in fact offer a sophisticated new approach to marketing. But where things go wrong is that customers are expected to supply the sophistication, not the software. The two main areas of abuse:

  • Trying to fake a relationship in order to encourage a response. You’ve probably seen them: the carefully worded emails written to imply you’ve had previous contact with the sender. Should you fail to respond, you keep getting more emails (each with the full email chain), all written to make you feel as if you dropped the ball at some point, with the hope that concern, confusion or guilt will push you to engage. I have just one question about this: have you really created a qualified prospect by getting someone to contact you under false pretenses? And since for this deception to work, the emails need to look personal, that means no CAN-SPAM compliant opt-out link. You’re going to receive these emails until the sender gets tired of sending them. 

  • Blasting out repeated messages to an unqualified list. Do I really need to repair the roof on my office building? There are plenty of clues (starting with my industry classification code) to suggest you are wasting your time. Ditto that for robots to automate my factory. Offer the average marketer 100 perfectly qualified in-market leads or 10,000 lightly qualified contacts, and the sad fact is that the majority will take the big list every time.

My simple point in all this is that even with vastly improved data and state-of-the-art tools, most marketers use them only to push more stuff out faster. Yes, even in 2019, marketers still talk targeting but buy volume, and this translates to their data buying practices as well. As an industry, we can offer our customers so much more. Unfortunately, there are still too many people doing marketing for dummies.

 

Nice Try, Moody's

For over a decade I have watched with interest as a company called CoStar became the largest player in commercial real estate data. It achieved this feat – and a market cap of over $13 billion – by old-fashioned data compilation, well-timed acquisitions, and aggressive litigation to keep competitors at bay.

What resulted is an effective monopoly in the commercial real estate space. CoStar achieved this by never cutting corners on primary data collection. As just one example, at one point it had a fleet of trucks snapping photos of every commercial property in the country. CoStar was never shy about getting on the phone too, collecting valuable leasing data from brokers and property owners nationwide. Marry all that proprietary data with public records data and a strong online platform, and you have a business that is highly profitable and nearly impregnable.

Data companies in this privileged position do sometimes suffer at the hands of competitors, but nine times out of ten, it's because of self-inflicted damage. Companies that become data monopolies have to be endlessly vigilant about not becoming arrogant or charging extortionate prices, because being hated by your customers provides an opening for competitive players. So too does complacency, and a failure to invest in the business to enhance the value of its products and keep up with changing market needs.

It doesn’t seem that CoStar has made any of these mistakes, but it is feeling new competitive heat anyway from another information giant, Moody’s (market cap $31 billion).

Moody’s (through its Moody’s Analytics division) has never been a big player in commercial real estate data, but having decided it wants a piece of this market, it has been spending heavily on acquisitions to buy its way in. The centerpiece of its acquisitions was the $278 million purchase of commercial real estate analytics firm REIS last year. Moody’s also made “strategic investments” in a number of other industry data providers.

So is it curtains for CoStar? I think not. Moody’s has spent huge amounts of money to position itself to compete for only a small portion of the market CoStar serves (think banks and real estate investors). Moreover, Moody’s will be in large part dependent on data it doesn’t own, sourced from companies selling into the same market, meaning that a lot of the data Moody’s will offer will come heavily restricted. 

Perhaps most importantly, CoStar’s proprietary data (commercial real estate inventory and listings data) remains proprietary and untouchable. My take is Moody’s has over-spent for the opportunity to enter a bruising battle with an established company whose smarts and street-fighting skills are well established. Moody’s will build a business here, but it will be one much smaller than its ambitions, and one that will take relatively little revenue from CoStar. Data franchises are strong and it usually takes more than a large checkbook to bring them down.