Best Practices

Choose Your Customer

From the standpoint of “lessons learned,” one of the most interesting data companies out there is TrueCar.

Founded in 2005 as Zag.com, TrueCar provides consumers with data on what other consumers actually paid for specific vehicles in their local area. You can imagine the value to consumers if they could walk into dealerships with printouts of the lowest price recently paid for any given vehicle. 

The original TrueCar business model is awe-inspiring. It convinced thousands of car dealers to give it detailed sales data, including the final price paid for every car they sold. TrueCar aggregated the data and gave it to consumers for free. In exchange, the dealers got sales leads, for which they paid a fee on every sale.

 Did it work? Indeed it did. TrueCar was an industry disruptor well before the term had even been coined. As a matter of fact, TrueCar worked so well that dealers started an organized revolt in 2012 that cost TrueCar over one-third of its dealer customers.

The problem was with the TrueCar model. TrueCar collected sales data from dealers, then essentially weaponized it, allowing consumers to purchase cars with little or no dealer profit. Moreover, after TrueCar allowed consumers to purchase cars on the cheap, it then charged dealers a fee for every sale! Eventually, dealers realized they were paying a third party to destroy their margins and decided not to play anymore.

TrueCar was left with a stark choice: close up shop or find a new business model. TrueCar elected the latter, pivoting to a more dealer-friendly model that provided price data in ways that allowed dealers to better preserve their margins. It worked. TrueCar rebuilt its business, and successfully went public in 2014.

A happy ending? Not entirely. TrueCar, which had spent tens of millions to build its brand and site traffic by offering data on the cheapest prices for cars, quietly shifted to offering what it calls “fair prices” for cars, without telling the consumers who visited its website. Lawsuits followed.

There are four important lessons here. First, you can succeed in disrupting an industry and still fail if you are dependent on that industry to support what you are doing. Second, when it comes to B2C data businesses, you really need to pick a side. Third, if you change your revenue model in a way that impacts any of your customers, it’s best to be clear and up-front about it. In fact, if you feel compelled to be sneaky about it, that’s a clue your new business model is flawed. Fourth, and I’ve said it before, market disruption is a strategy, not a business requirement.

Getting From A to B

When I started in the data publishing business decades ago, information products were largely paper-based (think directories), and the selling of information products was largely paper-based as well (think direct mail). Fast forward to today, and now we’re mostly selling online subscriptions via online marketing, and everyone is better off for it, or so it would seem.

Yet in the great shift from offline to online marketing, what didn’t seem to shift over were all the people who really understood offline marketing. These people tended to know their stuff, for the simple reason that direct mail was expensive. Too many mistakes and you would be out of a job … or out of business.

As a result, the development of the online marketing canon was a tabula rasa exercise. I still vividly remember sitting in a seminar for online marketers in 1999 as the speaker described an extraordinary new marketing concept: in order to find the best price for his product, he had split his list in two and sent each half the same offer but with different price points. He said the concept could be used dozens of different ways, and because it was new there wasn’t even a name for it. As dozens of online marketers from household-name companies furiously scribbled notes, I remember thinking that one possible name the group might want to consider was “A/B testing.” These young marketers were so convinced that what they were doing was new and different that it never occurred to them to explore what had been learned before they arrived on the scene.

Sure, online marketing has come a long way in the last 20 years, and there are now aspects of online marketing that don’t have any offline parallel. But the basics live on.

In talking to the pricing research experts at TRC, folks whose deep knowledge of market research never fails to impress, I learned of a recent study conducted by researchers at Stanford and the University of Chicago. It sought to quantify the value of adding personalization to email messages. The results were stunning: the research found a 21% lift in email opens, a 31% lift in the number of inquiries, and as a bonus, a 17% drop in the number of unsubscribes. Online gold! But, just for the record, personalization delivered magical results in offline direct mail as well, so while these research results are good news, at the same time they’re not really new news. 

Yet, one recent study finds that while 81% of online marketers claim they send personalized email, only 3% of consumers feel they regularly receive personalized email. The discrepancy comes from the difference between personalizing an email and effectively personalizing an email. The best online marketers know that there’s more to it than just dropping a name in the email somewhere.
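To make the point concrete, here is a toy sketch in Python of what personalization beyond name-dropping might look like: the subject line and body vary by what we actually know about each recipient. Every name, field and message below is hypothetical; a real system would draw these fields from an actual subscriber database.

```python
# Hypothetical subscriber records; field names are illustrative only.
subscribers = [
    {"name": "Dana", "last_product_viewed": "sales-leads database",
     "industry": "automotive"},
    {"name": "Lee", "last_product_viewed": "executive profiles",
     "industry": "publishing"},
]

def personalize(sub):
    """Build a subject and body tailored to the subscriber's known interests."""
    subject = f"New {sub['industry']} data on {sub['last_product_viewed']}"
    body = (f"Hi {sub['name']},\n"
            f"Since you recently looked at our {sub['last_product_viewed']}, "
            f"here is what's new for the {sub['industry']} sector.")
    return subject, body

for sub in subscribers:
    subject, body = personalize(sub)
    print(subject)
```

The point of the sketch is simply that the content itself changes per recipient, not just the salutation.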

How do you figure out what’s effective? Testing, endless testing, having a good research methodology (such as not testing multiple things in one email), and monitoring and recording results carefully. Not sure where to start? Well, you might consider this new thing — it’s called an A/B test.
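For those starting from scratch, the mechanics are simple enough to sketch. Here is a minimal Python example, using made-up send and open counts, of splitting a list in two and checking whether the difference between the halves is bigger than chance, via a standard pooled two-proportion z-test:

```python
import math

# Hypothetical results from a single-variable A/B test: two equal random
# halves of a mailing list, identical offer, different subject lines.
a_sent, a_opens = 5000, 900    # variant A: 18.0% open rate
b_sent, b_opens = 5000, 1050   # variant B: 21.0% open rate

p_a = a_opens / a_sent
p_b = b_opens / b_sent

# Pooled two-proportion z-test: is the observed lift bigger than chance?
p_pool = (a_opens + b_opens) / (a_sent + b_sent)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / a_sent + 1 / b_sent))
z = (p_b - p_a) / se

print(f"A: {p_a:.1%}  B: {p_b:.1%}  lift: {(p_b - p_a) / p_a:.1%}")
print(f"z = {z:.2f}  (|z| > 1.96 is roughly significant at the 95% level)")
```

Note that the test varies exactly one thing; testing two subject lines and two prices at once would leave you unable to say which change drove the result.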

Form Follows Function

Numerous online marketing trade associations have announced their latest initiative to bring structure and transparency to an industry that can only be called the Wild, Wild West of the data world: online audience data. Their approach offers some useful lessons to data publishers.

At its brand-new one-page website (www.datalabel.org), the coalition introduces its “Data Transparency Label.” In an attempt to be hip and clever, the coalition has modeled its data record on the familiar nutrition labels found on most food packaging today. It’s undeniably cute, but it’s a classic case of form not following function. Having decided on this approach, the designers of this label immediately boxed themselves in as to what kind and how much data they could present to buyers. I see this all the time with new data products: so much emphasis is placed on how the data looks, its visual presentation, that important data elements often end up getting minimized, hidden or even discarded. Pleasing visual presentation is desirable, but it shouldn’t come at the expense of our data.

The other constraint you immediately see is that this label format works great if an audience is derived from a single source by a single data company. But the real world is far messier than that. What if the audience is aggregated from multiple sources? What if its value derives from complex signal data that may be sourced from multiple third parties? What about resellers? Life is complicated. This label pretends it is simple. Having spent many years involved with data cards for mailing lists, during which time I became deeply frustrated by the lost opportunities caused by a simple approach used to describe increasingly sophisticated products, I see history about to repeat itself.

My biggest objection to this new label is that its focus seems to be 100% on transparency, with little attention being paid to equally valuable uses such as sourcing and comparison. The designers of this label allude to a taxonomy that will be used for classification purposes, but it’s only mentioned in passing and doesn’t feel like a priority focus at all. Perhaps most importantly, there’s no hint of whether these labels will be offered as a searchable database. There’s a potentially powerful audience sourcing tool here, and if anyone is considering that, they aren’t talking about it.

Take-aways to consider:

·     When designing a new data product, don’t allow yourself to get boxed in by design

·     The real world is messy, with lots of exceptions. If you don’t provide for these exceptions, you’ll have a product that will never reach its full potential

·     Always remember that a good data product is much more than a filing cabinet that is used to look up specific facts. A thoughtful, well-organized dataset can deliver a lot more value to users and often to multiple groups of users. Don’t limit yourself to a single use case for your product – you’ll just be limiting your opportunity.

Just in Time Data

Databases are tricky beasts because their content is both fluid and volatile. There are likely no databases that are 100% comprehensive and 100% accurate at the same time. This problem has only been exacerbated by increasingly ambitious data products that continue to push the envelope in terms of both the breadth and depth of their coverage.

Data publishers have long had to deal with this issue. The most widely adopted approach has been what might be called “data triage.” This is when the publisher quickly updates a data record in response to a subscriber request.

I first encountered this approach with long-time data pioneer D&B. If you requested a background report on a company for which D&B had either a skeleton record or out-of-date information, D&B adroitly turned this potential problem into a show of commitment to its data quality. The D&B approach was to provide you with whatever stale or skimpy information it had on file, so the subscriber had something potentially useful to work with. But D&B would also indicate, in bold type, words to the effect of: “This background report contains information that may be outdated. To maintain our data quality standards, a D&B investigator will update this report, and an updated report will be sent to you within 48 hours.”

Data triage would begin immediately. D&B would have one of its more experienced researchers call the company and extract as much information as possible. The record was updated, the new information was sent to the subscriber, and anyone else requesting that background report would benefit from the updated information as well.
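In software terms, this pattern amounts to “serve what you have, flag it, and queue the research.” Here is a toy sketch in Python; the record fields, freshness threshold and queue are all illustrative assumptions, not a description of any actual D&B system:

```python
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=365)  # hypothetical freshness threshold

# Toy in-memory "database" and triage queue; names are illustrative.
records = {
    "acme": {"name": "Acme Corp", "updated": datetime(2020, 1, 15)},
}
triage_queue = []

def fetch_report(company_id, now):
    """Serve whatever is on file, but flag stale records for rapid re-research."""
    record = records[company_id]
    stale = now - record["updated"] > STALE_AFTER
    if stale:
        triage_queue.append(company_id)  # researcher follows up within 48 hours
    return record, stale

record, stale = fetch_report("acme", datetime(2024, 6, 1))
if stale:
    print("This report may be outdated; an updated report will follow within 48 hours.")
```

The essential design choice is that the subscriber is never turned away empty-handed: the stale record ships immediately, and the update benefits every future requester.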

A variation on this approach is not to update existing records on request, but to create entirely new ones. Not in our database? Just let us know, and we’ll do the needed research for you pronto. Boardroom Insiders, a company that sells in-depth profiles of C-suite executives, does this very successfully, as does The Red Flag Group.

The key to succeeding with data triage? First, you have to set yourself up to respond quickly. Your customers will appreciate the custom work you are doing for them, but they still want the information quickly. Second, use this technique to supplement your database, not substitute for it. If you are not satisfying most of your subscribers most of the time with the data you have already collected, you’re really not a data publisher, you’re a custom research shop, and that’s a far less attractive business. Finally, learn from these research requests. Why didn’t you already have the company or individual in question in your database? Are the information needs of your subscribers shifting? Are there new segments of the market you need to cover? There’s a lot you can learn from custom requests, especially if you can find patterns in them.

Data triage is a smart tactic that many data publishers can use. But always remember, no matter how impressive the service, the subscriber still has to wait for data. Ultimately, this nice courtesy becomes a real inconvenience if the subscriber encounters it too often. What you need to do is both satisfy your customers most of the time, and be there for them when you fall short.

The Low Hanging Fruit Hiding in Plain Sight

One of the unintended consequences of the rapid shift to sales force automation tools, CRM systems and large-scale lead generation campaigns is that things only work well when you target prospects and they respond to your promotions. It’s an outbound world now. Pity the poor prospect who, unprompted, calls you to buy something!

I have recently been in that position, having to make sales inquiries to data companies on behalf of clients. At first, I simply bemoaned the quality of salespeople these days. But then I realized it wasn’t the salespeople who were the problem; it was me! None of these companies had put any thought into how to handle an unsolicited lead, probably because they assumed it was a non-issue. But it’s a big issue. I consistently fell through the cracks because none of these companies had made any provision to deal with me. I didn’t fit their workflow.

The first thing you learn about being a buyer in this situation is that you better not be in a hurry. Callbacks to unsolicited leads in my recent experiences ranged from two to four days. And when I did get a response, it was often by a screener, charged with determining if my business was worth a salesperson’s time. Indeed, after being screened by one major data provider, I received a surprisingly curt email informing me that the size of my potential order didn’t merit their attention, but that my name had been passed along to one of their distributors, and I would hear from them in due course. I’m still waiting after three weeks.

I’ve also learned that using the phone doesn’t accelerate the buying process at all. In fact, it makes things worse. Two of the data companies I contacted had automated attendants that would helpfully connect me … but only if I already knew who I wanted to talk to. In one case, I actually reached a live person who answered the company’s main number. When I asked to speak to someone in Sales, I got the response I hear nearly 100% of the time: there are no salespeople in the office. When I asked to leave a message for someone in Sales, I got a long pause, followed by a very hesitant and somewhat dubious “sure, if you really want to.” One receptionist actually made the mistake of connecting me to someone in the sales department. I say “mistake” because the person answering the phone said he “wasn’t allowed to talk to me,” but he’d have someone call me back. When I said I needed some basic product information first, he did in fact provide it, after swearing me to secrecy because “I could get in a lot of trouble for doing this.”

Since companies have clearly abandoned the telephone as a means of inbound contact, you’d think they would pay close attention to incoming leads by email. If only that were true! After submitting my sales inquiries to three companies via the ever-popular “contact us” form, proving that I was not a robot, and in some cases being asked the size of my budget (required field), I sat back and waited. And then waited some more. One company responded fairly quickly, but the salesperson was apparently so incredulous that a sales lead would be unsolicited that I had to submit to a grilling via email to confirm my interest and my bona fides.

The second company responded three days later, and the salesperson apologetically asked for lots of information about me and my product requirements so that he could “get me in the system.” Once properly in the company’s lead stream, I had a satisfactory buying experience.

The third company? Three weeks and I am still waiting on a response.

You surely know where I am going with this: with so much technology and so many resources being devoted to lead cultivation, generation and management, we seem to have forgotten about the most valuable sales lead of all: the unsolicited inquiry. There is apparently no place for them in our automated workflows.

Not your problem? I challenge you: complete the form on your own company’s “contact us” page and sit back and wait, not with a stopwatch but with a calendar. If you want an even more dismal experience, call your own company’s main number and ask to speak to a salesperson. Yeah, it’s that bad ... which means the opportunity for quick increased revenue is that good!