Best Practices

Getting From A to B

When I started in the data publishing business decades ago, information products were largely paper-based (think directories), and the selling of information products was largely paper-based as well (think direct mail). Fast forward to today, and now we’re mostly selling online subscriptions via online marketing, and everyone is better off for it, or so it would seem.

Yet in the great shift from offline to online marketing, what didn’t seem to shift over were all the people who really understood offline marketing. These people tended to know their stuff, for the simple reason that direct mail was expensive. Too many mistakes and you would be out of a job … or out of business.

As a result, the development of online marketing canon was a tabula rasa exercise. I still vividly remember sitting in a seminar for online marketers in 1999 as the speaker described an extraordinary new marketing concept: in order to find the best price for his product, he had split his list in two and sent each half the same offer but with different price points. He said the concept could be used dozens of different ways, and because it was new there wasn’t even a name for it. As dozens of online marketers from household name companies furiously scribbled notes, I remember thinking that one possible name the group might want to consider was “A/B testing.” These young marketers were so convinced that what they were doing was so new and so different that it never occurred to them to explore what had been learned before they arrived on the scene.

Sure, online marketing has come a long way in the last 20 years, and there are now aspects of online marketing that don’t have any offline parallel. But the basics live on.

In talking to the pricing research experts at TRC, folks whose deep knowledge of market research never fails to impress, I learned of a recent study conducted by researchers at Stanford and the University of Chicago. It sought to quantify the value of adding personalization to email messages. The results were stunning: the research found a 21% lift in email opens, a 31% lift in the number of inquiries, and as a bonus, a 17% drop in the number of unsubscribes. Online gold! But, just for the record, personalization delivered magical results in offline direct mail as well, so while these research results are good news, at the same time they’re not really new news. 

Yet, one recent study finds that while 81% of online marketers claim they send personalized email, only 3% of consumers feel they regularly receive personalized email. The discrepancy comes from the difference between personalizing an email and effectively personalizing an email. The best online marketers know that there’s more to it than just dropping a name in the email somewhere.

How do you figure out what’s effective? Testing, endless testing, having a good research methodology (such as not testing multiple things in one email), and monitoring and recording results carefully. Not sure where to start? Well, you might consider this new thing — it’s called an A/B test.
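The mechanics of evaluating a simple A/B split haven’t changed since the direct-mail era. As an illustrative sketch (the send counts and open counts below are hypothetical), a two-proportion z-test tells you whether the difference in open rates between the two halves of a list is larger than chance alone would explain:

```python
import math

def ab_test_significance(opens_a, sends_a, opens_b, sends_b):
    """Two-proportion z-test: did variant B's open rate differ
    from variant A's by more than chance would explain?"""
    p_a = opens_a / sends_a
    p_b = opens_b / sends_b
    # Pooled open rate under the null hypothesis (no real difference)
    p_pool = (opens_a + opens_b) / (sends_a + sends_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / sends_a + 1 / sends_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical test: 5,000 sends per variant; B personalized the subject line.
p_a, p_b, z, p = ab_test_significance(900, 5000, 1050, 5000)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}  p = {p:.4f}")
```

A small p-value (conventionally below 0.05) says the lift is probably real; a large one says you need a bigger list or a bigger effect before drawing conclusions. This is exactly why you test one change at a time: with two variables in play, the arithmetic can’t tell you which one moved the needle.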

Form Follows Function

A coalition of online marketing trade associations has announced its latest initiative to bring structure and transparency to an industry that can only be called the Wild, Wild West of the data world: online audience data. Its approach offers some useful lessons to data publishers.

At their brand-new one-page website (www.datalabel.org) this industry coalition is introducing its “Data Transparency Label.” In an attempt to be hip and clever, the coalition has modeled its data record on the familiar nutrition labels found on most food packaging today. It’s undeniably cute, but it’s a classic case of form not following function. Having decided on this approach, the designers of this label immediately boxed themselves in as to what kind and how much data they could present to buyers. I see this all the time with new data products: so much emphasis is placed on how the data looks, its visual presentation, that important data elements often end up getting minimized, hidden or even discarded. Pleasing visual presentation is desirable, but it shouldn’t come at the expense of our data.

The other constraint you immediately see is that this label format works great if an audience is derived from a single source by a single data company. But the real world is far messier than that. What if the audience is aggregated from multiple sources? What if its value derives from complex signal data that may be sourced from multiple third parties? What about resellers? Life is complicated. This label pretends it is simple. Having spent many years working with data cards for mailing lists, where I grew deeply frustrated by the opportunities lost to a simplistic format used to describe increasingly sophisticated products, I see history about to repeat itself.

My biggest objection to this new label is that its focus seems to be 100% on transparency, with little attention being paid to equally valuable uses such as sourcing and comparison. The designers of this label allude to a taxonomy that will be used for classification purposes, but it’s only mentioned in passing and doesn’t feel like a priority focus at all. Perhaps most importantly, there’s no hint of whether these labels will be offered as a searchable database. There’s a potentially powerful audience sourcing tool here, and if anyone is considering that, they aren’t talking about it.

Take-aways to consider:

·     When designing a new data product, don’t allow yourself to get boxed in by design

·     The real world is messy, with lots of exceptions. If you don’t provide for these exceptions, you’ll have a product that will never reach its full potential

·     Always remember that a good data product is much more than a filing cabinet that is used to look up specific facts. A thoughtful, well-organized dataset can deliver a lot more value to users and often to multiple groups of users. Don’t limit yourself to a single use case for your product – you’ll just be limiting your opportunity.

Just in Time Data

Databases are tricky beasts because their content is both fluid and volatile. There are likely no databases that are 100% comprehensive and 100% accurate at the same time. This problem has only been exacerbated by increasingly ambitious data products that continue to push the envelope in terms of both the breadth and depth of their coverage.

Data publishers have long had to deal with this issue. The most widely adopted approach has been what might be called “data triage.” This is when the publisher quickly updates a data record in response to a subscriber request.

I first encountered this approach with long-time data pioneer D&B. If you requested a background report on a company for which D&B had either a skeleton record or out-of-date information, D&B adroitly turned this potential problem into a show of commitment to its data quality. The D&B approach was to provide you with whatever stale or skimpy information it had on file in order to provide some data the subscriber might find useful. But D&B would also indicate in bold type words to the effect of, “this background report contains information that may be outdated. To maintain our data quality standards, a D&B investigator will update this report and an updated report will be sent to you within 48 hours.”

Data triage would begin immediately. D&B would have one of its more experienced researchers call the company and extract as much information as possible. The record was updated, the new information was sent to the subscriber, and anyone else requesting that background report would benefit from the updated information as well.

A variation on this approach is to offer not updates to existing records, but rather to create entirely new records on request. Not in our database? Just let us know, and we’ll do the needed research for you pronto. Boardroom Insiders, a company that sells in-depth profiles of C-suite executives, does this very successfully, as does The Red Flag Group.

The key to succeeding with data triage? First, you have to set yourself up to respond quickly. Your customers will appreciate the custom work you are doing for them, but they still want the information quickly. Secondly, use this technique to supplement your database, not substitute for it. If you are not satisfying most of your subscribers most of the time with the data you have already collected, you’re really not a data publisher, you’re a custom research shop, and that’s a far less attractive business. Finally, learn from these research requests. Why didn’t you already have the company or individual in question in your database? Are the information needs of your subscribers shifting? Are there new segments of the market you need to cover? There’s a lot you can learn from custom requests, especially if you can find patterns in these requests.

Data triage is a smart tactic that many data publishers can use. But always remember, no matter how impressive the service, the subscriber still has to wait for data. Ultimately, this nice courtesy becomes a real inconvenience if the subscriber encounters it too often. What you need to do is both satisfy your customers most of the time, and be there for them when you fall short.

The Low Hanging Fruit Hiding in Plain Sight

One of the unintended consequences of the rapid shift to sales force automation tools, CRM systems and large-scale lead generation campaigns is that things only work well when you target prospects and they respond to your promotions. It’s an outbound world now. Pity the poor prospect who, unprompted, calls you to buy something!

I have recently been in that position, having to make sales inquiries to data companies on behalf of clients. At first, I simply bemoaned the quality of salespeople these days. But then I realized it wasn’t the salespeople who were the problem; it was me! None of these companies had put any thought into how to handle an unsolicited lead, probably because they assumed it was a non-issue. But it’s a big issue. I consistently fell through the cracks because none of these companies had made any provision to deal with me. I didn’t fit their workflow.

The first thing you learn about being a buyer in this situation is that you better not be in a hurry. Callbacks to unsolicited leads in my recent experiences ranged from two to four days. And when I did get a response, it was often by a screener, charged with determining if my business was worth a salesperson’s time. Indeed, after being screened by one major data provider, I received a surprisingly curt email informing me that the size of my potential order didn’t merit their attention, but that my name had been passed along to one of their distributors, and I would hear from them in due course. I’m still waiting after three weeks.

I’ve also learned that using the phone doesn’t accelerate the buying process at all. In fact, it makes things worse. Two of the data companies I contacted had automated attendants that would helpfully connect me … but only if I already knew who I wanted to talk to. In one case, I actually reached a live person who answered the company’s main number. When I asked to speak to someone in Sales, I got the response I hear nearly 100% of the time: there are no salespeople in the office. When I asked to leave a message for someone in Sales, I got a long pause, followed by a very hesitant and somewhat dubious “sure, if you really want to.” One receptionist actually made the mistake of connecting me to someone in the sales department. I say “mistake” because the person answering the phone said he “wasn’t allowed to talk to me,” but he’d have someone call me back. When I said I needed some basic product information first, he did in fact provide it, after swearing me to secrecy because “I could get in a lot of trouble for doing this.”

Since companies have clearly abandoned the telephone as means of inbound contact, you think they would pay close attention to incoming leads by email. If only that was true! After submitting my sales inquiries to three companies via the ever popular “contact us” form, proving that I was not a robot, and in some cases being asked the size of my budget (required field), I sat back and waited. And then waited some more. One company responded fairly quickly, but the salesperson was apparently so incredulous that a sales lead would be unsolicited that I had to submit to a grilling via email to confirm my interest and my bona fides.

The second company responded three days later; the salesperson apologetically asked for lots of information about me and my product requirements so that he could “get me in the system.” Once properly in the company’s lead stream, I had a satisfactory buying experience.

The third company? Three weeks and I am still waiting on a response.

You surely know where I am going with this: with so much technology and so many resources being devoted to lead cultivation, generation and management, we seem to have forgotten about the most valuable sales lead of all: the unsolicited inquiry. There is apparently no place for them in our automated workflows.

Not your problem? I challenge you: complete the form on your own company’s “contact us” page and sit back and wait, not with a stopwatch but with a calendar. If you want an even more dismal experience, call your own company’s main number and ask to speak to a salesperson. Yeah, it’s that bad ... which means the opportunity for quick increased revenue is that good!

Thinking About Privacy and Data? Good.

We have heard a lot in the past few weeks about the travails of Facebook, as it became widely known that many millions of its user profiles had been, for lack of a better term, hacked. That in turn brought Facebook’s advertising microtargeting capabilities into focus, creating more widespread privacy concerns.

But does the average data publisher have to worry about privacy? The short answer is yes.

Data publishers, including B2B data publishers, often control a wealth of extremely valuable data. Many data publishers don’t fully appreciate what valuable insights they could glean from their own data. Fortunately, data thieves haven’t figured it out either … yet.

The highest value data in a typical commercial database isn’t the data itself, it’s the information on what users are doing with the data. Knowing, for example, that the head of acquisitions at a public company was doing deep research on another public company could be extremely valuable to certain people. Knowing that an executive suddenly started looking at job openings could be valuable. Knowing that five venture capital firms in three days had looked up information on a particular start-up could be extremely valuable. You get the idea.
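To see how little effort this kind of inference takes, here is a minimal sketch (all records and names are hypothetical) that scans a query log for records suddenly viewed by several distinct users, the "five VC firms in three days" pattern described above:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical query-log records: (who looked, what they looked at, when).
log = [
    ("vc_firm_1", "StartupCo", datetime(2018, 4, 2, 9)),
    ("vc_firm_2", "StartupCo", datetime(2018, 4, 3, 14)),
    ("vc_firm_3", "StartupCo", datetime(2018, 4, 4, 11)),
    ("analyst_9", "BigCorp",   datetime(2018, 4, 4, 16)),
]

def burst_lookups(log, window=timedelta(days=3), min_viewers=3):
    """Flag records viewed by several distinct users inside a short
    window -- the kind of inference a query log quietly enables."""
    by_target = defaultdict(list)
    for viewer, target, ts in log:
        by_target[target].append((ts, viewer))
    flagged = {}
    for target, views in by_target.items():
        views.sort()
        # Slide a window forward from each view; collect distinct viewers.
        for i, (start, _) in enumerate(views):
            inside = {v for ts, v in views[i:] if ts - start <= window}
            if len(inside) >= min_viewers:
                flagged[target] = sorted(inside)
                break
    return flagged

print(burst_lookups(log))
```

The point is not that you should run this analysis, but that anyone with access to your usage logs could. That is the asset to monetize where appropriate, and to protect everywhere else.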

We already sell some types of information about how users interact with data, and we do this with very little thought about how it might blow up in our faces. Some of our other data is clearly quite sensitive and we’d never sell it, but what if somebody stole it?

Back in 2013, Bloomberg came in for tough public scrutiny after it was revealed that its reporters had used Bloomberg terminal access data to track an individual in order to write a story. That’s pretty tame compared to the recent Facebook revelations, but it shows there is often tremendous inferential data hiding in the intersection between our databases and how our customers interact with them. Monetize where appropriate. Protect where appropriate. But whatever you do, don't ignore it.