
This Score Doesn't Compute

This week the College Board, operator of the SAT college admissions test, made a very big announcement: in addition to its traditional verbal and math scores, it will be adding a new score, which it is calling an “adversity score.”

In a nutshell, the purpose of the adversity score is to help college admissions officers “contextualize” the other two scores. Based primarily on area demographic data (crime rates, poverty rates, etc.) and school-specific data (number of AP courses offered, etc.), this new assessment will generate a score from 1 to 100, with 100 indicating that the student has experienced the highest level of adversity.

Public reaction so far has been mixed. Some see it as an honest effort to help combat college admission disparities. Others see it as a desperate business move by the College Board, which is facing an accelerating trend toward colleges adopting test-optional admission policies (over 1,000 colleges nationwide are currently test-optional).

I’m willing to stipulate that the College Board had its heart in the right place in developing this new score, but I am underwhelmed by its design and execution.

My first concern is that the College Board is keeping the design methodology of the score secret. I find that odd since the new score seems to rely on benign and objective Census and school data. However, at least a few published articles seemed to suggest that the College Board has included “proprietary data” as well. Let the conspiracy theories begin!

Secondly, the score is being kept secret from students for no good reason that I can see. All this policy does is add to adolescent and parental angst and uncertainty, while creating lots of new opportunities for high-priced advisors to suggest ways to game the score to advantage. And the recent college admissions scandal shows just how far some parents are willing to go to improve the scores of their children.

My third concern is that this new score is assigned to each individual student, when it is in reality a score of the school and its surrounding area. If the College Board had created a school scoring data product (one that could be easily linked to any student’s application) and sold it as a freestanding product, there would likely be no controversy around it. 

Perhaps most fundamentally though, the new score doesn’t work to strengthen or improve the original two scores. That’s because what it is measuring and how it measures is completely at odds with the original two scores. The new score is potentially useful, but it’s a bolt-on. Moreover, the way this score was positioned and launched opens it up to all the scrutiny and criticism the original scores have attracted, and that can’t be what the College Board wants. Already, Twitter is ablaze with people citing specific circumstances where the score would be inaccurate or yield unintended outcomes.

Scores and ratings can be extremely powerful. But the more powerful they become, the more carefully you need to tread in updating, modifying or extending them. The College Board hasn’t just created a new adversity score for students. It’s also likely to have caused a lot of new adversity for itself.

Choose Your Customer

From the standpoint of “lessons learned,” one of the most interesting data companies out there is TrueCar.

Founded in 2005 as Zag.com, TrueCar provides consumers with data on what other consumers actually paid for specific vehicles in their local area. You can imagine the value to consumers if they could walk into dealerships with printouts of the lowest price recently paid for any given vehicle. 

The original TrueCar business model is awe-inspiring. It convinced thousands of car dealers to give it detailed sales data, including the final price paid for every car they sold. TrueCar aggregated the data and gave it to consumers for free. In exchange, the dealers got sales leads, for which they paid a fee on every sale.

Did it work? Indeed it did. TrueCar was an industry disruptor well before the term had even been coined. As a matter of fact, TrueCar worked so well that dealers started an organized revolt in 2012 that cost TrueCar over one-third of its dealer customers.

The problem was with the TrueCar model. TrueCar collected sales data from dealers then essentially weaponized it, allowing consumers to purchase cars with little or no dealer profit. Moreover, after TrueCar allowed consumers to purchase cars on the cheap, it then charged dealers a fee for every sale! Eventually, dealers realized they were paying a third-party to destroy their margins, and decided not to play any more.

TrueCar was left with a stark choice: close up shop or find a new business model. TrueCar elected the latter, pivoting to a more dealer-friendly model that provided price data in ways that allowed dealers to better preserve their margins. It worked. TrueCar re-built its business, and successfully went public in 2014.

A happy ending? Not entirely. TrueCar, which had spent tens of millions to build its brand and site traffic by offering data on the cheapest prices for cars, quietly shifted to offering what it calls “fair prices” for cars without telling this to the consumers who visited its website. Lawsuits followed.  

There are four important lessons here. First, you can succeed in disrupting an industry and still fail if you are dependent on that industry to support what you are doing. Second, when it comes to B2C data businesses, you really need to pick a side. Third, if you change your revenue model in a way that impacts any of your customers, it’s best to be clear and up-front about it. In fact, if you feel compelled to be sneaky about it, that’s a clue your new business model is flawed. Fourth, and I’ve said it before, market disruption is a strategy, not a business requirement.

Getting From A to B

When I started in the data publishing business decades ago, information products were largely paper-based (think directories), and the selling of information products was largely paper-based as well (think direct mail). Fast forward to today, and now we’re mostly selling online subscriptions via online marketing, and everyone is better off for it, or so it would seem.

Yet in the great shift from offline to online marketing, what didn’t seem to shift over were all the people who really understood offline marketing. These people tended to know their stuff, for the simple reason that direct mail was expensive. Too many mistakes and you would be out of a job … or out of business.

As a result, the development of the online marketing canon was a tabula rasa exercise. I still vividly remember sitting in a seminar for online marketers in 1999 as the speaker described an extraordinary new marketing concept: in order to find the best price for his product, he had split his list in two and sent each half the same offer but with different price points. He said the concept could be used dozens of different ways, and because it was new there wasn’t even a name for it. As dozens of online marketers from household-name companies furiously scribbled notes, I remember thinking that one possible name the group might want to consider was “A/B testing.” These young marketers were so convinced that what they were doing was new and different that it never occurred to them to explore what had been learned before they arrived on the scene.

Sure, online marketing has come a long way in the last 20 years, and there are now aspects of online marketing that don’t have any offline parallel. But the basics live on.

In talking to the pricing research experts at TRC, folks whose deep knowledge of market research never fails to impress, I learned of a recent study conducted by researchers at Stanford and the University of Chicago. It sought to quantify the value of adding personalization to email messages. The results were stunning: the research found a 21% lift in email opens, a 31% lift in the number of inquiries, and as a bonus, a 17% drop in the number of unsubscribes. Online gold! But, just for the record, personalization delivered magical results in offline direct mail as well, so while these research results are good news, at the same time they’re not really new news. 

Yet, one recent study finds that while 81% of online marketers claim they send personalized email, only 3% of consumers feel they regularly receive personalized email. The discrepancy comes from the difference between personalizing an email and effectively personalizing an email. The best online marketers know that there’s more to it than just dropping a name in the email somewhere.

How do you figure out what’s effective? Testing, endless testing, having a good research methodology (such as not testing multiple things in one email), and monitoring and recording results carefully. Not sure where to start? Well, you might consider this new thing — it’s called an A/B test.
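For what it’s worth, here is a minimal sketch of what that testing discipline looks like in practice: a simple two-way split with a significance check on the resulting open rates. The list sizes and open counts below are invented purely for illustration.

    from math import sqrt
    from statistics import NormalDist

    # Hypothetical results from a two-way split test (illustrative numbers only):
    # each half of the list got the same offer at a different price point.
    sent_a, opens_a = 5000, 1100   # variant A: 22.0% open rate
    sent_b, opens_b = 5000, 1210   # variant B: 24.2% open rate

    rate_a = opens_a / sent_a
    rate_b = opens_b / sent_b

    # Two-proportion z-test: is the difference in open rates larger than
    # random chance alone would explain?
    pooled = (opens_a + opens_b) / (sent_a + sent_b)
    std_err = sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    z = (rate_b - rate_a) / std_err
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))

    print(f"A: {rate_a:.1%}  B: {rate_b:.1%}  z = {z:.2f}  p = {p_value:.3f}")
    # A small p-value (conventionally below 0.05) suggests the lift is real;
    # a large one means keep testing before declaring a winner.

The point isn’t the statistics; it’s that the winner gets decided by the data, one variable at a time.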

Form Follows Function

Numerous online marketing trade associations have announced their latest initiative to bring structure and transparency to an industry that can only be called the Wild, Wild West of the data world: online audience data. Their approach offers some useful lessons to data publishers.

At their brand-new, one-page website (www.datalabel.org), this industry coalition is introducing its “Data Transparency Label.” In an attempt to be hip and clever, the coalition has modeled its data record on the familiar nutrition labels found on most food packaging today. It’s undeniably cute, but it’s a classic case of form not following function. Having decided on this approach, the designers of this label immediately boxed themselves in as to what kind and how much data they could present to buyers. I see this all the time with new data products: so much emphasis is placed on how the data looks, its visual presentation, that important data elements often end up getting minimized, hidden or even discarded. Pleasing visual presentation is desirable, but it shouldn’t come at the expense of the data.

The other constraint you immediately see is that this label format works great if an audience is derived from a single source by a single data company. But the real world is far messier than that. What if the audience is aggregated from multiple sources? What if its value derives from complex signal data that may be sourced from multiple third parties? What about resellers? Life is complicated. This label pretends it is simple. Having spent many years involved with data cards for mailing lists, during which time I became deeply frustrated by the lost opportunities caused by a simple approach used to describe increasingly sophisticated products, I see history about to repeat itself.

My biggest objection to this new label is that its focus seems to be 100% on transparency, with little attention paid to equally valuable uses such as sourcing and comparison. The designers of this label allude to a taxonomy that will be used for classification purposes, but it’s only mentioned in passing and doesn’t feel like a priority. Perhaps most importantly, there’s no hint of whether these labels will be offered as a searchable database. There’s a potentially powerful audience sourcing tool here, and if anyone is considering that, they aren’t talking about it.
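To make the point concrete, here is a rough sketch of the kind of record that could sit behind a label like this. It is entirely hypothetical (the coalition has published no such schema, and every field name below is invented), but it shows how a single, well-organized record could accommodate multiple sources and resellers, carry taxonomy codes for search and comparison, and still render the simple consumer-facing label as just one view of the data.

    from dataclasses import dataclass, field

    # Hypothetical structure -- not the coalition's actual schema -- showing how
    # one record could support multiple sources, resellers and a taxonomy, and
    # still render as a simple "label" for display.

    @dataclass
    class DataSource:
        provider: str              # who collected or supplied the data
        method: str                # e.g. "declared", "modeled", "observed"
        is_reseller: bool = False

    @dataclass
    class AudienceSegment:
        name: str
        taxonomy_codes: list[str] = field(default_factory=list)  # classification for search and comparison
        sources: list[DataSource] = field(default_factory=list)  # real segments often have several
        refresh_days: int = 0                                     # how often the data is rebuilt

        def label_summary(self) -> str:
            """Render the simple, consumer-facing 'label' view from the richer record."""
            providers = ", ".join(s.provider for s in self.sources) or "undisclosed"
            return f"{self.name}: sourced from {providers}; refreshed every {self.refresh_days} days"

    segment = AudienceSegment(
        name="In-market auto intenders",
        taxonomy_codes=["auto.intent.purchase"],
        sources=[DataSource("Provider A", "observed"),
                 DataSource("Provider B", "modeled", is_reseller=True)],
        refresh_days=30,
    )
    print(segment.label_summary())

A searchable database of records like these would serve sourcing and comparison; the printable label then becomes an output, not the whole product.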

Take-aways to consider:

·     When designing a new data product, don’t allow yourself to get boxed in by design

·     The real world is messy, with lots of exceptions. If you don’t provide for these exceptions, you’ll have a product that will never reach its full potential

·     Always remember that a good data product is much more than a filing cabinet that is used to look up specific facts. A thoughtful, well-organized dataset can deliver a lot more value to users and often to multiple groups of users. Don’t limit yourself to a single use case for your product – you’ll just be limiting your opportunity.

Just in Time Data

Databases are tricky beasts because their content is both fluid and volatile. There are likely no databases that are 100% comprehensive and 100% accurate at the same time. This problem has only been exacerbated by increasingly ambitious data products that continue to push the envelope in terms of both the breadth and depth of their coverage.

Data publishers have long had to deal with this issue. The most widely adopted approach has been what might be called “data triage.” This is when the publisher quickly updates a data record in response to a subscriber request.

I first encountered this approach with long-time data pioneer D&B. If you requested a background report on a company for which D&B had either a skeleton record or out-of-date information, D&B adroitly turned this potential problem into a show of commitment to its data quality. Its approach was to deliver whatever stale or skimpy information it had on file, so the subscriber had at least something to work with. But D&B would also indicate, in bold type, words to the effect of: “this background report contains information that may be outdated. To maintain our data quality standards, a D&B investigator will update this report and an updated report will be sent to you within 48 hours.”

Data triage would begin immediately. D&B would have one of its more experienced researchers call the company and extract as much information as possible. The record was updated, the new information was sent to the subscriber, and anyone else requesting that background report would benefit from the updated information as well.
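In outline, the pattern looks something like the sketch below. This is not D&B’s actual system (the staleness threshold, field names and sample record are invented for illustration), but it captures the essential flow: serve whatever is on file immediately, flag it if it is stale or missing, and queue the research so the next requester benefits too.

    from datetime import datetime, timedelta

    STALE_AFTER = timedelta(days=365)   # invented threshold for "outdated"
    refresh_queue = []                  # stands in for a real research work queue

    records = {
        "acme-corp": {"profile": "skeleton record...", "updated": datetime(2017, 3, 1)},
    }

    def fetch_report(company_id: str) -> dict:
        """Return whatever is on file immediately; queue a refresh if it is stale or missing."""
        record = records.get(company_id)
        if record is None:
            refresh_queue.append(company_id)   # build a brand-new record on request
            return {"profile": None, "notice": "No record on file; research is underway."}
        if datetime.now() - record["updated"] > STALE_AFTER:
            refresh_queue.append(company_id)   # a researcher updates it; everyone benefits
            return {**record, "notice": "This report may be outdated; an updated report will follow."}
        return record

    print(fetch_report("acme-corp"))
    print("Queued for research:", refresh_queue)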

A variation on this approach is not to update existing records, but to create entirely new records on request. Not in our database? Just let us know, and we’ll do the needed research for you pronto. Boardroom Insiders, a company that sells in-depth profiles of C-suite executives, does this very successfully, as does The Red Flag Group.

The key to succeeding with data triage? First, you have to set yourself up to respond quickly. Your customers will appreciate the custom work you are doing for them, but they still want the information quickly. Second, use this technique to supplement your database, not substitute for it. If you are not satisfying most of your subscribers most of the time with the data you have already collected, you’re really not a data publisher, you’re a custom research shop, and that’s a far less attractive business. Finally, learn from these research requests. Why didn’t you already have the company or individual in question in your database? Are the information needs of your subscribers shifting? Are there new segments of the market you need to cover? There’s a lot you can learn from custom requests, especially if you can find patterns in them.

Data triage is a smart tactic that many data publishers can use. But always remember, no matter how impressive the service, the subscriber still has to wait for data. Ultimately, this nice courtesy becomes a real inconvenience if the subscriber encounters it too often. What you need to do is both satisfy your customers most of the time, and be there for them when you fall short.