A Healthy New Year

We’re in the midst of a transformational shift in the healthcare industry. Likely you have experienced it yourself, and it’s probably already hit you in the pocketbook. It’s the shift to what is called consumer-directed healthcare.

While on the surface consumer-directed healthcare may seem like nothing more than an attempt by employers to shift some of their spiraling healthcare costs onto their employees, there is much more going on behind the scenes. There is a lot of public policy driving this shift. The general idea is that healthcare costs are out of control because those buying healthcare services traditionally haven’t been the ones paying for them. By shifting healthcare costs to the consumer, the reasoning goes, consumers will demand better value for their money by becoming smart healthcare shoppers, and healthcare costs will begin to decline.

It all makes sense on paper, but there is one huge stumbling block in making this approach work: it’s hard to be a smart shopper when none of the things you are buying have price tags on them.

Data entrepreneurs have already seen this opportunity. Companies like Healthcare Blue Book and ClearCost Health have made real strides, but it’s a big and enormously complicated problem to solve. In part, that’s because hospitals don’t like to disclose their prices and insurers are often contractually prohibited from sharing what they pay specific hospitals for specific procedures.   

Recognizing the issue, the federal government has mandated that, as of January 1 of this year, hospitals must post their pricing for common procedures on their websites in an easily downloadable format.

There’s a quick opportunity here to put your web-scraping tools to work gathering all this pricing data in one place and normalizing it. Certainly, there is an analytical product in there somewhere. But it’s less of an opportunity than it seems, because what hospitals are generally posting are their list prices – and virtually nobody pays these prices.

The challenge in hospital pricing is to find out what a specific insurance plan pays a specific hospital for, say, a hip replacement. This could be an ideal opportunity to turn to the crowd.

One approach might be to aggregate all the pricing data that hospitals are now required to publish and use it as a data backbone – essentially a starting point. Then you could turn to consumers and ask them to anonymously submit their hospital bills and insurance statements. Take those images, use optical character recognition to get them into raw data format, then develop software to extract the valuable pricing data. When specific price data isn’t available, you could back off to list price data that would at least show if a hospital is relatively more or less expensive.
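
The fallback step described above can be sketched in a few lines. This is a minimal illustration, assuming two lookup tables keyed by hospital and procedure – crowdsourced negotiated prices and posted list prices. All the names and figures here are hypothetical:

```python
# Hypothetical sketch of the fallback logic: prefer a crowdsourced
# negotiated price; otherwise fall back to the hospital's posted list
# price, which at least signals relative cost.

def lookup_price(negotiated, list_prices, hospital, procedure):
    """Return (price, source) for a hospital/procedure pair."""
    key = (hospital, procedure)
    if key in negotiated:
        return negotiated[key], "negotiated"
    return list_prices.get(key), "list"

negotiated = {("General Hospital", "hip replacement"): 21500}
list_prices = {
    ("General Hospital", "hip replacement"): 40000,
    ("City Medical", "hip replacement"): 52000,
}

print(lookup_price(negotiated, list_prices, "General Hospital", "hip replacement"))
# → (21500, 'negotiated')
print(lookup_price(negotiated, list_prices, "City Medical", "hip replacement"))
# → (52000, 'list')
```

The point of the sketch is the graceful degradation: the database stays useful from day one, and every submitted bill upgrades a "list" answer to a "negotiated" one.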

Obviously it will take a long time to build a comprehensive database consisting of millions of price points, but there are a lot of consumer groups and other constituencies that would be very interested in your success and would work with you to increase the number of bills submitted. Hospitals won’t like this one bit, but as is so often the case, if one group doesn’t want the data out there, you have immediate confirmation that the data are valuable to some other group. Ironically, hospitals submit their price quotes for medical devices to a fascinating data company called MDBuyline to make sure they aren’t overpaying for their purchases.

Sure, there is lots of complexity hiding under this simple framework. But the bromide “don’t let the perfect be the enemy of the good” nicely describes a key to success in the data business. As long as your database is the best available, it doesn’t have to be either complete or perfect. In almost every case, data is so important to decision-making that buyers will take what they can get, warts and all. This is not an invitation to be lazy or sloppy. Rather, it is recognition that you’ll have a marketable product long before you have a complete and perfect product. Just one more reason data is such a great business. Should hospital price data be on your New Year’s resolution list?

Relationship Scoring

No, this is not about online dating.  I am referring to the growing use of consumer scores to help companies determine how much time and energy to invest with individual customers.

We’re all familiar with credit scores that yield a single number meant to reflect how dependably you pay your bills. A high credit score can mean easy access to credit, often at lower interest rates that reflect your low repayment risk. A poor credit score can mean limited access to credit and loans, in addition to higher interest rates.

The folks behind the credit scores have been relentless in their work to find new markets for their product. With the notion that a credit score also reflects someone’s level of personal responsibility, credit information is increasingly used in hiring decisions. You’ll also find credit scores used to determine pricing for such things as automobile insurance, the insurance companies having concluded that if you pay your bills on time, you likely drive carefully as well.

But credit scores are not the only consumer scores out there. In parallel with credit scores, a number of companies have been building out consumer scores based on Customer Lifetime Value (CLV). The CLV concept has been around forever. What’s changed recently is increasingly easy access to a wide variety of input datasets (a.k.a. “signals”) that increase the precision of these scores, along with increasing computing power that makes it possible to access and act on these scores in real time.

And how are these scores used? A recent Wall Street Journal article suggests that CLV scores are increasingly used by companies to determine how they will interact with their customers. A higher scoring customer may actually get faster and better customer service. Companies will offer bigger incentives and better deals to their best customers in order to retain them. CLV scores start with numeric calculations of the likely dollar value of a customer over the entirety of the projected relationship (and yes, your score typically declines as you get older because … less lifetime). More recently, these relatively simple calculations have been enhanced with demographic overlays and a wide array of lifestyle and even behavioral data points. For example, customers who complain too much or call customer service too often may have their scores reduced as a result.
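
The numeric calculation at the core of a CLV score can be illustrated with the textbook retention-and-discount formula. This is a generic sketch, not any vendor’s actual model, and the behavioral adjustment at the end is an invented example:

```python
def clv(annual_margin, retention_rate, discount_rate, years):
    """Textbook CLV: expected annual margin, weighted by the probability
    the customer is still active in year t, discounted back to today."""
    return sum(
        annual_margin * retention_rate ** t / (1 + discount_rate) ** t
        for t in range(1, years + 1)
    )

# A shorter projected horizon (e.g. an older customer) lowers the score.
base = clv(annual_margin=500, retention_rate=0.9, discount_rate=0.08, years=10)

# Hypothetical behavioral overlay: frequent service calls shave the score.
service_calls = 7
adjusted = base * (0.95 if service_calls > 5 else 1.0)
```

Note how the formula makes the article’s point concrete: every extra projected year adds a positive term, so shrinking the horizon mechanically shrinks the score.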

Currently, companies implement their own CLV scoring systems, sometimes with the help of third-party vendors. CLV scores as a data-driven way to make sure better customers are treated better sounds benign. Where it could take a more worrisome turn is if a third-party vendor tries to centralize all of this information to build a single CLV score for all consumers. This would be a fraught undertaking, especially since it would likely not be subject to any regulatory scrutiny and control. Such a scoring system would also look uncomfortably similar to the social credit system recently introduced by the Chinese government, the implications of which are not yet fully understood but are likely to be profound.

Inexhaustible Data Opportunities

A new product from LexisNexis Risk Solutions monitors newly listed homes for sale on behalf of home insurance companies to alert them when a customer is preparing to move. The insurer can use this advance notice to contact these customers to help retain their business. 

This is a great idea. For a long time now, data companies have offered so-called “new mover” databases, identifying people who have recently moved into a new home. These are prime prospects because they’re in the market for all sorts of things, sometimes urgently, meaning the first offer they get stands a strong chance of being accepted.

This LexisNexis product shows how to combine databases to up your game. What could be a better prospect than a new mover? How about a pre-mover! While LexisNexis is focused on insurance companies, there are all sorts of companies that would be very interested to have at-risk current customers identified for them so that they can focus their customer retention efforts.

What makes this big leap in sales targeting possible isn’t cutting-edge technology. It’s having the insight to see that data produced by one type of organization (in this case, real estate agents) is valuable to another type of organization (in this case, insurance companies). Add in some additional value by matching the database of one organization to the database of another, and you almost assuredly have a nice business opportunity for the taking.
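
The matching step is conceptually just a join on a normalized key. Here is a toy sketch assuming address strings are the common key between the listings feed and the policyholder file; real matching would need far more robust address standardization than this:

```python
def normalize_address(addr):
    """Crude normalization so the same address matches across databases;
    production systems would use real address-standardization tooling."""
    return " ".join(addr.lower().replace(",", " ").replace(".", " ").split())

def flag_pre_movers(policyholders, new_listings):
    """Return policyholders whose home just appeared in the listings feed."""
    listed = {normalize_address(a) for a in new_listings}
    return [p for p in policyholders if normalize_address(p["address"]) in listed]

policyholders = [
    {"name": "A. Smith", "address": "12 Oak St., Springfield"},
    {"name": "B. Jones", "address": "9 Elm Ave, Shelbyville"},
]
listings = ["12 Oak St, Springfield"]

print([p["name"] for p in flag_pre_movers(policyholders, listings)])
# → ['A. Smith']
```

The business value lives entirely in the join: neither database is novel on its own, but their intersection is a retention-alert product.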

That’s what is so exciting and fun about the data business today: with so many new databases coming together, opportunity is everywhere. The key is to look at every new database you see and ask, “Who else could use these data, and what could I do to these data to make them even more valuable to others?”

The people who create databases are almost always trying to solve a specific, single problem or need. Flip, spin, match or sometimes simply re-sort these databases, and you can often solve someone else’s problem or need. Am I talking about what’s known as data exhaust? To some extent yes, but some of the biggest and most interesting opportunities are right in front of us in plain sight – far less complex and challenging than most of the data exhaust opportunities I have seen.



Getting Inside the Head of a Sales Prospect

B2B prospect identification and targeting has come a long way in the last few years. Things that once seemed impossible are now taken for granted. We can now identify with some precision when someone in a company is actively in the market for a new product. We can take this purchase interest information and bump it against company firmographic data to help qualify and score this individual as a lead. We can easily review the business contacts of this person to see if we know people in common. We can view the work history of this person, and even order a deep background report based on public domain data. We can order an organization chart for this person’s company to understand where he or she fits in the hierarchy of the business, as well as to identify other possible purchase influencers. Pretty impressive, right?
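
Bumping purchase-interest data against firmographics to qualify and score a lead might look like the crude sketch below. The field names and weights are invented for illustration, not taken from any real scoring product:

```python
def score_lead(intent, firmographics, target_industries):
    """Toy lead score: a purchase-intent signal plus firmographic fit.
    The weights are arbitrary illustrations, not a real scoring model."""
    score = 0
    if intent.get("actively_researching"):   # in-market signal
        score += 50
    if firmographics.get("employees", 0) >= 100:  # company-size fit
        score += 25
    if firmographics.get("industry") in target_industries:  # vertical fit
        score += 25
    return score

lead = score_lead(
    intent={"actively_researching": True},
    firmographics={"employees": 450, "industry": "insurance"},
    target_industries={"insurance", "banking"},
)
print(lead)  # → 100
```

Even a model this simple captures the shift the paragraph describes: the score blends what the person is doing (intent) with who they work for (firmographics), rather than relying on either alone.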

But what if we could go further? What if we could get something close to a psychological profile of the prospect to better understand how to interact with that person and advance the sales conversation? You probably won’t be surprised to hear that there is a company working on it.

The company is called CaliberMind. By mining public domain data, email exchanges with the prospect and even recorded telephone conversations with the prospect (prior consent to recording is required by CaliberMind), CaliberMind can provide a salesperson with deep and unique insights into the personality and motivations of the prospect, along with recommendations on how to engage them most productively.

Yes, there is an inherently creepy aspect to this, but CaliberMind stresses that it works only with public information and information freely shared between the salesperson and prospect. What it does, beyond mining these information nuggets, is interpret them in order to build a deep profile of the prospect and offer specific tips on how to accelerate the sales process. Not surprisingly, the company was founded by former intelligence agents.

This is cutting-edge stuff from a young company, but in many respects it seems to be the logical culmination of the various selling tools that have been introduced over the past few years. CaliberMind is leveraging both increased computing power and the explosion of public domain information to help inform and accelerate the B2B selling process. CaliberMind also represents just one more piece of evidence that data opportunities are everywhere – and that the tools needed to collect, process and apply data continue to get more and more powerful.


Make the Product, Not Just the Raw Material

Twitter exhausts me. Even though I feel I have been very selective in whom I choose to follow, the volume is overwhelming. Every time I go to review my Twitter feed, I waste far too much time separating the wheat from the chaff to find useful nuggets of news or insight. Twitter ought to be incredibly valuable, but in its current design, users find that to get noticed above the sheer volume of tweets, they have to pump out an increasing number of tweets themselves. It’s an endless game of volumetric one-upmanship that is ultimately self-defeating.

A recent article in the Wall Street Journal takes the view that Twitter is very good as a raw content creation platform, but a failure at making that content useful or even intelligible. We know that Twitter content has value: consider the number of companies looking for trends, breaking news and other signals to gain an edge and generate profits. But it is companies other than Twitter that are adding the value and making the money.

This got me to thinking. Many data publishers still focus on the quantity of the data they provide, not its value. This inevitably leads to a mentality of selling data by the pound. These publishers deliver lots of data, and their customers figure out what to do with it. For a long time, this was a good business approach for publishers, but hardly an optimized one.

By wrapping their content in software, publishers have added value by allowing customers to act on their data more powerfully. But while data-software integration has been a boon for data publishers, there may still be entirely new products and even entirely new businesses hiding in your data. There are clues to this. Do you have lots of consultants buying your data year after year? Do they renew easily, rarely complaining about price increases? Chances are at least a few of them are productizing your data in some way. Get familiar with their specialties and their services, and you can often come away with new product ideas.

Have you ever changed your file layouts or stopped delivering a specific data field, only to get immediate panic calls from some of your customers? Chances are, they’ve built software around your content and are doing something very valuable with it. A few casual inquiries about how they’re using your data will often yield tremendous insights. Do you have whole categories of customers where you have no idea why they buy your data? Chances are, it will be worth your time to find out. It’s not unusual to find that markets you never considered are making valuable use of your data.

Data-software integration is great, but in the majority of cases, publishers are simply helping their customers better manipulate their data. But there’s a whole additional level of value that can be created by turning your data into finished products. And while I am not arguing that you should try to run all your customers out of business, if some of them have found a way to make money by re-formatting, augmenting or manipulating your data to add value to it, I’d argue that such opportunities properly belong to the owner of that data. And your subscriber file is often the best first place to look for clues to such opportunities.