
Forcible Entry

One of the hardest parts of implementing any kind of sales force automation system is getting the sales force to automate. That's because automation ultimately comes down to data entry, and the average salesperson either won't do it, can't do it or does it reluctantly (read: poorly). The result is often a nightmare, as those of us who have been called upon to cleanse or enhance such databases can readily attest.

Let's face it: salespeople become salespeople, at least in part, because of an aversion to forms and paperwork. They don't see sitting at a desk entering information about things that have already happened as a good use of their time. And to a large degree, they are correct. Do you really want your top producer spending valuable time entering call notes? Yet at the same time, who else can do it? The salesperson is the only one who can report on a client meeting, and it's valuable to both the salesperson and management to have some record of what transpired.

So how's it all working out? Kinda, sorta okay is probably the best summation. Salespeople are frequently coerced into doing the necessary data entry by having it tied to their commission payments. Far less frequently, salespeople are given incentives to provide the needed input, and management gets by with the result. After all, if a salesperson mistakenly indicates a prospect has a 60% chance of closing instead of an 80% chance, it's annoying and a bit disruptive, but nobody gets hurt.

But what if bad data entry could get somebody hurt? It's not theory. Right now, the government is dangling billions of dollars in front of physician practices and hospitals in order to spur rapid adoption of electronic health record (EHR) systems. And how is patient data being entered into the EHR systems? Well, to a surprising extent, it is by physicians themselves, pecking away at keyboards. And not surprisingly, physicians are about as thrilled with this new data entry work as salespeople.

Highly productive physicians (and most are, in our wonderful world of managed care) see this work as slowing them down. Some physicians don't think this is the kind of work they should be doing, and even those who are conceptually supportive of EHRs are often just plain not good at data entry. And when a diagnosis, for example, is entered incorrectly, the impact in the interconnected healthcare system that is emerging could be devastating. And there's another angle as well: patients are beginning to complain that their already circumscribed time with the physician is being further chipped away as physicians stand with their backs turned, entering information.

Of course, the healthcare economy has a solution to this problem that perfectly illustrates why healthcare cost control is so difficult: providers are hiring data entry people, called scribes, to follow physicians around and enter information that is called out to them. Scribes are already fairly common in emergency room settings, but it probably won't be long before they make the examining room even cozier too. So much for the much-hoped-for cost savings EHRs were supposed to yield!

Object lesson for us all: never forget that when it comes to workflow applications, somebody has to enter the data, and that person is probably neither trained for the task nor particularly happy about doing it. The easier you can make data entry, and the more errors you can trap before they enter the database, the stronger the product and the higher the chance of successful adoption.
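
Here's a minimal sketch of what "trapping errors before they enter the database" can look like in practice, using a hypothetical call-note record; the field names and rules are purely illustrative assumptions, not any particular vendor's schema.

```python
# A minimal, hypothetical sketch of point-of-entry validation:
# reject or flag bad values before they ever reach the shared database.

from datetime import date

REQUIRED_FIELDS = {"account", "contact", "meeting_date", "close_probability"}

def validate_call_note(record: dict) -> list[str]:
    """Return a list of human-readable problems; an empty list means the record is safe to save."""
    problems = []

    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")

    prob = record.get("close_probability")
    if prob is not None and not (0 <= prob <= 100):
        problems.append("close_probability must be between 0 and 100")

    meeting = record.get("meeting_date")
    if meeting is not None and meeting > date.today():
        problems.append("meeting_date cannot be in the future for a call report")

    return problems

# Usage: prompt the salesperson to correct the record before the write, not after.
note = {"account": "Acme", "contact": "J. Smith",
        "meeting_date": date(2011, 2, 1), "close_probability": 160}
issues = validate_call_note(note)
if issues:
    print("Fix before saving:", issues)
```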

 


Nothing New

The recent dust-up between Google and Microsoft (in short, Google is accusing Microsoft of copying its search results) is more entertaining than informative. What is of particular interest to me is that the web cognoscenti have largely come down on the side of Microsoft, many going so far as to proclaim Microsoft clever and creative for trying to use Google search results to improve its own.

This latest controversy -- building a search engine, in part, by taking results from another search engine -- reminded me of a larger issue I have been pondering for a while: the seeming tilt in favor of aggregating data over creating it.

Consider "Web 2.0." There are thousands of competing definitions for this term, but to me it stands for a period when we celebrated the power of programmers who aggregated large and complex datasets in clean and powerful user interfaces. These Web 2.0 sites were exciting, useful, powerful and innovative, but their existence depended on OPD: Other People's Data. Sometimes that data was licensed, but just as often it was public domain data or simply scraped and re-formatted. To the extent all this data was legally obtained, I take no issue with it. But it does seem to have created a mindset, even among publishers. As we discuss new products with publishers of all different shapes and sizes, it's not uncommon that one of the first questions asked is, "where will we get the data?"

I jump from that thought to some interesting insights from the Media Dealmakers Summit I attended yesterday. A number of speakers brought up the topic of curation, usually with near-glee in their voices. That's because curation looks to be the next big thing, and who better than publishers to offer content made more valuable through curation? But aggregation is curation-lite. By that I mean you add relatively little value simply by deciding which sources to aggregate. Real curation, and hence real value, comes from getting under the hood and selecting, standardizing, normalizing and editing individual units of content. Arguably, the highest form of curation is compilation, where you not only create a unique dataset to meet a specific need, but also make highly granular decisions about what to include or exclude.
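
To make the distinction concrete, here's a small sketch of record-level curation: standardizing and de-duplicating individual entries rather than just picking sources to pull from. The field names and cleanup rules are illustrative assumptions, not any particular publisher's method.

```python
# A hypothetical sketch of curation at the record level (not just source selection):
# standardize and de-duplicate individual entries before compiling a dataset.

import re

SUFFIXES = r"\b(inc|incorporated|corp|corporation|co|llc|ltd)\.?$"

def normalize_name(raw: str) -> str:
    """Lowercase, strip punctuation and legal suffixes so name variants collapse to one key."""
    name = raw.strip().lower()
    name = re.sub(r"[.,]", "", name)
    name = re.sub(SUFFIXES, "", name).strip()
    return name

def compile_dataset(records: list[dict]) -> dict[str, dict]:
    """Keep one curated entry per normalized name; prefer records that carry a phone number."""
    curated: dict[str, dict] = {}
    for rec in records:
        key = normalize_name(rec["name"])
        if key not in curated or (rec.get("phone") and not curated[key].get("phone")):
            curated[key] = rec
    return curated

raw = [
    {"name": "Acme Corp.", "phone": None},
    {"name": "ACME Corporation", "phone": "215-555-0100"},
    {"name": "Widgets LLC", "phone": "212-555-0199"},
]
print(compile_dataset(raw))  # two curated entries, not three
```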

At the Dealmakers Summit, Dan Lagani, president of Reader's Digest, reminded us that Reader's Digest was created in 1922 specifically to address information overload. What resulted was one of the most successful magazines in the history of publishing. If content curation was that valuable back then, imagine its value today! But again, simple aggregation is the lowest form of curation and compilation is the highest. And if we want to have high-value, differentiated products, we must never let our content creation skills atrophy. Aim high.


Taking Out the Garbage

Last week, I wrote that an increasing number of people are claiming that the major search engines are getting long in the tooth. The key issue: they have been thoroughly gamed by commercial forces (some ethical, many not) that compromise search results by forcing marginal or inappropriate sites into the coveted top positions, frustrating searchers with false starts and wasted time.

I noted as far back as 2005 that even the advertising in search engines had been similarly compromised. Some retailers and e-commerce sites were so crazed for traffic they would advertise products they didn't sell or products that didn't even exist.

The net result is that we moved from a situation not too long ago where the search engines indexed only 50% of the web, to a situation where it can be said they now index 150% of the web, the extra 50% being the junk, clutter, scammers and garbage that work to obscure meaningful search results.

A lot of companies have sought to address this growing problem with their own search engines. I wrote, for example, about a new search engine called Blekko that allows users to powerfully filter search results, or even to use filters built by others. Conceptually, it's a clever idea, but on a practical level, it's a lot of work, and if you rely on the work of others, you never know what you're getting (or missing).
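
For the technically curious, the filtering idea works roughly like this sketch: restrict a set of results to sites on a hand-built filter list. The result structure and the filter's contents here are my own assumptions for illustration, not Blekko's actual implementation.

```python
# Conceptual sketch of user-built result filtering: keep only results from
# domains on a curated whitelist, and drop everything else.

from urllib.parse import urlparse

health_filter = {"nih.gov", "mayoclinic.org", "cdc.gov"}  # a filter someone curated

def apply_filter(results: list[dict], allowed_domains: set[str]) -> list[dict]:
    """Keep only results whose host matches or ends with an allowed domain."""
    kept = []
    for r in results:
        host = urlparse(r["url"]).netloc.lower()
        if any(host == d or host.endswith("." + d) for d in allowed_domains):
            kept.append(r)
    return kept

results = [
    {"title": "Flu symptoms", "url": "https://www.cdc.gov/flu/symptoms"},
    {"title": "Miracle flu cure!!!", "url": "http://buy-cheap-pills.example.com/flu"},
]
print(apply_filter(results, health_filter))  # only the cdc.gov result survives
```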

Now there is a lot of buzz around a new search engine called duckduckgo.com, a quirky (quacky?) search engine that tries to improve search by doing less, not more. Its unique selling proposition is that it aggressively filters garbage out of its search results and won't track or retain your searches in any way. Does a site like this even have a prayer?

I gave duckduckgo a workout this morning. My first reaction: it's surprisingly good. Its design is so spare you get this unsettling feeling you're missing something, but what really seems to be missing is a lot of the garbage we've become accustomed to in search results. It's rather like the first time you put on glasses with a new prescription: things jump out at you that you might not have seen before. It takes only a few searches to become convinced you're probably not missing anything important in the results it returns. It's worth a look.

What may be happening, finally, is that we are beginning a long-term shift back to basics in search, a shift that recognizes that search engines can't and shouldn't do everything, and that search engines are best when they stay true to their purpose: to index original content, not try to become content. A shift like this can only be good news for those of us who own original content.


Wake Up and Smell the Curation

In a very short period of time, it appears that it has become acceptable to say in public what was formerly only whispered in darkened rooms, to wit: Google search isn't cutting it anymore. One great example of the genre can be found here:

Boil the criticism down to its essence, and what is being said is that the Google search algorithm has been thoroughly steamrolled by big merchants and spammers with powerful SEO capabilities, who push themselves into the all-important early search results and make it much harder to find what you are looking for. The junk that Google, in its early days, did such a stellar job of filtering out is back.

There are some who believe it is only a matter of time until Google re-engineers its search algorithm, and then all these problems will magically go away. More people seem to believe that this problem is big, profound and permanent.

Consider too the much-publicized statistic from marketing firm iProspect: the typical knowledge worker spends 16 hours a month searching for information and 50 percent of all those searches fail. More evidence that when it comes to search, something is broken.

Intriguingly, the solution being advanced by many is curation: some more active type of selection that isn't totally driven by algorithms. At one extreme are hand-assembled lists. At the other extreme is the "social graph," the idea that search results can be driven by what your friends and colleagues like and recommend.

Of course, right in the middle of that continuum sits a group we all know and love: data publishers. What data publishers do, by definition, is curate: they collect, classify and arrange information to make it more useful.

The majority of data publishers have spent years now trying to prove to themselves, their subscribers and their advertisers where they fit in the world of search, and why they still matter. Intriguingly, there is a growing belief that the general search engines, who believed they could do everything and do it better, are actually finding hard limits to their ambitions. And that puts new importance on information providers who cover an area particularly well and make that information easily accessible. Think data publishers.

This does not mean that Google is going to go away. But it is likely to be a very different company, especially given its rapid diversification into so many non-search businesses. Perhaps Google itself woke up and smelled the coffee and now sees the limits of general search?


News You Can Use

An article in the current issue of Wired discussing a new product from Dow Jones called Lexicon offers up this irresistible line:

"But many of the professional investors subscribing to Lexicon aren't human -- they're algorithms."

Okay, algorithms don't actually call and order up fresh, hot data for delivery like pizza ... but the people in charge of those algorithms do, and that's the real point.

Let me step back and explain Lexicon. It's an XML feed of breaking news stories with an intriguing twist: Lexicon pre-processes each news story to add fielded sentiment analysis, expressed quantitatively. In other words, the tone of the article is reduced to a number. That means that the customers of Lexicon -- institutional traders for the most part -- can more easily feed news content into the computerized models they use to drive stock trading. Imagine, for example, a story about copper prices with a strongly negative numeric value associated with it. Traders feed that into their software, which is likely looking at real-time copper prices and who knows what else, and out comes a computer-based buy/sell decision.
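
To make the idea concrete, here's a hypothetical sketch of the pattern: a story arrives with a machine-readable sentiment score attached, and downstream code turns it into a trading signal without parsing any prose. The XML shape, field names and thresholds are my assumptions, not Dow Jones's actual schema.

```python
# Toy illustration of sentiment-tagged news driving an automated trading signal.
# The schema and thresholds are invented for this example.

import xml.etree.ElementTree as ET

feed_item = """
<story>
  <headline>Copper inventories surge as demand forecasts are cut</headline>
  <topic>copper</topic>
  <sentiment>-0.82</sentiment>
</story>
"""

def decide(story_xml: str, sell_below: float = -0.5, buy_above: float = 0.5) -> str:
    """Turn a sentiment-tagged story into a simple buy/sell/hold signal."""
    story = ET.fromstring(story_xml)
    score = float(story.findtext("sentiment"))
    topic = story.findtext("topic")
    if score <= sell_below:
        return f"SELL signal on {topic} (sentiment {score})"
    if score >= buy_above:
        return f"BUY signal on {topic} (sentiment {score})"
    return f"HOLD on {topic} (sentiment {score})"

print(decide(feed_item))  # SELL signal on copper (sentiment -0.82)
```

The point isn't the toy trading logic; it's that the publisher did the hard work up front, so the content arrives ready for machine consumption.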

There are two elements to this product that really get me excited. First, we have a perfect example of how publishers, by pre-processing data to impute, infer or summarize, can add tremendous value to their content. Second, we have a wonderful example of the blurring lines between content formats. News and data used to live in distinct worlds, and Lexicon illustrates how they are coming together, by analyzing the news and assigning a structured numerical summation to it. As importantly, Lexicon makes the news more amenable to machine processing, and that's at the heart of the value proposition for data products.

Lexicon stands as a great illustration of the increasingly rapid evolution of data-text integration, an InfoCommerce Group "mega-trend" we've been advancing since 2001 (better a little early than a little late!).
