When Is A Click Not A Click?

By now, you should be familiar with the term "click fraud," which refers to the act of repeatedly clicking on paid search links either to fraudulently make money or to create an expense for someone you don't like (such as a competitor).

We've addressed click fraud in columns past, citing investigative news stories coming out of India suggesting that there are sizable organizations in place that exist solely for the purpose of committing click fraud. Click fraud is an important development for B2B publishers to be aware of: most don't employ the pay-per-click business model, so their advertisers can't fall victim to click fraud, an advantage that B2B publishers should be quick to emphasize to prospective advertisers.

But how big is click fraud really? It's getting a lot of attention lately, but how prevalent is it? Reliable statistics on illegal activities like this don't exist, and the search engines have all been taking steps to implement technological solutions to reduce click fraud. So is click fraud a tempest in a teapot? We were starting to think so, until we saw this stunning quote from George Reyes, Google's CFO, speaking at a major investment conference hosted by CSFB this week:

"I think something has to be done about [click fraud] really, really quickly, because I think, potentially, it threatens our business model."

As one insight into the size of the problem, Jessie Stricchiola, the president of Alchemist Media, a paid search consulting firm, told CNN that she estimates that as much as 20 percent of all clicks on paid search ads are fraudulent, and she contends that not all search engines have been as aggressive as they could be about combating click fraud. Of Google she noted, "Google has been the most stubborn and the least willing to cooperate with advertisers" that complain about click fraud.

Published reports indicate that click fraud was a primary topic of conversation at recent conferences sponsored by Jupiter and Majestic Research as well. There are now open discussions in investment circles about how exposed the major search engines may be with regard to click fraud, and some analysts are even suggesting a close watch on companies such as eBay and Amazon, which are heavy buyers of paid search, and thus have a greater exposure to fraud.

In our view, click fraud is simply one more illustration of what continues to be most problematic about "pay for performance" advertising: it looks great as long as you don't look too closely. Yes, search engines only get paid if they perform, but for most B2B marketers, "performance" means having traffic shipped to their sites. If that's all you want, great. But if you're depending on paid search for qualified leads or sales, maybe it's performing, maybe it's not. Nobody really knows for sure. And even if the search engines are able to wipe out the scourge of click fraud, they remain resistant to looking too closely at the remaining legitimate clicks they generate for advertisers. To the search engines, all clicks are created equal, and they need it to stay that way. Because if it becomes clear that not all clicks are of equal value and quality to the advertiser, guess what? Advertisers will want to pay less for all clicks, or only pay for the best quality clicks. Either scenario is bad news for the search engines, for whom your ignorance is bliss.

Where Have All The Subscribers Gone?

An article in a recent issue of DM News by Meg Weaver, founder of Wooden Horse Publishing, which produces a database covering the magazine industry, concludes that magazine publishers have hit a wall in terms of subscribers, and that the numbers are in significant decline. As she phrases it, this is "... a dirty little secret the magazine industry doesn't want you to know: We have run out of readers in this country."

To document this claim, Weaver cites her analysis of Audit Bureau of Circulations (ABC) numbers. By looking at the cumulative circulation of all audited ABC publications, she notes that the industry grew consistently in terms of circulation until 1990, when things began to plateau at 366 million. The 2003 number? A cumulative circulation of 353 million, a decline of 13 million, or roughly 3.5 percent. Weaver attributes this primarily to uncreative "me too" publishing practices in the magazine industry, which leave publishers continuously poaching subscribers from one another rather than creating innovative new magazines that would attract net new subscribers.

But is this the full story? My first concern was the reliance on ABC circulation numbers, since ABC is favored mostly by consumer publications. Further, ABC provides circulation numbers only for its members, which creates a sample biased towards larger publications that need circulation audits.

My sense is that the story, which probably does begin around 1990, is far more nuanced. What we've seen over the last 15 years is an explosion in the number of increasingly specialized magazine titles, some with circulations in the low thousands. We've also seen increasing amounts of information delivered via the Web, faster, fresher, more accessible and often even more specialized than the most specialized print publications. In short, information is proliferating, and as a result, audiences are fragmenting as they increasingly move to access only the information of most interest to them, shutting out a lot of more general information sources in the process.

We're now in an era of extreme specialization. Subscribers are getting used to mixing, matching and filtering content. And as they read less, focusing on the topics they care most about, guess what? You need to keep up with them editorially, because these empowered readers want depth and substance, and they know immediately when they're not getting it.

This shift has implications for data publishers too, because our businesses are being forced to address the same trends. What was "good enough" in terms of editorial quality isn't good enough any more. The data that our subscribers choose to receive gets scrutinized as never before, by increasingly expert eyes. It's a tough new standard to meet, but there is ample evidence that if you can deliver good quality editorial, there will be an audience willing to pay for it.

Say Cheese: Databases Go Hollywood

Publishers, hold onto your hats and grab your digital cameras: it's looking like the next big battleground in the directory world is going to be visual.

All of a sudden, photos of businesses and buildings are red hot and getting lots of attention. Several months ago, infoUSA announced it had dusted off its mothballed project to photograph every business in America, and would be adding photos to several of its products, including its beefed-up business credit reports. CoStar, producer of a national commercial real estate database, has a fleet of trucks running around the country snapping shots of every office building. Now, Amazon.com's new search engine, A9, is generating big buzz with a yellow pages directory featuring photos of businesses. And, rising above them all is GlobeXplorer (a 2004 InfoCommerce Model of Excellence), which offers aerial photos of America that are being integrated into several business database products.

Why pictures? Let's face it: directories and databases are very useful, but very useful is not always the same as very interesting. Adding photos makes a database more interesting. In some applications, a picture may well be worth a thousand words. In fields such as real estate, it's hard to conceive of a product without photos. Even in a credit report product, a photo of the business might provide valuable added insight.

But do photos add much to a yellow pages product? I took a quick spin through the A9 yellow pages to try to answer that question. When you do hit a photo of a business (and A9 only has coverage in selected areas right now), it's impressive. A9 offers you a series of snapshots around the business, so you can get a view of the whole street. In a great case of unintended product placement, more than a few of these photos seemed to be of UPS trucks parked at the curb and obscuring the storefront, but overall the project achieves one of A9's stated objectives, which is coolness.

The big question of course is "why?" Is the user better off for having access to these photos? A9 suggests photos help users get to stores more quickly by providing a visual cue in addition to the address. That's certainly valid (although the cost/benefit ratio seems a bit high), but there is a bigger issue. In many cases I would be staring at a well-composed photo of a business with no clue what it did beyond the general yellow pages category in which it was classified. Therein lies the rub: A9 is layering these neat and glitzy photos on top of a very weak dataset. Photos may distract users from the lack of information about the listed companies, but they don't substitute for basic data.

A search for restaurants in downtown Philadelphia brought me lots of listings, all sporting name, address, phone and photos. Yet the photos, while novel and interesting, still don't tell me about cuisine, menu, hours, credit cards accepted, or any of the more mundane facts that are frankly more useful. When it comes to directories, photos without a strong underlying data record will never offer more than part of the picture.

Data Disconnect: Don't Let This Happen to You

I'm just back from the SIIA Information Industry Summit, and it was refreshing to see so much enthusiasm in this industry again. Online advertising is back with a vengeance, and many in the room are predicting 20% to 80% growth in online ad revenues this year. And among the subscription-based publishers in the room, there seemed to be growing comfort with their products, and with how to market effectively to ever more demanding customers. And all this wasn't just my sense: more than a few speakers and attendees were noting that "it feels like 1998 again."

The only disturbing note was the seeming appetite to address the growing cost and complexity of data compilation by simply skipping the hard stuff. What I mean by this is that there is a lot of activity around the idea of essentially automating the editorial function. Halsey Minor created some buzz during his luncheon talk when he mused about the possibility of building services such as CNET without editors. We heard from companies whose entire businesses were based on re-packaging data gathered on the Web. In several conversations with publishers, it seemed that all of them were seeking opportunities for products that could be built largely, if not entirely, from Web-based data gathered on an automated basis.

This thinking stands in stark contrast to one of the main themes I heard hammered home by speaker after speaker: success depends on adding value to your content and building content products that are not only useful, but unavailable elsewhere. Certainly, software is more powerful than ever, and there are examples of products built largely on an automated basis that offer real value. But when it comes to building database and directory products, I believe a lesson I learned early on still holds: if the data you need for your product is easy to collect, your new product is probably a lot less valuable than you think. Re-formatting readily available data or adding a few additional data elements rarely yields "must have" data products, particularly in today's demanding environment. Just as important to remember: if you can get your hands on the raw data easily, so can your competitors. And the software you developed to create your automated product? Every single day, application development tools are becoming cheaper and more powerful, meaning that your "proprietary software" offers little competitive protection either.

While you can light up a data publisher's eyes at the thought of eliminating phone calls, faxes and mail, and possibly even eliminating human editors altogether, what we're really seeing is a re-emergence of the perpetual motion machine fallacy of the late 1980s, when a number of half-baked schemes were launched in which the database was supposed to somehow maintain itself, the product would be shipped automatically, and the publisher's primary responsibility became checking his daily bank balance from the beach. If only!

I am very excited by the potential of data mining tools and user self-updating, and all the wonderful things that can be done by applying software to the wealth of data available on the Web. But I'm concerned by our blind rush towards the world envisioned by computer industry visionary Bill Joy, where "the future does not need us." Let's not be too eager to disconnect data quality from human effort just yet. Instead, let's recognize that the human editorial function, which by the way allows us to address the sizable base of businesses that still have no Web presence, is fundamental to the creation of the value-added products we need to produce in order to succeed and thrive in the years ahead.

The Importance of Being Vernacular

I remain amazed at the number of database publishers, particularly those chasing B2C markets for the first time, who haven't seen fit to make their heading structures as friendly as possible to a wide range of audiences.

Consider health sites that offer consumers physician specialties such as "Otolaryngologist." Or legal sites offering categories such as "admiralty law." Or a food ingredients directory with a category for "EVOO" (that's Extra Virgin Olive Oil in case you were wondering).

Technical terms and acronyms may be acceptable as categories in a trade directory (and I say "may" because even within a specialized field, not everyone has the same level of knowledge and expertise), but they're major roadblocks when trying to woo outsiders -- in these cases, consumers.

Kudos, then, to UK yellow pages publisher Yell for recently introducing alternate headings based on regional dialects, in recognition that the descriptive terms in common use are often not the formal ones.

Taxonomies also need to account for common misspellings. One industrial directory found, through an analysis of its searches, that large numbers of users never reached its category for "throughbolts" because they were typing "thrubolts."
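To make the idea concrete, here is a minimal sketch, in Python, of one way a search layer might map variant spellings and colloquial terms to canonical taxonomy headings before running the lookup. The dictionary entries and names here are illustrative assumptions, not drawn from any actual directory:

# Map common misspellings and colloquial terms to canonical headings.
# All entries are illustrative examples, not real directory data.
ALTERNATE_TERMS = {
    "thrubolts": "throughbolts",
    "evoo": "extra virgin olive oil",
    "ent doctor": "otolaryngologist",
}

def canonical_heading(query: str) -> str:
    """Return the canonical taxonomy heading for a user's search term."""
    normalized = query.strip().lower()
    # Fall back to the user's own term when no alternate mapping exists.
    return ALTERNATE_TERMS.get(normalized, normalized)

print(canonical_heading("Thrubolts"))  # prints "throughbolts"

The point of keeping the mapping in plain data rather than code is that editors, not programmers, can grow it as new variants turn up in the search logs.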

Too much effort, you say? Like it or not, we all now operate in a "satisfy them or lose them" environment. Anticipating how users will search for information results in more hits and more satisfaction. The smartest publishers I know all log and regularly review user searches that generate zero results as a simple way to identify problems. When your site allows users to search your heading structure using free text, such reviews are even more important.
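As a rough illustration of that practice, here is a small Python sketch that wraps a search call and records any query returning no results to a log file for later editorial review. The search_headings callable and the log file name are assumptions invented for this example:

import logging

# Send zero-result queries to their own log for periodic editorial review.
# The file name is an assumption for this sketch.
zero_hit_log = logging.getLogger("zero_result_searches")
zero_hit_log.addHandler(logging.FileHandler("zero_result_searches.log"))
zero_hit_log.setLevel(logging.INFO)

def search_with_review(query, search_headings):
    """Run a heading search, logging any query that comes back empty."""
    results = search_headings(query)
    if not results:
        zero_hit_log.info(query.strip().lower())
    return results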

The more paths you can provide to get to your data, the more satisfied users will be and the more successful you will be. When it comes to database taxonomies, if your terminology is correct, and the user's terminology is wrong, then you're wrong too.
