Dude, Make Some New Friends
Topix.net, a specialized search engine for news, made news itself this week, announcing it had acquired the domain name topix.com for a hefty $1 million. Why a company that has branded itself around a domain name it already owns would suddenly feel compelled to spend big money for a variant of that name is a fascinating topic in its own right, but there's a better angle to this story.
In announcing the purchase, company management publicly fretted to the Wall Street Journal that it feared taking a traffic hit by moving from ".net" to ".com," and suggested that Google (the source of 90% of the company's traffic) should somehow assist companies in this situation so they would not be penalized in search results rankings. That's not a crazy request in this day and age, but consider Google's remarkable response to the idea: websites shouldn't become overly reliant on traffic from search engines!
How does one even begin to respond to a statement like that? Google is right, provided that your website operates in a parallel universe where people discover websites by ... well, how, exactly? Google helpfully provides some ideas, suggesting that sites could, for example, set up user forums, which users would presumably learn about by ... well, how would they learn about them?
Google has earned itself a $138 billion market capitalization because it was instrumental in making search engines everyone's favored entry point onto the web. Now that everyone is so dependent on search engines for both discovery and navigation, and Google has monetized its leadership position in search six ways from Sunday, guess what? Google's new stance is "dude, you need to make some new friends."
Google wants it both ways. It wants the revenue that comes from operating the biggest toll booth onto the web, but not the responsibility. But the reality is that, because of its dominance, its every move has consequences for other businesses, and they are not all positive consequences. Until Google does the math on this simple equation, I guess our only option is to start getting busy with those new user forums.
InfoCommerce Models of Excellence
We're pleased to announce that Oodle Inc. has been selected for a 2007 InfoCommerce Model of Excellence award.
Labels: google, Infocommerce, oodle, topix, topix.com, topix.net, web traffic
Elsevier Bolsters Chemical Offerings
Reed Elsevier has acquired the Beilstein Database, a prominent database covering the field of organic chemistry. The deal further cements Elsevier's relationship with the database: the company has played an integral role in its production and marketing since 1998, adding or updating nearly 5 million compounds during that time.
The Beilstein Database's records date back to 1771. The overall database contains more than 9.8 million compounds, 10 million reactions and 320 million experimental data points on chemical properties. It also includes more than 900,000 original author abstracts from 1980 to the present.
This acquisition really just finalizes a partnership that has apparently served both Elsevier and the database's former owner, the Beilstein-Institut, well for nearly a decade. Look for Elsevier to bolster the contents and functionality of the database even further as the STM publishing giant draws on resources it already has in the chemical field. Currently, the company's Crossfire and DiscoverGate interfaces allow customers to link between Scopus (Elsevier's abstracting and indexing database) and the chemical reactions and compounds housed in the Beilstein Database. The connections among Elsevier's scientific properties will undoubtedly grow as the publisher continues to respond to a user base now accustomed to tools that are seamlessly integrated into the workflow.
R.R. Bowker Releases Analytics Tool
R.R. Bowker last week launched a business intelligence tool for its publishing industry clients. PubTrack Consumer will serve as a source of marketing information publishers have always struggled to find. The new product provides data on consumer book purchases as well as demographic and behavior profiles of the purchasers.
PubTrack Consumer's database comprises information Bowker collects weekly from a panel of men and women aged 13 and older. Each week, panelists answer a survey of 60 core questions plus as many as 15 proprietary questions submitted by PubTrack Consumer clients. Bowker has outsourced the panel and survey functions to a couple of U.S.-based market research firms.
PubTrack Consumer fits neatly into R.R. Bowker's suite of tools geared toward the information needs of the publishing industry. It also meets an ever-increasing demand for sales and marketing analytics tools that enable publishers to more easily and efficiently analyze their own sales data and adjust their strategies accordingly. Targeted customer data is typically difficult to obtain, so publishers will surely benefit from the in-depth look PubTrack Consumer promises to provide. Bowker has already signed on Random House as a subscriber to the new service, and other prominent publishing houses are sure to follow.
Dirty Data
The recent pronouncement from the research firm Gartner that "dirty data is a business problem, not an IT problem" puts a spotlight on an important issue: automating your business processes won't help -- and might even hurt -- if the underlying data is old, inaccurate, poorly fielded or inconsistent.
Data publishers fully appreciate that their value is based on well-managed data. But businesses -- our customers -- continue to avoid the issue, which most of them find confusing if not overwhelming. What we consistently hear from executives at end-user companies is that because their data is "in the computer," keeping it clean is an IT problem. Those of us who have worked with corporate IT departments know that IT folks typically go to absurd lengths to avoid directly touching data, ever.
To their credit, IT departments are increasingly investing in data hygiene software to try to clean up dirty databases, and there seems to be increasing understanding that the only long-term solution is to catch bad data at input, before it gets into the system. But initiatives on both these fronts have been limited and slow.
This has created a burgeoning opportunity for data publishers, driven by a growing need for clean look-up databases, matching services to help separate the good data from the bad, and even manual and automated data scrubbing services. Once these companies get their databases in shape, there are great opportunities to sell data augmentation services, or even to provide databases on a turnkey basis to companies that don't have the interest or resources to maintain good databases themselves.
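For readers who want a concrete picture of what "catching bad data at input" and matching against a clean look-up file might look like, here is a minimal sketch in Python. The field names, validation rules and look-up contents are purely hypothetical illustrations, not any vendor's actual product or API.

import re

# Hypothetical clean look-up file keyed by normalized company name.
CLEAN_LOOKUP = {
    "acme corp": {"name": "Acme Corp", "zip": "19103"},
    "widgetco": {"name": "WidgetCo", "zip": "10001"},
}

def normalize_name(name):
    """Lowercase, strip punctuation and collapse whitespace for matching."""
    name = re.sub(r"[^\w\s]", "", name.lower())
    return re.sub(r"\s+", " ", name).strip()

def validate_record(record):
    """Return (is_clean, problems) for an incoming record."""
    problems = []
    if not record.get("name"):
        problems.append("missing company name")
    zip_code = record.get("zip", "")
    if not re.fullmatch(r"\d{5}(-\d{4})?", zip_code):
        problems.append("bad ZIP code: %r" % zip_code)
    if normalize_name(record.get("name", "")) not in CLEAN_LOOKUP:
        problems.append("no match in clean look-up file")
    return (not problems, problems)

if __name__ == "__main__":
    incoming = [
        {"name": "ACME Corp.", "zip": "19103"},     # clean once normalized
        {"name": "Widget Company", "zip": "1001"},  # dirty: bad ZIP, no match
    ]
    for rec in incoming:
        ok, issues = validate_record(rec)
        print("accept" if ok else "reject", rec, issues)

The point of the sketch is simply that a few cheap checks at the point of entry, backed by a clean reference file, keep dirty records from ever reaching the database -- which is exactly where data publishers can add value.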
As an industry, we have a lot of ways to help tackle the dirty data problem at its roots and make the world of data a lot cleaner, while cleaning up in the process.
Labels: data hygiene, data scrubbing, dirty data, gartner, Infocommerce, lookup files
Is Hyper-Local Just More Hype About Local?
Hyper-local is the latest buzzword making the rounds. Reduced to its essence, it describes an intensely detailed focus on specific, small, local market areas. The term may have originated with newspapers, which have apparently decided that their new niche should be covering their local communities rather than the international news everyone else is already reporting. The logic is simple and sound: readers value coverage of their local communities, and in many cases nobody else is providing it. Eureka!
Not surprisingly, a number of entrepreneurs have rushed in with websites to exploit the hyper-local opportunity as well. These online, hyper-local publishing ventures draw on every trendy new concept there is: community, blogs, user-contributed content, tags, the list goes on and on. The potential revenue streams are just as varied. Two companies getting a lot of attention right now are backfence.com and outside.in.
There is merit to the hyper-local concept. But it can't succeed without significant investment and a lot of hard work, and that's where a lot of these online ventures come up short. They've designed themselves to take the path of least investment and energy because they want their businesses to be intensely local yet scalable, something they can replicate nationwide. And that's the rub.
The more ambitious the business plan, perhaps ironically, the more compromised the offering. Operators of hyper-local sites, by choice or necessity, end up supplying little more than a platform that they expect people to engage with and pour content into. Some have tried to short-circuit the "chicken and egg" aspect of user-contributed content by supplying aggregated local news, business listings and classified ads. It all looks neat and cool, but at the end of the day it's content readily available elsewhere with little value-add (do I really need to see a Google map of my own town? I already know how to find Main Street, thank you very much).
Many of these sites represent virtuoso programming efforts, but they are just tools, platforms devoid of personality and soul. Nobody is organizing or focusing the conversation. Imagine a local newspaper entirely composed of whatever people sent into it that week. That's about what you get with these online sites, and the result is anything but compelling.
Success in hyper-local really depends on "becoming one with your market," and that's hard to do when your real goal is to be in 100 markets as quickly as possible. This is as true for local consumer markets as it is for vertical B2B markets, and should you doubt this, just think back to a little start-up called VerticalNet. You can't build a sturdy national empire off a shaky local base.
Labels: data, hyper-local, local search, marketing, newspapers, verticalnet