Comment

Is Time Up for 230?

In 1996, several Internet lifetimes ago, Congress passed a bill called the Communications Decency Act (officially, it is Title V of the Telecommunications Act of 1996). The law was a somewhat ham-handed attempt at prohibiting the posting of indecent material online (big chunks of the law were ultimately ruled unconstitutional by the Supreme Court). But one of the sections of the law that remained in force was Section 230. In many ways, Section 230 is the basis for the modern Internet.

The heart of Section 230 is short – just 26 words – but those 26 words are so important that an entire book has been written about their implications. The key provision reads:

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

The impetus for Section 230 was a string of court decisions in which website owners were held liable for material posted by users of their sites. Section 230 stepped in to provide near-absolute immunity for website owners. Think of it as the “don’t shoot the messenger” defense. Without Section 230, websites like Facebook, Twitter and YouTube probably wouldn’t exist, and most newspapers and online publications probably wouldn’t let users post comments. Without Section 230, the Internet would look very different. Some might argue we’d be better off without it. But the protections of Section 230 extend to many information companies as well.

That’s because Section 230 also provides strong legal protection for online ratings and reviews. Without it, sites as varied as Yelp, TripAdvisor and even Wikipedia might find it difficult to operate. Indeed, running any crowdsourced data site would instantly become a very risky proposition.

The reason that Section 230 is in the news right now is that it also provides strong protection to sites that traffic in hateful and violent speech. That’s why there are moves afoot to change or even repeal Section 230. Some of these actions are well intentioned. Others are blatantly political. But regardless of intent, these are actions that publishers need to watch, because if it becomes too risky to publish third-party content, the unintended consequences will be huge indeed.

Comment

Use Your Computer Vision

Those familiar with the powerhouse real estate listing site Zillow will likely recall that it burst onto the scene in 2006 with an irresistible new offering: a free online estimate of the value of every house in the United States. Zillow calls them Zestimates. The site crashed repeatedly under the traffic load when it first launched, and Zillow now draws a stunning 195 million unique visitors monthly, all with virtually no advertising. Credit the Zestimates for this.

As you would expect, Zestimates are derived algorithmically, using a combination of public record data and recent sales data. The algorithm selects recent sales of comparable nearby houses to compute an estimated value.
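To make the idea concrete, here’s a minimal sketch (in Python) of how a comps-based estimate works in general. To be clear, Zillow’s actual algorithm is proprietary; every field, weight and comp count below is invented purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class Sale:
    sq_ft: int        # living area of the sold home
    bedrooms: int
    miles_away: float # distance from the subject home
    months_ago: int   # how long ago it sold
    price: float

def estimate_value(subject_sq_ft, subject_bedrooms, recent_sales, k=5):
    """Estimate value from the k most comparable recent nearby sales."""
    def dissimilarity(s):
        # Lower is more comparable; these weights are invented for the sketch.
        return (abs(s.sq_ft - subject_sq_ft) / 100        # size difference
                + abs(s.bedrooms - subject_bedrooms) * 5  # bedroom count
                + s.miles_away * 10                       # distance
                + s.months_ago * 2)                       # staleness
    comps = sorted(recent_sales, key=dissimilarity)[:k]
    # Price the subject home off the comps' average price per square foot.
    avg_ppsf = sum(s.price / s.sq_ft for s in comps) / len(comps)
    return subject_sq_ft * avg_ppsf

sales = [Sale(2000, 3, 0.4, 2, 450_000),
         Sale(2200, 4, 1.1, 5, 510_000),
         Sale(1900, 3, 0.2, 1, 430_000)]
print(round(estimate_value(2100, 3, sales, k=3)))  # comps-based estimate
```

The essence is simple: rank recent sales by how comparable they are, keep the closest matches, and price the subject home off their average price per square foot.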

As you would also expect, professional appraisers hate Zestimates. They believe they produce better valuations because they hand-select the comparable nearby homes. However, in the interest of consistent appraisals, the selection process appraisers use is so prescribed and formulaic that it operates much like an algorithm. At that level, you could argue that appraisers have little advantage over the computed Zestimate.

However, one area in which appraisers have a distinct advantage is that they are able to assess the condition and interiors of the properties they are appraising. They visually inspect the home and can use interior photos of comparable homes that have recently sold to refine their estimates.

Not to be outdone, Zillow is employing artificial intelligence to create what it calls “computer vision.” Using interior and exterior photos of millions of recently sold homes, Zillow now assesses such things as curb appeal, construction quality and even landscaping; quantifies what it finds; and factors that information into its valuation algorithm. When it has interior photos of a house, it scans for such things as granite countertops, upgraded bathrooms and even how much natural light the house enjoys, and incorporates this information into its algorithm as well.
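Again purely as illustration, here’s one way detected photo features might be quantified and folded into a valuation: treat each feature the vision model finds as a small percentage adjustment to the base estimate. The feature names and weights here are invented; Zillow has not published how its computer vision model actually works.

```python
# Invented feature adjustments: +2% for granite countertops, and so on.
PHOTO_FEATURE_ADJUSTMENTS = {
    "granite_countertops": 0.02,
    "upgraded_bathroom": 0.03,
    "high_natural_light": 0.015,
    "strong_curb_appeal": 0.02,
}

def adjust_for_photo_features(base_estimate, detected):
    """Apply a percentage bump for each feature the vision model detected."""
    multiplier = 1.0
    for feature, bump in PHOTO_FEATURE_ADJUSTMENTS.items():
        if feature in detected:
            multiplier += bump
    return base_estimate * multiplier

# Example: a $400,000 comps-based estimate with two detected features.
print(adjust_for_photo_features(400_000,
                                {"granite_countertops", "high_natural_light"}))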

With this advance, the appraisers’ remaining competitive advantage looks very much like ownership of “the last mile”: they are the feet on the street that actually visit the house being appraised. But you can see where things are heading: as companies like Zillow refine their technology, the day may well come when an appraisal is performed by the homeowner uploading interior pictures of her house, and perhaps confirming public record data, such as the number of rooms.

There are many market verticals where automated inspection and interpretation of visual data can be used. While the technology is in its infancy, its power is undeniable, so it’s not too early to think about possible ways it might enhance your data products.

The Power to Destroy

In 1819, Supreme Court Chief Justice John Marshall penned the famous phrase, “the power to tax involves the power to destroy.” This insightful commentary came as part of the Court’s ruling in the case of McCulloch v. Maryland. The case involved a move by the State of Maryland to favor in-state banks by taxing the bank notes of the federally chartered Bank of the United States. In a unanimous decision, the Court ruled that Maryland couldn’t try to run the federal bank out of town through clever tax schemes.

This famous phrase pops into my head every time I see data and software companies get too arrogant or too greedy and start to abuse their market dominance. That’s because software and data companies that dominate their markets wield some of the same coercive power as governments: the ability to make rules and set prices.

I got a direct taste of this earlier in the week when an email arrived from QuickBooks to tell me that they were more or less doubling my annual subscription fee. The rationale for this massive increase? The folks at Intuit (parent company of QuickBooks) feel they work very hard and deserve more money. You may recall that Intuit has recently been hard at work with its fleet of lobbyists trying to get legislation passed to prohibit the IRS from offering an online tax filing service. In its annual report, Intuit specifically calls out the threat of federal and state “encroachment” on its business. A touch of entitlement, perhaps?

My email from QuickBooks was followed by an email from Dropbox announcing a 20% price increase. At least Dropbox doubled my online storage in exchange, not that I really needed it.

It’s not just in the software industry where market power is being abused. As just one example, StreetEasy, the dominant real estate listing platform in New York City, stopped accepting automated listings feeds from several major real estate brokers in a fit of arrogance and competitive gamesmanship. Try not to laugh when you read StreetEasy’s justification for suspending automated feeds:

“Sending a feed sounds simple and seamless. It’s not. Continuing to receive listings in such an inefficient way wasn’t doing anyone — agents or consumers — any favors. So, we innovated.”

StreetEasy’s innovation? Data entry screens that require brokers to re-enter all their listings … manually. You can’t make this stuff up.

Often what damages or even kills great data and software companies is the abuse of their dominant market position. They forget why they exist and who the customer is. In many cases they get lazy, finding it easier to raise prices than to keep innovating. Sometimes these companies impose big price increases, as in the case of QuickBooks, simply because they can.

Market dominance creates coercive power that can destroy. With taxation, the party that can be destroyed is the taxpayer. But with private companies, coercive power comes with the ability to destroy … themselves.

Just Do It for Me

The app economy, as many have noted, is primarily based around creating convenience, not delivering true innovation. Despite this, its impact has been profound and pervasive. Consumers have come to expect that they can manage almost every task and life activity via a smartphone. 

The competitive pressures of the app economy lead inevitably to apps trying to one-up each other. Ready availability of risk capital encourages this trend.

Consider something as basic as food. We’ve seemingly solved all the issues that used to make home delivery of groceries such a daunting challenge. We then moved to meal kits, where your ingredients arrive pre-measured, cleaned and chopped. From there, the move was to fully cooked meals delivered to you via a subscription meal plan.

There is a clear trend towards task automation, often in the extreme. And this trend is migrating from the consumer world to the B2B world. 

You can see the trend at work in the wonderful world of sales leads. First came software to let users better manage their prospects. Then came services to let users add to or augment their prospect lists by seamlessly importing new names. When users discovered that their prospect records needed to be maintained and updated, services emerged for this task. When users discovered they had too many prospects to manage effectively, services emerged to rank and score those prospects. Then came “purchase intent” services that tried to turn cold leads into hot leads using automation tools. And now we see a raft of services that offer to do actual appointment setting.

For data publishers, the implication is clear: your customers find the idea of purchasing data alone less and less compelling. Providing them with tools to act on your data was the next obvious evolutionary step, and this has worked out well for most data providers. But a new evolutionary phase is underway: task automation services that do more and more of the customer’s work. It’s well underway in the lead gen world, but it’s coming to your data neighborhood soon. How this plays out will vary by market and product, but the general direction is that customers will pay more to offload some of their work. And that means opportunity for those who can figure out how to take it on.

This Score Doesn't Compute

This week the College Board, operator of the SAT college admissions test, made a very big announcement: in addition to its traditional verbal and math scores, it will be adding a new one, which it is calling an “adversity score.”

In a nutshell, the purpose of the adversity score is to help college admissions officers “contextualize” the other two scores. Based primarily on area demographic data (crime rates, poverty rates, etc.) and school-specific data (number of AP courses offered, etc.), the new assessment generates a score from 1 to 100, with 100 indicating that the student has experienced the highest level of adversity.
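Since the College Board is keeping its methodology secret (more on that below), here is only a generic sketch of how a composite 1-to-100 index like this could be built from normalized indicators. Every input and weight is invented; this is emphatically not the College Board’s formula.

```python
def composite_adversity_score(indicators, weights):
    """Combine indicators (each pre-normalized to 0..1, where 1 means the
    highest adversity) into a single 1-100 score via a weighted average."""
    total_weight = sum(weights.values())
    weighted = sum(indicators[name] * w for name, w in weights.items())
    normalized = weighted / total_weight          # back to the 0..1 range
    return max(1, min(100, round(normalized * 100)))

# Example with invented inputs: area crime and poverty rates, plus a
# school-level measure of how scarce AP courses are.
indicators = {"crime_rate": 0.7, "poverty_rate": 0.55, "ap_scarcity": 0.4}
weights = {"crime_rate": 1.0, "poverty_rate": 1.5, "ap_scarcity": 1.0}
print(composite_adversity_score(indicators, weights))  # -> 55
```

The important point is that every input here describes the school or the area, not the individual student, which foreshadows the design objections below.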

Public reaction so far has been mixed. Some see it as an honest effort to help combat college admission disparities. Others see it as a desperate business move by the College Board, which is facing an accelerating trend towards colleges adopting test-optional admission policies (over 1,000 colleges nationwide are currently test-optional).

I’m willing to stipulate that the College Board had its heart in the right place in developing this new score, but I am underwhelmed by its design and execution.

My first concern is that the College Board is keeping the design methodology of the score secret. I find that odd, since the new score seems to rely on benign and objective Census and school data. However, at least a few published articles suggest that the College Board has included “proprietary data” as well. Let the conspiracy theories begin!

Secondly, the score is being kept secret from students for no good reason that I can see. All this policy does is add to adolescent and parental angst and uncertainty, while creating lots of new opportunities for high-priced advisors to suggest ways to game the score to their advantage. And the recent college admissions scandal shows just how far some parents are willing to go to improve the scores of their children.

My third concern is that this new score is assigned to each individual student, when it is in reality a score of the school and its surrounding area. If the College Board had created a school-level scoring product (one that could easily be linked to any student’s application) and sold it on a freestanding basis, there would likely be no controversy around it.

Perhaps most fundamentally, though, the new score does nothing to strengthen or improve the original two scores. That’s because what it measures, and how it measures it, is completely at odds with them. The new score is potentially useful, but it’s a bolt-on. Moreover, the way this score was positioned and launched opens it up to all the scrutiny and criticism the original scores have attracted, and that can’t be what the College Board wants. Already, Twitter is ablaze with people citing specific circumstances where the score would be inaccurate or yield unintended outcomes.

Scores and ratings can be extremely powerful. But the more powerful they become, the more carefully you need to tread in updating, modifying or extending them. The College Board hasn’t just created a new adversity score for students. It’s also likely to have caused a lot of new adversity for itself.