Looking for New Product Ideas? They're Not All in Your Head

Part Three.

For many information and media companies, the preferred way to develop new product ideas is to brainstorm them internally. Get your best minds in a room and talk about the industry and its needs. You can conduct these sessions in a highly structured way or make them completely freewheeling and open-ended. Good, solid ideas can result.

Brainstorming sessions are both convenient and efficient. And if your staff is deeply engaged in your market, bringing them together to discuss new product concepts can yield powerful, even electrifying results. That’s because your staff is essentially reporting back what it is hearing and seeing in the marketplace. Synthesizing their different inputs, finding themes and conceptualizing solutions to problems is a great group activity, and resulting new product ideas can be very strong indeed.

Contrast that with companies that aren’t close to their markets. Their group brainstorming sessions will yield bigger product concepts (arguably bigger opportunities, but also riskier and harder to execute), incomplete concepts (based on lack of detailed market knowledge), and little certainty about market appetite. Perhaps most significantly, these product concepts, because they tend to be bigger, somewhat amorphous and without clarity as to market need, rarely get developed further.

My bottom line view of new product brainstorming is that it works, but the output can’t ever be better than the input. If your staff knows your market, they can effectively act as customer proxies, and the results can be compelling. If your staff doesn’t, brainstorming results in pipe dreams.

Looking for New Product Ideas? Can We Talk?

Part Two.

As I explained in Part 1, the most dependable new product ideas are totally organic in origin, meaning they are originated by people who want the new product as much for their own use as for others. The best ideas come from real personal need, not concepts or abstractions. To this end, I am surprised so few publishers encourage people to bring them their new product ideas: it’s free market research, and the really good ideas tend to be easy to spot.

Of course, you can’t depend entirely on a passive source like this. That’s why many publishers make an effort to talk to their customers. It doesn’t take a lot of conversations to start hearing about marketplace needs and opportunities. While the idea of talking to customers for new product ideas is well-known, your success depends in large part on how you go about it.

It’s surprisingly difficult to get productive conversations going with your customers. First, you have to get a meaningful amount of time from them, which gets harder every day. Second, you have to enter the conversation without preconceived notions or biases. Third, the conversation needs to be open-ended to allow the customer to take it in any direction. When a customer volunteers something like “but what I could really use is …” you have struck gold. You can have conversations by phone, though in-person conversations are always the best. And please don’t think that sending out an online survey in any way substitutes for customer conversations.

The good news is, yes, customers will tell you what they want, and they’ll do it happily. If multiple customers suggest the same new product idea, you’ve probably got a winner.

Looking for a New Product Idea? Just Ask.

(Part One – Continues Next Week)

Where do really good ideas for new data products come from? Not surprisingly, I am asked this question a lot. Perhaps surprisingly, the answer isn’t all that complicated.

The best ideas for new data products almost invariably come from personal need. History shows that the data products that succeed most readily tend to be highly specialized in terms of content and user base – and they were typically surfaced by people who would use such a data product themselves, if someone else produced it. The person who sees the opportunity knows just how useful and valuable the new product would be, that nothing else like it currently exists in the market, and that there are many other people in similar roles at other companies who would benefit from it. Right there, you have all the ingredients for a winning data product, and I have seen dozens of them over the years, in almost every case started by someone with no data publishing experience, but who did have a deep understanding of the need for the data. As just one example, a recent news article describes a professor who, frustrated by the lack of information on manufacturers of sustainable building products, decided to compile his own directory. Despite being published as a print directory, it’s already in its second edition – the need for this information was clearly out there.

Why did a professor of architectural technology and building science decide to become a publisher? Likely because he didn’t feel he had any options. And that’s not surprising. For despite the intense interest of B2B media companies in new data products, not one that I know of reaches out to its audience for new product ideas. That’s a shame, because in my experience it’s mid-level executives buried deep in large organizations who are the best source of these new opportunities. All you have to do is ask.

North Korea Sparks a Trip Down Memory Lane

The latest news from North Korea should make us grateful we are not in the business there. On word that several North Korean phone directories had been smuggled out of the country, the country’s leader, Kim Jong-un, ordered that ALL phone numbers in the country be changed … randomly and without warning!

Here’s some nostalgia to put this in perspective. Here in the US, it began with something called the “fax machine.” This was a device that scanned documents and then transmitted them via phone lines to a distant location. Faxes were the email of their day, but to get the real-time delivery benefits of faxing, you needed a separate phone line for your fax machine so that it was always available to send and receive. This created a huge jump in demand for new phone lines, and thus, new phone numbers.

As if fax machines weren’t enough, we also had the advent of mobile phones, each of which demanded its own phone number. Phone companies ran out of available phone numbers in existing area codes, and began the seemingly endless process of introducing new area codes (73 in just the past ten years), creating enormous amounts of new work for data publishers in the process.

Those of you in the trenches for all this fun may also recall that the phone companies initially favored the dreaded area code “splits,” where half the people in an existing area code would be assigned a new area code. After much complaining, particularly from businesses that had to change signage, stationery and more, the phone companies moved to “overlay” area codes, where all new phone number requests in an existing area code simply received numbers with the new area code.

That’s another quaint aspect of area codes in the old days – they used to define specific geographies. But with the growth of toll-free numbers, VoIP phones and number portability, your phone number no longer necessarily ties you to any geographic area.

Of course, for all the angst and additional work these changes have caused, at least they were systematic. And if you are looking for expansion opportunities in 2018, North Korea appears wide open.

Where the Value Is in Visual Data

The New York Times recently reported on the results of a fascinating project conducted at Stanford University. Using over 50 million images drawn from Google Street View, along with ZIP code data, the researchers were able to associate automobile ownership preferences with voting patterns. For example, the researchers found that the type of vehicles most strongly associated with Republican voting districts are extended-cab pickup trucks.

While this particular finding may not surprise you, the underlying work represents a programmatic tour de force, because artificial intelligence software was used to identify and classify the vehicles found in these 50 million images. The researchers used automotive experts to identify specific makes and models of cars in the images, giving the software a training basis from which it learned to find and identify vehicles on its own, regardless of the angle of the photo, shadows and a host of other factors that make this anything but an easy task.

This project is believed to represent the first time that images have been used on a large scale to develop data. And while this image identification is a technically impressive example of both artificial intelligence and Big Data, most of the really useful insights come from associating the findings with other datasets, what I like to refer to as Little Data.

Think about it. The artificial intelligence software is given as input an image and the ZIP code associated with that image. The software identifies an automobile make and model from the image, and creates an output record with two elements: the ZIP code and a normalized make and model description of the automobile. With this, you can explore auto ownership patterns by geography. But with just a few more steps, you can go a lot further.
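
To make the shape of that output concrete, here is a minimal sketch in Python. The records, field names and ZIP codes are illustrative assumptions on my part, not data from the study:

```python
from collections import Counter

# Hypothetical output records from the image-recognition step:
# one record per vehicle found, carrying just two elements.
records = [
    {"zip": "05401", "make_model": "Toyota Prius"},
    {"zip": "79701", "make_model": "Ford F-250 SuperCab"},
    {"zip": "79701", "make_model": "Ford F-250 SuperCab"},
]

# Even with only these two fields, you can explore ownership
# patterns by geography: tally the most common vehicle per ZIP.
by_zip = {}
for rec in records:
    by_zip.setdefault(rec["zip"], Counter())[rec["make_model"]] += 1

for zip_code, counts in by_zip.items():
    print(zip_code, counts.most_common(1))
```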

You can use “little data” government and private datasets to link ZIP code to voting districts and thus voting patterns. With this information, you can determine that people living in Republican districts prefer extended-cab pickup trucks.

You can also use the ZIP code in the record to link to “little data” Census demographic data summarized at ZIP level. With this, you can correlate car ownership patterns to such things as income, race, education and ethnicity. Indeed, the study found it could predict demographics and voting patterns based on auto ownership.

And you can go further. You can link your normalized automobile make and model data to “little data” datasets of automobile technical specifications, which is how the study determined, for example, that based on miles per gallon, Burlington, Vermont is the greenest city in the United States.
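
Pulled together, the whole chain amounts to a series of joins on those two fields. Here’s a hedged sketch using pandas; every dataset, column name and value below is an illustrative assumption rather than one of the study’s actual sources:

```python
import pandas as pd

# Output of the image-recognition step: one row per identified vehicle.
# All data here is illustrative, not from the study.
vehicles = pd.DataFrame({
    "zip": ["05401", "05401", "79701"],
    "make_model": ["Toyota Prius", "Honda Civic", "Ford F-250 SuperCab"],
})

# "Little Data" lookup tables. In practice these would be government,
# Census and automotive-spec datasets; all columns here are assumed.
districts = pd.DataFrame({
    "zip": ["05401", "79701"],
    "district_lean": ["Democratic", "Republican"],
})
census = pd.DataFrame({
    "zip": ["05401", "79701"],
    "median_income": [55000, 68000],
})
specs = pd.DataFrame({
    "make_model": ["Toyota Prius", "Honda Civic", "Ford F-250 SuperCab"],
    "mpg": [54, 36, 15],
})

# Each linking step is a plain join on one of the record's two fields.
enriched = (
    vehicles
    .merge(districts, on="zip")        # ZIP -> voting pattern
    .merge(census, on="zip")           # ZIP -> demographics
    .merge(specs, on="make_model")     # make/model -> technical specs
)

# Granular analysis then falls out directly, e.g. average MPG by ZIP,
# the kind of metric behind the "greenest city" finding.
print(enriched.groupby("zip")["mpg"].mean())
```

The point of the sketch is structural: each “little data” table attaches on a single key, which is why a record with only two fields can support such granular analysis.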

Using artificial intelligence on a Big Data image database to build a normalized text database is impressive. But all the real insights in this study could only be developed by linking Big Data to Little Data to allow for granular analysis.

While Big Data and artificial intelligence are getting all the breathless coverage, we should never forget that Little Data is what’s providing the real value behind the scenes.