Questioning the Unquestionable

From time to time in business, an idea emerges that sounds obviously good — so good, in fact, that it becomes accepted wisdom almost immediately. Product managers incorporate it into their assumptions, marketers let it shape the way they approach the project, and executives drop it into their presentation slides, so that it ceases to be mere idea and becomes… (drum roll) fact.

The idea makes the leap from theory to principle without ever being subjected to rigorous testing. Often, the idea eventually gets proven out. But what if the so-called fact turns out to be false? Or what if an idea that once was true becomes false? What happens then? Has it happened to you?

The Cost of a Wrong Decision

In three recent engagements, I was brought in to investigate troubled product strategies, and in each case I found faulty assumptions to be the root cause. Perhaps you can see your company in one of these stories:

  • Company A believed that a new set of product features would add great value for the buyer, so they invested heavily in development. Right before launch, during the pricing analysis, they discovered that their main customer set was not interested in paying for the added features. Did the customer already have a way to solve that problem? Had the problem shifted? Had an alternative crept into the market?
  • Company B launched a significant price promotion in response to poor sales of a new software release, but saw no resulting growth in unit sales. Did they fail to get the word out? Was the problem in the channel?
  • Company C believed that one segment of a complex value chain could adopt an innovative information management platform built on slick new technology and use it to service another industry. During the product development cycle, they discovered that the target industry already had an exclusive source for all of its information management purchases. Would the slick new technology fail to “cross the chasm” because there was no initial customer?

These situations are not unusual in Fortune 1000 high-tech firms. They have smart people and good technologies, yet they keep running into this problem. How can it be fixed? How do we know what we think we know? Do we know how to check our assumptions?

As senior executives, we make decisions every day in the presence of incomplete information, drawing on a wealth of experience that lets us decide quickly, often on what feels almost like instinct. In other words, we feel in our gut which way to make the call. But what if our gut is wrong? What is the cost of a wrong decision? In the three situations described above, it could mean lost revenue from price discounting, lost market position, or missed growth. Those can be big costs, especially in these times.

So what’s an executive to do? How can we test our assumptions in a dynamic world? Some principles are so established that it seems that testing them is no different from testing that 2 + 2 = 4. Is it realistic to question everything?

“What if our gut is wrong? What is the cost of a wrong decision to your company?”

Question the Key Decisions

Questioning basic assumptions can be realistic. Rubicon uses a process to confirm that the problem is truly “the” problem. We routinely investigate the details of our clients’ problem statements (this can sometimes be easier for consultants to do than for internal folks), and we find that sometimes the root issue is not the issue that was initially described. This “problem misstatement” happens in large firms just as often as it does in start-ups. Whether product management, sales management, or marketing management is making the statement, often they are simply too close to see all the angles of the situation, or perhaps other options have been viewed as off limits.

However, not every situation warrants a deep “soul searching” effort, for a variety of reasons. The product launch or decision may be clearly incremental and low risk, leveraging earlier successes. Or the marketing budget for a particular product or region may not be large enough to justify a reevaluation. And yet there clearly are cases when business conditions signal that it is time to approach the problem thoroughly. Decisions that shape the future direction of a product line, problems that could have a $30M+ impact, or issues that affect more than 10% of the company’s future direction are examples of strategic challenges that deserve the extra attention. In addition, when a series of tactical solutions fails to hit the mark, it may be worthwhile to take a hard look at the underlying assumptions.

Ways Your Organization Can Approach This Problem

Despite the growing realization that identifying the assumptions behind marketing decisions is important, companies are far from mastering how to do it, especially at the strategic level, where the data is usually fuzzier and harder to come by. Here is an approach you can take if you want a framework for looking at your market differently.

Outline the known assumptions.
First, identify the key points of the case you are making. By this we mean set aside all the cool whiz-bang components of the new product set and look at the fundamental elements. In one recent Rubicon engagement, product success depended on an unspoken assumption of customer loyalty to brand. There were two components to the assumption: first, that the channel wanted to build and maintain customer loyalty as a core part of its strategy, and, second, that end-user customers wanted multi-vendor loyalty programs. Many innovative technologies were envisioned to enable loyalty management, but the fundamental product decisions were made based on unverified assumptions about what retailers and customers were expected to want.

Confirm key assumptions.
Take a look to confirm whether these key assumptions form the complete list. You might check whether the core people involved agree that these are the core assumptions. Every point in the supporting argument behind a decision is either a concrete and verifiable data point (“our unit volume is X”), a prediction (“next year’s industry ASP will be $X”), a (supposedly) logical step (“if this, then that”), a statistic (“X% of our revenue comes from the 18-24 segment”), or an outright assumption (“assuming that customers want XYZ”). The outright assumptions and the concrete data points are easy to categorize. What about the rest? What are the implicit assumptions in a prediction (or a trend, which is simply a particular kind of prediction)? Or a statistic? Are we assuming any correlation between groups? Are we assuming no correlation? What about “logical steps”? Are they truly built on “bulletproof” logic (2 x 3 = 6)? Or is there an implicit assumption? Take “if we lower costs over the next six months we will increase profit,” which, as stated, assumes that price will be stable, that any impact on product quality (cheaper material?) or customer experience (slower fulfillment?) will not affect volumes, and that no significant competitive action will occur.

Determine assumptions to test. When all the assumptions have been identified, ask what needs to be looked at closely. Can we narrow down what needs testing or further evaluation? Relying on the strength of the team or existing secondary data, can we identify which are “safe” assumptions? Then ask which assumptions are truly central to making a good decision. For example, is it making sure the buyer’s problem matches this technology set, or that the customer is driven by price to adopt a product? Usually five or fewer issues are central to any complex business problem. Finally, are there any assumptions that are regarded as unverifiable? Assumptions about competitive behavior typically fall into this category, but they are not the only candidates. Unverifiable assumptions are not showstoppers; they represent informed risk or articles of faith.

At this point, you have identified the key areas for validation. The process of framing the core set of validating questions is analytic, insightful, and deductive. It often looks easy, but it actually depends on a wealth of experience and practice. An approach my team and I use is to fit these observations into a few sets of hypotheses that we can then begin to test.

“Work at being what Intel’s Andy Grove referred to as ‘prudently paranoid’”

Build a hypothesis. It is important to gather data diligently, making sure you have all the facts — and, especially, not just the facts you want to hear or those that are handed to you. It’s common knowledge that paranoid people suffer from what psychologists call “confirmation bias” — they search for evidence to confirm their most sinister thesis or feared expectations, and filter out contradictory data. Check yourself. Are you discarding information inappropriately? Work at being what Intel’s Andy Grove referred to as “prudently paranoid”: keep learning, and keep a sweep going for new data that can shape your thinking in new directions.

Leave room to question your interpretations. Collecting data is important, but the way you filter that information will determine how “prudent” you are. Prudent people leave a margin for error in their interpretations. If anything, be a little suspicious of facts or theories that explain away every question. Nothing is ever that easy. And, if something were, why hasn’t your competition already done it? Chances are there’s a reason.

Avoid jumping to conclusions. Stay in data gathering for a predetermined period of time. If you say that you want to benchmark ten related ISVs during the next three weeks, stick to that plan and stay open to all the data; do not stop after two ISVs because you “already got the data you need.” One common temptation we have found is that executives start forming an opinion after two data points even when they know they need to look at ten. If you initially believe that the problem is big enough to justify three months of data gathering, do it. I have had clients who start asking for the answer two weeks into the project; of course it’s a real temptation to jump to conclusions then, especially on an exciting project. But in the end, this is really just a way of cheating ourselves.

Make a second pass to look for things that refute the suggested direction. Many good product managers I know read things like market research reports selectively. They seize on the part of the sentence in an IDC or Gartner study that says “go for it” and stop reading right before the “only if…” part.

Make the call. Then you’re done. That’s it. One of the toughest challenges is knowing when you have looked at enough information to make a reliable judgment. At what point can you be certain that the direction or trend you have identified is real? It’s easy when 90% of the research points one way and only a few data points conflict. But more often the data is cloudier, and it will come down to judgment. But if you do the complete job, then at least you know what you are judging.

MurderBoarding™ Methodology

Everything starts with a “what is” review, often called an audit. We talk to all of the core members of the effort, and, as needed, pull from reliable secondary research. We work to frame the problem accurately and ensure that the scope of the work is on target with the problem.

The internal audit serves several important purposes, and we do it prior to any custom qualitative research. It provides these results:

  • Independent assessment of the problem as presented
  • Important background and environmental data
  • Buy-in within the decentralized client community of the need for the project (which usually turns out to be valuable later for successful implementation)
  • An understanding of the key dynamics, key players and, more important, the key assumptions that are being made

Experience demonstrates that the internal audit is the analytic cornerstone for understanding the broader business environment and for helping clients identify the root causes of the issues they face.

In some cases the internal audit simply validates the problem as presented. In other situations, the audit leads my team and me, and the client, to revise the definition of the problem before beginning the bulk of the research and interviews. The internal audit ensures that the engagement is both efficient and effective.

After we learn which assumptions need testing, we identify the appropriate areas for research. When interviews are needed, we develop a custom diagnostic that targets the areas for primary research.

The internal audit produces a high return even when the original problem statement is fully validated. Benefits include the following:

  • Superior customer and channel interviews because Rubicon is able to work with the client to build clearly targeted research diagnostics for each class of interview. Making sure you are focused on the right set of questions means you will get what you need from research.
  • Better use of data because the internal audit helps guide the organization and presentation of what we learn. More clarity upfront leads to crisper recommendations, action, and impact.
  • A sharper focus on the real issues early on allows effective time allocation during primary research interviews to probe into areas that would not otherwise be explored. Key insights learned there will help you stay ahead of the competition.

A Sample Case

Recently, my team and I worked with a large software publisher on a new server-based application. They hired us to do a standard price-elasticity study to determine the value price point for the new software, and whether it should be priced per use or per processor.

The context was the launch of Version 2 of an existing product. After several months on the market, Version 1 had gained but a handful of customers despite thousands of leads. Much had been invested in adding more capability to Version 2, so the company could not afford another flop. Since the product automated a process that potential users currently did manually, there was no question in the company’s mind about the need for the offering. Sales pressed for the lowest-possible price for Version 2, citing a long list of competitive offerings, though there was no hard evidence that competitors were winning against Version 1.

The problem seemed straightforward: understand customer price sensitivity and overlay it with competitive research to construct an appropriate pricing and margin model.

The internal audit revealed that the issues were far more extensive, and ran deeper than determining the right price:

  • The internal interviews and some preliminary customer input both suggested that existing customers had already solved the problem economically, or did not see it as needing a technology fix.
  • Version 1 and the technology overlaid in Version 2 represented features that could be naturally divided into two distinct products appealing to very different customers.
  • Most customers viewed the features as two separate products and were confused by the vendor using a single product name.
  • The leads revealed little about the market potential, since they were associated with the current customer profile rather than with buyers who needed the new product technology.

The MurderBoarding audit allowed the client to redefine the problem at the outset and to consider two products. As it turned out, one represented a more mature technology than the other, and consequently the go-to-market strategy for each was very different in four key ways:

  • Pricing strategy
  • Channel strategy
  • Marketing mix
  • Buying process and target audience identification within the customer organization

The client said that without the MurderBoarding methodology, the company might have struggled to create a single strategy for selling a hybrid product to a conceptual hybrid customer segment that did not really exist. Instead, MurderBoarding’s value was to direct the effort and identify a new market segment, one that was relatively price insensitive and greatly needed the new product offering. The benefits went beyond sales revenue, and included market momentum and sales force enthusiasm for the product line.
