Monday, September 9, 2019

The tragedy of fake empiricism: why your forecasts are wrong

“We have an 80% chance of winning the deal,” proclaims the sales guy proudly. He’s proud of his chances and of his cold, hard number. It’s not just optimistic. It’s scientific. He’ll meet his sales goals, and his boss can report that the turnaround is just around the corner. Maybe it’s all true. Alas, in many companies, such numbers are usually fake, rendering the financial forecasts and risk analyses that rely on them fake as well. But, there’s good news! Real empiricism is not that hard.


Why fake empiricism is a problem


First, let’s examine the problem. Every company, every team and every person constantly faces uncertainties big and small, whether it’s the CEO weighing risks in a multi-million euro investment, the sales team delivering its forecast, or a team of developers prioritizing product features. Probabilities are the scientific expression of uncertainty, from which we can also compute the expected value: an 80% chance multiplied by a €100,000 sale means we can expect to make €80,000 on average (across all such sales). Mathematically correct, so where’s the problem?
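The expected-value arithmetic is trivial; as a one-function Python sketch (the function name is mine):

```python
def expected_value(win_probability: float, deal_size: float) -> float:
    """Average payoff across many similar deals: probability times size."""
    return win_probability * deal_size

# An 80% chance on a €100,000 sale:
print(expected_value(0.80, 100_000))  # → 80000.0
```

The math is sound; the trouble, as the next section shows, is where that 0.80 comes from.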

A number seems empirical. It gives the illusion of having a firm grasp on our work. It sounds professional, even scientific. But, in many cases, the probability value simply has no empirical basis. Rather than measurement, such probabilities are based on one of several common sources:

  • Gut feeling: the probability is nothing more than a gut feeling turned into a number. “I give it 20% at best,” states the department head, and everyone listens to him because it’s the HiPPO (highest-paid person’s opinion).
  • Tool-based assumptions: the leading CRM tool, Salesforce, for example, assigns probabilities based on sales stage. In the early “qualification” stage, it sets the likelihood at 10%; once the offer is sent to the customer, the opportunity moves to “proposal” and receives a 75% chance (with numerous stages in between). The Salesforce defaults can be adjusted to fit a specific company, customer or product line, but many companies just accept the defaults blindly. Yet these defaults have no real measurements behind them. /1/
  • Risk matrices: the classic risk matrix plots probability against impact. On the probability side, a five-level “very low” to “very high” scale assigns a probability band to each level, usually something simple like 0-20%, 20-40%, and so on. Apart from the fact that risk frequencies usually follow exponential curves, such matrices are actually just gut feeling combined with simplified assumptions. /2/
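To see how a linear five-level scale distorts things, here is a small Python illustration; the incident frequencies are invented purely for illustration:

```python
# A typical 5-level risk matrix assumes linear probability bands,
# but measured event frequencies are often closer to exponential:
# many small incidents, very few large ones.
linear_bands = [(0.0, 0.2), (0.2, 0.4), (0.4, 0.6), (0.6, 0.8), (0.8, 1.0)]

# Hypothetical measured annual frequencies by loss size (illustrative only):
measured = {"minor": 0.55, "moderate": 0.18, "major": 0.05, "severe": 0.01}

def band_for(p: float) -> int:
    """Return the 1-based matrix level a measured probability falls into."""
    for level, (lo, hi) in enumerate(linear_bands, start=1):
        if lo <= p < hi or (hi == 1.0 and p == 1.0):
            return level
    raise ValueError(p)

for name, p in measured.items():
    print(name, band_for(p))
# Three of the four risks collapse into level 1 ("very low"),
# even though their frequencies differ by a factor of 50.
```

With exponentially distributed frequencies, the bottom band swallows almost everything, which is one reason the matrix hides more than it reveals.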

The first problem with these methods (if we can call them that) is that the numbers are fictional. No one has actually measured similar cases and used the results to inform the chosen probability. When challenged about probability assumptions, most people respond with, “yes, but it would be too hard to get the numbers,” as if that justified using a fake one. When pressed further, they might adjust the number by rationalizing with assumed facts, but they never actually seek a measured value. /3/

The much bigger problem is that the fictional probabilities are used for further calculations upon which sometimes large decisions are made. To show the danger, take the source above with the most empirical potential: Salesforce. The Salesforce probabilities are used to create sales forecasts given to boards of directors and investors. So, a mid-sized company might have sent out 20 million worth of offers and forecasted using the default 75% probability, thus generating 15 million in expected value for the coming quarter. But, if the true acceptance rate on submitted offers is only 65%, and if 25% of those orders are delayed out of the quarter (due to customer indecision), then we have only 9.75 million for the quarter. And, perhaps 25% of customers only place partial orders. The forecast can easily fall below half of what was reported. By the time the true number is realized, it’s too late for anyone to take corrective action.
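The forecast arithmetic above, as a quick Python sketch (the average size of a partial order is my illustrative assumption, not a figure from the example):

```python
# Re-running the forecast from the paragraph above: the same 20 million
# in open offers, first with the Salesforce default, then with
# measured correction factors.
open_offers = 20_000_000

default_forecast = open_offers * 0.75           # 15.0 million reported
measured_win_rate = 0.65                        # true acceptance rate
delayed_share = 0.25                            # won, but slips out of the quarter
corrected = open_offers * measured_win_rate * (1 - delayed_share)
print(corrected)  # 9750000.0

# If a quarter of the won orders are also only partial (say, half-size --
# an assumed figure), the quarter drops further:
partial_share, partial_size = 0.25, 0.5
with_partials = corrected * (1 - partial_share * (1 - partial_size))
print(with_partials)  # 8531250.0 -- far below the reported 15 million
```

Each correction factor compounds multiplicatively, which is why a few modest errors in the input probabilities can halve the forecast.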

Apart from bad decisions with real business impact, the tragedy is that talented, honest people have believed that they were managing their business or team by giving their gut feelings a number. The number looks empirical, but it’s fictional nonetheless.

Correcting the tragedy


Fake empiricism is a tragedy because it’s actually easy to correct. And that’s the good news. Business intelligence tools are already sophisticated at measuring and correlating just about every aspect of a business. But, we don’t need expensive tools to get most of the benefits. There are four main steps to becoming more truly empirical:

  1. Be explicit (and honest) about what you don’t know.  If asked to give a probability, reflect openly about whether you have real data for your belief.  If you don’t have data, you need to start looking for it (see points 2-4 below). Even with minimal data, you may be able to give a broad range (say, a 50-90% chance). That’s better than a pure gut feeling.
  2. Use the data you have or find adequate proxies. Even without expensive business intelligence systems, useful data abounds in companies. /4/ For example, to correct the Salesforce problem above, we need only two statistics: what percentage of offers our customers accept, and what percentage of accepted offers were delayed before acceptance. (Any company using Salesforce for more than six months will have sufficient data for these calculations.) In other cases, we need to use a proxy. We may not be able to measure the risk of schedule delay due to a specific factor, but we can calculate the general amount of schedule delay by comparing the original plans of last year’s projects with their actual finishing dates. It’s not perfect, but it’s a start (see point 4 below). Also, simple but clever internet searches can reveal a lot of data. You may not be able to measure your specific case, but the internet can tell you the ranges. If only 1% of startups are successful, it’s safe to assume you’ll be part of the 99%. That’s a sobering thought, but it can be useful in focusing your attention on your true benefits or USP.
  3. Know the goal behind the measurement. We have only two reasons to measure something: to reduce our uncertainty in the face of a decision, and to test whether a goal was reached. /5/ So, in the face of a project investment decision, only a few numbers will have a substantial effect on the break-even forecast: usually the top one or two benefits, the potential for delay, and its effect on the costs. Testing whether a goal was reached is vital for improvement. Thus, if we want to become better or faster at something, we need to measure something that speaks to that goal. In most cases, approximate numbers are fine. Many people believe that lack of precision invalidates the measurements. But, calculating the costs down to the cent is a waste of time when you haven’t even approximately estimated the benefits.
  4. Take a learning and improving approach.  Measurements can be repeated and refined over time. Thus, the simple measurement of how many offers were successful should be repeated after improvement measures have been implemented. As we move forward, we can refine the measurement (building in new measurement points) and thus the validity of our numbers. Our goal is not to be 100% right with the measurement, but to become better than last time.
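Steps 2 and 4 can be sketched against a minimal offer log in Python. The field layout and the 30-day delay threshold are my assumptions, not anything Salesforce prescribes; any CRM export with an outcome and two dates per offer is enough:

```python
from datetime import date

# Hypothetical offer log: (accepted?, date sent, date decided)
offers = [
    (True,  date(2019, 1, 10), date(2019, 2, 1)),
    (True,  date(2019, 1, 15), date(2019, 4, 20)),  # long delay
    (False, date(2019, 2, 1),  date(2019, 2, 20)),
    (True,  date(2019, 3, 5),  date(2019, 3, 25)),
    (False, date(2019, 3, 10), date(2019, 5, 1)),
]

accepted = [o for o in offers if o[0]]
win_rate = len(accepted) / len(offers)

# "Delayed" here means: decided more than 30 days after sending.
# The threshold is an assumption; pick one that fits your sales cycle.
delayed = [o for o in accepted if (o[2] - o[1]).days > 30]
delay_rate = len(delayed) / len(accepted)

print(f"win rate {win_rate:.0%}, delay rate among wins {delay_rate:.0%}")
```

Re-run the same calculation each quarter after your improvement measures, and the numbers themselves tell you whether you are getting better: that is the learning loop of point 4.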

So, resist the temptation to give a number to a gut feeling. With a little bit of thought and clever searching, you’ll find data to calculate solid numbers. You’ll still be faced with uncertainty, but at least your uncertainty won’t be compounded with fiction.


Notes


  • /1/ I have found no evidence that Salesforce has any empirical measurements behind its defaults. Furthermore, Salesforce recommends aligning the opportunity categories to a company’s sales process, so any good Salesforce implementation project would seem to require such measurements. The internet lists many companies selling their services for aligning sales processes and setting up better probability assignments.
  • /2/ Small risks happen frequently, and large risks seldom. Thus a linear scale increasing by 20% per level is prima facie false. In fact, the entire “risk matrix” approach is deeply flawed, with some risk experts concluding that it leads to worse decisions than if no approach were used. See Louis Anthony Cox, “What’s Wrong with Risk Matrices?,” Risk Analysis 28, no. 2 (2008): 497–512; Douglas W. Hubbard, The Failure of Risk Management: Why It’s Broken and How to Fix It, 1st ed. (Wiley, 2009).
  • /3/ Research has shown that the more intelligent someone is, the more likely they are to fall prey to cognitive biases. Their intelligence makes them better at rationalizing, not better at reasoning. See Annie Duke, Thinking in Bets: Making Smarter Decisions When You Don’t Have All the Facts (New York: Portfolio/Penguin, 2018) and, of course, Daniel Kahneman, Thinking, Fast and Slow (Farrar, Straus & Giroux, 2011). Since probabilities are an expression of uncertainty, some may see them as the equivalent of a gut feeling. No! Probabilities are a legitimate quantification of uncertainty based on measurements. When a weather forecaster reports an 80% chance of rain, it means that under these conditions, it has rained 80% of the time in the past. If it doesn’t rain, that doesn’t mean the probability was wrong. Rather, this time the 20% happened.
  • /4/ Douglas W. Hubbard, How to Measure Anything: Finding the Value of Intangibles in Business, third ed. (Hoboken, New Jersey: John Wiley & Sons, Inc, 2014).
  • /5/ Hubbard points out that there are two further reasons: sell the data and scientific inquiry. Agreed, but these are exceptions specific to big data companies and to academia.
