When dealing with policy analysis, nothing is certain.

Use of structured analytic techniques can make estimates more accurate.

Use of structured analytic techniques reduces the probability of error in judgments.

Analytic estimates and judgements on policy issues are inevitably indeterminate. That is a primary reason why they should always include some indication of the probability of the favoured outcome.

An expression of probability, however, is nothing more than an expression of the analyst’s expectation regarding the accuracy of their own prediction. In other words, it is an expression of self-confidence.

Since expectations are shaped by the analyst’s mind-set, which is highly susceptible to biases, those biases can easily hijack probability estimates as well.

One subconscious trick our mind-sets play on us is to inflate the perceived probability of a favoured outcome by conjuring up imaginary similarities with past events.

A more subtle one is the belief that what one prefers to be true has a higher probability of happening. The curse of wishful thinking, whereby something will probably not happen simply because one does not want it to happen, is a variation of the same belief.

To complicate things a tad more, the concept of probability is somewhat counterintuitive: it cannot be grasped from our daily experiences alone.

Suppose we say that it won’t rain tomorrow with a probability of 20%. If it does not rain, this forecast was obviously right.

Now, suppose we say that it will rain with a probability of 80% – which is essentially the same forecast put differently. If it does not rain, our forecast was once again BY NO MEANS WRONG, which can sound baffling.

The bottom line is that the use of a probability statement makes every estimate correct, albeit to a varying degree of accuracy.

Probability expressed in an analytic judgement transforms a firm promise into nothing more than an expression of an analyst’s preference regarding whether something is likely to happen or not.

Or, to put it differently, the choice of probability reflects the analyst’s self-confidence.

A key issue here is that people trust confident people more; they equate confidence with competence. This knowledge can tempt analysts to project more confidence by inflating the probability attached to their estimates. They have nothing to lose: whatever the estimate, it can never be completely wrong. So, in for a penny, in for a pound.

In a situation where analysts typically have no personal stake in the accuracy of their judgments, nor face any direct accountability for a flawed estimate, probability inflation can easily escalate beyond the limits imposed by professional responsibility, morality or common sense.

One actively debated issue examines precisely the possibilities for making analysts responsible, at least to a degree, for the accuracy of their judgments.

Such recognised and respected thinkers as Nassim Taleb (author of “The Black Swan”, “Antifragile” and several other bestsellers addressing the subject of futures estimates) and Philip Tetlock (author of “Superforecasting: The Art and Science of Prediction”) represent largely opposing views on the controversial issues involved in forecasting.

That makes it particularly striking to find Taleb and Tetlock agreeing on the need to find a way of rating the accuracy of analytic estimates and making analysts accountable for it. Taleb limits his proposal to a passionate plea for making analysts “put skin in the game”. Tetlock’s suggestion is less emotional and focuses on using the Brier score to assign an accuracy rating to each analyst.
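To make that yardstick concrete: the Brier score (in its common binary form) is the mean squared difference between the stated probability and the actual outcome, scored 1 if the event occurred and 0 if it did not. Lower is better; a constant 50/50 hedge scores 0.25 and a perfect forecaster scores 0. A minimal sketch in Python, with an invented track record:

```python
def brier_score(forecasts):
    """Mean squared difference between stated probability and observed outcome.

    forecasts: iterable of (probability, outcome) pairs, where outcome is
    1 if the event occurred and 0 if it did not. Lower is better: always
    answering 0.5 yields 0.25, a perfect forecaster scores 0.0.
    """
    pairs = list(forecasts)
    return sum((p - o) ** 2 for p, o in pairs) / len(pairs)

# Invented track record: stated probability of the event vs. what actually happened.
analyst_record = [(0.80, 1), (0.70, 0), (0.90, 1), (0.60, 1), (0.20, 0)]
print(round(brier_score(analyst_record), 3))  # 0.148
```

Scored this way, inflated confidence carries a cost: the analyst who routinely says 90% pays dearly for every event that fails to materialise.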

One issue lying at the core of this debate is whether futures are predictable at all. Taleb decisively postulates that futures are random, and randomness cannot be predicted. Tetlock’s view lies towards the opposite end of the continuum. Based on the experience of the Good Judgment Project[i], he argues that while limits on predictability certainly exist, that does not mean that ALL predicting is an exercise in futility; at the very least, certain types of futures can be reliably forecast. Tetlock accepts, though, that the accuracy of expert predictions beyond the five-year horizon “declines towards chance”. Even that is a fairly generous admission: in practice, to avoid falling completely off the mark, estimates typically stand to be corrected after six months at most.

As a start, a historical base rate can nowadays be found for almost everything. Using this outside number as a benchmark converts a ballpark guess into an estimate. The method of grounding an initial estimate in historical figures rests on somewhat shaky ground, however. In foreign affairs in particular, the median probability of many a specific outcome (say, the propensity towards use of force) has been fairly capricious over the span of the past five decades or so.

Counterfactual thinking involves reasoning about how something might have turned out differently than it actually did. At first glance that might look like a sterile debate devoid of useful purpose. What has happened is an accomplished fact, right? Right, but the past was a random outcome – as random as our future will be.

Accepting the infinite plurality of possible pasts helps us come to grips with the similarly infinite plurality of possible futures – radical indeterminacy – so that we can prepare and act accordingly.

Among several core threads, Tetlock backs continuous monitoring and frequent updating of forecasts. He argues that an updated forecast is likely to be a better-informed forecast and therefore a more accurate one. There are a number of caveats to this line of argument that are worth exploring.

For one thing, mind-sets filter new information and reject whatever contradicts the initial hypothesis. Periodic exposure to new information offers no guarantee that it will in fact be used to update a forecast rather than to entrench the initial one even deeper.

For another, most sources display an absolutely horrendous signal-to-noise ratio. Exposure to more noise is hardly likely to improve forecast accuracy. In essence, the issue at hand is how to detect and monitor the development of a particular forecast or scenario amongst many, against a backdrop of ambient noise that eclipses potentially useful signals.
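Setting the signal-detection problem aside, the mechanics of updating itself can be made concrete. One common formalism (a toy sketch, not Tetlock’s own procedure) is Bayesian updating: start from a historical base rate of the kind mentioned earlier and revise it with each new report, weighted by how much more likely that report would be if the hypothesis were true than if it were false. All probabilities below are invented for illustration:

```python
def update(prior, likelihood_if_true, likelihood_if_false):
    """One Bayes-rule revision of P(hypothesis) after observing a piece of evidence."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

# Invented starting point: the historical base rate of the outcome in comparable cases.
estimate = 0.20

# Invented evidence stream: (P(report | hypothesis true), P(report | hypothesis false)).
reports = [
    (0.70, 0.40),  # a moderately diagnostic report
    (0.60, 0.55),  # near-noise: almost equally likely either way
    (0.80, 0.30),  # a strongly diagnostic report
]

for likelihood_true, likelihood_false in reports:
    estimate = update(estimate, likelihood_true, likelihood_false)
    print(f"updated estimate: {estimate:.2f}")  # 0.30, 0.32, 0.56
```

The noise problem shows up in the second report: evidence that is almost equally likely under either hypothesis barely moves the number, however abundant it is; only genuinely diagnostic information earns a large revision.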

Learning to forecast requires practice. Reading books is a good start, but it is no substitute for hands-on experience. There is a caveat, though: where thinking and reasoning skills are concerned, only informed practice improves them. To learn from failure one must first know when one has failed. That is the somewhat obvious prerequisite for understanding why one has failed, correcting course and trying again. The requirement becomes an even bigger complication when dealing with estimates related to foreign policy analysis or the evolution of strategic threats.
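Where forecasts and outcomes are recorded at all, one common way of knowing when one has failed is a calibration check: group past forecasts by the probability assigned and compare it with how often the forecast events actually occurred. A minimal sketch over an invented record:

```python
from collections import defaultdict

# Invented forecasting record: (stated probability, 1 if the event occurred else 0).
record = [(0.9, 1), (0.9, 1), (0.9, 0), (0.7, 1), (0.7, 0),
          (0.7, 0), (0.3, 0), (0.3, 1), (0.1, 0), (0.1, 0)]

buckets = defaultdict(list)
for probability, outcome in record:
    buckets[probability].append(outcome)

# A well-calibrated analyst's "90%" events happen roughly 90% of the time, and so on.
for probability in sorted(buckets, reverse=True):
    outcomes = buckets[probability]
    print(f"said {probability:.0%}: happened in {sum(outcomes)}/{len(outcomes)} cases")
```

The check only works when each forecast can eventually be scored as having happened or not, which is precisely where the next difficulty arises.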

Extracting meaningful feedback on the accuracy of vague forecasts with due dates far ahead in time is a formidable task, all the more so when vagueness, and the resulting elasticity of estimates, is intentional. Besides, at evaluation time hindsight bias kicks in to distort the precise parameters of one’s past state of un-knowledge about the future. That makes it impossible to pass judgement on the ultimate accuracy of a forecast, or on its utility. Probability estimates can be reasonably true when dealing with deterministic problems, but such problems are relatively rare in real life in general and in international policy analysis in particular. However, where there truly is a will, a way can be opened. Once again, the simple but powerful structured technique of problem decomposition can raise the accuracy of predictive analysis.

If the issue at hand is too big or too complex, its components may well turn out to be more manageable. By separating the knowable from the unknowable parts and performing a key assumptions check, remarkably good probability estimates can arise from remarkably crude iterations of guesstimates. It looks simple, but only at first glance.
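As a toy illustration (the question, the components and the numbers are all invented), a compound question can be split into component questions, each answered with a crude guesstimate, and the pieces recombined:

```python
# Invented decomposition of a compound question, e.g. "Will country X impose
# sanctions on country Y within a year?", broken into rough component guesses.
components = {
    "government survives the year":           0.85,  # knowable: history, polling
    "parliament passes enabling legislation": 0.60,  # knowable: vote arithmetic
    "a trigger incident occurs":              0.40,  # largely unknowable: flag as a key assumption
}

compound_estimate = 1.0
for component, probability in components.items():
    compound_estimate *= probability
    print(f"{component}: {probability:.2f}")

# Multiplying assumes the components are independent, which is itself a key assumption.
print(f"compound estimate: {compound_estimate:.2f}")  # 0.20
```

The key assumptions check then targets exactly the weakest links: the component flagged as unknowable and the independence assumption hidden in the multiplication.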

When using the technique of problem decomposition, the key analytic task becomes to figure out:

  • what we know we know,
  • what we don’t know we know,
  • what we know we don’t know and, above all,
  • what we don’t know we don’t know.

One has to reckon with the inevitable circumstance that a plethora of cognitive biases will dull the thinking ability required to accomplish these feats of reasoning. All individuals assimilate and evaluate information through the medium of mental models or mind-sets: experience-based constructs of assumptions and expectations. Such constructs strongly influence which incoming information an analyst accepts and which they discard. A mind-set is the mother of all biases. Being an aggregate of countless biases and beliefs, a mind-set represents a giant shortcut of the mind. Mind-sets are immensely powerful mechanisms with an extraordinary capability to distort our perception of reality. The key risks presented by mind-sets include[ii]:

  • MIND-SETS MAKE ANALYSTS PERCEIVE WHAT THEY EXPECT TO PERCEIVE. Events consistent with prevailing expectations are perceived and processed easily. Events that contradict these tend to be ignored or distorted in perception. This tendency of people to perceive what they expect to perceive is more important than any tendency to perceive what they want to perceive.
  • MIND-SETS TEND TO BE QUICK TO FORM BUT RESISTANT TO CHANGE. Once an analyst has developed a mind-set concerning the phenomenon being observed, expectations that influenced the formation of the mind-set will condition perceptions of future information about this phenomenon.
  • NEW INFORMATION IS ASSIMILATED INTO EXISTING MENTAL MODELS. Gradual, evolutionary change often goes unnoticed. Once events have been perceived one way, there is a natural resistance to other perspectives.
  • INITIAL REACTION TO BLURRED OR AMBIGUOUS INFORMATION RESISTS CORRECTION EVEN WHEN BETTER INFORMATION BECOMES AVAILABLE. Humans are quick to form some sort of tentative hypothesis about what they see. The longer they are exposed to this blurred image, the greater confidence they develop in the initial impression. Needless to say, it is oftentimes erroneous.

A mind-set will distort information and lead to misperception of intentions and events. Such distortions can result in miscalculated estimates, inaccurate inferences, and erroneous judgments.

Structured analytic techniques are specifically intended to foster collaboration and teamwork. Team estimates and judgments reflect a consensus among team members, which blunts and dilutes the intensity of individual biases. Group work is the best – if not the only – known remedy against cognitive biases. On top of that, working in a team offers one other advantage: the consensus judgment of a group consistently beats the accuracy of the average group member.
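A toy numerical illustration of that aggregation effect (all estimates invented): individual forecasts scatter around the truth, and the consensus forecast cancels part of the individual error.

```python
from statistics import mean

# Invented example: five analysts estimate the probability of the same event.
individual_estimates = [0.55, 0.70, 0.40, 0.65, 0.60]
actual_outcome = 1  # suppose the event did occur

def squared_error(probability, outcome):
    """Error of a single probabilistic forecast under a squared-error (Brier-style) rule."""
    return (probability - outcome) ** 2

average_individual_error = mean(squared_error(p, actual_outcome) for p in individual_estimates)
consensus_error = squared_error(mean(individual_estimates), actual_outcome)

print(f"average individual error: {average_individual_error:.3f}")  # 0.187
print(f"consensus forecast error: {consensus_error:.3f}")           # 0.176
```

Under a squared-error rule the consensus forecast can never score worse than the average of its members; how much better it scores depends on how independent their errors are.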

One caveat is that this group advantage holds only when solving riddles, i.e. analytic problems that have a single correct answer. A second caveat is that working in a group presents certain challenges of its own. A central one is the evaluation of incoming information. In a typical setup, different team members are responsible for collecting information from different sources. These discrete data sets are then collated into a database to which all team members have access. When accessing information obtained by others, team members have no way of judging its accuracy or the reliability of the source from which it was obtained. Inferences, estimates and judgements based on potentially inaccurate or unreliable information will inevitably turn out to be potentially inaccurate and unreliable themselves.

Such risks are to some extent mitigated by the mandatory use of uniform grids that separately evaluate the accuracy of the information and the reliability of the source. Each piece of incoming information is thus graded. Every analyst granted access to a piece of information sees the specific grading codes, which assist in forming a judgement about how that information can be used.

Some agencies prefer to use 5×5 grids, while others favour a 6×6 template; the choice depends mostly on institutional tradition or some other historic precedent.

Neither model offers an advantage over the other. The only prerequisite for good results is that the chosen model be uniformly and consistently applied across the whole organisation.
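As a sketch of what such grading codes can look like in practice, below is a toy 6×6 grid in the style of the widely used Admiralty (NATO) scheme, in which a letter grades source reliability and a digit grades information credibility. The labels are indicative only; a real organisation would substitute its own definitions:

```python
# Toy 6x6 grading grid in the style of the Admiralty/NATO system; the labels
# are illustrative and would be replaced by the organisation's own definitions.
SOURCE_RELIABILITY = {
    "A": "completely reliable", "B": "usually reliable", "C": "fairly reliable",
    "D": "not usually reliable", "E": "unreliable", "F": "reliability cannot be judged",
}
INFORMATION_CREDIBILITY = {
    "1": "confirmed by other sources", "2": "probably true", "3": "possibly true",
    "4": "doubtful", "5": "improbable", "6": "truth cannot be judged",
}

def describe(grade: str) -> str:
    """Expand a two-character grade such as 'B2' into plain language."""
    source, info = grade[0], grade[1]
    return f"{grade}: source {SOURCE_RELIABILITY[source]}, information {INFORMATION_CREDIBILITY[info]}"

print(describe("B2"))  # B2: source usually reliable, information probably true
print(describe("F6"))  # F6: source reliability cannot be judged, information truth cannot be judged
```

An analyst seeing a grade such as B2 can weigh the report accordingly without knowing anything about the source itself.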

These and other concepts linked to the subjects of PROBABILITY, ANALYTIC ACCURACY, MIND-SETS and TEAMWORK are discussed at greater length in our course. Its core added value, moreover, lies in practical case studies that provide participants with templates for detecting and mitigating biases, and with experience in using them.

[i] Tetlock, Philip, and Dan Gardner. Superforecasting: The Art and Science of Prediction. New York: Crown Publishers, 2015.

[ii] Condensed from Chapter 2 of Heuer, Richards J., Jr. Psychology of Intelligence Analysis, 1999.

Our instructor’s CV is available on request.
