Why Intelligence Fails

The most common causes of intelligence failure

What causes intelligence to fail?

Financial losses, loss of life, and military defeat are just some of the more serious consequences that can result from a failure of intelligence.

Operation Barbarossa, 9/11, the Iraq War, British Airways Flight 149, the Texas synagogue hostage situation, and the financial crash of 2008 are just some of the numerous examples of this.

So why does intelligence fail? Imagine the intelligence process represented by a chain, and the stages of the intelligence process each represented by a link in that chain.

If any of these links were to break, the entire chain would fail. Similarly, if there is a failure at any stage of the intelligence process, that intelligence process is liable to fail completely.

The most common causes of failure at each stage of the intelligence cycle are as follows:

  • Direction: Failure to ask the correct question or misinterpreting the question;

  • Collection: Inaccurate basic intelligence;

  • Processing: Bias and logical fallacies;

  • Dissemination: Failure to share intelligence; and

  • Post-Dissemination: Failure to act upon intelligence.

Understanding the most common causes of intelligence failure at each stage will enable analysts to put procedures in place to prevent them occurring, or notice them when they do occur.

Direction Failure:
Failure to ask the correct question, or misinterpreting the question

If the customer asks the wrong question when requesting intelligence, or the question they ask is misinterpreted by the analyst, they are likely to receive an intelligence product that is irrelevant to the decision they are making.

For example, imagine a cell phone manufacturer, Turbo Fone, is developing a new phone. They want to make sure it becomes the market leader, so ask for intelligence on the phone being developed by their rival, Cell King. The question they ask the intelligence agency is:

“What are Cell King’s intentions with regard to the new phone they are developing?”

This seems like a reasonable request – Cell King is Turbo Fone’s main competitor. Therefore, it makes sense that understanding what features Cell King is developing will help Turbo Fone understand what features it needs to include in its own phone to make it the market leader. However:

1. Cell King might be Turbo Fone’s main competitor, but they are not the ONLY competitor.

Ring King, a minor player in the market, is developing a brand-new feature that neither Turbo Fone nor Cell King has thought of. This feature is so ground-breaking that Ring King is likely to usurp both Cell King and Turbo Fone as the biggest phone maker on the market.

Because Turbo Fone only asked for intelligence on the new features on Cell King’s new phone, the intelligence product they receive will not contain any intelligence on Ring King’s new feature, and they will therefore be unable to produce a feature to rival it.

This is an example of a customer asking the wrong question.

2. The intelligence agency may not realise exactly what Turbo Fone wants intelligence on

The Turbo Fone employee briefing their intelligence agency has assumed that when he asked what Cell King’s intentions are for the new phone, the agency would understand he obviously meant what features they intend to build into it.

However, that question could actually have multiple interpretations, for instance:

  • How much do Cell King intend to charge for the new phone?

  • Which countries do they intend to release the new phone in? or

  • When do they intend to put it on sale?

The answer to all of these questions would be useful intelligence, but they would not be the intelligence Turbo Fone wants!

This is an example of the analyst misinterpreting the question.

To avoid either of these errors occurring, it is important that intelligence analysts take the time to ensure the customer is asking the correct question, and that they have understood exactly what the customer expects as an answer.

Collection Failure
Inaccurate or incorrectly-recorded basic intelligence

The phrase “garbage in, garbage out”, often abbreviated to “GIGO” (or sometimes “SISO”, if you catch my drift), is a term used by computer programmers to explain that a program fed flawed input data will produce flawed output, no matter how well the program itself is written.

Similarly, if you built a house out of substandard materials, it would soon collapse. Or you would get a job at Persimmon Homes, one of the two.

The same logic can be applied to intelligence: The less accurate the basic intelligence used in the production of intelligence, the less reliable the resulting product is likely to be.

There are two reasons a piece of basic intelligence can be unreliable:

  1. It was inaccurate in the first place; or

  2. It was recorded or interpreted incorrectly during the collection process.

Mitigating the impact of inaccurate basic intelligence

Intelligence is rarely 100% accurate, even when it comes from the most reliable of sources. Therefore, as analysts we need to take steps to mitigate the negative impact inaccurate intelligence has on our assessments and products. We can do this by:

  • Collecting as many pieces of basic intelligence relevant to the task as is practical;

  • Collecting our basic intelligence from as wide a variety of sources as is practical;

  • Applying reliability grades to the basic intelligence and its sources;

  • Giving more weight to basic intelligence with a higher reliability grade when making assessments; and

  • Qualifying our assessments with probabilistic language.

The more pieces of basic intelligence we use in the production of further intelligence, and the wider variety of sources they are collected from, the easier it will be to identify the anomalous basic intelligence which needs to be treated with caution.
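The weighting step described above can be sketched in code. This is a minimal illustration only: the A–E grades and their weights are hypothetical and chosen purely for demonstration (real grading schemes, such as the Admiralty System, grade source reliability and information credibility separately).

```python
# Hypothetical sketch: weighting reports by an illustrative A-E reliability
# grade before aggregating them into an overall support score. The grades
# and weights are NOT an official grading scheme.
GRADE_WEIGHTS = {"A": 1.0, "B": 0.8, "C": 0.5, "D": 0.3, "E": 0.1}

def weighted_support(reports):
    """Each report is (grade, supports_hypothesis).
    Returns the weighted fraction of reporting that supports the hypothesis."""
    total = sum(GRADE_WEIGHTS[grade] for grade, _ in reports)
    supporting = sum(GRADE_WEIGHTS[grade] for grade, s in reports if s)
    return supporting / total if total else 0.0

reports = [("A", True), ("B", True), ("E", False), ("C", False)]
score = weighted_support(reports)  # high-graded reports dominate the result
```

Under this toy scheme an “A”-graded report counts ten times as heavily as an “E”-graded one, so a single anomalous low-grade report barely moves the overall assessment – which is exactly the behaviour you want when mitigating inaccurate basic intelligence.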

Reduce the likelihood of analyst error

Computer programmers have another term that relates to the next cause of inaccurate basic intelligence very well – they call it a “PICNIC” problem, or “Problem In Chair, Not In Computer”.

They use this to refer to incidents where someone calls them in to complain that their program isn’t working properly, but it turns out that the program is working fine; it is just that the person operating it isn’t doing so correctly.

Similarly, no matter how accurate or reliable a piece of basic intelligence is, if it is not recorded or interpreted correctly by the person collecting it, it can be rendered inaccurate.

In an ideal world, we would have the luxury of multiple analysts to check each piece of basic intelligence had been recorded and interpreted correctly, but in reality this is very rarely the case.

Therefore, to reduce the likelihood of analyst error, analysts should re-read any piece of intelligence that appears to be anomalous, and if it still appears anomalous on second reading, they should ask a colleague to conduct a sanity check to make sure they have read it correctly.

Processing Failure
Bias and logical fallacies

If we think back to our home-building analogy, if the basic intelligence collected during the collection stage is the equivalent of the bricks and mortar, then the processing stage is where these materials are turned into a house.

No matter how good the quality of those bricks and mortar is, if they are not put together using the correct techniques, the house is still likely to collapse.

The most common sources of error in the processing stage are:

  • Bias; and

  • Logical fallacies.


Bias is the term used to describe a disproportionate weighting that has been applied either in favour of or against someone or something. Bias can originate from both inside and outside an intelligence capability.

External Bias

During the direction stage, a customer may indicate when requesting intelligence that they expect the resulting assessment to conform to a certain narrative, usually so they can use it to justify actions they wish to take or to prove a certain point.

It is imperative that you push back against this type of direction and base your assessments purely on the facts and information you have in your possession.

The role of intelligence is to inform the customer’s decision, not to make it for them. If the customer disagrees with your assessment or intelligence, that is their prerogative, but you should never be pressured into altering it as a result.

“Intelligence should influence policy, and not vice versa”

An example of external bias allegedly* being applied can be found in allegations by CIA analysts about then-Vice President of the USA Dick Cheney. There are unconfirmed* reports that he would visit their offices after 9/11 and pressure them into producing intelligence that fit with the Bush administration’s policy objectives (a.k.a. find an excuse to invade Iraq).

If you would like to read more about these allegations, there is a Washington Post article about it and a much longer exposé by Dr John Prados.

*words definitely not included out of fear for my safety.

Internal Bias

If someone or something possesses characteristics that cause it to be biased, it is said to have “inherent bias”. Humans, objects, and processes can (or more accurately, do) all have inherent bias. For instance:

  • A referee that supports one of the teams in the match they are officiating would be more likely to award decisions in favour of the team they support, and could therefore be said to be biased in favour of the team they support;

  • A die that was loaded so it would land on the number 5 90% of the time it was rolled could be said to be biased in favour of the number 5; and

  • Exams rely on the recall of facts, so they can be said to be biased against students with bad memories.
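The loaded-die example above can be made concrete with a short simulation. This is a sketch with illustrative numbers only, comparing the biased die’s behaviour against the roughly one-in-six share a fair die would produce:

```python
import random

# Illustrative sketch of the loaded-die example: a die biased to land on
# the number 5 roughly 90% of the time it is rolled.
def roll_loaded(rng):
    # 90% chance of a 5; the remaining 10% spread over the other faces
    return 5 if rng.random() < 0.9 else rng.choice([1, 2, 3, 4, 6])

rng = random.Random(42)
rolls = [roll_loaded(rng) for _ in range(10_000)]
share_of_fives = rolls.count(5) / len(rolls)  # ~0.9, versus ~0.167 for a fair die
```

The disproportionate weighting in favour of one outcome is exactly what “bias” means in this context: the process still produces all six faces, but not in the proportions an unbiased process would.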

Internal Bias and Intelligence

Allowing bias to creep into analysis risks assessments becoming disproportionately skewed in favour of a certain outcome, and therefore less accurate. For instance:

Northpointe’s Reoffending-Likelihood Predictor

A computer program designed by a company called Northpointe to predict the likelihood a criminal would reoffend was used by some arms of the American justice system, including the Wisconsin Department of Corrections, to help make decisions at every stage of the justice process, from sentencing to parole hearings.

The problem was that the system was only correct 61% of the time, and black people who never reoffended were twice as likely to have been labelled as a high risk of reoffending than white people. The program had the same ratio in reverse for those labelled as a low reoffending risk who would actually go on to reoffend.
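To make the disparity above concrete: “twice as likely to be labelled high risk” is a comparison of false positive rates, i.e. the share of people who did not reoffend but were nevertheless labelled high risk. The counts below are purely illustrative, not the study’s actual data:

```python
# Illustrative sketch (hypothetical counts, NOT the real study data):
# the false positive rate is the share of people who did not reoffend
# but were nevertheless labelled as high risk.
def false_positive_rate(high_risk_non_reoffenders, total_non_reoffenders):
    return high_risk_non_reoffenders / total_non_reoffenders

fpr_group_a = false_positive_rate(44, 100)  # hypothetical group A: 0.44
fpr_group_b = false_positive_rate(22, 100)  # hypothetical group B: 0.22
disparity = fpr_group_a / fpr_group_b       # 2.0: group A mislabelled twice as often
```

Note that a system can be equally “accurate” overall for both groups while still distributing its errors very unevenly between them, which is why auditing error rates per group matters.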


If, during a wargaming session, the analyst playing the role of RED is far more skilled at wargaming than the analyst playing the role of BLUE, the wargaming process is highly likely to result in more scenarios where RED beats BLUE. This would give the impression that RED is stronger than BLUE, even if in reality the sides were evenly matched or BLUE was actually stronger.

Cognitive Bias

Cognitive bias is the name given to inherent bias in humans. It can manifest itself in many different ways. Below are three types of cognitive bias that commonly affect intelligence analysts:

Recency bias: Humans have a tendency to believe things that have happened recently are more likely to happen again in the near future.

The anchoring effect: Humans have a tendency to put more value on the first piece of information they receive about something than subsequent conflicting information (this is why Russian misinformation campaigns work so well: it is much quicker to push a lie about something than waste time verifying the truth).

The IKEA effect: Humans have a tendency to place more value on something they have created. This means analysts are likely to give their own assessments more weight than assessments by others.

Logical Fallacies

A logical fallacy is an argument that can be disproven through reasoning. It is an error in reasoning that occurs when invalid arguments or irrelevant points are introduced, often without any evidence to support them.

If you go on Twitter you will find hundreds of examples of logical fallacies being used, often by people attempting to make serious political arguments (such as “Messi is the GOAT, not Ronaldo”).

There are loads of different types of logical fallacy, so if you want to do some further reading on the subject I would suggest checking out yourfallacyis.com which covers them pretty comprehensively.

While they are all able to derail an otherwise perfectly good intelligence assessment, it would take forever to list them here. Instead, here are three of the most pertinent fallacies with regard to intelligence analysis:

The Gambler’s Fallacy: Believing a certain random event is more or less likely based on the outcome of previous random events.

The Black-or-White Fallacy: Presenting two alternative states as the only possibilities when there are others.

The Texas Sharpshooter Fallacy: Failing to take randomness into account when determining cause and effect.
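The Gambler’s Fallacy is easy to demonstrate empirically. A quick simulation sketch: for a fair coin, the probability of heads immediately after a streak of three heads is still roughly 0.5 – the next flip is not “due” to be tails:

```python
import random

# Sketch illustrating the Gambler's Fallacy: independent random events are
# unaffected by previous outcomes. After three heads in a row, a fair coin
# still lands heads about half the time.
rng = random.Random(0)
flips = [rng.random() < 0.5 for _ in range(100_000)]  # True = heads

# Collect the flip that immediately follows every run of three heads
next_after_streak = [flips[i + 3] for i in range(len(flips) - 3)
                     if flips[i] and flips[i + 1] and flips[i + 2]]
p_heads = sum(next_after_streak) / len(next_after_streak)  # ~0.5, not lower
```

The same logic applies to assessments: a string of similar events does not by itself change the probability of the next one, unless the events are genuinely dependent on each other.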

How to audit your assessments for the presence of bias and logical fallacies

First of all, learn the different ways in which they manifest themselves. This will help you to identify them when they inevitably crop up during your analysis.

However, it is unlikely you will completely eliminate them from your assessments, even if you know them inside out. Unfortunately, bias is part of human nature (without it we wouldn’t be around as a species, but that’s an article for another day).

Therefore, once you have completed your assessment, you need to audit it to ensure none have slipped through the net. To do this, take the following steps:

1: Pretend you disagree with your assessment and want to discredit it
Which points can you identify as having weak reasoning behind them? Go back to those parts and consider whether cognitive bias or logical fallacies crept in when you came up with them.

2: List the main points of your assessment one by one, with the evidence they rely on 
Seeing your points and the evidence they rely on laid out like this may help you identify those points which rely on weak evidence.

3: Go back over some of your old assessments 
If you notice certain types of cognitive bias or logical fallacy have consistently crept into them, go over your current assessment and check for those types specifically.

4: Broad or absolute claims require more evidence than narrow ones
Terms such as “all”, “every”, “always”, and “never” can be appropriate, but they require more evidence to back them up than terms like “some”, “many”, “often”, and “rarely”. Therefore, if you have used any broad or absolute judgements, double-check the strength of the evidence that lies behind them.

5: Include the reasoning behind any deductions you make within your assessment
Much like when your teacher got you to show your working in maths when you were at school, including the reasoning behind your deductions will make it easier for both you and anyone quality-checking your work to notice if any cognitive bias or logical fallacies have crept in.

A bonus of doing this is that it will add more weight to your assessment - making it look more professional to your customer.

Dissemination Failure
Failure to share intelligence

No matter how good a piece of intelligence is, if it is not seen by the right person or organisation, it may as well not have been produced.

Remedy: Understand the factors which inhibit intelligence sharing and implement policies to combat them.

Factors that can impact the sharing of intelligence:

Over-classification: The more individuals and organisations that have access to a piece of intelligence, the more useful it becomes. Over-classifying information and intelligence reduces its pool of potential recipients, and can slow down its dissemination to those who are authorised to view it, which reduces its value. To reduce the likelihood of over-classification, take the following steps:

  • Write the product so it can be classified at as low a grade as possible. If it contains sensitive information that can be removed without impacting the value the product provides, then remove it;

  • If this is not possible, write two separate products: one at a higher classification containing the sensitive information, and one at a lower classification with the sensitive information redacted.

Sending intelligence in an inappropriate format: Intelligence products need to be disseminated in a format that can be consumed by the customer. Your high-definition video could be the most detailed and insightful piece of intelligence ever created, but if the customer has patchy internet they are unlikely to be able to download it, therefore rendering it useless.

Unwillingness to share: Analysts, departments, and agencies can be reluctant to share information with their colleagues or counterparts. They believe (often correctly) that being the ones to release the product containing the information will benefit their reputation, and they don’t want anyone to steal their thunder.

Unfortunately, there is no magic bullet that can completely eliminate this issue: it is human nature. However, by instilling a “one team” ethos across an organisation, in which everyone understands the importance of working together to achieve the same goal, and by encouraging good-quality working relationships (see below), the potential for it to occur can at least be reduced.

Non-existent or poor-quality working relationships: People are much more likely to share information and intelligence with people they have a good working relationship with. To that end, one of the main unsung skills of an intelligence analyst is networking. Regularly make the effort to go and speak with other members of your department and other departments to find out what they are interested in and let them know what you are interested in. It should go without saying that you are much more likely to receive relevant intelligence from people if they are aware it is relevant to you!

Post-Dissemination Failure
Failure to act on intelligence

This is the area over which intelligence analysts and agencies have the least control: If a customer has received intelligence but for whatever reason decides not to act on it, there is not much that can be done.

There are, however, a few things that can be done to reduce the likelihood of intelligence being ignored:

Build relationships with customers: The better the relationship you have with your customers, the more likely they are to take your intelligence on board. For instance, a customer is more likely to trust someone they have a face-to-face meeting with once a week than someone who just sends them a written product by email once a month.

Request feedback on products: If you do not request feedback on your products, you have no idea whether customers are even consuming them - let alone whether they are any good. Getting feedback will also inform you how you can improve your products and make customers more likely to trust and act upon them.

Provide as much evidence as possible to back up your assessments: Customers are more likely to take heed of an assessment if they can see the evidence behind it.

Share your intelligence with as many people as the classification will allow: The more influence you have as an agency or analyst, the more likely a customer will take your intelligence on board. Even if they don’t take your intelligence on board directly, you may influence another of their sources who they do listen to. The more agencies the customer relies on that produce assessments that align with your own, the more likely the customer is to take that assessment seriously.

Found this useful? Follow me on LinkedIn for more intelligence content
