In a nutshell, Oeberst and Imhoff’s [1] thesis is that a great many cognitive biases “share the same ‘recipe’” – this ‘recipe’ being a combination of erroneous “prior beliefs”, subsequently endorsed through confirmation bias (hence “belief-consistent information processing”). As such, they imply that a large number of cognitive biases might be addressed simply through a more rigorous examination of ‘prior beliefs’ using techniques that have been specifically designed to avoid (or at the very least, mitigate) the risk of confirmation bias. Importantly, Oeberst and Imhoff [1] do not believe that routine analytical and assessment practices (such as “deliberation” alone) can help avoid such biases, and point out that “more deliberation may even entail more belief-consistent information processing and, thus, more bias…” Instead, they argue that tackling such bias might require a “specific form of deliberation…”, and conclude that “Only if people tackle the beliefs that guide – and bias – their information processing and systematically challenge them by deliberately searching for belief-inconsistent information… [might we then] observe a significant reduction in biases.”
There is much in the detailed exposition of their arguments (and the evidence cited in support of these) to commend Oeberst and Imhoff’s [1] persuasive treatise; and given their paper was first published on 17 March 2023 (just 8 months ago at the time of writing) it has garnered intense interest (with more than 6,500 posts on X/Twitter; 5 separate news stories; and 16 independent, and predominantly supportive, citations [2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17]). It is nonetheless not without its weaknesses nor its detractors [7,9,17].
In particular, Oeberst and Imhoff [1] do not offer a coherent explanation as to why ‘confirmation bias’, or what might be interpreted as confirmation bias, appears so commonplace (or, in their words, so “ubiquitous”). Instead, their formulation of confirmation bias is essentialised as a “general human tendency”, “a fundamental principle in human information processing” or simply a common feature of “conditio humana” (the ‘human condition’). This recourse to the intangible mysteries of human nature, and the common challenges that all humans face, is somewhat ironic – not least because it smacks of the unsubstantiated and untestable ‘beliefs’ on which their novel framework depends [7]. Indeed, their focus on ‘beliefs’ – which they define as “hypotheses about the world that come along with the notion of accuracy” – does far more of the heavy lifting than they seem willing to concede, given the self-evident psychological reassurance and comfort that might be derived from ‘belief-consistent information processing’ – that is, setting out to demonstrate your ‘notion of accuracy’ was right all along. Oeberst and Imhoff [1] not only overlook the implicit motivation that such reassurance and comfort might provide, but they also insist that “belief-consistent information processing… [can occur both] when people are not motivated to confirm their belief… [and] when people are motivated to be unbiased or at least want to appear unbiased.” Yet whenever ‘beliefs’ are involved, alternative, additional motivations may simply not be necessary to drive confirmation bias; and whenever ‘beliefs’ are at stake, even the strongest motivations to be, or appear to be, unbiased may not be sufficient to resist an unconscious imperative to confirm these beliefs.
As such it seems likely that the root cause of confirmation bias lies not in some essential human tendency for ‘belief-consistent information processing’ but in the very nature of the ‘belief-assigned ideas’ themselves – that is, the inherent value attributed to such ideas that are considered to reflect or invoke some ‘notion of accuracy’. Moreover, as Oeberst and Imhoff [1] themselves concede, ‘beliefs’ can be: testable or untestable; tested or untested; true or false; strongly or weakly held; and based on copious or precious little thoughtfulness and deliberation. This means that ‘beliefs’ might retain their ‘notion of accuracy’ simply because no evidence is sought or available that might falsify them. Hence: “people overwhelmingly perceive themselves as making correct assessments [i.e. accurate beliefs, either]… because they are correct, or because they are simply not corrected.”
Under these circumstances, believing an idea – regardless of one’s ability or willingness to critically evaluate its accuracy – confers psychological and cognitive value far beyond any of the evidence that might (or might not) be available; and that might (or might not) have been used in the formulation or adoption of this idea rather than any other idea. From this perspective, might not any ideas that ‘come along with’ similarly valuable and valued psycho-cognitive assets – such as faith, hope, trust, confidence, authority and so on – also be capable of triggering and driving confirmation bias? Might this not, for example, explain the extraordinary power of predictions, premonitions and prophecies – ideas that attract an ‘epistemic warrant’ even before the accuracy of their claims can be confirmed by subsequent events [18]?
This is why Simon and Read [17] view ‘belief-consistent information processing’ as compatible with, yet distinct from, their much broader “coherence-based reasoning” framework. Their framework accepts that the perceived value of any ideas (regardless of whether these constitute formal ‘beliefs’ or hypotheses accompanied by a ‘notion of accuracy’) can drive confirmation bias and thereby accentuate the impact of “incorrect, overweighted, or otherwise nonnormative” information or speculation.
What this means for intelligence analysis will be determined by the explicit and implicit ‘value(s)’ attributed to any evidence available, and any insight generated, to reduce decision-makers’ uncertainty and unsubstantiated certainty, and thereby offer them future advantage. Analysts are well aware that different sources of evidence (whether empirical, theoretical or entirely speculative) can assign different types and amounts of ‘prior value’ to the information available for intelligence analysis and assessment. Acknowledging such value as a potential driver of subsequent confirmation bias should help analysts guard against any inherent tendency to privilege evidence solely on the basis that this was initially considered most ‘valuable’. Instead they should subject all evidence and all insight – regardless of perceived ‘value’ – to a consistent battery of systematic, rigorous and robust evaluation. This might include, as Oeberst and Imhoff [1] recommend, “considering the opposite” of their prior beliefs and value-laden ideas; and carefully exploring any ways in which each of these might plausibly be wrong.
Acknowledgements
We are grateful to our colleagues Matt Jolly, Matt Legg and Ben Durrant for many of the insights that led us to the ideas summarised in this article; and to Aileen Oeberst and Roland Imhoff for sharing sources and engaging in collegial dialogue despite our modest differences of emphasis and understanding. This article is under review pending possible publication in the UK Journal of Intelligence Analysis.
References
1. Oeberst, A.; Imhoff, R. Toward parsimony in bias research: a proposed common framework of belief-consistent information processing for a set of biases. Perspect. Psychol. Sci. 2023, 18, 1464–1487.
2. Mattavelli, S.; Béna, J.; Corneille, O.; Unkelbach, C. People underestimate the influence of repetition on truth judgments (and more so for themselves than for others). Cognition 2024, 242, 105651.
3. Rothermund, P.; Deutsch, R. Exaggerating differences back and forth: two levels of intergroup accentuation. Br. J. Soc. Psychol. 2023.
4. Cardenas, S.A.; Sanchez, P.Y.; Kassin, S.M. The “partial innocence” effect: false guilty pleas to partially unethical behaviors. Personal. Soc. Psychol. Bull. 2023, in press.
5. Beukeboom, C.J.; van der Meer, J.; Burgers, C. When “sometimes” means “often”: how stereotypes affect interpretations of quantitative expressions. J. Lang. Soc. Psychol. 2023.
6. Lakhlifi, C.; Rohaut, B. Heuristics and biases in medical decision-making under uncertainty: the case of neuroprognostication for consciousness disorders. 2023, 52, 104181.
7. van Doorn, M. The skeptical import of motivated reasoning: a closer look at the evidence. Think. Reason. 2023, 1–31.
8. Gomez, C.; Cho, S.M.; Huang, C.M.; Unberath, M. Designing AI support for human involvement in AI-assisted decision making: a taxonomy of human-AI interactions from a systematic review. arXiv 2023.
9. Enßlin, T.A.; Kainz, V.; Boehm, C. Simulating reputation dynamics and their manipulation: an agent based model framework. Comput. Commun. Res. 2023, 5, 1.
10. Seitz, R.J.; Paloutzian, R.F. Beliefs made it into science: believe it or not. Function 2023, 4, zqad049.
11. Singh, S. Mindfulness-based self-management therapy (MBSMT): a positive psychotherapy for well-being and happiness. Indian J. Health Wellbeing 2023, 14, 378–382.
12. Matsumoto, N.; Nihei, M. Multiple propositional theory for understanding rule and belief updating: an integrative perspective to improve clinical practice. PsyArXiv 2023.
13. Nelson, A.R. Confident Data Science: Discover the Essential Skills of Data Science; Kogan Page Publishers: London, UK, 2023; 408p.
14. Matern, S. Edward Bernays’ Propagandatheorie: Vom Kampf um Wirklichkeiten und Emotionen in der liberalen Demokratie [Edward Bernays’ Propaganda Theory: On the Struggle over Realities and Emotions in Liberal Democracy]; Verlag Barbara Budrich GmbH: Berlin, Germany, 2023; 340p.
15. Mann, D.L. A system of fundamental beliefs. Syst. Innov. 2023, 253, 2–4.
16. Lysne, V. Metodehjørnet [The Method Corner]. Norsk Tidsskrift for Ernæring (Nor. J. Nutr.) 2023, 21, 4–11.
17. Simon, D.; Read, S.J. Toward a general framework of biased reasoning: coherence-based reasoning. Perspect. Psychol. Sci. 2023.
18. Douglas, H.E. Reintroducing prediction to explanation. Philos. Sci. 2009, 76, 444–463.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).