Article

The Illusion of Control: How Knowledge and Expertise Misclassify Uncertainty as Risk

by Alessio Faccia 1,*, Pythagoras Petratos 2 and Francesco Manni 3

1 Birmingham Business School, University of Birmingham Dubai, Dubai P.O. Box 341799, United Arab Emirates
2 Westminster Business School, University of Westminster, London NW1 5LS, UK
3 Dipartimento di Economia Aziendale, Università Roma Tre, 00145 Rome, Italy
* Author to whom correspondence should be addressed.
Risks 2025, 13(10), 188; https://doi.org/10.3390/risks13100188
Submission received: 8 October 2024 / Revised: 1 September 2025 / Accepted: 24 September 2025 / Published: 1 October 2025

Abstract

This study explores the critical yet often misunderstood distinction between risk and uncertainty. The research examines how knowledge and expertise can contribute to an illusion of control in uncertain environments, leading decision-makers to misclassify uncertainty as risk. This misclassification can lead to inadequate management of unforeseen events and suboptimal decision-making outcomes. The study introduces a novel matrix framework that categorises decision-making environments into four distinct quadrants based on knowledge, expertise, risk, and uncertainty. The framework helps decision-makers navigate the trade-off between risk and uncertainty, guiding them in assessing their current position and informing their decisions. Key findings reveal that expertise, while essential, can lead decision-makers to treat uncertainty as risk. The matrix offers guidance on how to better manage risk and uncertainty.

1. Introduction

Decision-making has become increasingly complex due to the growing interconnectedness of global systems, rapid technological advancement, and the emergence of new and unforeseen challenges. In such environments, the distinction between risk, which is measurable and where probabilities are known, and unmeasurable uncertainty, where the likelihood of outcomes is unknown and unpredictable (Knight 1921), has become increasingly blurred. While traditional risk management approaches focus on assessing and mitigating risks based on known probabilities, they often fail to consider the unpredictability that characterises many real-world situations (Taleb 2007). This confusion between risk and uncertainty presents significant challenges for decision-makers across various sectors, including finance, healthcare, engineering, and public policy.
A key factor contributing to this confusion is the role of knowledge, particularly expertise. Experts with deep technical knowledge are often seen as the most qualified individuals to assess and manage risks. However, their confidence in their own abilities, together with other behavioural biases, can obscure the inherent uncertainty of certain situations, leading to an illusion of control. This phenomenon occurs when experts overestimate their ability to manage unpredictable outcomes, treating uncertainty as if it were a manageable risk. The consequences of this illusion can be profound, as decision-makers may fail to understand and prepare for unexpected developments that lie outside the boundaries of traditional risk assessments.
Despite the wealth of research on risk management and decision-making, a significant research gap remains in understanding how expertise contributes to the misclassification of uncertainty as risk. Traditional risk management frameworks often assume that experts, due to their specialised knowledge, can always manage risks effectively. Moreover, there is a lack of practical tools and frameworks to help decision-makers recognise when they are transitioning from a risk-based environment to an uncertainty-based environment and to adapt their strategies accordingly.
Addressing these research gaps requires a new approach that acknowledges the limitations of expertise in uncertain environments and provides a more flexible framework for decision-making. This study introduces a novel matrix that helps decision-makers differentiate between risk and uncertainty, taking into account their level of knowledge and expertise. This framework presents several advantages (Sutherland et al. 2022) and offers practical tools for reducing the illusion of control and enhancing the adaptability of risk management strategies. It is worth noting that our discussion primarily concerns substantive expertise (Asare and Wright 1995).
The primary objective and contribution of this research are, therefore, to explain how expertise leads to misclassifying uncertainty as risk and to provide decision-makers with a framework that enables them to manage this complex issue more effectively. Specifically, the study aims to develop a conceptual framework in the form of a matrix that categorises decision-making environments into four quadrants based on risk, uncertainty, knowledge, and expertise, in order to better manage risk and uncertainty. This generic framework has wide applicability and usefulness across various domains.
We begin by defining risk and uncertainty through an analysis of Knight’s (1921) work. We then present other important works up to the 1960s, when behavioural approaches began to influence decision-making, including examples of behavioural and cognitive biases in different fields and critiques of risk management methods. In the Methods section, we present classification grids and justify their use. In Section 4, we analyse knowledge and expertise and their limitations. Section 5 constructs the classification grid on risk, uncertainty, knowledge and expertise. In Section 6, we conclude with a discussion of the study’s novelty and contributions, as well as its limitations and future research directions.

2. Literature Review

2.1. Knight and the Risk and Uncertainty Distinction

The distinction between risk and uncertainty has been a focal point in economic theory and decision-making processes for decades. Frank Knight’s work in 1921 is widely regarded as a landmark contribution to this discussion. The main definition is ‘To preserve the distinction which has been drawn in the last chapter between the measurable uncertainty and an unmeasurable one, we may use the term “risk” to designate the former and the term “uncertainty” for the latter’ (Knight 1921). In this section, we intentionally provide numerous direct quotes from Knight to present his arguments more accurately and avoid confusion over terms.
Knight (1921) continues ‘The practical difference between the two categories, risk and uncertainty, is that in the former the distribution of the outcome in a group of instances is known (either through calculation a priori or from statistics of experience), while in the case of uncertainty this is not true, the reason being, in general, that it is impossible to form a group of instances, because the situation dealt with is in a high degree unique.’ He jointly discusses the distribution, outcomes and instances. Discussing a business case, Knight (1921) argues, ‘The essential and outstanding fact is that the “instance” in question is so unique that there are no others or not a sufficient number to make it possible to tabulate enough like it to form a basis for any inference of value about any real probability in the case we are interested in’. Here, ‘unknown’ refers to the probability distribution. Outcomes and instances might be unique, but Knight’s distinction rests on whether probabilities are measurable in a repeatable class.
Nevertheless, in his book, he mainly focuses on probabilities rather than the outcomes and (groups of) instances. For example, he also suggests that ‘We can also employ the terms “objective” and “subjective” probability to designate the risk and uncertainty, respectively, as these expressions are already in general use with a signification akin to that proposed’ (Knight 1921). In addition, in another excerpt, Knight (1921) perpetuates the confusion by failing to distinguish between risk, as a measurable probability, and uncertainty, as a non-measurable probability. Another key argument of Knight (1921) is his opposition to previous work that uses the law of large numbers to distinguish between different types of probabilities. Thus, for this paper, we focus on probabilities rather than outcomes and instances.
Knight drew a clear distinction rooted in measurability. Risk covers cases where a reference class exists and probabilities are derived a priori or from statistics. Uncertainty covers cases without a usable reference class, often because instances are unique, so frequencies are unavailable. The point concerns probabilities, not the range of outcomes. Entrepreneurship illustrates this; impacts are described, yet the chance of success is not measurable.
Ultimately, the key to this discussion lies in the fundamental concept of knowledge. Knight (1921) argues that universal (fore)knowledge would leave no space for entrepreneurs, whose role is to improve knowledge. The word ‘uncertainty’ seemed best for distinguishing the defects of managerial knowledge from the ordinary ‘risks’ of business activity. Knight (1921) extensively analyses knowledge (‘Fundamental to all else is the problem of knowledge itself’), discusses decision-making (‘intelligent action demands knowledge which only in a limited part is possessed or to be had’), challenges ‘perfect knowledge’ (‘our ignorance of the future is only partial ignorance, incomplete knowledge and imperfect inference, it becomes impossible to classify instances objectively’), connects knowledge with uncertainty (‘dealing with uncertainty, i.e., by securing better knowledge of and control over the future’), and mentions the relationship between knowledge and behaviour.

2.2. Keynes and Uncertainty

Subsequent important authors and works addressed uncertainty and risk. John Maynard Keynes, another influential figure in economic thought, also explored the concept of uncertainty. Keynes focused particularly on financial markets, emphasising that investors often face conditions where the future is fundamentally unknowable. In 1937, Keynes argued that many investment decisions are not reducible to probabilities, in line with Knight’s approach. While Keynes did not provide a formalised theory of uncertainty, he underscored how it pervades economic decision-making (Keynes 1937). Dow (2019) finds that Keynes’s distinction between risk and uncertainty is central to his numerous works, particularly his theory of expectations, theory of investment, and theory of money and interest, and concludes, ‘Uncertainty, as unquantifiable risk, was central to Keynes’s philosophy and economics, and continues to be relevant under modern conditions’. Additionally, Keynes paid considerable attention to knowledge. In his famous Treatise, he mentions objective probability and refers to subjective probability only in the sense that it is conditional on knowledge (Dow 2019).

2.3. Expected Utility Theory

One of the earliest and most influential models is Expected Utility Theory (EUT), developed by von Neumann and Morgenstern in 1947. EUT assumes that individuals make decisions by evaluating the expected utility of different options, where utility is a measure of satisfaction or value derived from an outcome. Under this framework, risk is treated as something manageable, as it can be quantified by calculating the expected utility based on the probabilities of outcomes. EUT assumes that individuals are rational actors who will choose the option that maximises their expected utility, making it a cornerstone in economic theory (von Neumann and Morgenstern 1947). However, this model largely sidesteps uncertainty, as it assumes known probabilities, leaving it less effective when likelihoods cannot be calculated or estimated.
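For readers less familiar with the formalism, the criterion can be stated compactly; the notation below is a standard textbook rendering rather than a quotation from von Neumann and Morgenstern.

```latex
% Expected utility of an act A whose outcomes x_1,...,x_n occur with
% known probabilities p_1,...,p_n, given a utility function u:
EU(A) = \sum_{i=1}^{n} p_i \, u(x_i), \qquad \sum_{i=1}^{n} p_i = 1 .
% The rational agent of EUT chooses the act that maximises this quantity:
A^{*} = \operatorname*{arg\,max}_{A} EU(A).
% Knightian uncertainty corresponds to the case where the p_i are unknown,
% so the criterion cannot be applied directly.
```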
Building on these foundations, Expected Utility Theory (von Neumann and Morgenstern 1947) sought to formalise decision-making under conditions of risk. However, it left much to be explored regarding decisions made under uncertainty. Knight’s early work remains vital in understanding this gap. His assertion that utility maximisation becomes impractical when probabilities are unknown remains influential, prompting ongoing debate in the literature (von Neumann and Morgenstern 1947).
Another important model in decision theory is Subjective Expected Utility (SEU), which was developed to address some of the shortcomings of EUT. Introduced by Leonard Savage in 1954, SEU allows for subjective probabilities, meaning that individuals assign their own probabilities to uncertain events based on personal beliefs or limited information. This makes SEU more flexible in handling uncertainty, as it accounts for situations where objective probabilities are unavailable. However, the model still assumes that decision-makers can, in some way, assign probabilities to uncertain outcomes, limiting its application in cases of extreme uncertainty (Savage 1954).

2.4. Rationality

A key contribution related to decision-making and uncertainty is ‘bounded rationality’, introduced by Herbert Simon (1955). Simon (1955) described the certainty rule in which, given information, the behaviour alternative with the highest payoff is chosen, and these ‘outcomes of particular alternatives must be known with certainty, or at least it must be possible to attach definite probabilities’. He further argues that the decision-maker would ‘be satisfied with a more bumbling kind of rationality, will make approximations to avoid using the information he doesn’t have…without ever making probability calculations…it is the kind of rational adjustment that humans find “good enough”’, and therefore chooses an outcome that is not the optimum but one that satisfices. The seminal work of Herbert Simon was later expanded into maps of bounded rationality, contributing to psychology and behavioural economics (Kahneman 2003). This work is discussed in more detail later in relation to behavioural and cognitive biases. The distinction between risk and uncertainty gained further complexity in the 1960s with the work of Daniel Ellsberg. Through the well-known Ellsberg Paradox, Ellsberg demonstrated that individuals tend to avoid situations involving ambiguity, even when the expected returns are identical to those in situations of known risk. This behaviour, known as ambiguity aversion, has become a core element in the study of behavioural economics, contrasting sharply with classical economic theories that assume rational behaviour (Ellsberg 1961). In this context, ambiguity means uncertainty about probabilities. Ambiguity aversion shows that people often have an inherent bias against uncertainty, preferring situations where they feel they have more control, even if the probabilities are not necessarily in their favour. This concept has been incorporated into various behavioural models, showing how deeply uncertainty affects decision-making processes (Ellsberg 1961).
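A minimal numerical sketch of the two-urn version of the paradox may help; the payoffs and the uniform prior over the unknown urn are our own illustrative assumptions.

```python
# Illustrative sketch of the Ellsberg two-urn setup (hypothetical payoffs).
# Urn A: 50 red, 50 black balls (known composition).
# Urn B: 100 balls, red/black mix unknown (ambiguous).
# Betting on red pays 100 if a red ball is drawn, 0 otherwise.

PAYOFF_WIN, PAYOFF_LOSE = 100, 0

def expected_payoff(p_red: float) -> float:
    """Expected payoff of betting on red when P(red) = p_red."""
    return p_red * PAYOFF_WIN + (1 - p_red) * PAYOFF_LOSE

# Urn A: the probability of red is objectively 0.5.
ev_known = expected_payoff(0.5)

# Urn B: the probability of red is unknown; under a symmetric (uniform) prior
# over the possible compositions, the expected probability of red is also 0.5.
ev_ambiguous = sum(expected_payoff(k / 100) for k in range(101)) / 101

print(f"Known urn expected payoff:     {ev_known:.2f}")
print(f"Ambiguous urn expected payoff: {ev_ambiguous:.2f}")
# Both are 50.00, yet most people prefer the known urn: ambiguity aversion
# is a preference about unknown probabilities, not about expected value.
```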
Therefore, the foundational theories of Knight, Keynes, and Ellsberg have fundamentally shaped our understanding of risk and uncertainty. Knight’s clear distinction between the two concepts has been central to further developments in economic theory. In scenarios where information and knowledge are incomplete or unreliable, these early ideas continue to inform current approaches to risk management and decision-making under uncertainty.

2.5. Contemporary Definitions: Modern Perspectives on Risk and Uncertainty

In recent years, scholars across various fields have refined and expanded the concepts of risk and uncertainty. While the foundation laid by Frank Knight remains relevant, contemporary thought has introduced new dimensions, particularly the role that data and predictive models play in differentiating between these two ideas. In particular, researchers argue that uncertainty is more likely to emerge in systems affected by external shocks or sudden changes, where historical patterns do not provide sufficient guidance for future events (Taleb 2007).
Gigerenzer (2002), for instance, has argued that many of the risks we face are presented as quantifiable when, in reality, they are underpinned by deep uncertainty. He suggests that overconfidence in statistical models often leads to misclassifying uncertainty as risk. This misclassification has significant consequences for decision-making, particularly in industries where outcomes cannot be easily predicted despite the availability of data (Gigerenzer 2002).
The modern discourse, therefore, revolves around the extent to which data, mainly historical, and associated models can truly express uncertainty. While advanced algorithms and data analytics have enhanced the ability to quantify risk, they have not eliminated the fundamental unpredictability of certain events. This has led to a growing recognition that uncertainty is not just a lack of information, but an intrinsic feature of many systems, especially those subject to rapid or unforeseen changes.
A significant challenge to EUT came in the late 20th century through bounded rationality (Simon 1955) and later behavioural economics and Prospect Theory, introduced by Kahneman and Tversky (1979). These models diverge from the rational actor model of EUT by demonstrating that people do not always behave rationally when confronted with risk and uncertainty. In addition, Modern Portfolio Theory (MPT) and the Capital Asset Pricing Model (CAPM), developed by Harry Markowitz and others, including Sharpe, in the 1950s and 1960s, provide a framework for managing risk in financial investments. MPT assumes that investors can diversify their portfolios to reduce overall risk by balancing assets with different levels of risk. However, MPT primarily deals with risk, as it assumes that the probabilities of returns are known or can be estimated based on historical data. The model is less effective in handling uncertain events.
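The diversification logic of MPT, and its dependence on known inputs, can be shown in a few lines; the volatilities, correlation, and weights below are illustrative assumptions, not estimates from any real market.

```python
# Minimal sketch of Markowitz-style diversification (illustrative numbers only).
import numpy as np

# Annualised volatilities of two assets and their correlation (assumed).
vol = np.array([0.20, 0.30])
corr = 0.2
cov = np.array([[vol[0]**2, corr * vol[0] * vol[1]],
                [corr * vol[0] * vol[1], vol[1]**2]])

weights = np.array([0.6, 0.4])  # portfolio weights summing to 1

# Portfolio volatility is the square root of w' Sigma w.
port_vol = np.sqrt(weights @ cov @ weights)
naive_vol = weights @ vol  # weighted average of volatilities (no diversification)

print(f"Weighted-average volatility: {naive_vol:.3f}")
print(f"Portfolio volatility:        {port_vol:.3f}")
# Portfolio volatility is lower because the assets are imperfectly correlated;
# the calculation presupposes that the covariance matrix is known or estimable,
# which is exactly where Knightian uncertainty escapes the model.
```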

2.6. Examples of Behavioural and Cognitive Biases in Different Fields

Although knowledge and expertise are often seen as powerful tools for managing risk, they are not immune to behavioural and cognitive biases that can distort risk assessments.
Studies have shown that such biases are common among experts in fields such as finance, medicine, and engineering. For example, Jain et al. (2022) provide a review of behavioural biases in investment decision-making, covering more than 200 papers and focusing on terms such as risk perception and risk-taking behaviour. Another review highlights behavioural biases in catastrophic risks and argues that catastrophe modelling cannot remove uncertainties. The purpose of this section is not to cover the extensive literature on behavioural biases related to risk and uncertainty, but to mention some important arguments that justify this distinction.
Most notably in financial markets, Kahneman and Tversky (1979) found that even experienced traders often display overconfidence, leading them to make decisions based on the belief that they can consistently predict market movements. This overconfidence can lead them to take on excessive risks or misinterpret uncertain situations as manageable (Kahneman and Tversky 1979). It can be argued that in these cases, overconfidence shifts the focus away from acknowledging uncertainty, fostering a misplaced belief in the expert’s ability to control outcomes. This tendency is especially evident in financial markets, where overconfident traders take on undue risks, as demonstrated in the 2008 financial crisis (Taleb 2007).
In the medical field, Paul Slovic (1987) demonstrated that overconfidence among doctors can lead to poor risk assessments, particularly in diagnoses and treatment decisions. Medical professionals, relying heavily on their expertise, may overlook potential complications or downplay uncertainties related to patient outcomes. This overconfidence can lead to suboptimal treatment plans, particularly in cases where the outcome is uncertain or the condition is rare. Slovic’s research shows that the more experienced the physician is, the greater the likelihood of overconfidence, as they are more prone to trust their judgement without accounting for the complexities or uncertainties that may arise (Slovic 1987).
In engineering, overconfidence is particularly problematic in large-scale projects, where engineers must balance safety, cost, and time constraints to ensure project success. Petroski (1994) explored how overconfidence in engineering expertise has sometimes led to catastrophic failures, such as bridge collapses or dam breaches. Experts, relying on their prior knowledge and experience, may overlook uncertainties related to material behaviour or environmental conditions, assuming that their designs will work in all situations. Petroski notes that this overconfidence can transform what should be viewed as uncertainty into a perceived risk, leading engineers to believe that any potential issues can be anticipated and mitigated through their expertise (Petroski 1994). This connects with Flyvbjerg’s evidence on megaproject cost overruns, which reflects the same class of judgement failures under complexity and time pressure (Flyvbjerg 2003).
Another cognitive bias closely related to overconfidence is confirmation bias, a prominent distortion in which individuals favour information that supports their pre-existing beliefs while disregarding conflicting evidence (Klayman and Ha 1987). In healthcare, doctors may favour tests or treatments that confirm their initial diagnosis, leading them to dismiss uncertain or contradictory signs that might indicate a different diagnosis (Klayman and Ha 1987). Slovic (1987) also showed how reliance on initial diagnoses impairs the identification of rare conditions. Similarly, in engineering, unchecked confidence in design assumptions has led to catastrophic failures, including the collapse of the Tacoma Narrows Bridge (Petroski 1994).
Moreover, the literature suggests that the illusion of control—the belief that one can influence outcomes that are, in fact, governed by chance—also plays a significant role in expert decision-making. Langer (1975) demonstrated that this illusion is prevalent among experts, particularly in high-stakes fields such as finance and healthcare. Experts tend to believe that their experience and knowledge give them control over events that are actually unpredictable, which can lead to underestimating uncertainty and overestimating the predictability of risks (Langer 1975). This illusion of control reinforces overconfidence, further distorting risk assessments and confusing the distinction between what can be managed as risk and what remains uncertain.
Groupthink and herding are other cognitive biases that affect risk assessments, particularly in industries that rely heavily on expert consensus. Structured expert judgement reduces these biases; Cooke’s classical model, for example, utilises calibration questions, information scores, and performance-based weights to aggregate expert judgements. Experts working in groups may fall into a trap where dissenting views are suppressed in favour of group consensus, leading to a false sense of certainty. This bias has been documented in both financial markets and project management, where the combined overconfidence of a group of experts can amplify the distortion of risks and uncertainties (Janis 1982). When groups of experts reinforce each other’s overconfidence, creating herding behaviour, they often underestimate the role of uncertainty, assuming that collective expertise can overcome any potential problems.
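As an illustration of how structured, performance-based aggregation differs from simple consensus, the sketch below applies hypothetical performance weights to hypothetical expert estimates; it only mirrors the spirit of Cooke’s classical model, which derives its weights from calibration and information scores on seed questions.

```python
# Simplified illustration of performance-weighted aggregation of expert
# probability judgements. Weights and estimates are hypothetical.

experts = {
    # name: (performance-based weight, P(event) estimate)
    "expert_A": (0.6, 0.10),
    "expert_B": (0.3, 0.25),
    "expert_C": (0.1, 0.60),
}

total_weight = sum(w for w, _ in experts.values())
pooled = sum(w * p for w, p in experts.values()) / total_weight

equal_weight = sum(p for _, p in experts.values()) / len(experts)

print(f"Performance-weighted estimate: {pooled:.3f}")
print(f"Equal-weight estimate:         {equal_weight:.3f}")
# Weighting by demonstrated calibration dampens the influence of poorly
# calibrated (e.g., overconfident) experts relative to a simple average,
# which is one structured defence against groupthink and herding.
```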
Anchoring bias, identified by Tversky and Kahneman (1974), leads individuals to over-rely on initial reference points. For instance, Simonsohn and Loewenstein (2006) found that this bias significantly influences rent decisions, where initial prices anchor subsequent judgements regardless of market changes. The planning fallacy, another critical bias, reveals how individuals systematically underestimate the time and resources required for tasks. Buehler et al. (1994) demonstrated this in various planning scenarios, where even experienced professionals often failed to account for unforeseen delays, leading to inadequate risk management strategies. Hindsight bias distorts decision evaluations by making past events appear predictable after they have occurred. Fischhoff (1975) demonstrated that individuals tend to reconstruct the predictability of outcomes once they know the result, a bias often observed in post-mortem analyses of failed projects or investments.
The illusion of validity, highlighted by Tversky and Kahneman (1974), causes decision-makers to trust overly consistent yet irrelevant data patterns, especially in predictive models. It is frequently observed in financial modelling, where historical trends are mistaken for future guarantees, as highlighted by Taleb (2007). Finally, the escalation of commitment bias, explored by Staw (1981), leads individuals to continue investing in failing strategies due to prior investments, ignoring emerging uncertainties. This phenomenon is widely documented in corporate decision-making, where firms often persist with failing projects, thereby compounding their losses.

2.7. Critiques of Risk Management Methods

A growing body of literature critiques traditional risk management frameworks for their inadequate treatment of uncertainty. For example, during the COVID-19 pandemic, many organisations following traditional risk management models found themselves unprepared for the scale and impact of the crisis, which exposed the limitations of frameworks that focus primarily on risks that are known or quantifiable (Aven 2016). One prominent critique comes from Nassim Nicholas Taleb, who argues that conventional risk management models, particularly in finance, fail to account for extreme or “black swan” events. These are rare, unpredictable occurrences that have a massive impact, such as the 2008 financial crisis or the global COVID-19 pandemic. Taleb argues that traditional frameworks are overly reliant on historical data and fail to account for the inherent uncertainty in complex systems (Taleb 2007). In such cases, the frameworks provide a false sense of security, as they assume that past data can reliably predict future risks. Practitioners often overlook extreme events, or ‘black swans’, when applying probabilistic models such as EUT, which assumes known probabilities and finite expected utilities. Under fat tails or model error, practice mis-specifies the inputs, which leads to poor decisions. Formal models can represent extremes through fat-tailed distributions, extreme value theory, stress testing, and scenario analysis; the failure often lies in specification, short samples, and weak governance, not in the absence of probabilistic tools.
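A short simulation illustrates the specification point: with the same variance, a fat-tailed model assigns far more probability to extreme losses than a thin-tailed one. The distributions and the threshold below are illustrative assumptions.

```python
# Sketch of how a thin-tailed model understates extreme losses relative to
# a fat-tailed one (simulated, illustrative parameters only).
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000
threshold = -5.0          # a "5-sigma" daily loss, in standardised units

# Thin-tailed model: standard normal returns.
normal_draws = rng.standard_normal(n)

# Fat-tailed model: Student-t with 3 degrees of freedom, rescaled to unit variance.
df = 3
t_draws = rng.standard_t(df, n) / np.sqrt(df / (df - 2))

p_normal = np.mean(normal_draws < threshold)
p_t = np.mean(t_draws < threshold)

print(f"P(loss beyond 5 sigma), normal model:    {p_normal:.2e}")
print(f"P(loss beyond 5 sigma), Student-t model: {p_t:.2e}")
# With identical variance, the thin-tailed model treats such losses as
# practically impossible, while the fat-tailed model makes them rare but
# expected; the difference is a modelling choice, not a data problem.
```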
Paul Slovic and other scholars in the field of behavioural risk research have also critiqued traditional risk management models for their emphasis on rational decision-making. They argue that these frameworks often ignore how individuals and organisations actually perceive and respond to risk. This work suggests that risk management models focusing purely on objective risk assessments fail to capture the psychological dimensions of uncertainty, particularly in industries where human judgement plays a critical role (Slovic 1987).
Another critique comes from the socio-technical perspective, which argues that risk management frameworks overlook the complexity of social and technological systems. Perrow’s (1984) work on normal accidents theory demonstrates that in highly complex systems—such as nuclear power plants or aviation—accidents are not merely the result of isolated risk factors but are intrinsic to the system’s design. Traditional risk management frameworks that focus on individual risks often miss the broader systemic uncertainties that can lead to catastrophic failures (Perrow 1984).
Furthermore, Aven (2013) has argued that the distinction between risk and uncertainty is insufficiently addressed in current frameworks. He points out that many frameworks treat all uncertainties as if they could eventually be turned into calculable risks. Aven suggests that more attention needs to be given to managing deep uncertainties, where the lack of knowledge is so profound that probabilistic estimates are not meaningful. This critique advocates for new approaches that can better manage situations where the future is fundamentally unpredictable, rather than relying on the assumption that all risks can be quantified (Aven 2013).

3. Methodology

3.1. Linking Knowledge, Uncertainty and Risk

‘When there is uncertainty, information or knowledge becomes a commodity…But the demand for information is difficult to discuss in the rational terms usually employed’ (Arrow 1966). In this pioneering work, Arrow (1966) links uncertainty with knowledge while emphasising the importance of rationality. Knowledge is essential for decision-making under uncertainty, as Knight (1921) and the above discussion emphasise. In the economics literature, knowledge is often used as a synonym for information. However, our paper adopts a broader approach, drawing from research in psychology and aiming to contribute to various fields beyond psychology and economics, including management, healthcare, and public policy.
The methodology of this study revolves around generating a conceptual framework that links the key concepts of risk and uncertainty with knowledge and expertise. The goal is to develop a framework that discusses how confusion between risk and uncertainty can arise and how decision-makers can more effectively address these conditions by leveraging knowledge and expertise. Due to the complexity of the topic and the numerous risks and types of knowledge and expertise involved, we prefer a generic framework that can be applied to different circumstances. Classification grids have significant advantages in this context, which are justified in the following section. The framework addresses the central research question:
How do knowledge and expertise lead decision-makers to treat uncertainty as risk, and how can they manage decision-making better under both conditions?

3.2. Classification Grid for Decision Contexts

The classification grid for decision contexts emerged in the 1980s and has since been applied to diverse fields and various contexts, becoming the most popular method for presenting risk (Jordan et al. 2018). Cox (2008) suggests that classification grids are popular in various applications, including terrorism risk analysis, project management, climate change risk management, and enterprise risk management (ERM). National and international standards have stimulated and augmented the adoption of classification grids by many organisations and risk practitioners.
To demonstrate the popularity and widespread use of classification grids, Jordan et al. (2018) find that more than 250 papers on risk management and matrices have been published since the 1980s, and approximately 40 standards and guidelines related to them are currently in place. The US Department of Defense MIL-STD-1629 (1980), Procedures for Performing a Failure Mode, Effects, and Criticality Analysis, is considered the first standard to utilise a matrix; it connected the probability of occurrence with a severity classification for failures. Almost a decade later, the Project Management Institute created Project and Program Risk Management: A Guide to Managing Project Risks and Opportunities (Wideman 1992), while at the same time The Royal Society (1992) issued Risk: Analysis, Perception and Management. Report of the Royal Society Study Group. Other notable advances are the COSO (2004) Enterprise Risk Management–Integrated Framework: Application Guidelines, the International Risk Governance Council (IRGC) White Paper No. 1, Risk Governance: Towards an Integrative Approach (Renn and Graham 2006), and ISO 31000:2009, Risk Management: Principles and Guidelines on Implementation.
These standards are critical for the popularity of classification grids. They are used by a plethora of practitioners and academics in various industries and organisations, especially with respect to ERM. Additionally, they are related to other work on risk management, further enhancing the effectiveness of classification grids. For example, the IRGC report mentions many other relevant approaches to risk. It is worth noting that various terms are used in relation to risk, most notably safety, hazards and disasters. Therefore, it can be argued that the concepts of risk and uncertainty are utilised in different contexts, further expanding their scope and the applications of associated (risk) matrices. Finally, these risk frameworks have been updated over the years and extended to include additional phenomena, such as cybersecurity, further augmenting the popularity of classification grids.
Classification grids have numerous advantages that contribute to their popularity. They are relatively simple yet effective approaches that provide a clear framework for a systematic review of risks, support risk rankings and priority setting, and, most importantly, are well understood, offering opportunities for many, often diverse, stakeholders to participate and customise them (Cox 2008). Heat maps are widely used for communication. The grid developed below, however, is not a frequency–severity matrix: it classifies decision contexts based on knowledge of probabilities and decision stakes, and is intended for scoping and governance rather than quantitative risk analysis. Classification grids are a widespread means of presenting and visualising risks, and despite the diversity of applications, the rationale of decision-making is omnipresent throughout the literature, connecting classification grids to decision-making (Jordan et al. 2018). We adopt classification grids because their popularity and widespread use justify this choice; additionally, the connection between classification grids and decision matrices provides a suitable rationale for this study.

4. Analysis of Knowledge and Expertise

4.1. The Definition and Relation of Knowledge and Expertise

Charness and Schultetus (1999) define knowledge as acquired information. This definition aligns with the economic literature, which treats knowledge as synonymous with information. It is adopted by Lewandowsky et al. (2007), who studied knowledge and expertise and argued that expertise illustrates the essential characteristics of knowledge. There is common agreement in the literature that an expert is characterised by superior performance in a particular domain. Expertise is the result of a learned adaptation and is specific to a domain (Lewandowsky et al. 2007). Therefore, obtaining knowledge and subsequently becoming an expert through this learned adaptation can be viewed as an evolutionary process. As we will argue later, this evolutionary process can be successful, but it can also lead to shortcomings and errors.
Holyoak (1991) discusses commonalities between experts in different domains and finds that expertise develops from knowledge, initially through weak methods. It is evident, therefore, that expertise is related to and can be argued to develop through the accumulation of knowledge. In addition, experts possess superior memory for information in their domains and are able to search forward from given information (Holyoak 1991), highlighting the important role of information in expertise. Expertise also increases with practice, further supporting its evolutionary character, and can be predicted from knowledge of the rules it employs (Holyoak 1991). In that sense, the development of expertise is considered a predictable process that can be paralleled with modelling based on historical data, resulting in limitations in managing uncertainty, which is not easily predicted. We later argue that expertise can also enable the management of uncertainties in situations where knowledge is insufficient. Our argument targets gaps in normative expertise: domain specialists may overestimate the scope and reliability of their knowledge when calibration and base-rate discipline are weak.

4.2. Knowledge and Expertise as a Risk Reduction Tool

Across numerous fields, expertise is frequently employed as a key tool in managing and mitigating risk. Professionals with specialised knowledge and experience are often better equipped to assess potential risks, predict outcomes, and implement strategies to reduce adverse impacts. Whether in finance, engineering, or healthcare, expertise plays a pivotal role in reducing risk and in narrowing, though not eliminating, uncertainty.
In finance, for example, risk management is heavily reliant on expert analysis and judgement. Financial experts use quantitative models, data analysis, and historical trends to predict market behaviour and potential risks. These professionals, such as portfolio managers and financial analysts, apply techniques like value at risk (VaR) and Monte Carlo simulations to estimate the likelihood of losses under various scenarios (Jorion 2006). Expected shortfall, also called conditional value at risk (CVaR), is preferred for tail losses because it reflects the magnitude of extreme outcomes. VaR is widely used, but it is not a coherent risk measure. Their expertise allows them to manage risks by relying on data-driven insights, making informed decisions that aim to mitigate financial exposure. However, while expertise reduces risk, it does not eliminate uncertainty, particularly in times of market volatility or economic crises, where even the most experienced professionals cannot fully anticipate unpredictable events.
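A minimal Monte Carlo sketch of VaR and expected shortfall follows; the profit-and-loss distribution and its parameters are assumed for illustration rather than estimated from data, which is precisely the caveat discussed above.

```python
# Minimal Monte Carlo sketch of value at risk (VaR) and expected shortfall
# (ES/CVaR) for a single position; distribution and parameters are assumed.
import numpy as np

rng = np.random.default_rng(7)
n_scenarios = 100_000

# Assumed daily P&L distribution (a plain normal keeps the example minimal).
pnl = rng.normal(loc=0.0, scale=1_000.0, size=n_scenarios)  # in currency units

alpha = 0.99                                  # 99% confidence level
var_99 = -np.quantile(pnl, 1 - alpha)         # loss not exceeded in 99% of scenarios
es_99 = -pnl[pnl <= -var_99].mean()           # average loss in the worst 1% of scenarios

print(f"99% VaR: {var_99:,.0f}")
print(f"99% ES:  {es_99:,.0f}")
# ES looks beyond the VaR threshold and averages the tail, which is why it is
# preferred for tail losses; both numbers are only as good as the assumed
# distribution, which is where uncertainty re-enters.
```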
In engineering, expertise is essential in ensuring that systems and structures are safe and reliable. Engineers utilise their in-depth understanding of materials, design, and physics to assess the risks associated with large-scale projects, such as constructing bridges, dams, or aircraft. Through the application of safety standards, rigorous testing, and simulation, engineering experts reduce risks associated with potential system failures. For instance, in civil engineering, expertise is applied to anticipate and mitigate risks related to environmental factors, such as earthquakes or floods, ensuring that infrastructure is designed to withstand these events (Hertz and Thomas 1983). While their expertise transforms many uncertainties into calculable risks, a degree of uncertainty remains, especially in novel projects or conditions not fully understood, such as the long-term effects of climate change on infrastructure.
In healthcare, the role of expertise in managing risk is particularly pronounced. Medical professionals rely on years of training, clinical experience, and ongoing education to diagnose illnesses, assess patient risks, and determine treatment plans. By drawing on a wealth of medical knowledge, doctors can identify potential health risks and recommend interventions that mitigate the chance of adverse outcomes. In fields such as surgery or pharmaceuticals, expertise is crucial to ensuring patient safety. For instance, surgeons utilise their expertise to weigh the risks and benefits of various surgical techniques, while pharmaceutical experts develop medications through extensive research and testing to minimise the risks of side effects or ineffective treatment (Reason 2000). However, like other fields, healthcare professionals must still handle uncertainty—such as when treating rare diseases or when unexpected complications arise during procedures.
Expertise also plays a crucial role in aviation, where pilots and engineers must continually manage risks associated with flight. Experienced pilots rely on extensive training to handle emergencies, making split-second decisions that could prevent disasters. Similarly, aerospace engineers develop and maintain complex systems that reduce the risk of mechanical failures. Expertise in aviation is backed by stringent regulations and continuous advancements in safety technology, which reduce many of the risks associated with flying. Yet, despite all this, uncertainties—such as sudden weather changes or human error—remain (Shappell and Wiegmann 2000).
In these fields and many others, expertise serves as a powerful tool for managing and mitigating risk. Specialists reduce risk and estimation error through the application of their knowledge and expertise. They do not convert Knightian uncertainty into risk. However, it is essential to recognise that expertise has limits. Even the most seasoned professionals can only estimate, manage and mitigate risk to a certain extent, as complete control over all variables—particularly in dynamic or unpredictable environments—remains rather elusive. Therefore, while expertise greatly enhances risk management, it does not eliminate uncertainty, leaving room for unexpected events and outcomes in every industry.

4.3. Limitations of Expertise and Knowledge

While expertise is widely regarded as an essential asset in managing risk, numerous studies have demonstrated its limitations, particularly in dynamic or complex systems where uncertainty remains a dominant factor. In such environments, even specialised knowledge often proves insufficient to anticipate or control outcomes, highlighting the inherent unpredictability of certain fields despite high levels of expertise. Lewandowsky et al. (2007) discussed the limitations of expertise. They identified several shortcomings of experts, many of which are behavioural, such as imperfect memory and perception, subjectivity (discussed above in relation to subjective probabilities and expectations), and the fact that knowledge can be fragmented, so that even experts may struggle to integrate it. In the discussion that follows, we present additional literature and examples to support our argument. Mitigation requires normative expertise checks, explicit calibration, base-rate anchoring, and structured challenge.
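As a concrete example of an explicit calibration check, the sketch below scores a set of hypothetical forecasts with the Brier score and compares stated confidence with the observed hit rate; the numbers are invented for illustration.

```python
# Sketch of an explicit calibration check on an expert's probability
# forecasts (hypothetical forecasts and outcomes).
forecasts = [0.9, 0.8, 0.9, 0.7, 0.95, 0.85, 0.9, 0.8]   # stated P(event)
outcomes  = [1,   0,   1,   1,   0,    1,    0,   1]      # 1 = event occurred

# Brier score: mean squared gap between forecast and outcome (lower is better).
brier = sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Simple calibration comparison: average stated confidence vs. observed hit rate.
avg_confidence = sum(forecasts) / len(forecasts)
hit_rate = sum(outcomes) / len(outcomes)

print(f"Brier score:        {brier:.3f}")
print(f"Average confidence: {avg_confidence:.2f}")
print(f"Observed hit rate:  {hit_rate:.2f}")
# An average confidence well above the observed hit rate is the signature of
# overconfidence; routine checks like this are one way to keep substantive
# expertise anchored to base rates.
```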
A key study by Dörner (1996) examined how experts struggle to manage complexity in dynamic systems, a phenomenon he termed the “logic of failure.” Dörner’s (1996) work examined how experts in fields such as engineering, ecology, and economics, despite their in-depth knowledge, frequently fail to account for the non-linear relationships, feedback loops, and delayed effects present in complex systems. His experiments revealed that experts often oversimplify these systems, leading to poor decisions that fail to mitigate risks and, in some cases, exacerbate the situation (Dörner 1996). This study highlights the limitations of expertise when specialists encounter environments that defy straightforward analysis or prediction.
Healthcare provides another domain where expert knowledge, though indispensable, is frequently challenged by uncertainty. A study by Groopman (2007) examined diagnostic errors in medicine, revealing that even highly trained physicians often encounter uncertainties that cannot be easily resolved by expertise alone. The study found that in complex cases where symptoms are atypical or diseases are rare, physicians are more likely to make errors due to incomplete information, ambiguous test results, or the unique biological variability of patients. This limitation is particularly evident in areas such as oncology and the treatment of rare diseases, where the pathologies are not fully understood, and the outcomes remain uncertain despite decades of research (Groopman 2007). It demonstrates that in medicine, expertise does not always guarantee accurate risk assessment, particularly when faced with unique or previously unknown conditions.
In ecology and environmental science, the limitations of expert knowledge are also well-documented. Holling (1973) introduced the concept of resilience to describe an ecosystem’s ability to absorb disturbances and reorganise without collapsing. Holling’s research indicated that even ecologists with extensive knowledge of species interactions and environmental systems often fail to predict how ecosystems will respond to sudden changes, such as climate shocks or human interventions. This unpredictability is largely due to the complex, adaptive nature of ecosystems, where small changes can have disproportionate effects, and feedback mechanisms can lead to unexpected outcomes. Holling’s work highlights that, despite the best efforts of experts, uncertainty remains a fundamental aspect of environmental management (Holling 1973).
Another example of the limitations of expertise can be found in project management. Large-scale projects, particularly those involving novel technologies or infrastructures, often encounter uncertainties that cannot be easily mitigated by expert knowledge alone. Flyvbjerg (2003) examined cost overruns and delays in major infrastructure projects and found that even the most experienced project managers could not anticipate the full range of uncertainties inherent in complex, multi-year undertakings. Flyvbjerg’s research revealed that unforeseen technical challenges, political influences, and changing economic conditions frequently cause projects to deviate significantly from their original plans. Experts, despite their knowledge and experience, are often caught off guard by these uncertainties, which fall outside the scope of their predictive models (Flyvbjerg 2003). Decision-making frameworks in this field typically rely on forecasting and cost–benefit analysis, assuming that uncertainties can be reduced through careful planning and expert input. However, studies have shown that large infrastructure projects frequently suffer from cost overruns and delays due to unanticipated factors, such as political instability or technological failures. In this case, the tendency to downplay uncertainty leads to overly optimistic projections, as expert opinion is often biased towards seeing risk as something that can be controlled (Flyvbjerg 2003). These budgeting and forecasting failures form a subset of the wider engineering judgement problems documented by Petroski, where design, materials and context create failure modes beyond ex ante calculation (Petroski 1994).
Moreover, in technology fields such as artificial intelligence (AI) and machine learning, where experts are designing increasingly complex systems, the limitations of specialised knowledge are becoming more apparent. Research by Bostrom (2014) has shown that as AI systems become more autonomous and capable of learning, predicting their behaviour becomes exceedingly difficult, even for the experts who designed them. Bostrom argues that these systems, especially those operating in real-world environments, are prone to uncertainties that cannot be anticipated or controlled, resulting in potential risks that exceed the current scope of expertise. It raises ethical and practical concerns about the limits of expert knowledge in emerging technologies where uncertainty prevails (Bostrom 2014).
In sectors such as healthcare, engineering, and aviation, expert opinion plays a crucial role in decision-making, particularly under uncertain conditions. The reliance on expert knowledge often leads to the assumption that uncertainties can be managed or mitigated through experience and technical understanding. For instance, in healthcare, physicians may use Bayesian decision models that incorporate subjective probabilities based on expert knowledge and experience. While these models acknowledge uncertainty, they often overlook the deeper unpredictability of complex systems, such as the variability in how patients respond to treatments or the emergence of novel diseases (Thompson 2003).
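A small sketch of the kind of Bayesian update referred to above shows how strongly the output depends on the assumed prior and test characteristics; all numbers are hypothetical.

```python
# Sketch of a Bayesian diagnostic update (all numbers hypothetical).
def posterior_prob_disease(prior: float, sensitivity: float, specificity: float) -> float:
    """P(disease | positive test) via Bayes' rule."""
    p_pos_given_disease = sensitivity
    p_pos_given_healthy = 1 - specificity
    p_pos = prior * p_pos_given_disease + (1 - prior) * p_pos_given_healthy
    return prior * p_pos_given_disease / p_pos

# The same positive test result under two different (subjective) priors:
for prior in (0.01, 0.20):
    post = posterior_prob_disease(prior=prior, sensitivity=0.90, specificity=0.95)
    print(f"Prior {prior:.2f} -> posterior {post:.2f}")

# The posterior is highly sensitive to the assumed prior and to the test's
# error rates; the formal machinery handles risk, but it cannot repair a
# prior that is itself deeply uncertain (e.g., for a novel disease).
```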

5. Construction of the Matrix

Sutherland et al. (2022) conducted experiments on how people understand classification grids. They find that, across all measures, general text performed worse than matrices, suggesting that there is an advantage in using classification grids to represent risk. Matrices are therefore more effective than mere text, providing further evidence for their popularity. Moreover, they found that effect sizes were generally small and that non-linear scales are better conveyed through explicit non-linear labelling than through the shape of the matrix cells (Sutherland et al. 2022). In their experiment, an important feature was basic knowledge, that is, a basic comprehension of the information concerning the different risks. Knowledge is therefore an important aspect of classification grids.

5.1. A Fundamental Method of Knowledge and Classification Grids

A key challenge in the literature, and the primary aim of our paper, is to connect knowledge to risk. Aven (2017) applies knowledge aspects to classification grids for risk characterisation. More specifically, the study analyses a set of practical methods for characterising risk, which include strength-of-knowledge judgements and risk factor rankings. The analysis identifies the personnel conducting risk and safety analyses, together with their competence, insights and, it can be argued, knowledge, as an important source of risk. Thus, from the outset of the risk management process, the knowledge and expertise of the personnel responsible are crucial.
The risk assessment conducted by expert personnel focuses on various aspects, including events, probabilities, and consequences, as well as an evaluation of assumptions, risk sources, and factors. The knowledge of experts can therefore be limited in any of these aspects. A qualitative risk assessment is then performed on these assumptions, highlighting deviations from the assumptions, the implications of such deviations, and judgements of probability and the related strength of knowledge. A risk ranking is conducted, and if the risk associated with these assumptions is high, measures are taken to mitigate it. A key finding was that high risk was associated with poor knowledge, and this aspect of knowledge was crucial in mitigating these risks (Aven 2017).
Concerning the assigned probabilities, the supporting knowledge can be questioned, and this is exactly what Aven (2017) aims to do: to highlight the criticality of the knowledge that underlies probability judgements. This is a fundamental issue for our analysis. Since the distinction between risk and uncertainty lies in probabilities, the knowledge that defines them is crucial, and it is one of the main reasons why we adopt a similar approach. To evaluate the strength of knowledge, and to further display the complexity of assumptions and factors and the importance of knowledge, it is essential to address issues such as the amount and relevance of data and information, the degree of agreement among experts, how well the phenomena involved are understood and whether accurate models exist, and, finally, the degree to which the knowledge has been thoroughly examined, incorporating concepts such as the unknown known (Aven 2017). All these issues are directly related to knowledge, and concepts such as the unknown known, ambiguity, and others should be the subject of future research. It should be emphasised once again that there are differences among experts, and such caveats could result in incorrect probability and risk assessments.
Finally, the recommended approach to characterising the uncertainties requires three elements: (a) subjective probabilities, also referred to as knowledge-based or judgemental probabilities, and related interval (imprecision) probabilities; (b) a judgement of the strength of the knowledge supporting these probabilities; and (c) the knowledge itself, which comprises beliefs, is often founded on data and information, and is expressed through the assumptions above (Aven 2017). The elements of this approach largely coincide with ours, in which probabilities, generally subjective, are central to the distinction between risk and uncertainty, and knowledge is the other key parameter.
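To make the three elements concrete, the sketch below records them as a simple data structure; the field names and example values are our own illustrative choices, not a format prescribed by Aven (2017).

```python
# Sketch of how the three elements of the characterisation could be recorded
# for a single event; fields and example values are illustrative only.
from dataclasses import dataclass, field

@dataclass
class UncertaintyCharacterisation:
    event: str
    prob_low: float                 # lower bound of the (imprecise) judgemental probability
    prob_high: float                # upper bound
    strength_of_knowledge: str      # e.g. "weak", "medium", "strong"
    assumptions: list[str] = field(default_factory=list)  # supporting knowledge / beliefs

example = UncertaintyCharacterisation(
    event="critical supplier fails within 12 months",
    prob_low=0.02,
    prob_high=0.15,
    strength_of_knowledge="weak",
    assumptions=[
        "financial data for the supplier is two years old",
        "no comparable failures in the available reference class",
    ],
)

# A wide probability interval combined with weak supporting knowledge signals
# that the judgement sits closer to uncertainty than to well-characterised risk.
print(example)
```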

5.2. Matrices on Risk and Knowledge

Aven (2017) presented a framework in which the main challenge has been to incorporate the knowledge related to risk. A key finding is that classification grids have traditionally been represented in a two-dimensional form, based on probability and consequence, and that a knowledge dimension should be included (Aven 2017). Our paper answers this call and contributes to the inclusion of knowledge in classification grids. Spiekermann et al. (2015) frame the knowledge challenges and illustrate that advancement is necessary in knowledge development for disaster risk reduction. They describe different types of knowledge and create a disaster risk reduction and knowledge matrix. However, knowledge remains limited because it continues to be flawed by ignorance (Spiekermann et al. 2015), which in our context can be translated as a lack of knowledge about probability, resulting in confusion between risk and uncertainty. Duijm (2015), studying the use and design of classification grids, states that any risk evaluation tool should be based on the best available knowledge and that there are notable weaknesses of which we should be aware to ensure that classification grids lead to the right conclusions. Our paper aims to make this distinction by recognising risk and uncertainty and avoiding flawed decision-making.

5.3. Risk–Uncertainty and Knowledge–Expertise Matrix

Based on the above analysis, we utilise four quadrants to represent the key concepts.
Risk: In this quadrant, both the impact and likelihood of an event are known. It is typically the domain where decision-makers can use historical data, models, and predictive analytics to manage the situation effectively. An example of this is financial market risk models, such as value at risk (VaR).
Uncertainty: Although the impact may be understood, the likelihood remains unknown or unknowable. In such scenarios, decision-makers face unpredictability, and traditional risk assessment models are often ineffective. An example of this is managing the COVID-19 pandemic in its early stages when data was insufficient to predict the spread or impact.
Knowledge: In this context, decision-makers possess sufficient understanding and data to manage risks effectively. Experts use specialised knowledge to evaluate probabilities and impacts. An example of this is engineers designing a bridge with full knowledge of the materials involved.
Expertise: Decision-makers are specialists in their field, possessing deep technical knowledge that enables them to manage complex risks effectively. However, expertise can also lead to overconfidence, confusing risk and uncertainty. An example of this is healthcare professionals using established protocols to diagnose diseases.
Figure 1 shows the classification of risk and uncertainty based on the decision-maker’s knowledge and the likelihood of the event.
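The logic of the matrix can be summarised as a simple decision rule; the sketch below uses the quadrant labels discussed in the next subsection, and the inputs and examples are our own illustrative simplifications of the framework, not a quantitative method.

```python
# Illustrative sketch of the matrix logic: classify a decision context into one
# of the four quadrants from the knowledge/expertise available and whether the
# probabilities involved are measurable.
def classify_context(knowledge_expertise: str, probabilities_measurable: bool) -> str:
    """knowledge_expertise: 'high' or 'low'; probabilities_measurable: True/False."""
    if probabilities_measurable:
        return ("Expert-Managed Risk" if knowledge_expertise == "high"
                else "Ignorance-Led Risk")
    return ("Expert-Led Uncertainty" if knowledge_expertise == "high"
            else "Ignorance-Led Uncertainty")

examples = [
    ("high", True),    # e.g. engineers designing to established codes
    ("high", False),   # e.g. early-stage pandemic response by specialists
    ("low",  True),    # e.g. borrowed risk matrices applied without understanding
    ("low",  False),   # e.g. a novel crisis with no relevant expertise available
]
for ke, pm in examples:
    print(f"knowledge/expertise={ke:4s}, measurable probabilities={pm}: "
          f"{classify_context(ke, pm)}")
```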

5.4. Discussion of the Matrix

Ignorance-Led Risk
Characterised by low knowledge but known probabilities, this zone represents the shallow application of risk models, typically using borrowed tools without a thorough understanding of the underlying assumptions. It is often observed in the misuse of standardised classification grids, particularly in novice project management or regulatory compliance exercises.
Expert-Managed Risk
Reflects environments where knowledge is high and probabilities are known. It relates to the domains of engineering design, quantitative finance, and surgical procedures based on established protocols. The article associates this quadrant with legitimate expert use of risk models and structured decision-making frameworks.
Ignorance-Led Uncertainty
Represents decision environments with both low expertise and unknowable probabilities. The article presents this as the most vulnerable quadrant, where errors and failures to anticipate are most likely and where reliance on cross-disciplinary validation becomes vital. The recommendation here is institutionalised consultation to avoid catastrophic decisions: cross-disciplinary consultation together with a normative expertise check, calibration against base rates, and independent challenge.
Expert-Led Uncertainty
This quadrant warns against overconfidence. Overconfidence often signals strong substantive expertise with weak normative expertise. It emerges when experts incorrectly assume their knowledge suffices to handle unpredictable outcomes. The illusion of control (Langer 1975) and behavioural biases (Kahneman and Tversky 1979) dominate here. The paper emphasises that this misclassification results in inadequate preparation for true uncertainty.
The research undertaken demonstrates the relationships between expertise, risk, and uncertainty, revealing how decision-makers often confuse risk, where probabilities can be estimated, with uncertainty, where they cannot. The final framework developed in this study untangles these concepts through a matrix-based quadrant model that helps organisations and decision-makers manage situations of varying predictability and knowledge.
One of the most significant findings of this research is the dual role of expertise. While expertise is invaluable for managing risk in environments where probabilities can be anticipated, it can become problematic in uncertain situations. Decision-makers, particularly in high-stakes industries like finance and healthcare, often rely too heavily on their specialised knowledge, believing that their expertise equips them to predict outcomes even in environments where the likelihood of events is unknown. This overconfidence, along with other behavioural and cognitive biases, produces the illusion of control, whereby decision-makers misclassify uncertainty as risk and treat situations as predictable when, in fact, they are not. The framework’s self-check mechanism, introduced in Quadrant 3 (Expert-Led Uncertainty), encourages experts to pause and question whether their knowledge is being applied beyond its effective range. By instituting this check, decision-makers are better able to recognise the limits of their expertise and avoid the traps of overconfidence. This shift represents a significant advancement in how organisations can manage uncertainty by adopting a more flexible and adaptive approach to decision-making.
Cross-disciplinary consultation emerged as another key theme in this research, particularly in environments characterised by ignorance and uncertainty, as seen in Quadrant 4 (Ignorance-Led Uncertainty). The study reveals that in situations where decision-makers lack both knowledge and expertise, their ability to recognise and manage risks is severely compromised. However, by institutionalising external validation and consultation across different fields, organisations can mitigate the risks posed by ignorance.
The study also highlights the importance of recognising when transitions between quadrants occur. Decision-making is not static, and as new data emerges, situations may shift from environments of uncertainty to those of risk or vice versa. The final framework provides a decision checkpoint system, ensuring that decision-makers regularly reassess their position within the matrix. For example, in the COVID-19 pandemic, decision-makers initially operated in Quadrant 3 (Expert-Led Uncertainty), but as more data became available, they transitioned toward Quadrant 1 (Expert-Managed Risk), where risks became more predictable and manageable. By formally recognising these transitions, the framework allows for greater agility in decision-making, ensuring that strategies are adapted as the situation evolves.
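As a hedged sketch of such a decision checkpoint, the snippet below reuses the hypothetical Quadrant and classify definitions from the earlier sketch and re-evaluates the quadrant as evidence accumulates; the data-quality scores and the threshold are invented solely for illustration.

```python
def checkpoint(high_expertise: bool, data_quality: float, threshold: float = 0.7) -> Quadrant:
    """Reassess the quadrant at each decision checkpoint.

    Probabilities are treated as estimable only once accumulated evidence
    (data_quality in [0, 1]) passes an agreed threshold.
    """
    return classify(high_expertise, probabilities_estimable=data_quality >= threshold)

# Hypothetical trajectory: expert decision-makers as evidence accumulates.
for month, quality in enumerate([0.2, 0.5, 0.8], start=1):
    print(f"Checkpoint {month}: {checkpoint(True, quality).value}")
```

In this toy trajectory the classification moves from Expert-Led Uncertainty to Expert-Managed Risk once the evidence threshold is crossed, mirroring the COVID-19 transition described above.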
Therefore, this research makes a significant contribution to the understanding of how expertise interacts with risk and uncertainty in decision-making environments. The final framework provides a practical and adaptable tool that decision-makers can use to navigate complex situations, ensuring they are equipped to recognise the limits of their expertise and manage uncertainty more effectively. By embracing process-based accountability, regular reassessment, and cross-disciplinary consultation, organisations can create more resilient decision-making frameworks that mitigate the risks of overconfidence and ignorance. The findings of this study have broad applications across various sectors, including finance, healthcare, engineering, and public policy, making them a valuable resource for decision-makers operating in an increasingly complex and uncertain world.

6. Study Novelty, Limitations and Future Research

6.1. Study Novelty and Contribution

This study presents a novel approach to understanding how uncertainty comes to be treated as risk, viewed through the lens of knowledge and expertise. The key innovation lies in the development of a quadrant-based framework that combines the dichotomies of risk-uncertainty, knowledge-ignorance, and expertise-awareness. Through the new framework, the study offers a fresh perspective on how decision-makers can misclassify uncertainty as risk, particularly when they overestimate their ability to control outcomes on the basis of expertise. The notion of self-check mechanisms and process-based accountability represents a significant advancement, offering practical ways for experts to avoid the illusion of control in uncertain environments and to manage risk and uncertainty better.

6.2. Study Limitations

While the study offers valuable insights, several limitations should be acknowledged. First, the framework was applied to a limited number of examples from three sectors: finance, healthcare, and engineering. Although these examples are instructive, the findings may not fully generalise to other industries, particularly emerging fields such as biotechnology or cryptocurrencies, where the nature of risk and uncertainty may differ. Future studies could expand the application of this framework to a broader array of sectors to test its robustness across different contexts.
Second, the quadrant-based framework relies heavily on the assumption that decision-makers can accurately self-assess their position within the matrix. However, in practice, decision-makers may struggle to recognise when they are transitioning between risk and uncertainty, particularly in high-pressure environments. While the self-check mechanism is designed to mitigate overconfidence, it may not fully address deeper psychological barriers that can affect decision-making.
Another limitation is the static nature of the quadrant matrix. While the study introduces a mechanism for transitioning between quadrants, the framework still categorises decision-making environments in a relatively fixed way. In reality, the boundary between risk and uncertainty can shift rapidly, sometimes within a single decision-making process. The quadrant model may oversimplify complex, dynamic environments where the degree of uncertainty fluctuates, necessitating constant recalibration of risk management strategies. Lastly, the study is based on theoretical analysis, and it does not incorporate quantitative validation. The framework could benefit from empirical testing.

6.3. Future Research

Future research can build on this study by exploring several key areas. One promising direction is the application of the framework to different domains. Additionally, future research could focus more on cognitive biases and their impact on how decision-makers perceive risk and uncertainty. Integrating insights from behavioural economics and psychology into the framework could offer a more comprehensive understanding of the psychological barriers that prevent accurate risk assessment. This would include exploring how decision-makers recognise or fail to recognise transitions between quadrants and how biases, such as overconfidence, anchoring, or herd behaviour, might distort their perceptions of risk.
Empirical validation of the framework would be a critical area for future research. Future studies could also investigate the interaction of cross-disciplinary expertise within the framework. In complex environments, decision-makers often rely on experts from multiple fields, and future research could examine how cross-disciplinary consultation helps mitigate uncertainty. This would involve analysing how different types of expertise complement or contradict each other in environments where knowledge challenges exist. For example, how do financial, legal, and technological experts collaborate to manage the uncertainties of regulatory changes in the tech sector? Understanding this interplay between various forms of expertise could further refine the model and enhance its applicability.

Author Contributions

Conceptualisation, A.F.; methodology, A.F., F.M. and P.P.; software, A.F.; validation, A.F., F.M. and P.P.; formal analysis, A.F., F.M. and P.P.; investigation, A.F.; resources, A.F., F.M. and P.P.; data curation, A.F., F.M. and P.P.; writing—original draft preparation, A.F.; writing—review and editing, A.F. and P.P.; visualisation, A.F.; supervision, A.F., F.M. and P.P.; project administration, A.F.; funding acquisition, A.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analysed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Arrow, Kenneth J. 1966. Exposition of the theory of choice under uncertainty. Synthese 16: 253–69. [Google Scholar] [CrossRef]
  2. Asare, Stephen K., and Arnold Wright. 1995. Normative and substantive expertise in multiple hypotheses evaluation. Organisational Behaviour and Human Decision Processes 64: 171–84. [Google Scholar] [CrossRef]
  3. Aven, Terje. 2013. On the Meaning of a Black Swan in a Risk Context. Safety Science 57: 44–51. [Google Scholar] [CrossRef]
  4. Aven, Terje. 2016. Risk Assessment and Risk Management: Review of Recent Advances on Their Foundation. European Journal of Operational Research 253: 1–13. [Google Scholar] [CrossRef]
  5. Aven, Terje. 2017. Improving risk characterisations in practical situations by highlighting knowledge aspects, with applications to risk matrices. Reliability Engineering & System Safety 167: 42–48. [Google Scholar]
  6. Bostrom, Nick. 2014. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press. [Google Scholar]
  7. Buehler, Roger, Dale Griffin, and Michael Ross. 1994. Exploring the “planning fallacy”: Why people underestimate their task completion times. Journal of Personality and Social Psychology 67: 366. [Google Scholar] [CrossRef]
  8. Charness, Neil, and Richard S. Schultetus. 1999. Knowledge and expertise. In Handbook of Applied Cognition. Edited by Francis T. Durso. Hoboken: John Wiley & Sons Ltd., pp. 57–81. [Google Scholar]
  9. Committee of Sponsoring Organizations of the Treadway Commission (COSO). 2004. Enterprise Risk Management—Integrated Framework. New York: AICPA. [Google Scholar]
  10. Cox, Louis Anthony, Jr. 2008. What’s wrong with risk matrices? Risk Analysis 28: 497–512. [Google Scholar]
  11. Dow, S. 2019. Risk and uncertainty. In The Elgar Companion to John Maynard Keynes. Cheltenham: Edward Elgar Publishing, pp. 255–61. [Google Scholar]
  12. Dörner, Dietrich. 1996. The Logic of Failure: Recognising and Avoiding Error in Complex Situations. New York: Basic Books. [Google Scholar]
  13. Duijm, Nijs Jan. 2015. Recommendations on the use and design of risk matrices. Safety Science 76: 21–31. [Google Scholar] [CrossRef]
  14. Ellsberg, Daniel. 1961. Risk, Ambiguity, and the Savage Axioms. The Quarterly Journal of Economics 75: 643–69. [Google Scholar] [CrossRef]
  15. Fischhoff, Baruch. 1975. Hindsight is not equal to foresight: The effect of outcome knowledge on judgment under uncertainty. Journal of Experimental Psychology: Human Perception and Performance 1: 288. [Google Scholar] [CrossRef]
  16. Flyvbjerg, Bent. 2003. Megaprojects and Risk: An Anatomy of Ambition. Cambridge: Cambridge University Press. [Google Scholar]
  17. Gigerenzer, Gerd. 2002. Reckoning with Risk: Learning to Live with Uncertainty. London: Penguin Books. [Google Scholar]
  18. Groopman, Jerome. 2007. How Doctors Think. Boston: Houghton Mifflin. [Google Scholar]
  19. Hertz, David B., and Howard Thomas. 1983. Risk Analysis and Its Applications. Journal of Risk and Insurance 50: 79–100. [Google Scholar]
  20. Holling, Crawford S. 1973. Resilience and Stability of Ecological Systems. Annual Review of Ecology and Systematics 4: 1–23. [Google Scholar] [CrossRef]
  21. Holyoak, Keith J. 1991. Symbolic connectionism: Toward third-generation theories of expertise. In Toward a General Theory of Expertise: Prospects and Limits. Edited by K. Anders Ericsson and Jacqui Smith. Cambridge: Cambridge University Press, pp. 301–35. [Google Scholar]
  22. Jain, Jinesh, Nidhi Walia, Simarjeet Singh, and Esha Jain. 2022. Mapping the field of behavioural biases: A literature review using bibliometric analysis. Management Review Quarterly 72: 823–55. [Google Scholar] [CrossRef]
  23. Janis, Irving L. 1982. Groupthink: Psychological Studies of Policy Decisions and Fiascoes. Boston: Houghton Mifflin. [Google Scholar]
  24. Jordan, Silvia, Hannah Mitterhofer, and Lene Jørgensen. 2018. The interdiscursive appeal of risk matrices: Collective symbols, flexibility normalism and the interplay of ‘risk’ and ‘uncertainty’. Accounting, Organizations and Society 67: 34–55. [Google Scholar] [CrossRef]
  25. Jorion, Philippe. 2006. Value at Risk: The New Benchmark for Managing Financial Risk, 3rd ed. New York: McGraw-Hill. [Google Scholar]
  26. Kahneman, Daniel. 2003. Maps of bounded rationality: Psychology for behavioral economics. American Economic Review 93: 1449–75. [Google Scholar] [CrossRef]
  27. Kahneman, Daniel, and Amos Tversky. 1979. Prospect Theory: An Analysis of Decision under Risk. Econometrica 47: 263–91. [Google Scholar] [CrossRef]
  28. Keynes, John Maynard. 1937. The General Theory of Employment. The Quarterly Journal of Economics 51: 209–23. [Google Scholar] [CrossRef]
  29. Klayman, Joshua, and Young-Won Ha. 1987. Confirmation, Disconfirmation, and Information in Hypothesis Testing. Psychological Review 94: 211–28. [Google Scholar] [CrossRef]
  30. Knight, Frank H. 1921. Risk, Uncertainty, and Profit. Boston: Houghton Mifflin. [Google Scholar]
  31. Langer, Ellen J. 1975. The Illusion of Control. Journal of Personality and Social Psychology 32: 311–28. [Google Scholar] [CrossRef]
  32. Lewandowsky, Stephan, Daniel Little, and Michael L. Kalish. 2007. Knowledge and expertise. In Handbook of Applied Cognition, 2nd ed. Edited by Francis T. Durso, Raymond S. Nickerson, Susan T. Dumais, Stephan Lewandowsky and Timothy J. Perfect. Chichester: Wiley, pp. 109–40. [Google Scholar]
  33. Perrow, Charles. 1984. Normal Accidents: Living with High-Risk Technologies. Princeton: Princeton University Press. [Google Scholar]
  34. Petroski, Henry. 1994. Design Paradigms: Case Histories of Error and Judgment in Engineering. Cambridge: Cambridge University Press. [Google Scholar]
  35. Reason, James. 2000. Human Error: Models and Management. BMJ 320: 768–70. [Google Scholar] [CrossRef]
  36. Renn, Ortwin, and Peter Graham. 2006. Risk Governance: Towards an Integrative Approach. White Paper No. 1. Geneva: International Risk Governance Council. [Google Scholar]
  37. Savage, Leonard J. 1954. The Foundations of Statistics. New York: Wiley. [Google Scholar]
  38. Shappell, Scott A., and Douglas A. Wiegmann. 2000. Human Factors Analysis and Classification System: HFACS. Washington, DC: Federal Aviation Administration, Office of Aviation Medicine. [Google Scholar]
  39. Simon, Herbert A. 1955. A Behavioral Model of Rational Choice. The Quarterly Journal of Economics 69: 99–118. [Google Scholar] [CrossRef]
  40. Simonsohn, Uri, and George Loewenstein. 2006. Mistake #37: The effect of previously encountered prices on current housing demand. The Economic Journal 116: 175–99. [Google Scholar] [CrossRef]
  41. Slovic, Paul. 1987. Perception of Risk. Science 236: 280–85. [Google Scholar] [CrossRef] [PubMed]
  42. Spiekermann, Raphael, Stefan Kienberger, John Norton, Felipe Briones, and Juergen Weichselgartner. 2015. The Disaster-Knowledge Matrix–Reframing and evaluating the knowledge challenges in disaster risk reduction. International Journal of Disaster Risk Reduction 13: 96–108. [Google Scholar] [CrossRef]
  43. Staw, Barry M. 1981. The escalation of commitment to a course of action. Academy of Management Review 6: 577–87. [Google Scholar] [CrossRef]
  44. Sutherland, Holly, Georgina Recchia, Sarah Dryhurst, and Alexandra L. J. Freeman. 2022. How people understand risk matrices, and how matrix design can improve their use: Findings from randomised controlled studies. Risk Analysis 42: 1023–41. [Google Scholar] [CrossRef]
  45. Taleb, Nassim Nicholas. 2007. The Black Swan: The Impact of the Highly Improbable. New York: Random House. [Google Scholar]
  46. The Royal Society. 1992. Risk: Analysis, Perception and Management: Report of a Royal Society Study Group. London: The Royal Society. [Google Scholar]
  47. Thompson, Kimberly M. 2003. Variability and Uncertainty Meet Risk Management and Risk Communication. Risk Analysis 22: 647–54. [Google Scholar] [CrossRef]
  48. Tversky, Amos, and Daniel Kahneman. 1974. Judgment under Uncertainty: Heuristics and Biases: Biases in judgments reveal some heuristics of thinking under uncertainty. Science 185: 1124–31. [Google Scholar] [CrossRef]
  49. von Neumann, John, and Oskar Morgenstern. 1947. Theory of Games and Economic Behavior. Princeton: Princeton University Press. [Google Scholar]
  50. Wideman, R. Max. 1992. Project and Program Risk Management. Newtown Square, PA: Project Management Institute. [Google Scholar]
Figure 1. Classification of Risk and Uncertainty Based on Decision-Maker Knowledge and Likelihood of Event.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

