
Enhancing Healthcare Decision-Making Process: Findings from Orthopaedic Field

Department of Management and Law, University of Rome Tor Vergata, 00133 Rome, Italy
Author to whom correspondence should be addressed.
Adm. Sci. 2020, 10(4), 94;
Submission received: 29 October 2020 / Revised: 19 November 2020 / Accepted: 20 November 2020 / Published: 25 November 2020
(This article belongs to the Special Issue Managerial and Entrepreneurial Decision Making: Emerging Issues)


In the healthcare field, the decision-making process is part of the broad spectrum of “clinical reasoning”, recognised as the whole process by which a physician decides about patients’ treatment and care. Several intrinsic clinician variables shape this decisional path. Little is known about the influence of these variables in triggering biases in decisions about the post-discharge period in the surgical field. Accordingly, this research aims to understand if and how cognitive biases can affect orthopaedists’ decision-making regarding the follow-up after knee and hip arthroplasty. To achieve this goal, an interview-based explorative case study was run. Three key-decisional orthopaedic surgeons were interviewed through a quality control tool aimed at monitoring the causes and effects of cognitive distortions. Consistently with the literature, eight biases came to light. All the interviewees agree on the presence of four common biases in orthopaedic surgery (Affect heuristic, Anchoring, Halo effect, Saliency). The other biases (Groupthink, Availability, Overconfidence, Confirmation), instead, depend on specific intrinsic physician variables, namely: (i) working experience; (ii) working context. This finding contributes to the debate on the application of cognitive tools as leverage for improving the quality of the clinical decision-making process and, indirectly, enhancing healthcare outcomes.

1. Introduction

Decision-making is a complex and progressively unpredictable process that relies on precise information availability (Simon 1947; Sousa et al. 2019).
In healthcare, the decision-making process, called the “clinical decision process” (Higgs et al. 2019), is part of a more complex process named “clinical reasoning”. According to Norman (2005), clinical reasoning has been studied for more than 30 years, and three fields of research have characterised it: (i) understanding the process of clinical reasoning; (ii) knowledge and memory related to clinical reasoning; and (iii) mental representation of clinical reasoning.
To date, there is no single definition of clinical reasoning; the existing definitions differ considerably from each other (Durning et al. 2013). For clarity, in the healthcare literature clinical reasoning and clinical decision-making have often been used as synonyms to define “the process by which a healthcare practitioner decides what to think and do with a patient” (Christensen et al. 2017, p. 176), while Durning et al. (2013, p. 446) provide a more precise definition of clinical reasoning as “the mental process and behaviours that are shared (or evolve) between the patient, physician, and the environment”. In this manuscript, following Higgs et al. (2019), we refer to clinical reasoning as the whole process of a physician’s thinking during her/his practice, of which clinical decision-making is a step that emphasises the outputs, or decisions. In analysing any decision-making process, it should be noted that it is a human process and therefore not exempt from biases, that is, deviations from rationality due to human nature (Kahneman 2011). Most of the literature on clinical reasoning (both articles and reviews) addresses bias-triggered diagnostic errors (Cooper and Frain 2016) in emergency medicine (Croskerry 2003; Rubio-Navarro et al. 2020; Antonacci et al. 2020); only a few studies, however, regard biases in the clinical and therapeutic decision process (day-by-day decisions). This first literature gap deserves further investigation, especially since some scholars have stated that clinical outcomes may be improved by recognising, understanding, and modifying those decisions affected by biases (Antonacci et al. 2020).
Moreover, some authors outside the healthcare field (Baron 1998; Haley and Stumpf 1989; Rashid and Boussabiane 2019) have studied the connection between decision-makers’ personal variables and the decisional output. In line with this literature, Wu et al. (2017) stated that the personal features of managers, together with the characteristics of their working environment, can influence the decision-making process and operating choices. According to the authors, the main decision-maker influencers are: (i) the manager’s international experience; (ii) the specific task/role; (iii) the number of team members with whom the manager works; (iv) the working atmosphere and pressure. In other words, Wu et al. (2017) recognise that working experience/role and working context play a crucial role in influencing business decisions. Within the healthcare management field, however, a literature gap was also noted regarding the specific analysis of the linkage between clinicians’ intrinsic human variables and the decisional biases connected to clinical reasoning. This second aspect still remains understudied.
This is in accordance with what Ashoorion et al. (2012) have stated. By analysing the correlation between personal variables and clinical reasoning in medical students, the authors claimed the need for future studies to confirm their results in other fields. With regard to the working context, only a study by Elvén et al. (2019) has identified those variables as able to effectively influence clinical reasoning. Nevertheless, their sample referred only to physical therapy students and not to physicians; thus, this aspect also deserves further investigation.
Accordingly, this article aims to contribute to filling the two above-mentioned gaps in the literature, with specific reference to the decision-making process in the surgical field. To achieve this goal, the inquiry analyses both the features of decisions in the orthopaedic context and the physicians’ personal variables that would trigger cognitive biases and, in turn, mistakes in clinical choices.
Starting from this precondition, this study seeks to understand which kinds of errors are made by orthopaedic surgeons during the clinical decision process related to the follow-up of hip and knee arthroplasty. As a secondary endpoint, consequent to the main one, this study intends to verify the effective contribution of cognitive tools in recognising biases in the orthopaedic field, whose investigation would improve the quality of the clinical decision-making path.
For the sake of clarity, the orthopaedic surgical field was chosen due to its standardised processes and its considerable impact on healthcare systems’ expenses. In particular, the choice of hip and knee arthroplasty (the most common joint reconstructions) is due to the high frequency and demand that characterise these two surgical procedures (Ministero della Salute 2016). In fact, according to Bcc Research (2020) estimates, the world market relating to joint reconstructions should reach a value of 26.81 billion dollars by 2025, with a number of interventions equal to 5198.38 million. These two procedures, for example, represent the largest share of implant costs in the USA (Robinson et al. 2012).
Indeed, the goal of a healthcare institution is to create higher value for patients at lower costs (Porter 2009); thus, the availability of information about the effectiveness and efficiency of treatments at every stage of the patient value-chain (from hospital admission to follow-up) becomes fundamental. To address this challenge, managers of healthcare organisations need solutions that allow them to improve decision-making and business processes, together with communication among doctors, patients, and administration, as well as effective access to different data (Olszak and Batko 2012).
For this study’s aim, however, it is fundamental to specify the concept of follow-up to which we refer: “a check on someone who has been examined before in order to assess the process of a disease or the results of treatment”, as defined by the Dictionary of Medical Terms (Collin 2009). The moment of the follow-up decision was chosen in order to analyse a different aspect of clinical decision-making. Most studies have tended to focus on clinical decision-making within the hospital and in emergency situations (Flynn et al. 2012; Lo and Katz 2005; Hess et al. 2015); on the contrary, we analyse the process related to the long-term management of patients and the clinical decisions that condition the continuum of care (Jette et al. 2003) in hip and knee arthroplasty.
To sum up, the aim of the study, in relation to the decision-making process of orthopaedic surgeons about the follow-up of hip and knee arthroplasty, is twofold:
  • To analyse possible decisional biases by the use of a cognitive tool;
  • To understand if some decision-maker features influence cognitive biases.
To achieve these goals, the work follows three main stages. First, an analysis of the theoretical background on decision-making, its criticalities, and its connections with the healthcare and surgical fields was carried out. Second, an interview-based qualitative case study was run in the field of orthopaedic surgery; the interviewees are well-informed physicians representing a good cross-section of the Italian orthopaedic landscape.
Lastly, besides verifying the consolidated biases in healthcare decision-making, the analysis of the qualitative findings demonstrates a connection between some specific biases and two features of decision-makers: working experience and working context. This last issue, in particular, represents an aspect of healthcare decision-making that is still understudied, which positions the study as a novelty in the field.
The paper proceeds with the following outline: after this introduction, the second section focuses on the main theoretical background. The third section reports the methodology of the study, while the fourth one presents the research findings. Section five discusses the results obtained and provides some considerations about the endpoints of this study. The last section includes the final remarks.

2. Theoretical Background

The clinical decision process is a complex, dynamic process carried out under pressure, involving a choice between options such as categories and diagnoses (Hausmann et al. 2016; Higgs et al. 2019); its complexity is due to the involvement of multiple people and the gap between the information available and the information necessary. Hence, clinicians often have to face uncertainty and make decisions without definitive information, relying on the so-called “imperfect information” (Higgs et al. 2019, p. 504) that is available at the early stage of the clinical encounter (Cooper and Frain 2016). At this stage of the decision process, physicians can only decide what kind of information has to be collected and which aspects of the situation have to be pointed out (Higgs et al. 2019). Thus, they are able to provide only a potential diagnosis.
In this context, the use of big data surely improves clinical output (Atoum and Al-Jarallah 2019); in particular, data availability (Sun and Scanlon 2019) would represent an information source which is able to sustain the decision-making process, by making it more aware (and bias-free) as based on concrete findings drawn from similar clinical situations (Yan et al. 2017).
Within the clinical reasoning sphere, however, it is crucial to refer to the physician’s personal understanding of the patient’s condition and to his/her ability to make a decision. Within this path, indeed, the physician has to evaluate several elements concerning the patient’s history (e.g., findings from clinical examination, test results) and, above all, has to make a decision under time pressure, which can considerably condition the whole process (Cooper and Frain 2016; Goldsby et al. 2020). According to Del Mar et al. (2006), “doctors have to be good at interpreting, at prioritising, at making compromises, at seeing what matters” (p. vi), with the aim of managing complications and health crises in a timely way; in a complex environment, in fact, it can often happen that “what seems right in theory would be damaging in the flesh” (Del Mar et al. 2006, p. vi). In particular, from Zavala et al. (2018) we learn that clinical decision-making can be influenced by factors that increase complexity and uncertainty depending on the specific case; the factors that complicate the healthcare context can be summarised as follows: unpredictable workflows, non-replicable conditions, pressures, organisational systems, workload, teamwork, human interactions, and patient complexity.
Moreover, it should also be noted that within the clinical decision process the state and action spaces of the physician (the decision-maker) are strictly influenced by the patient. There is a sort of interdependence between the two actors that can shape the entire process and its results (Lippa et al. 2017). In particular, the greater the clinician’s self-efficacy in her/his patient management abilities, the greater the patient’s reliance on that clinician’s approach (Sizer et al. 2016).
In addition, the clinical decision process can be influenced by several factors, such as the external context (Robinson et al. 2020), the environment, the complexity of the task (Higgs et al. 2019), and the capabilities, confidence, and emotions of the practitioner (Smith et al. 2007). Indeed, it is impossible to analyse the clinical decision process without considering the context or situation in which it occurs; as reported by several authors (Cooper and Frain 2016; McBee et al. 2015; Fargen and Friedman 2014), context characteristics and the interactions between physician, patient, and environment are fundamental to understanding the whole process of clinical reasoning and can modify it in several ways, sometimes introducing errors.
Hughes and Nimmo (Cooper and Frain 2016) in particular have identified five types of errors that can occur in the diagnostic process and are related to:
  • Missing information
  • Lack of supervision
  • Physicians’ knowledge
  • Misunderstanding of a diagnostic test
  • Cognitive errors
According to Simon, there are several restrictions on human cognition related to the social environment in which decisions occur; in the author’s opinion, an individual has a “bounded rationality” and thus people make “satisficing rather than optimal decisions” (Cristofaro 2017b, p. 172).
Moreover, according to Kahneman (2011), there are two modes or systems of thinking:
  • System One, or the intuitive system, is characterised by quick thinking, unawareness, and little effort;
  • By contrast, System Two, or the reflective system, is characterised by slow thinking, awareness, deductive reasoning, and more concentration.
By using System One, people take cognitive shortcuts that can affect the decision-making process (Cristofaro 2017a). Biases are very common in every human situation, including clinical practice; knowledge and experience cannot eliminate the possibility of making these errors. They are “subconscious deviations in judgement leading to perceptual distortion, inaccurate judgement and illogical interpretation”, in Cooper’s opinion (Cooper and Frain 2016, p. 26); these errors can be related to both systems of thinking.
To help decision-makers avoid these biases and improve the decision-making process, Kahneman et al. (2011) proposed a checklist: a tool to improve the quality of decisions by finding defects in the process. This tool provides a set of 12 questions aimed at identifying errors in thinking (biases). To use this tool, a third person is required, independent of the analysed group; indeed, people cannot recognise their own errors as well as a third person can (Kahneman et al. 2011).
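The checklist logic described above can be sketched in code. This is a hypothetical illustration, not the published 12-question instrument: the question wording and the question-to-bias mapping below are placeholders we introduce for demonstration. The design point it shows is that the answers must come from an independent reviewer, not from the decision-makers themselves.

```python
# Hypothetical sketch of a decision-quality checklist in the spirit of
# Kahneman et al. (2011). Questions and bias labels are illustrative only.
CHECKLIST = {
    "Is there any reason to suspect self-interested motives?": "Self-interest",
    "Has the team fallen in love with its proposal?": "Affect heuristic",
    "Were dissenting opinions explored within the team?": "Groupthink",
    "Could an analogous past case be driving the judgement?": "Saliency",
    "Is the decision anchored to an initial figure or score?": "Anchoring",
}

def flag_biases(reviewer_answers):
    """Return the biases whose screening question was answered True.

    `reviewer_answers` maps each checklist question to True/False as judged
    by an independent third person (decision-makers cannot audit themselves).
    """
    return [bias for question, bias in CHECKLIST.items()
            if reviewer_answers.get(question, False)]

answers = {question: False for question in CHECKLIST}
answers["Is the decision anchored to an initial figure or score?"] = True
print(flag_biases(answers))  # ['Anchoring']
```

The dictionary-based layout keeps the instrument as data, so adjusting it for a specific clinical context (as done in this study) means editing the entries rather than the screening logic.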
Specifically, in the healthcare field, Antonacci et al. (2020) identify a twofold distinction between (i) biases in emergency care and (ii) biases in clinical and therapy medicine.
Emergency physicians are required to make decisions under an extremely high level of uncertainty, and they have to consider plenty of factors (Croskerry 2003). Indeed, during their decision-making process in an emergency room or a similar situation, physicians have to consider not only the patient’s physical condition (choosing which treatment to exclude, which one has to be initiated, and when) but also the patient’s preferences, resource availability, cost, and time (in this context, time and resources in particular are limited) (Hausmann et al. 2016). Another important aspect to underline is that, in the emergency decision-making process, clinicians have to choose how to allocate their time and effort and which patient to prioritise. This factor, along with the physician’s limited knowledge of the patient’s personal history, makes the emergency room a “natural laboratory of error” (Croskerry 2003; Antonacci et al. 2020; Hausmann et al. 2016). According to Croskerry (2003, p. 776), moreover, “nowhere in medicine is rationality more bounded by relatively poor access to information and with limited time to process it”. Accordingly, emergency medicine is an area full of heuristics (Abatecola 2014): methods of solving problems by finding practical ways of dealing with them, learning from past experience (Oxford Dictionary 2012).
Regarding the clinical decision process related to therapy and clinical medicine, several studies explain which kinds of factors may drive this process in different fields; listed below are some sample factors that can modify the decision-making process in certain fields of medicine:
  • In internal medicine, there are contextual factors, interactions, and how information is collected and acquired (McBee et al. 2015).
  • In physical therapy: situational circumstances, the perspective of the client, reasoning strategies, knowledge, and experience (Elvén et al. 2019; Wainwright et al. 2011).
  • In dentistry: age of physician, number of dependents, perception of practice loans, and place of initial training (Ghoneim et al. 2020).
Furthermore, the medical literature has identified and described more than one hundred cognitive biases (Cohen and Burgin 2016); listed below are some of the major biases identified in other medical fields:
  • Anaesthesiology: Anchoring, Availability bias, Premature closure, Feedback bias, Confirmation bias, Framing effect, Commission bias, Overconfidence bias, Omission bias, Sunk costs, Visceral bias, Zebra retreat, Unpacking principle, Psych-out error (Stiegler et al. 2012).
  • Neurology: Framing Effects, Anchoring, Availability, Representativeness, Blind Obedience (Vickrey et al. 2010).
  • Medical imaging: Availability Bias, Alliterative Bias, Anchoring Bias, Framing Bias, Attribution Bias, Blind Spot Bias, Regret Bias, Satisfaction of Search, Scout Neglect Bias, Hindsight Bias (Itri and Patel 2018).
  • Dermatology: Anchoring, Availability bias, Representativeness restraint (Dunbar et al. 2013).
  • General surgery: Anchoring, Availability Bias, Commission Bias, Overconfidence Bias, Omission Bias, and Sunk Costs (Vogel and Vogel 2019).
Moreover, some scholars have recognised hospital ownership as a factor able to directly or indirectly influence the clinical decision process in surgery. The main difference between public and private hospitals in the surgical field regards the type of healthcare intervention provided to users/patients: public-access hospitals carry out more traumatic and emergency interventions on acute patients, whereas private hospitals tend to provide mostly elective and planned surgery (Ierano et al. 2019).
As a consequence, another significant difference between public and private hospitals is about physician autonomy in decision-making. Accordingly, Ierano et al. (2019) stated that “autonomy was perceived to be greater in the private hospital setting than in the public hospital setting”. In their study, the interviewed nursing staff noted that private surgeons had the capability to “dictate their own practice” irrespective of the guidelines or the hospital policy, as the private physicians were “doing their own thing and renting the space”.
Concerning the field of this study, orthopaedic surgery, it is important to underline that it spans both the emergency and the clinical/therapy medicine fields; therefore, orthopaedic surgery can be affected both by “emergency biases” (mostly heuristics) and by “clinical biases”, depending on which kind of healthcare service we focus on. In particular, Sizer et al. (2016) proposed a model to drive the clinical decision process in the orthopaedic field: the “evidence-supported practice wheel”, which places the clinician’s expertise and the patient at the centre of the problem and makes the physician more flexible in adapting to the patient’s needs and context, while still relying on the scientific literature. According to the authors, some technical factors influence decision-making in orthopaedic surgery; Sizer defines the biomedical information on the patient that has to be gathered by the physician in order to provide an informed decision-making process.
In addition, as stated by Grove et al. (2015), orthopaedic surgery is characterised by high professionalisation based on long training and proven practical experience. In this field, according to the authors, an elite group of surgeons (usually grouped by country and region on the basis of specialist surgery) is recognised as the reference key-opinion leaders, able to influence (ordinary) surgeons’ decision-making through the research and case study reports developed over their working experience. In our opinion, these circumstances could affect the orthopaedist’s decision-making with cognitive biases (as identified by Kahneman), which might present the guidelines from the key-opinion leaders as always valid, without any further in-depth consideration of the patient’s case.
Moreover, some scholars in the industrial field state that the working context and job experience are also able to influence the decision-making sphere (Hendrick 1999; Kobus et al. 2001). Accordingly, the expectation of this manuscript, for the healthcare field, is to understand how patient information, together with physicians’ variables, impacts the personal thinking underlying the decision-making process.

3. Materials and Methods

Qualitative research (Patton 2002) was the better fit for the type of study we conduct and for the state of the prior research and theory to which we refer (Edmondson and Mcmanus 2007). Qualitative descriptions allow researchers to stand by the data and provide factual summaries of participants’ experiences and perceptions (Neergaard et al. 2009). Given the above, to analyse different scenarios of the orthopaedic surgery world, three semi-structured “face to face” (El Said 2017) interviews with three orthopaedic surgeons from different working contexts in Italy were conducted. In particular:
  • the head of a public “trauma-centre” hospital and university professor/director of a “Postgraduate School in Orthopaedics”, with more than 20 years of experience; (SD)
  • the head of several orthopaedic surgery teams, working in private hospitals, with more than 15 years of experience (SP);
  • the Coordinator of the orthopaedic emergency team in a public “trauma-centre” hospital, with less than five years of experience (SH).
The respondents work in Italy, in different regions.
Although only three, these interviewees represent, in the authors’ opinion, a good depiction of the orthopaedic landscape in Italy (Torre et al. 2017); the Italian healthcare system, according to Spano and Aroni (2018), is based on public-access hospitals/health authorities (which mostly provide free-of-charge services in response to emergencies and scheduled surgeries) and private organisations (which exclusively sell scheduled services).
Accordingly, considering respondents belonging to different ownerships (public and private) of healthcare organisations with different roles and experience means estimating the main variables concerning the physician job: working experience and working context. Particularly, the respondents’ sample (even if small) considers:
  • a high-experienced physician, working in a public context (SD);
  • a high-experienced physician, working in a private context (SP);
  • a low-experienced physician, working in a public context (SH).
Please note that all three interviewees are responsible for their working teams, and they are the main coordinators of decision-making during their specific work shifts and surgical activities. Accordingly, they can be considered key expert informants (Yin 2004, 2017) for the aim of this study, given also its explorative goal (Scapens 1990) based on a critical case study sampling (Patton 2002) design. This explains why it would not make sense to interview a low-experienced physician working in a private context, whose contribution to the decision-making process would surely be considered secondary.
Hence, in order to contextualise and define the topic, all respondents were first asked to provide a definition of follow-up. Then, to achieve this study’s goal, Kahneman’s checklist (as the quality control tool aimed at monitoring the causes and effects of cognitive distortions) was submitted to the three respondents; on the basis of their expertise, interviewees were asked to highlight and explain the biases recognised in their current decision-making, with specific reference to hospital discharges (Jette et al. 2003).
Kahneman’s checklist aims, in fact, to detect the ways in which decisions can be distorted by cognitive mistakes/biases.
Thus, following Cristofaro (2017a), the checklist was adjusted to make it suitable for the healthcare context; in particular, it was focused on the intellectual process regarding decision-making about the patient’s follow-up (Jette et al. 2003) after knee or hip arthroplasty.
Table 1 reports the adjusted checklist submitted to interviewees, modified to be suitable for this study’s aims. The last column of Table 1 contains the link between each checklist question and the related control question(s), separately listed at the end of the table.
As stated by Stylianou (2008), a control question is defined as a probe question “that controls an independent variable in the participant’s thinking for verification and exploration purposes” (p. 242). Particularly, control questions should be used in situations in which “the substantive theme contains multidimensional concepts or complex causal structures” (Stylianou 2008, p. 242) that need to be disentangled.
As in our case study, control questions allow the interviewer to monitor that the questions protocols were respected and that all the inquiry issues have been understood by interviewees in coherence with the study goals.
Thus, as included in Table 1, the control questions were provided for the following reasons, concerning the understandability of the qualitative results detected:
  • C.Q. A was to verify if the interviewees agree on the concept of follow-up.
  • C.Q. B was to understand which “non-clinical” information is considered as “necessary” in decision-making development.
  • C.Q. C was to examine in depth how decisions are undertaken within a group of orthopaedists and if the group tends to review some decisions taken by a member.
  • C.Q. D was to understand what it would mean for an orthopaedic surgeon to always take decisions without any discussions/debates with the team.
  • C.Q. E was to analyse what would be considered a positive or a negative scenario in the orthopaedic context for the follow-up choices.
  • C.Q. F was to understand which kind of self-interests could be involved in decision-making about orthopaedic patient follow-up.
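As a minimal sketch of how the linkage reported in Table 1 can be represented, the snippet below maps illustrative checklist items to the control questions A–F summarised above. The checklist item labels (Q1, Q2, ...) and the specific item-to-letter links are hypothetical placeholders; only the control-question letters and their intents come from the text.

```python
# Control questions A-F, paraphrased from the study's stated purposes.
CONTROL_QUESTIONS = {
    "A": "Do you agree on the concept of follow-up?",
    "B": "Which non-clinical information is necessary for the decision?",
    "C": "How are decisions undertaken and reviewed within the group?",
    "D": "What would deciding without team debate mean for a surgeon?",
    "E": "What counts as a positive or negative follow-up scenario?",
    "F": "Which self-interests could be involved in follow-up decisions?",
}

# Hypothetical checklist items -> linked control question ids, mirroring
# the structure of Table 1's last column (actual links are in the table).
CHECKLIST_LINKS = {
    "Q1 (self-interest)": ["F"],
    "Q2 (groupthink)": ["C", "D"],
    "Q3 (saliency)": ["E"],
}

def controls_for(item):
    """Return the full text of the control questions linked to an item."""
    return [CONTROL_QUESTIONS[cq] for cq in CHECKLIST_LINKS.get(item, [])]

print(controls_for("Q2 (groupthink)"))
```

Keeping the links as data makes it straightforward for the interviewer to verify, during analysis, that every multidimensional checklist item was probed by its control question(s).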
Specifically, the orthopaedists participated in semi-structured video-conference interviews conducted by the principal investigator between April and June 2020. Each interview lasted between 60 and 80 min and was digitally recorded. The analytic process was guided by the principles of conventional content analysis (Hsieh and Shannon 2005): after recording, the interviews were transcribed verbatim, identifying information was removed, and the data were stored in a password-protected computer. The verbatim transcriptions were investigated through the thematic analysis approach (Braun and Clarke 2006), following the methodological fit drawn from Edmondson and Mcmanus (2007). The results of the ongoing analysis were reviewed by the authors together with the interviewees during regular meetings; disagreements were resolved by discussion and consensus.
Proceeding further, Figure 1 shows the phases carried out in the methodology. Specifically, it includes the chronological order of all the steps followed and the inclusion/exclusion criteria considered for the choice of interviewees.

4. Results

This section reports the findings obtained from the use of the checklist (Kahneman et al. 2011), already mentioned in the method section.
Looking at the biases, no evidence was found that self-interest, sunk cost, disaster neglect, or loss aversion drive the decision-making process of orthopaedists regarding patient follow-up. All three interviewees attested to the presence of four biases in the decision-making process: Affect Heuristic (e.g., SH: “we do what literature tell us, avoiding customised follow-up”); Saliency (e.g., SP: “a similar success in the past influences decisions above all”); and Anchoring and Halo Effect (e.g., SD: “we make decisions based on the score that we apply; for sure decisions are influenced by decisions made by other departments or other clinical contexts”).
Regarding the other biases analysed, their presence or absence depends on the interviewee’s characteristics. Groupthink and Availability emerged from the SH and SD interviews but not from SP’s. Conversely, Overconfidence affects the decision process of SP and SD but not that of SH. Moreover, Confirmation bias emerged only from SH.
The results are summarised in the following table (Table 2).
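Restating the pattern above in code form, the following sketch derives the shared and respondent-dependent biases from the per-interviewee findings. The sets below simply mirror the qualitative results reported in this section (no new data):

```python
# Biases detected per interviewee, as reported in the Results section.
DETECTED = {
    "SD": {"Affect heuristic", "Anchoring", "Halo effect", "Saliency",
           "Groupthink", "Availability", "Overconfidence"},
    "SP": {"Affect heuristic", "Anchoring", "Halo effect", "Saliency",
           "Overconfidence"},
    "SH": {"Affect heuristic", "Anchoring", "Halo effect", "Saliency",
           "Groupthink", "Availability", "Confirmation"},
}

# Biases present in every interview vs. those that vary by respondent.
common = set.intersection(*DETECTED.values())
conditional = set.union(*DETECTED.values()) - common

print(sorted(common))
# ['Affect heuristic', 'Anchoring', 'Halo effect', 'Saliency']
print(sorted(conditional))
# ['Availability', 'Confirmation', 'Groupthink', 'Overconfidence']
```

The intersection/difference split makes the paper’s central distinction explicit: four biases are common to all respondents, while the other four co-vary with working experience and working context.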

5. Discussion

Thanks to the face-to-face interviews, based on the Kahneman et al. (2011) checklist adjusted for orthopaedics, we derived some qualitative issues highlighting potential weaknesses of the cognitive path that steers orthopaedists’ decisions. In particular, we preliminarily understood that the follow-up choice (C.Q. A) can be considered the only stage of the orthopaedic surgical process where physicians could potentially take decisions depending solely on the patient’s real needs and conditions, without interference from guidelines.
Nevertheless, the analysis of all the interviews discloses a high impact of consolidated practice also on clinical decision-making about follow-up. This might be due to two reasons. First, orthopaedics is a traditional specialisation, based on the “standardisation of process” for this kind of healthcare pathway (Healy et al. 1998; Scranton 1999). The second reason is a “cultural issue” related to the physician’s training, which makes practitioners feel more confident in doing what they have always done (as reported by SP during his interview); such behaviour can lead decision-makers to minimise the risks of their practice and exaggerate its benefits (Kahneman et al. 2011), leading to the Affect Heuristic.
Moreover, all respondents agree that decision-making is certainly influenced by coherence with decisions undertaken by other similar departments. According to Grove et al. (2015), orthopaedic surgeons’ decision-making is strongly influenced by key opinion leaders’ guidelines and suggestions. This often leads a surgeon to make decisions in his/her operating context by emulating someone else’s choices or endpoints; this kind of bias, however, could jeopardise correct clinical decisions.
This result is related to the scientific nature of medicine, which is grounded in the scientific literature:
In particular, evidence-based medicine (EBM) involves the use of the current best practices to make decisions regarding the patient’s care (Sackett et al. 1996); in this way, decisions are steered by the kind of data that a physician collects (Twells 2015). This approach, strongly related to the Anchoring bias, might speed up doctors’ decision-making by relying on certain data and information (based on scholars’ endpoints) without looking for other, potentially useful, information.
Furthermore, the impact of other departments’ evidence or key opinion leaders’ insights is a very meaningful aspect for understanding surgeons’ decision process; this concerns the Halo Effect. Accordingly, as reported by Cook et al. (2009), discussion with colleagues is more influential in clinical practice than empirical support. In this regard, SD is convinced that decisions are surely influenced also by other departments’ experience; and, as reported by SP, “for me, it is easier changing the surgical practice, if an opinion leader in the sector suggests me to change it”. Moreover, according to all respondents, surgeons are more willing to change their practice on the basis of the experience of practitioners deemed most competent in the field.
Regarding the influence of past results (Saliency), all interviewed clinicians recognise, consistently with Stewart and Chambless (2007), that past experience weighs more in clinical treatment decisions than empirical research knowledge.
Nonetheless, some biases are absent according to some interviewees. To analyse these errors in more detail, it is important to note that the respondents share some characteristics. In particular:
  • SD and SP are more expert surgeons, with more than fifteen years of experience each, while SH has less than five years of experience;
  • SD and SP are both directors of their department/surgery team;
  • SD and SH share the same status of public employees, as they both work in a public hospital;
  • SP works in a private organisation, where he supervises only the operating theatre teams (which he heads) for elective surgeries.
Therefore, the errors of Groupthink and Availability (recognised by SD and SH) are probably mostly related to the context of public hospital ownership. Indeed, as reported by SP, the simultaneous absence of Groupthink and Availability in his practice “is related to the characteristic of my working place: I have my private patients whose information is owned mostly by me. In the private hospital, we are several physicians with our own patients; I just made decisions for mine”. This result is in line with Smith et al. (2007) and Eisenberg (1979), who identified external factors, clinicians’ features, and clinicians’ interaction (Robinson et al. 2017) with their profession and the healthcare system as factors that can modify the decision process. The context in which decisions are made can be very significant, but it has not been rigorously explored in prior studies (Durning et al. 2011).
Concerning Overconfidence (the error that SD and SP have in common), it may be caused by the surgeons’ lengthy experience. Conversely, the lack of experience may explain the Confirmation bias, which emerged only from SH’s interview; he is, indeed, the one with less than five years of experience.
Summarising, according to Kahneman’s checklist, the analysis of the interviews showed that 8 of the 12 biases may affect the decision-making process of orthopaedic surgeons regarding the follow-up of patients undergoing hip and knee arthroplasty. Table 3 reports the errors that emerged, their explanation, and the medical scientific literature regarding them; the last column of Table 3 also reports which of the interviewees recognised each specific error.

6. Conclusions and Implications

This paper addresses the theme of the clinical reasoning process in healthcare. Precisely, the work aims at understanding whether and how the decision-making process of orthopaedic surgeons can be affected by cognitive biases. Within the decision-process sphere, the choice regarding the patient’s follow-up after knee and hip arthroplasty was analysed. To achieve this goal, Kahneman’s checklist was employed to recognise which kinds of errors most strongly steer surgeons’ decision-making; accordingly, we conducted three semi-structured interviews with key-decisional orthopaedic surgeons working in different organisations. The results show several biases that can affect the clinical decision process regarding follow-up after knee and hip arthroplasty.
In particular, some are common to all the interviews (Affect heuristic, Anchoring, Halo effect, Saliency); the others (Groupthink, Availability, Overconfidence, Confirmation) are related to two personal variables of the surgeon: (i) working experience; (ii) working context.
Concerning the biases which differ among interviewees, Figure 2 summarises the main contribution of the work.
Read as a matrix, the figure highlights the connection of working experience and working context with specific biases of the decision-making process in the orthopaedic field, in coherence with the literature listed in Table 3.
Figure 2 shows that the less experienced a surgeon is, the more likely he/she is to look for Confirmation (Smith et al. 2010); this might be because he or she is closely tied to theory and tends to look for what he or she already knows. Ierano et al. (2019) confirmed that less experienced surgeons/junior health professionals always look for confirmation through guidelines. On the contrary, Overconfidence arises from high experience. Beyond the working context, high experience leads surgeons to think mostly of positive scenarios (Kahneman et al. 2011) regarding their work/task, as they are more confident in their skills.
Groupthink and Availability, however, are both mostly related to the characteristics of the public context: the first is typical of hospital ward teamwork, which lies outside SP’s working sphere. The second, instead, may be related to the data supporting the decision-making path, whose availability (in terms of timing, quality, and quantity) is mandatory only in public hospitals, according to public performance measurement rules (Bouckaert and Halligan 2007; Pinnarelli et al. 2015).
Hence, based on surgeons’ level of experience (low/high) and the ownership of the healthcare institution (public/private) in which they work, the matrix points out the most probable biases in the orthopaedic field according to the specific features of the decision-makers. This could be very useful for the management of healthcare institutions in terms of prompt reaction to the expected cognitive errors.
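To make the matrix logic explicit, the mapping above can be sketched as a simple lookup structure. This is an illustrative sketch only, not part of the study’s methodology: the bias names and the experience/context associations come from the findings reported here, while the function name and data layout are our own assumptions.

```python
# Illustrative sketch of the experience/context bias matrix (Figure 2).
# Bias names and associations reflect the study's findings; the function
# and data layout are hypothetical, added only to clarify the mapping.

# Biases that all three interviewed surgeons recognised, regardless of profile.
COMMON_BIASES = {"Affect heuristic", "Anchoring", "Halo effect", "Saliency"}


def expected_biases(experience: str, context: str) -> set:
    """Return the biases the matrix would flag for a surgeon with a given
    experience level ('low'/'high') and hospital ownership ('public'/'private')."""
    biases = set(COMMON_BIASES)
    if experience == "low":
        biases.add("Confirmation")       # junior surgeons seek confirmation in guidelines
    elif experience == "high":
        biases.add("Overconfidence")     # lengthy experience breeds overconfidence
    if context == "public":
        # Tied to hospital ward teamwork and public performance-measurement data
        biases |= {"Groupthink", "Availability"}
    return biases


# Example: a junior surgeon in a public hospital (a profile similar to SH)
print(sorted(expected_biases("low", "public")))
```

Applied to the three interviewees’ profiles, the sketch reproduces the reported pattern: SH (low experience, public) adds Confirmation, Groupthink, and Availability; SD (high, public) adds Overconfidence, Groupthink, and Availability; SP (high, private) adds only Overconfidence.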
Our results confirm that a qualitative cognitive tool, such as the Kahneman et al. (2011) checklist, could potentially help physicians avoid these errors, but it needs to be integrated into daily practice, possibly as a more usable electronic version (Raymond et al. 2017; Otokiti 2019).
This manuscript presents some limitations, most of them related to the qualitative method, which implies an interpretive role of the researchers and a limited extension of the data. The first limitation of this qualitative inquiry regards the lack of distinction between “heuristics” and “traps”, which both fall under the umbrella term of bias. Although heuristics can also have a positive impact on decisions (and thus, in particular circumstances, should not be reduced), this has not been proved in the medical field according to Ryan et al. (2018). In addition, only three physicians were interviewed. Nevertheless, as explained in the methodology section, the differences in working experience and context among our interviewees mitigate this limitation by giving a good representation of the Italian orthopaedic environment. Moreover, given that this research is taking its first steps in a still understudied field, three interviewees can be considered enough if they hold the apical decision-making position in their organisational environment (Cristofaro 2017a; Jette et al. 2003). To overcome these limitations, a higher number of interviews should be conducted in future research. Further studies could also focus on theoretical exploration (e.g., systematic literature reviews, bibliometric literature reviews) of clinical decision-making in the surgical field, which would bring out the differences between specialisation sub-fields.
Moreover, given that the magnitude of heuristic effects on complex clinical decisions is still unexplored, this specific aspect also deserves investigation in future streams of research. In addition, the study leaves some areas of investigation uncovered; particularly surprising is the lack of connection with Kahneman’s bias of Self-Interest. No direct evidence of it arose from the interviews; only insights regarding decisions about the frequency of follow-ups came out. According to all respondents, the follow-up schedule is established in line with the literature and practice and, in some rare cases, according to the patient’s peculiarities (which would confirm all other results). Nevertheless, as highlighted by control question E, a potential reason to modify the follow-up frequency and schedule would be physicians’ specific interest in collecting information (both clinical and epidemiological) for healthcare management and big-data-based scientific production (Roski et al. 2014; Yan et al. 2017). These aspects also deserve further investigation.
In conclusion, this study has highlighted the linkage between the clinical decision-making process and management tools for improving decisions, fostering debate in these fields. For practitioners, this study reports the employment of a decision-making quality tool (To et al. 2018) that brings out some of the cognitive shortcuts that can steer the clinical decision process. From an academic point of view, this article represents a preliminary contribution on the influence of cognitive biases in limiting decision-makers’ rational thinking in the specific orthopaedic field.
In this regard, the study contributes to the debate among both scholars and practitioners about the application of tools for improving the quality of the clinical decision process. According to Antonacci et al. (2020), indeed, improving decision-making is one of the main levers for enhancing healthcare outcomes, which, in turn, can be translated into better performance (Skaržauskiene 2010; Safi and Burrell 2007; Oyewobi et al. 2016) for the healthcare organisation.

Author Contributions

Conceptualization, I.S. and G.P.; methodology, G.P.; software, I.S.; validation, G.P., I.S. and A.C.; formal analysis, I.S. and G.P.; investigation, G.P. and I.S.; resources, G.P., I.S. and A.C.; data curation, I.S. and G.P.; writing—original draft preparation, I.S. and G.P.; writing—review and editing, G.P. and I.S.; visualization, A.C.; supervision, A.C. and G.P.; project administration, I.S. and G.P. All authors have read and agreed to the published version of the manuscript.


Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.


References
  1. Abatecola, Gianpaolo. 2014. Untangling self-reinforcing processes in managerial decision making. Co-evolving heuristics? Management Decision 52: 934–49. [Google Scholar] [CrossRef]
  2. Antonacci, Anthony C., Samuel P. Dechario, Caroline Antonacci, Gregg Husk, Vihas Patel, Jeffrey Nicastro, Gene Coppa, and Mark Jarrett. 2020. Cognitive Bias Impact on Management of Postoperative Complications, Medical Error, and Standard of Care. Journal of Surgical Research 258: 47–53. [Google Scholar] [CrossRef] [PubMed]
  3. Ashoorion, Vahid, Mohammad Javad Liaghatdar, and Peyman Adibi. 2012. What variables can influence clinical reasoning? Journal of Research in Medical Sciences 17: 1170. [Google Scholar] [PubMed]
  4. Atoum, Ibrahim A., and Nasser A. Al-Jarallah. 2019. Big data analytics for value-based care: Challenges and opportunities. International Journal of Advanced Trends in Computer Science and Engineering 8: 3012–16. [Google Scholar] [CrossRef]
  5. Augestad, Liv Ariane, Knut Stavem, Ivar Sønbø Kristiansen, Carl Haakon Samuelsen, and Kim Rand-Hendriksen. 2016. Influenced from the Start: Anchoring Bias in Time Trade-off Valuations. Quality of Life Research 25: 2179–91. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  6. Austin, Jared P., and Stephanie A. C. Halvorson. 2019. Reducing the Expert Halo Effect on Pharmacy and Therapeutics Committees. JAMA 321: 453. [Google Scholar] [CrossRef]
  7. Balsamo, Brittany, Mark D. Geil, Rebecca Ellis, and Jianhua Wu. 2018. Confirmation Bias Affects User Perception of Knee Braces. Journal of Biomechanics 75: 164–70. [Google Scholar] [CrossRef]
  8. Baron, Robert A. 1998. Cognitive mechanisms in entrepreneurship: Why and when enterpreneurs think differently than other people. Journal of Business Venturing 13: 275–94. [Google Scholar] [CrossRef]
  9. Bouckaert, Geert, and John Halligan. 2007. Managing Performance: International Comparisons. Abingdon: Routledge. [Google Scholar]
  10. Braun, Virginia, and Victoria Clarke. 2006. Using thematic analysis in psychology. Qualitative Research in Psychology 3: 77–101. [Google Scholar] [CrossRef] [Green Version]
  11. Christensen, Nicole, Lisa Black, Jennifer Furze, Karen Huhn, Ann Vendrely, and Susan Wainwright. 2017. Clinical Reasoning: Survey of Teaching Methods, Integration, and Assessment in Entry-Level Physical Therapist Academic Education. Physical Therapy 97: 175–86. [Google Scholar] [CrossRef] [Green Version]
  12. Cohen, Jeffrey M., and Susan Burgin. 2016. Cognitive biases in clinical decision making: A primer for the practicing dermatologist. Jama Dermatology 152: 253–54. [Google Scholar] [CrossRef] [PubMed]
  13. Collin, Peter Hodgson. 2009. Dictionary of Medical Terms. London: A&C Black. [Google Scholar]
  14. Cook, Joan M., Paula P. Schnurr, Tatyana Biyanova, and James C. Coyne. 2009. Apples don’t fall far from the tree: Influences on psychotherapists’ adoption and sustained use of new therapies. Psychiatric Services 60: 671–76. [Google Scholar] [CrossRef] [PubMed]
  15. Cooper, Nicola, and John Frain. 2016. ABC of Clinical Reasoning. Hoboken: John Wiley & Sons. [Google Scholar]
  16. Cristofaro, Matteo. 2017a. Reducing Biases of Decision-Making Processes in Complex Organizations. Management Research Review 40: 270–91. [Google Scholar] [CrossRef]
  17. Cristofaro, Matteo. 2017b. Herbert Simon’s Bounded Rationality: Its Historical Evolution in Management and Cross-Fertilizing Contribution. Journal of Management History 23: 170–90. [Google Scholar] [CrossRef]
  18. Croskerry, Pat. 2003. The Importance of Cognitive Errors in Diagnosis and Strategies to Minimize Them. Academic Medicine 78: 775–80. [Google Scholar] [CrossRef] [Green Version]
  19. Del Mar, C., J. Doust, and P. Glasziou. 2006. Clinical Thinking; Evidence, Communication and Decision-Making. Oxford: Blackwell Publishing Ltd. BMJ Books. [Google Scholar]
  20. Dunbar, Miles, Stephen E. Helms, and Robert T. Brodell. 2013. Reducing Cognitive Errors in Dermatology: Can Anything Be Done? Journal of the American Academy of Dermatology 69: 810–13. [Google Scholar] [CrossRef]
  21. Durning, Steven, Anthony R. Artino, Louis Pangaro, Cees P. M. van der Vleuten, and Lambert Schuwirth. 2011. Context and Clinical Reasoning: Understanding the Perspective of the Expert’s Voice: Understanding the Perspective of the Expert’s Voice. Medical Education 45: 927–38. [Google Scholar] [CrossRef]
  22. Durning, Steven J., Anthony R. Artino, Lambert Schuwirth, and Cees van der Vleuten. 2013. Clarifying Assumptions to Enhance Our Understanding and Assessment of Clinical Reasoning. Academic Medicine 88: 442–48. [Google Scholar] [CrossRef]
  23. Edmondson, Amy C., and Stacy E. Mcmanus. 2007. Methodological Fit in Management Field Research. Academy of Management Review 32: 1246–64. [Google Scholar] [CrossRef] [Green Version]
  24. Eisenberg, John M. 1979. Sociologic Influences on Decision-Making by Clinicians. Annals of Internal Medicine 90: 957. [Google Scholar] [CrossRef]
  25. El Said, Ghada Refaat. 2017. Understanding how learners use massive open online courses and why they drop out: Thematic analysis of an interview study in a developing country. Journal of Educational Computing Research 55: 724–52. [Google Scholar] [CrossRef]
  26. Elston, Dirk M. 2020. Confirmation Bias in Medical Decision-Making. Journal of the American Academy of Dermatology 82: 572. [Google Scholar] [CrossRef] [Green Version]
  27. Elvén, Maria, Jacek Hochwälder, Elizabeth Dean, and Anne Söderlund. 2019. Predictors of Clinical Reasoning Using the Reasoning 4 Change Instrument with Physical Therapist Students. Physical Therapy 99: 964–76. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  28. Fargen, Kyle M., and William A. Friedman. 2014. The science of medical decision making: Neurosurgery, errors, and personal cognitive strategies for improving quality of care. World Neurosurgery 82: e21–e29. [Google Scholar] [CrossRef] [PubMed]
  29. Flynn, Darren, Meghan A. Knoedler, Erik P. Hess, M. Hassan Murad, Patricia J. Erwin, Victor M. Montori, and Richard G. Thomson. 2012. Engaging patients in health care decisions in the emergency department through shared decision-making: A systematic review. Academic Emergency Medicine 19: 959–67. [Google Scholar] [CrossRef] [PubMed]
  30. Ghoneim, Abdulrahman, Bonnie Yu, Herenia Lawrence, Michael Glogauer, Ketan Shankardass, and Carlos Quiñonez. 2020. What Influences the Clinical Decision-Making of Dentists? A Cross-Sectional Study. Edited by Gururaj Arakeri. PLoS ONE 15: e0233652. [Google Scholar] [CrossRef] [PubMed]
  31. Goldsby, Elizabeth, Michael Goldsby, Christopher B. Neck, and Christopher P. Neck. 2020. Under Pressure: Time Management, Self-Leadership, and the Nurse Manager. Administrative Sciences 10: 38. [Google Scholar] [CrossRef]
  32. Grove, Amy, Aileen Clarke, and Graeme Currie. 2015. The barriers and facilitators to the implementation of clinical guidance in elective orthopaedic surgery: A qualitative study protocol. Implementation Science 10: 81. [Google Scholar] [CrossRef] [Green Version]
  33. Haley, Usha C. V., and Stephen A. Stumpf. 1989. Cognitive Trails in Strategic Decision-Making: Linking Theories of Personalities and Cognitions. Journal of Management Studies 26: 477–97. [Google Scholar] [CrossRef]
  34. Hausmann, Daniel, Cristina Zulian, Edouard Battegay, and Lukas Zimmerli. 2016. Tracing the Decision-Making Process of Physicians with a Decision Process Matrix. BMC Medical Informatics and Decision Making 16: 133. [Google Scholar] [CrossRef] [Green Version]
  35. Healy, William L., Michael E. Ayers, Richard Iorio, Douglas A. Patch, David Appleby, and Bernard A. Pfeifer. 1998. Impact of a clinical pathway and implant standardization on total hip arthroplasty: A clinical and economic study of short-term patient outcome. The Journal of Arthroplasty 13: 266–76. [Google Scholar] [CrossRef]
  36. Hendrick, Hal W. 1999. Handbook of human factors and ergonomics, edited by Gavriel Salvendy, 1997, New York: John Wiley & Sons, Inc., p. 2137, ISBN 0-471-11690-4. Human Factors and Ergonomics in Manufacturing & Service Industries 9: 321–22. [Google Scholar]
  37. Hess, Erik P., Corita R. Grudzen, Richard Thomson, Ali S. Raja, and Christopher R. Carpenter. 2015. Shared decision-making in the emergency department: Respecting patient autonomy when seconds count. Academic Emergency Medicine 22: 856–64. [Google Scholar] [CrossRef] [PubMed]
  38. Higgs, Joy, Gail M Jensen, Stephen Loftus, and Nicole Christensen. 2019. Clinical Reasoning in the Health Professions. Oxford: Elsevier Health Sciences. [Google Scholar]
  39. Hsieh, Hsiu-Fang, and Sarah E. Shannon. 2005. Three approaches to qualitative content analysis. Qualitative Health Research 15: 1277–88. [Google Scholar] [CrossRef]
  40. Ierano, Courtney, Karin Thursky, Trisha Peel, Arjun Rajkhowa, Caroline Marshall, and Darshini Ayton. 2019. Influences on surgical antimicrobial prophylaxis decision making by surgical craft groups, anaesthetists, pharmacists and nurses in public and private hospitals. PLoS ONE 14: e0225011. [Google Scholar] [CrossRef] [Green Version]
  41. Itri, Jason N., and Sohil H. Patel. 2018. Heuristics and cognitive error in medical imaging. American Journal of Roentgenology 210: 1097–105. [Google Scholar] [CrossRef]
  42. Jette, Diane U., Lisa Grover, and Carol P. Keck. 2003. A qualitative study of clinical decision making in recommending discharge placement from the acute care setting. Physical Therapy 83: 224–36. [Google Scholar] [CrossRef]
  43. Kaba, Alyshah, Ian Wishart, Kristin Fraser, Sylvain Coderre, and Kevin McLaughlin. 2016. Are We at Risk of Groupthink in Our Approach to Teamwork Interventions in Health Care? Medical Education 50: 400–8. [Google Scholar] [CrossRef]
  44. Kahneman, Daniel. 2011. Thinking, Fast and Slow. New York: Macmillan. [Google Scholar]
  45. Kahneman, Daniel, Dan Lovallo, and Olivier Sibony. 2011. Before you make that big decision. Harvard Business Review 89: 50–60. [Google Scholar]
  46. Kobus, David A., Sherry Proctor, and Steven Holste. 2001. Effects of experience and uncertainty during dynamic decision making. International Journal of Industrial Ergonomics 28: 275–90. [Google Scholar] [CrossRef]
  47. Lippa, Katherine D., Markus A. Feufel, F. Eric Robinson, and Valerie L. Shalin. 2017. Navigating the Decision Space: Shared Medical Decision Making as Distributed Cognition. Qualitative Health Research 27: 1035–48. [Google Scholar] [CrossRef] [PubMed]
  48. Lo, Bernard, and Mitchell H. Katz. 2005. Clinical decision making during public health emergencies: Ethical considerations. Annals of Internal Medicine 143: 493–98. [Google Scholar] [CrossRef] [PubMed]
  49. Locock, Louise, Sue Dopson, David Chambers, and John Gabbay. 2001. Understanding the role of opinion leaders in improving clinical effectiveness. Social Science & Medicine 53: 745–57. [Google Scholar]
  50. Mailoo, Venthan. 2015. Common Sense or Cognitive Bias and Groupthink: Does It Belong in Our Clinical Reasoning? The British Journal of General Practice 65: 27. [Google Scholar] [CrossRef] [Green Version]
  51. Makhinson, Michael. 2012. Biases in the Evaluation of Psychiatric Clinical Evidence. The Journal of Nervous and Mental Disease 200: 76–82. [Google Scholar] [CrossRef]
  52. Mamede, Sílvia, Marco Antonio de Carvalho-Filho, Rosa Malena Delbone de Faria, Daniel Franci, Maria do Patrocinio Tenorio Nunes, Ligia Maria Cayres Ribeiro, Julia Biegelmeyer, Laura Zwaan, and Henk G. Schmidt. 2020. “Immunising” Physicians against Availability Bias in Diagnostic Reasoning: A Randomised Controlled Experiment. BMJ Quality & Safety 29: 550–59. [Google Scholar] [CrossRef] [Green Version]
  53. McBee, Elexis, Temple Ratcliffe, Katherine Picho, Anthony R. Artino, Lambert Schuwirth, William Kelly, Jennifer Masel, Cees van der Vleuten, and Steven J. Durning. 2015. Consequences of Contextual Factors on Clinical Reasoning in Resident Physicians. Advances in Health Sciences Education 20: 1225–36. [Google Scholar] [CrossRef]
  54. Ministero della Salute. 2016. Tavole Rapporto SDO. Available online: (accessed on 20 October 2020).
  55. Nagaraj, Guruprasad, Carolyn Hullick, Glenn Arendts, Ellen Burkett, Keith D. Hill, and Christopher R. Carpenter. 2018. Avoiding Anchoring Bias by Moving beyond “mechanical Falls” in Geriatric Emergency Medicine. Emergency Medicine Australasia 30: 843–50. [Google Scholar] [CrossRef] [Green Version]
  56. Neergaard, Mette Asbjoern, Frede Olesen, Rikke Sand Andersen, and Jens Sondergaard. 2009. Qualitative description–the poor cousin of health research? BMC Medical Research Methodology 9: 52. [Google Scholar] [CrossRef] [Green Version]
  57. Norman, Geoffrey. 2005. Research in Clinical Reasoning: Past History and Current Trends. Medical Education 39: 418–27. [Google Scholar] [CrossRef]
  58. Olszak, Celina M., and Kornelia Batko. 2012. Business intelligence systems. New chances and possibilities for healthcare organizations. Informatyka Ekonomiczna 5: 123–38. [Google Scholar]
  59. Otokiti, Ahm. 2019. Using informatics to improve healthcare quality. International Journal of Health Care Quality Assurance 32: 425–430. [Google Scholar] [CrossRef] [PubMed]
  60. Oxford Dictionary. 2012. Oxford Dictionary. Available online: (accessed on 20 November 2020).
  61. Oyewobi, Luqman Oyekunle, Abimbola Windapo, and James Olabode Bamidele Rotimi. 2016. Relationship between decision-making style, competitive strategies and organisational performance among construction organisations. Journal of Engineering, Design and Technology 14: 713–38. [Google Scholar] [CrossRef]
  62. Patton, Michael Quinn. 2002. Two decades of developments in qualitative inquiry: A personal, experiential perspective. Qualitative Social Work 1: 261–83. [Google Scholar] [CrossRef]
  63. Pinnarelli, Luigi, Alice Basiglini, Danilo Fusco, and Marina Davoli. 2015. Piano Nazionale Esiti. Available online: (accessed on 20 October 2020).
  64. Porter, Michael E. 2009. A Strategy for Health Care Reform—Toward a Value-Based System. New England Journal of Medicine 361: 109–12. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  65. Rashid, Ahmad, and Halim Boussabiane. 2019. Conceptualizing the influence of personality and cognitive traits on project managers’ risk-taking behaviour. International Journal of Managing Projects in Business. [Google Scholar] [CrossRef]
  66. Raymond, Louis, Guy Paré, and Éric Maillet. 2017. IT-based clinical knowledge management in primary health care: A conceptual framework. Knowledge and Process Management 24: 247–56. [Google Scholar] [CrossRef]
  67. Robinson, James C., Alexis Pozen, Samuel Tseng, and Kevin J. Bozic. 2012. Variability in costs associated with total hip and knee replacement implants. JBJS 94: 1693–98. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  68. Robinson, Jennifer, Marta Sinclair, Jutta Tobias, and Ellen Choi. 2017. More dynamic than you think: Hidden aspects of decision-making. Administrative Sciences 7: 23. [Google Scholar] [CrossRef] [Green Version]
  69. Robinson, Frank Eric, Markus A. Feufel, Valerie L. Shalin, Debra Steele-Johnson, and Brian Springer. 2020. Rational Adaptation: Contextual Effects in Medical Decision Making. Journal of Cognitive Engineering and Decision Making 14: 112–31. [Google Scholar] [CrossRef]
  70. Roski, Joachim, George W. Bo-Linn, and Timothy A. Andrews. 2014. Creating value in health care through big data: Opportunities and policy implications. Health Affairs 33: 1115–22. [Google Scholar] [CrossRef]
  71. Rubio-Navarro, Alfonso, Diego José García-Capilla, Maria José Torralba-Madrid, and Jane Rutty. 2020. Decision-Making in an Emergency Department: A Nursing Accountability Model. Nursing Ethics 27: 567–86. [Google Scholar] [CrossRef] [PubMed]
  72. Ryan, Aedin, Sophie Duignan, Damien Kenny, and Colin J. McMahon. 2018. Decision making in paediatric cardiology. Are we prone to heuristics, biases and traps? Pediatric Cardiology 39: 160–67. [Google Scholar] [CrossRef]
  73. Sackett, David L., William MC Rosenberg, JA Muir Gray, R. Brian Haynes, and W. Scott Richardson. 1996. Evidence Based Medicine: What It Is and What It Isn’t. BMJ 312: 71–72. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  74. Safi, Asila, and Darrell Norman Burrell. 2007. Developing Advanced Decision-Making Skills in International Leaders and Managers. Vikalpa 32: 1–8. [Google Scholar] [CrossRef]
  75. Scapens, Robert W. 1990. Researching Management Accounting Practice: The Role of Case Study Methods. The British Accounting Review 22: 259–81. [Google Scholar] [CrossRef]
  76. Scranton, Pierce E., Jr. 1999. The cost effectiveness of streamlined care pathways and product standardization in total knee arthroplasty. The Journal of Arthroplasty 14: 182–86. [Google Scholar] [CrossRef]
  77. Simon, Herbert A. 1947. Administrative Behavior. New York: The Free Press. [Google Scholar]
  78. Sizer, Phillip S., Manuel Vicente Mauri, Kenneth Learman, Clare Jones, Norman ‘Skip’ Gill, Chris R. Showalter, and Jean-Michel Brismée. 2016. Should Evidence or Sound Clinical Reasoning Dictate Patient Care? Journal of Manual & Manipulative Therapy 24: 117–19. [Google Scholar] [CrossRef] [Green Version]
  79. Skaržauskiene, A. 2010. Managing complexity: Systems thinking as a catalyst of the organization performance. Measuring Business Excellence 14: 49–64. [Google Scholar] [CrossRef]
  80. Smith, Megan, Joy Higgs, and Elizabeth Ellis. 2007. Physiotherapy decision making in acute cardiorespiratory care is influenced by factors related to the physiotherapist and the nature and context of the decision: A qualitative study. Australian Journal of Physiotherapy 53: 261–67. [Google Scholar] [CrossRef] [Green Version]
  81. Smith, Megan, Higgs Joy, and Elizabeth Ellis. 2010. Effect of experience on clinical decision making by cardiorespiratory physiotherapists in acute care settings. Physiotherapy Theory and Practice 26: 89–99. [Google Scholar] [CrossRef]
  82. Sousa, Maria José, António Miguel Pesqueira, Carlos Lemos, Miguel Sousa, and Álvaro Rocha. 2019. Decision-Making Based on Big Data Analytics for People Management in Healthcare Organizations. Journal of Medical Systems 43: 290. [Google Scholar] [CrossRef] [PubMed]
  83. Spano, Alessandro, and Anna Aroni. 2018. Organizational Performance in the Italian Health care Sector. In Outcome-Based Performance Management in the Public Sector. Cham: Springer, pp. 25–43. [Google Scholar]
  84. Stewart, Rebecca E., and Dianne L. Chambless. 2007. Does psychotherapy research inform treatment decisions in private practice? Journal of Clinical Psychology 63: 267–81. [Google Scholar] [CrossRef]
  85. Stiegler, M. P., Jacques P. Neelankavil, Cecilia Canales, and A. Dhillon. 2012. Cognitive errors detected in anaesthesiology: A literature review and pilot study. British Journal of Anaesthesia 108: 229–35. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  86. Stylianou, Stelios. 2008. Interview control questions. International Journal of Social Research Methodology 11: 239–56. [Google Scholar] [CrossRef]
  87. Sun, Alexander Y., and Bridget R. Scanlon. 2019. How can Big Data and machine learning benefit environment and water management: A survey of methods, applications, and future directions. Environmental Research Letters 14: 073001. [Google Scholar] [CrossRef]
  88. Timmermans, Stefan, and Alison Angell. 2001. Evidence-based medicine, clinical uncertainty, and learning to doctor. Journal of Health and Social Behavior 42: 342–59. [Google Scholar] [CrossRef] [Green Version]
  89. To, Wai Ming, Billy T. W. Yu, and Peter K. C. Lee. 2018. How quality management system components lead to improvement in service organizations: A system practitioner perspective. Administrative Sciences 8: 73. [Google Scholar] [CrossRef] [Green Version]
  90. Torre, M., E. Carrani, and I. Luzi. 2017. Progetto Registro Italiano Artroprotesi. Potenziare la Qualità dei dati per Migliorare la Sicurezza dei Pazienti. Quarto Report [Italian Arthroplasty Registry Project. Strengthening Data Quality to Improve Patient Safety. Fourth Report]. Rome: Il Pensiero Scientifico Editore. [Google Scholar]
  91. Twells, Laurie K. 2015. Evidence-Based Decision-Making 1: Critical Appraisal. In Clinical Epidemiology. Edited by Patrick S. Parfrey and Brendan J. Barrett. New York: Springer, vol. 1281, pp. 385–96. [Google Scholar] [CrossRef]
  92. Utter, Garth H., Ronald V. Maier, Frederick P. Rivara, and Avery B. Nathens. 2006. Outcomes after Ruptured Abdominal Aortic Aneurysms: The “Halo Effect” of Trauma Center Designation. Journal of the American College of Surgeons 203: 498–505. [Google Scholar] [CrossRef]
  93. Vickrey, Barbara G., Martin A. Samuels, and Allan H. Ropper. 2010. How neurologists think: A cognitive psychology perspective on missed diagnoses. Annals of Neurology 67: 425–33. [Google Scholar] [CrossRef]
  94. Vogel, P., and D. H. V. Vogel. 2019. Cognition errors in the treatment course of patients with anastomotic failure after colorectal resection. Patient Safety in Surgery 13: 4. [Google Scholar] [CrossRef] [Green Version]
  95. Vuong, Phoenix, Jason Sample, Mary Ellen Zimmermann, and Pierre Saldinger. 2017. Trauma Team Activation: Not Just for Trauma Patients. Journal of Emergencies, Trauma, and Shock 10: 151–53. [Google Scholar] [CrossRef] [PubMed]
  96. Waddington, L., and S. Morley. 2000. Availability Bias in Clinical Formulation: The First Idea That Comes to Mind. The British Journal of Medical Psychology 73 (Pt 1): 117–27. [Google Scholar] [CrossRef] [PubMed]
  97. Wainwright, Susan Flannery, Katherine F. Shepard, Laurinda B. Harman, and James Stephens. 2011. Factors that influence the clinical decision making of novice and experienced physical therapists. Physical Therapy 91: 87–101. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  98. Wu, Tungju, Yenchun Jim Wu, Hsientang Tsai, and Yibin Li. 2017. Top management teams’ characteristics and strategic decision-making: A mediation of risk perceptions and mental models. Sustainability 9: 2265. [Google Scholar] [CrossRef] [Green Version]
  99. Yan, X., C. Dong, and C. Yao. 2017. The evolvement of evidence-based medicine research in the big data era. Chinese Journal of Evidence-Based Medicine 17: 249–54. [Google Scholar] [CrossRef]
  100. Yin, R. K. 2004. Case study methods. In Handbook of Complementary Methods in Education Research, American Educational Research Association. Edited by Judith L. Green, Gregory Camilli and Patricia B. Elmore. Washington, DC: Taylor & Francis Group, pp. 111–22. [Google Scholar]
  101. Yin, Robert K. 2017. Case Study Research and Applications: Design and Methods. Thousand Oaks: Sage Publications. [Google Scholar]
  102. Zavala, Alicia M., Gary E. Day, David Plummer, and Anita Bamford-Wade. 2018. Decision-making under pressure: Medical errors in uncertain and dynamic environments. Australian Health Review 42: 395–402. [Google Scholar] [CrossRef]
Figure 1. Steps of the analysis. Source: Authors’ elaboration.
Figure 2. Matrix connecting the physician’s working experience and context with the cognitive biases in orthopaedics. Source: Authors’ elaboration.
Table 1. Adjusted checklist and the biases to which it refers.
Factors That Can Lead to Distortion | Bias/Code | Adjusted Checklist Questions | C.Q. *
Own interest of decision-maker | Self-interest | 1. In your choice of the patient’s follow-up path, do you think there is any reason to think that the personal motivations of the clinical operator (orthopaedic doctor) influenced the prescription (number and frequency of checks)? | A
Preference of decision-maker about one alternative | Affect heuristic | 2. Is it possible that the choice of a specific follow-up path has been made on the basis of consolidated practice, rather than on the specific analysis from the context in reference to the specific contingencies of the patient? | A
Team communication or absence of communication among team members | Groupthink | 3. Are decisions regarding follow-up made at the operating team/ward level or at the individual doctor’s (orthopaedic surgeon’s) level? 3a. If conflicting opinions emerge, are they sufficiently examined? How are any “conflicts” resolved? |
Past success | Saliency | 4. In your opinion, how much is the choice of a specific follow-up path influenced by the experience of the clinical operator regarding similar past situations? | A
No full evaluation of other alternatives | Confirmation | 5. When choosing a follow-up path, are different credible and reliable alternatives considered? | A
Information availability | Availability | 6. What clinical information is used to make decisions about the patient follow-up process? If you could have other information (non-clinical), which would you need? | A
Information base | Anchoring | 7. Which source provides you with the data referred to in the previous question? | A
Connection between alternatives or situation and decision-maker | Halo effect | 8. When choosing a follow-up path, is it possible that the decision was made (or influenced) on the basis of similar decisions made by other departments or other clinical contexts? | A
History or past events | Sunk-cost fallacy | 9. When choosing a follow-up path, how does the patient’s medical history influence your decision? | A
Excessively optimistic | Overconfidence | 10. When choosing a follow-up path, do you usually consider extremely positive implication scenarios regarding the patient’s specific contingencies? | A
Excessively pessimistic | Disaster neglect | 11. When choosing a follow-up path, do you always consider a realistic scenario regarding the patient’s specific contingencies? | A
Excessively conservative | Loss aversion | 12. When choosing a follow-up path, do you usually consider extremely negative implication scenarios regarding the patient’s specific contingencies? | A
* Control questions (C.Q.):
A. What do you mean by the follow-up process?
B. What else?
C. What happens if something happened during the surgery and a doctor thinks he wants to see that patient again? Is this kind of decision made by the group?
D. Do you decide only on your own, without discussing with your team?
E. What do you mean by a positive or negative scenario?
F. What other kinds of interest can bring you to define different timespans for follow-ups?
Source: Authors’ elaboration inspired by Kahneman et al. (2011), Cristofaro (2017a), Stylianou (2008). The asterisk (*) in the C.Q. column header refers to the list of control questions above.
Table 2. Results derived from the application of Kahneman’s checklist.
Questions | Bias/Code | Content Example | Respondent | Bias Presence
1. In your choice of the patient’s follow-up path, do you think there is any reason to think that the personal motivations of the clinical operator (orthopaedic doctor) influenced the prescription (number and frequency of checks)? | Self-interest | “Most likely yes” | SD | NO
 | | “Systematically not, however, there is a percentage of variability linked to the patient” | SP | NO
 | | “No, because the prosthetic follow-up is completely standardised” | SH | NO
2. Is it possible that the choice of a specific follow-up path has been made on the basis of consolidated practice, rather than on the specific analysis from the context in reference to the specific contingencies of the patient? | Affect heuristic | “Probably yes” | SD | YES
 | | “Yes, it is possible” | SP | YES
 | | “Customisable follow-ups are rare. We do what the scientific literature reported” | SH | YES
3. Are decisions regarding follow-up made at the operating team/ward level or at the individual doctor’s (orthopaedic surgeon’s) level? | Groupthink | “Decisions are made by the operating team” | SD | YES
 | | “In my working reality, the individual doctor decides because often the surgeons make follow-up in their private clinics” | SP | NO
 | | “Decisions are made due to standardisation of wards” | SH | YES
3a. If conflicting opinions emerge, are they sufficiently examined? How are any “conflicts” resolved? | | “Conflicts are resolved by the team leader and/or the oldest one” | SD | /
 | | Not available | SP | /
 | | “Yes. Conflicts are resolved by ward director” | SH | /
4. In your opinion, how much is the choice of a specific follow-up path influenced by the experience of the clinical operator regarding similar past situations? | Saliency | “A little, because it is probably connected also with what you want to evaluate and with what literature reported” | SD | YES
 | | “Above all. The choice is influenced almost exclusively by similar past situations” | SP | YES
 | | “It influences because, in addition to scientific bases, orthopaedic surgery also relies heavily on personal experience” | SH | YES
5. When choosing a follow-up path, are different credible and reliable alternatives considered? | Confirmation | “Yes, they are” | SD | NO
 | | “Not much. Alternatives exist but we don’t consider them enough” | SP | NO
 | | “There are not many alternatives to standard follow-up” | SH | YES
6. What clinical information is used to make decisions about the patient follow-up process? If you could have other information (non-clinical), which would you need? | Availability | “The ones reported by the literature” | SD | YES
 | | “Patient’s pain, functional skills, lifestyle habits, and job” | SP | NO
 | | “Mainly comorbidities and type of surgery are used to make a decision about follow-up. If I could have other information, I would like to know the patient’s lifestyle habits and where he/she lives” | SH | YES
7. Which source provides you with the data referred to in the previous question? | Anchoring | “Yes. Scientific literature gives us many things” | SH | YES
 | | “The several international scores that you want to apply” | SD | YES
 | | “Medical examination and patient itself” | SP | YES
8. When choosing a follow-up path, is it possible that the decision was made (or influenced) on the basis of similar decisions made by other departments or other clinical contexts? | Halo effect | “Certainly yes” | SD | YES
 | | “Yes. Opinion leaders and their modus operandi matter a lot” | SP | YES
 | | “A deeply patient’s anamnesis” | SH | YES
9. When choosing a follow-up path, how does the patient’s medical history influence your decision? | Sunk-cost fallacy | “As far as the case of the clinical sphere is concerned, probably the embedding parameters make the difference” | SD | NO
 | | “It could make follow-up more frequent” | SP | NO
 | | “It conditions a lot, for example in an epileptic or Parkinsonian patient it is known that a more frequent follow-up is necessary” | SH | NO
10. When choosing a follow-up path, do you usually consider extremely positive implication scenarios regarding the patient’s specific contingencies? | Overconfidence | “I determine the extent of the follow-up both for what I have read in literature and because I think that is the right period to detect the progress of that situation” | SD | YES
 | | “I define the period according to the time for a patient to slowly start to have a normal life, without aids, without particular foreclosures” | SP | YES
 | | “When choosing a follow-up path, I usually consider always positive scenarios” | SH | NO
11. When choosing a follow-up path, do you always consider a realistic scenario regarding the patient’s specific contingencies? | Disaster neglect | “Actually, the period is defined because at that point I should have data that tells me if that path is a positive or negative path” | SD | NO
 | | “That period has logic behind it. It is the healing time of the tissues from the intervention. I can imagine them repaired in a month and for this, I set that date for the medical examination” | SP | NO
 | | “The choice is always ideal as it should be” | SH | NO
12. When choosing a follow-up path, do you usually consider extremely negative implication scenarios regarding the patient’s specific contingencies? | Loss aversion | “Actually, the period is defined because at that point I should have data that tells me if that path is a positive or negative path” | SD | NO
 | | “The guideline is the same logic that I said before” | SP | NO
 | | “When choosing a follow-up path, I usually consider always positive scenarios because complications are infrequent in this kind of surgery” | SH | NO
Source: Authors’ elaboration inspired by Cristofaro (2017a).
Table 3. Study findings confirmed by the medical literature.
Biases | Description * | Medical Literature | Errors Recognized By
Affect heuristic | The decision-maker tends to minimise the risks and costs and/or exaggerate the benefits of something he/she likes | (Makhinson 2012) | SD, SP, SH
Anchoring | The decision-maker makes the decision taking into consideration some initial reference data without adjusting its estimates according to the new information gained | (Nagaraj et al. 2018; Augestad et al. 2016) | SD, SP, SH
Halo effect | The decision-maker sees a story as more emotionally consistent than it really is | (Austin and Halvorson 2019; Vuong et al. 2017; Utter et al. 2006) | SD, SP, SH
Saliency | The decision-maker tends to approve a proposal that is similar to a successful one in the past | (Makhinson 2012; Vickrey et al. 2010) ** | SD, SP, SH
Groupthink | The inclination of groups to converge on a decision because it reduces the conflict and can gain large support | (Kaba et al. 2016; Mailoo 2015) | SD, SH
Availability | The decision-maker makes the decision with the available data without making an effort to find other useful information that is uncovered | (Mamede et al. 2020; Waddington and Morley 2000) | SD, SH
Overconfidence | The decision-maker with positive track records is prone to excessive optimism in forecasts | (Cohen and Burgin 2016; Vickrey et al. 2010) | SD, SP
Confirmation | The decision-maker tends to elaborate only one alternative for which he/she tries to find confirming data | (Balsamo et al. 2018; Elston 2020) | SH
Source: Authors’ elaboration inspired by Cristofaro (2017a); * All descriptions are taken from Cristofaro (2017a); ** Authors refer to Saliency as “Representativeness”.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Schettini, I.; Palozzi, G.; Chirico, A. Enhancing Healthcare Decision-Making Process: Findings from Orthopaedic Field. Adm. Sci. 2020, 10, 94.
