Article

Biases in Stakeholder Elicitation as a Precursor to the Systems Architecting Process

1 Department of Psychology, The University of Alabama in Huntsville, Huntsville, AL 35899, USA
2 Department of Industrial & Systems Engineering and Engineering Management, The University of Alabama in Huntsville, Huntsville, AL 35899, USA
3 Naval Postgraduate School, Monterey, CA 93943, USA
* Author to whom correspondence should be addressed.
Systems 2023, 11(10), 499; https://doi.org/10.3390/systems11100499
Submission received: 31 July 2023 / Revised: 13 September 2023 / Accepted: 27 September 2023 / Published: 28 September 2023
(This article belongs to the Special Issue Design Methods for Human–Machine Teams)

Abstract

Many systems engineering projects begin with the involvement of stakeholders to aid in decision-making processes. As an application of systems engineering, systems architecture involves the documentation of stakeholder needs gathered via elicitation and the transformation of these needs into requirements for a system. Within human–machine teaming, systems architecture allows for the creation of a system with desired characteristics elicited from stakeholders involved with the project or system. Though stakeholders can be excellent sources of expert opinion, vested interests in a project may bias stakeholders and impact decision-making processes. These biases may influence the design of the system architecture, potentially resulting in a system that is developed with unbalanced and misrepresented stakeholder preferences. This paper presents an activity analysis of the Stakeholder Needs and Requirements Process as described in the Systems Engineering Body of Knowledge (SEBoK) to identify potential biases associated with this elicitation process. As part of the research presented in this paper, a workshop was conducted in which currently practicing systems architects provided feedback regarding perceptions of biases encountered during the elicitation process. The findings of this research will aid systems architects, developers, and users in understanding how biases may impact stakeholder elicitation within the architecting process.

1. Introduction

As the technological capabilities of machines continue to advance, interest in human–machine interaction and teaming has grown within the domain of systems engineering. Human–machine teaming (HMT) allows for a bi-directional interaction between humans and machines, shifting the human role from operator to supervisor [1]. The architecture of these human–machine systems comprises four elements: the human, the machine to be manipulated, the environment, and the work task to be completed [2]. Within adaptive HMT, an architectural framework allows for the organization of concepts and relationships that are necessary to model, explore, and evaluate potential adaptive HMT options [1]. Task allocation criteria for this type of architecture include human strengths and limitations, machine strengths and limitations, contextual factors that favor either machine or human over the other, and heuristics. The following introduction sections describe heuristics within architecture, how these heuristics are related to cognitive bias, and how this cognitive bias may impact stakeholder decision-making. Following this introduction, cognitive biases that may potentially impact stakeholder elicitation as a precursor to systems architecture are discussed in relation to the elicitation process laid out in the Systems Engineering Body of Knowledge (SEBoK) [3]. Identifying these potential biases may allow systems architects to be more deliberate during the engineering process to support desired system characteristics.

1.1. Heuristics

Previous research by Kahneman [4] decomposes human thinking into two processes: System 1 and System 2. System 1 thinking involves fast, automatic, unconscious, and emotional responses, typically made rapidly and without effort. System 2 thinking is a slower and more effortful process, resulting in logical responses to complex problem-solving. Within System 1 thinking, automatic processes filter information for relevance and apply mental shortcuts for decision-making. These mental shortcuts are known as heuristics. Heuristics are qualitative rules of thumb often utilized within systems engineering and systems architecture [3]. Heuristics are primarily utilized due to the accuracy-effort tradeoff, wherein effort is saved through using a heuristic at the cost of accuracy in decision-making or problem-solving [5,6]. The use of these heuristics leaves room for error in the form of cognitive biases. The list of biases discussed within this paper is not exhaustive, as an estimated 180 cognitive biases have been previously identified [7].
The biases covered within this paper are of particular importance to systems architects due to their relevance and potential impact to the stakeholder elicitation process as described in the SEBoK [3]. For the full list of biases discussed in this paper and their associated definitions, see Appendix A.

1.2. Stakeholders and Cognitive Bias

Various levels of stakeholders exist, dependent upon the differing roles and levels of impact that stakeholders may possess. Primary stakeholders have been identified as those who are essential to the survival and well-being of an organization, whereas secondary stakeholders are those that an organization interacts with but are not necessarily essential to the organization’s survival [8,9]. In addition to various levels of stakeholders, various relationships between stakeholders exist. These relationships feature aspects of power, legitimacy, and urgency [10]. Types of stakeholder relationships include the following: stakeholder dominant (organization is dependent upon stakeholder), organization dominant (stakeholder is dependent upon organization), and mutually power-dependent (stakeholder and organization are mutually dependent upon one another) [10]. These relationships and levels of importance regarding stakeholders are influencing factors that may introduce bias into processes involving stakeholder participation. Stakeholders are inherently biased, given a vested interest in a project, and all may attempt to influence design decisions in various ways [11]. The inclusion of stakeholders in the decision-making processes of a project is a major source of both knowledge and complexity, as expert judgment can be useful in a project but may also prove to be a source of error [12,13,14]. This error may be seen in the form of bias, which in turn can impact or influence decisions made within a project. Inappropriate and ill-informed elicitation methods can potentially amplify biases, exacerbating their potential impact [14]. Inadequate elicitation methods may include relying on subjective and unreliable methods for selecting experts [15], asking poorly specified questions [16], ignoring protocols to counteract negative group interactions [17], and applying subjective or biasing aggregation methods [18,19]. While many biases are interrelated and experienced in association with one another (i.e., overconfidence and optimism bias [20,21]), this paper examines biases independently of each other to better understand their occurrences in the systems architecture process.

1.3. Systems Architecture

Within the domain of systems engineering, those involved in the systems architecting process may be tasked with various activities, including developing requirements, selecting appropriate life cycle models, and architecting a system of systems [3]. In the SEBoK, a system architecture is described as a model representation of the fundamental organization of a system in terms of its structural elements and their behavioral interactions [3]. System architecture is decomposed into three categories: functional architecture, logical architecture, and physical architecture [3]. Functional architecture models decompose system capabilities into automation functions and interactions to perform the mission or business objective. Logical architecture models view the functionality and behavior of a future system as it should operate during service, consisting of technical concepts and principles that support logical system operation. Logical architecture models may comprise functional architecture models, behavioral architecture models, and temporal architecture models. Physical architecture models describe the physical elements that provide the solution for a product, service, or enterprise. This architecture is built from systems, system elements, and all necessary physical interfaces between elements and external elements. Systems architecture has been utilized within HMT in various applications, such as military systems [22] and adaptive systems for dynamic environments [1].
As systems architecture is an application of systems engineering, elicitation processes laid out within the body of knowledge can be utilized during the beginning stages of a systems architecture project to aid in requirements development. This elicitation process consists of the identification and selection of stakeholders to aid in decision-making processes, the collection and documentation of stakeholder needs, and the transformation of these needs into verifiable system requirements. This paper examines the stakeholder elicitation process described in the SEBoK to understand where cognitive bias may impact stakeholder decision-making.

1.4. Research Contribution

The first research contribution of this paper involves the activity analysis of the SEBoK stakeholder needs and requirements elicitation process. Though bias is mentioned in a general sense in the SEBoK, it does not discuss which specific biases may be impactful during the elicitation of stakeholder needs and requirements, or where in the process they may occur. This paper aims to fill this research gap by identifying biases that prior research has associated with specific activities included in the systems engineering stakeholder elicitation process.
Research Question 1: What biases are commonly associated with activities featured in the Stakeholder Needs and Requirements elicitation process as laid out in the SEBoK?
The second research contribution of this paper includes feedback from systems architects in which biases personally encountered during the elicitation process were identified. This inclusion of perspectives given by practicing systems architects can help inform readers of biases encountered in the applied sense in addition to those identified in the literature as potentially impactful.
Research Question 2: What biases are identified by practicing systems architects as being present during the Stakeholder Needs and Requirements elicitation process?

2. Materials and Methods

To understand potential biases associated with stakeholder elicitation, an activity analysis of the Stakeholder Needs and Requirements Process as laid out in the SEBoK [3] was conducted. Each step in the elicitation process was decomposed into the activities involved: the selection of stakeholders, the identification of stakeholder needs and requirements, the collection of stakeholder needs and requirements (including the activities of stakeholder participation, stakeholder response, and group environment discussions), capturing needs and defining requirements, and classifying the requirements. Once this was complete, a literature review was conducted to examine what biases had previously been linked to the activities included in each step of the elicitation process.
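To make the decomposition concrete, the steps and their constituent activities can be recorded as a simple mapping to which literature-derived bias links are then attached. The sketch below (Python) is illustrative only: the step and activity names follow the decomposition above, and the bias fragment shown reflects the group-environment links discussed later in Section 3.3.3, not the full result set.

```python
# Decomposition of the SEBoK Stakeholder Needs and Requirements Process,
# following the activity analysis described above (names from the text).
ELICITATION_PROCESS = {
    "Identifying Stakeholders": [
        "stakeholder selection",
    ],
    "Identifying Stakeholder Needs and Requirements": [
        "needs identification",
        "requirements formalization",
    ],
    "Collecting Stakeholder Needs and Requirements": [
        "stakeholder participation",
        "stakeholder response",
        "group environment discussions",
    ],
    "Capturing Needs and Defining Requirements": [
        "needs capture",
        "requirements definition",
    ],
    "Classification of Stakeholder Requirements": [
        "requirements classification",
    ],
}

# Literature-derived bias links are then attached per activity; an
# illustrative fragment (see Section 3.3.3 for the group-environment case):
BIASES_BY_ACTIVITY = {
    "group environment discussions": [
        "groupthink", "social loafing",
        "group polarization", "escalation of commitment",
    ],
}
```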

Systems Architecture Workshop

In order to obtain feedback from those engaged in applications of systems architecture, a workshop was conducted and perceptions regarding biases in the systems architecture process were collected. Participants (N = 8) were systems engineers involved in systems architecture, predominantly working in the HMT domain. Participants were all male and held at least a bachelor’s degree. Educational backgrounds of participants included software and systems engineering, engineering management, and computer science. Years of experience in systems architecture ranged from 2 years to 27 years.
To begin the workshop, participants were given the list of 31 biases and definitions featured in Appendix A to review in order to familiarize themselves with common biases associated in the literature with the activities featured in the elicitation of stakeholder needs and requirements. Though this process could have inherently biased participants, they were not limited to the biases featured in Appendix A during the discussion and were free to provide responses outside of these biases and definitions. The biases on the list were intentionally not grouped according to the SEBoK Stakeholder Needs and Requirements process.
Once the participants were given time to review the biases, a general group discussion was facilitated by two authors of this paper. Each step of the elicitation process was presented to the group, and participants were given 10 min to discuss their personal experiences amongst themselves. After this time was up, the facilitators asked each participant to describe biases they had encountered within the specific process step being discussed. Notes were taken on which biases were discussed and how frequently participants mentioned them. This allowed the research team not only to link biases with steps of the SEBoK elicitation process but also to identify which biases were more commonly discussed.
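The note-taking procedure above amounts to a per-step tally in which each bias can be counted at most once per participant. A minimal sketch of that bookkeeping, assuming (hypothetically) that the facilitator notes are transcribed as (participant, step, bias) records:

```python
from collections import Counter

N_PARTICIPANTS = 8  # upper bound on mentions of a bias per elicitation step

# Hypothetical transcription of facilitator notes: (participant, step, bias).
notes = [
    (1, "Stakeholder Identification and Selection", "status quo bias"),
    (2, "Stakeholder Identification and Selection", "apathy bias"),
    (3, "Stakeholder Identification and Selection", "status quo bias"),
]

# Count each (step, bias) pair at most once per participant.
unique_mentions = {(p, step, bias) for (p, step, bias) in notes}

tallies = {}  # step -> Counter of bias mentions
for _, step, bias in unique_mentions:
    tallies.setdefault(step, Counter())[bias] += 1

# Report counts and percentages of the maximum possible mentions.
for step, counts in tallies.items():
    for bias, n in counts.most_common():
        pct = 100 * n / N_PARTICIPANTS
        print(f"{step}: {bias} = {n}/{N_PARTICIPANTS} ({pct:.0f}%)")
```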

3. Results and Discussion

3.1. General Stakeholder Biases

Several biases were found to be associated with stakeholders in a general sense. Four of these general stakeholder biases are relevant to primary stakeholders, while two are relevant to secondary stakeholders. Seven biases identified were associated with both primary and secondary stakeholders while five biases were identified but not necessarily specified to a level of stakeholder (see Figure 1).
Primary stakeholder biases include optimism bias, planning fallacy, confirmation bias, and loss aversion. Optimism bias [23] may lead to overestimating positive events within a project and can potentially influence stakeholder perspectives. Planning fallacy [23] may result in an unrealistic estimation of resources and benefits associated with systems architectures resulting from the stakeholder elicitation process. Confirmation bias [23] could potentially impact a systems architecture if those involved only consider information that supports their own beliefs or ideas, resulting in an unbalanced representation of stakeholder preferences. Loss aversion [23] can occur if the stakeholders involved in the system architecture process prioritize avoiding losses rather than acquiring gains.
Secondary stakeholder biases include representativeness and groupthink. Representativeness [4,23,24,25] can occur when a stakeholder mistakenly attributes one characteristic of a project to imply another. Groupthink [23,26] may lead to an unbalanced representation of stakeholder preferences in favor of the majority.
Biases identified to be associated with both primary and secondary stakeholders include overconfidence, status quo bias, anchoring, ostrich effect, framing effect, hindsight bias, and strategic misrepresentation. Overconfidence [23] in the architecting process may lead to overly optimistic assessments of a project by stakeholders. Status quo bias [23] may be associated with aversion to change within stakeholders, as any change from the baseline or current state of affairs is considered a loss. Status quo bias has also been associated with human–machine systems and trust, with previous research positing a relationship between the number of changes recommended by a system and user trust [27]. Anchoring [23,24,28] may result in stakeholders’ inability to adapt as they rely too heavily on one piece of information in a project. Anchoring has also been associated with human preferences in systems [27,29]. The ostrich effect [23] may lead to stakeholders avoiding risky or difficult situations or failed projects at the cost of learning. The framing effect [23] is associated with using an approach or description that is too narrow and inappropriate for the given situation or issue. Stakeholders may frame certain aspects of a project in a positive or negative light to influence perspectives on that specific aspect. In HMT and Artificial Intelligence (AI), humans placed in human–AI teams may have their trust and decisions impacted by the framing effect [27,30,31]. Hindsight bias [28] may impact stakeholders as they tend to see past events as predictable at the time they occurred. Stakeholders may be prone to strategic misrepresentation [23,28,32] as they tend to deliberately and systematically distort or misstate information for strategic purposes. This may be especially relevant if differing stakeholders have conflicting objectives and motivations.
Unclassified stakeholder biases include professional bias, previous knowledge bias, previous experience bias, fundamental attribution error, and “not invented here” syndrome. Professional bias [33] may exist within stakeholders if their area of expertise impacts judgments or predictions about a systems architecture. Previous knowledge bias and previous experience bias [34] occur when stakeholders utilize information and lessons learned from prior knowledge and experience to make decisions within the architecture process. Fundamental attribution error [35] occurs when stakeholders attribute an error or accident to human error rather than to the situation or environment. In decision-making, “not invented here” syndrome [36] is an attitude-based bias against knowledge derived from sources external to the perspective of the individual making the decision.
Though not directly related to the stakeholder needs and requirements elicitation process laid out in the SEBoK, these biases inform Research Question 1 through identifying the general biases that may be associated with stakeholders in and of themselves. The following section will further answer this research question through identifying biases associated with steps in the elicitation process.

3.2. Stakeholder Needs and Requirements

In the SEBoK [3], stakeholder needs and requirements are described as representing the views of those at the business or enterprise operations level. These requirements are of importance within systems engineering as they form the basis of requirements activities, system validation, and stakeholder acceptance [3]. These requirements also serve as a reference for integration and verification activities as well as a means of communication between technical staff, management, finance, and stakeholders [3]. The purpose of the Stakeholder Needs and Requirements activities, as stated in the SEBoK, is to elicit needs related to a new or changed mission for an enterprise and to transform those needs into requirements that can later be verified. Concepts within the elicitation process can be viewed as individual steps within a multistep process that can be further broken down into major activities and/or tasks to be performed during the process. Each step can be seen in Figure 2 and is described in further detail below.

3.2.1. Identifying Stakeholders

The identification of stakeholders involves identifying and subsequently selecting stakeholders or classes of stakeholders to generate a set of needs and requirements for a system of interest. These stakeholders must be taken into consideration across all stages of the appropriate life cycle model, such as engineering, development, transfer for production or use, logistics and maintenance, operation, and disposal [3]. Examples of stakeholders include potential users, research and development departments, suppliers, operators, and trainers, among various others with a vested interest in the system(s).

3.2.2. Identifying Stakeholder Needs and Requirements

The identification of stakeholder needs and requirements involves the elicitation of stakeholder needs in the form of the Concept of Operations (ConOps) or Strategic Business Plan (SBP), followed by the transformation of needs into a formal set of stakeholder requirements [3]. Identification of needs includes the impacted stakeholders, their needs, their prioritizations, and any changes over the life cycle [37]. Requirements are captured as models or textual requirements and documented within the Stakeholder Requirement Specification (StRS) or the Stakeholder Requirement Document (StRD). The purpose of requirement specification is to identify demands placed on the system based on the need under consideration, including functional requirements (how the system should work), physical requirements (how the system should be built), operational requirements (how the system should run), and economic requirements (the costs of development and operation) [37]. Examples of tools that may be utilized during the identification of needs and requirements include diagrams, models, simulations, and trade studies.
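As one illustration (not a structure prescribed by the SEBoK), the four demand categories above could be captured as a typed record within an StRS; the field names here are assumptions chosen for the sketch:

```python
from dataclasses import dataclass
from enum import Enum

class RequirementType(Enum):
    FUNCTIONAL = "how the system should work"
    PHYSICAL = "how the system should be built"
    OPERATIONAL = "how the system should run"
    ECONOMIC = "the costs of development and operation"

@dataclass
class StakeholderRequirement:
    """One textual entry in a Stakeholder Requirement Specification (StRS)."""
    identifier: str
    statement: str
    req_type: RequirementType
    source_stakeholder: str  # traceability back to the eliciting stakeholder

# Hypothetical example entry:
req = StakeholderRequirement(
    identifier="StRS-001",
    statement="The operator shall be able to override automated task allocation.",
    req_type=RequirementType.OPERATIONAL,
    source_stakeholder="operator representative",
)
print(req.req_type.name, "-", req.statement)
```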

3.2.3. Collecting Stakeholder Needs and Requirements

The collection of stakeholder needs and requirements may be approached with multiple, various techniques depending on the system or project at hand. These techniques may include brainstorming workshops, interviews and questionnaires, documentation review, prototyping, modeling, and use case diagrams among others listed within the SEBoK [3]. Many of these techniques involve group-based situations in which collection occurs during an open discussion amongst stakeholders.

3.2.4. Capturing Needs and Defining Requirements

Once stakeholder needs and requirements have been identified and collected, the needs must then be captured and the requirements defined. Needs are broken down into six classifications: real needs (conditioned by the context in which people live), perceived needs (based on individual awareness that improvements can be made), expressed needs (which originate from perceived needs and are prioritized), retained needs (selected from expressed needs), specified needs (stakeholder needs translated to represent the views of the supplier; translated into system requirements), and realized needs (the product, service, or enterprise as realized). This process involves prioritization and weighting as well as the selection of certain needs to be transformed into requirements.
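Read as a refinement pipeline, the six classifications are ordered from real through realized, with prioritization and selection between stages. A minimal sketch under that reading, where the `advance` helper is a hypothetical illustration rather than a defined SEBoK operation:

```python
from enum import IntEnum

class NeedStage(IntEnum):
    """The six need classifications, ordered as a refinement pipeline."""
    REAL = 1       # conditioned by the context in which people live
    PERCEIVED = 2  # individual awareness that improvements can be made
    EXPRESSED = 3  # perceived needs that are voiced and prioritized
    RETAINED = 4   # selected from the expressed needs
    SPECIFIED = 5  # translated to supplier views / system requirements
    REALIZED = 6   # the product, service, or enterprise as realized

def advance(stage: NeedStage) -> NeedStage:
    """Move a need one stage forward; realized needs are terminal."""
    return NeedStage(min(stage + 1, NeedStage.REALIZED))

assert advance(NeedStage.EXPRESSED) is NeedStage.RETAINED
```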

3.2.5. Classification of Stakeholder Requirements

After all stakeholder requirements have been defined, the classification of requirements can occur to group similar concepts into a useful set of elements. Requirement classification examples include operational, human factors, maintenance, production, validation, and various others.

3.3. Biases in Stakeholder Elicitation

In order to identify potential biases associated with stakeholder needs and requirements elicitation, an activity analysis was conducted, and types of activities within the elicitation process were examined. This section will discuss activities within each process step identified in “Stakeholder Needs and Requirements” and potential biases associated with these activities.

3.3.1. Identifying Stakeholders

The first activity within the stakeholder elicitation process, as laid out in the SEBoK, is the identification of stakeholders relevant to the project. Various stakeholders are relevant to the systems architecture process, including but not limited to stakeholders as designers, users, and industry stakeholders in the form of original equipment manufacturers (OEMs). As these stakeholders may be affiliated with differing organizations, potential differences in the goals and motivations of each stakeholder may exist. For example, some may prioritize safety, some may prioritize profit maximization, and some may prioritize innovation. As these different motivations become apparent, stakeholders may have conflicting goals during the design and development of an architecture.
As the identification process entails choosing stakeholders to be included in the elicitation of needs and requirements, biases may exist regarding the selection strategies utilized to obtain these stakeholders (see Figure 3). Four stakeholder selection processes have been previously identified in the literature: purposive selection, snowballing, open call, and systematic selection [38]. During purposive selection, the stakeholders targeted are usually well-known and familiar, which may lead to an identification bias [38] resulting in an unbalanced group of stakeholders. The snowballing process involves asking a selected stakeholder to nominate another stakeholder, who then nominates another in turn, until the appropriate number of stakeholders has been selected. Similar to purposive selection, snowballing may result in identification bias but may also repeat the same bias across multiple stakeholders, leading to network bias [38]. An open call approach for stakeholder engagement involves publicly advertising a need for stakeholder participation, allowing for a wide diversity of stakeholders but risking the exclusion of stakeholders who do not have access to the advertisement or were not aware that the engagement was occurring, resulting in what Haddaway and colleagues refer to as awareness bias [38]. Systematic approaches to stakeholder selection are more concrete in methodology and often repeatable, but they may yield too large a volume of stakeholders to engage and risk excluding stakeholders with limited online presence, potentially leading to a self-promotion bias [38].
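Of the four strategies, snowballing is the most procedural, and its network-bias risk is easiest to see in code. The sketch below uses hypothetical function and data names and is not drawn from [38]:

```python
def snowball_select(seed, nominate, target_n):
    """Select stakeholders by repeated nomination, starting from a seed.

    `nominate(stakeholder)` returns that stakeholder's suggested contacts.
    Because every selection comes from an earlier selection's network, the
    seed's biases can be repeated across the whole group (network bias).
    """
    selected = [seed]
    frontier = [seed]
    while frontier and len(selected) < target_n:
        current = frontier.pop(0)
        for candidate in nominate(current):
            if candidate not in selected and len(selected) < target_n:
                selected.append(candidate)
                frontier.append(candidate)
    return selected

# Hypothetical nomination network for illustration.
network = {"A": ["B", "C"], "B": ["D"], "C": ["A"], "D": []}
print(snowball_select("A", lambda s: network.get(s, []), target_n=4))
# -> ['A', 'B', 'C', 'D']
```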

3.3.2. Identifying Stakeholder Needs and Requirements

The next activity within the stakeholder elicitation process as laid out in the SEBoK involves identifying needs expressed by stakeholders and creating a formal set of requirements from these needs. Ideally, this process would be guided by a well-defined and documented requirements analysis process, ensuring repeatability of the requirements identification process. Several biases may exist in the identification of the needs and requirements process, including previous knowledge bias, previous experience bias, and anchoring (see Figure 4).
Previous knowledge bias and previous experience bias [34] can occur when systems architects use any previous knowledge or experience gained from other architecture projects to drive decisions made during the identification of needs and requirements. Systems architects with prior knowledge may be at an advantage during this activity. Systems architects may also experience a bias known as anchoring [23,24,28], where decision-making is anchored to an initial starting point. Architects who are anchored to previous knowledge or experience may be more likely to miss any new and relevant information or sources that may impact the subsequent architecture of the system. This may occur due to architects framing their decision-making in the context of previous projects.

3.3.3. Collecting Stakeholder Needs and Requirements

The next activity within the SEBoK stakeholder elicitation process is to collect stakeholder needs and requirements. Biases relevant to the collection of stakeholder needs and requirements can be split into three groups: biases in stakeholder responses, biases in group environments, and biases in stakeholder participation. Biases in stakeholder responses include those associated with the actual responses themselves as opposed to participation behaviors or external group influences. Biases in group environments are those in which the group-based environment of the process may influence or impact the behaviors of group members. Biases in stakeholder participation are biases that impact the participation behaviors of stakeholders but are not necessarily attributable to a group-based setting.
Biases in stakeholder responses include any biases that may impact how stakeholders represent their concerns, such as popularity bias, anchoring, availability, range-frequency, and overconfidence (see Figure 5). Popularity bias [39] may result in stakeholders who are viewed in a more favorable light being given higher utility values, thus having their concerns prioritized over those of less popular stakeholders. This may lead to an unbalanced representation of stakeholder concerns, subsequently impacting the systems architecture under construction. Anchoring [23,24,28] may result in stakeholders’ inability to be flexible in their concerns. Availability [4,28,40] may lead to stakeholders only identifying concerns that are more easily retrievable from memory. Range-frequency [40] may lead stakeholders to form an unbalanced estimation of probability regarding concerns. Overconfidence [28,32,40] may result in stakeholders having an overly optimistic view regarding concerns.
Biases in group environments include any biases that may exist during the process of group decision-making. In the elicitation process, stakeholder workshops and brainstorming sessions are often utilized to generate stakeholder needs and requirements. These workshops use a group environment to identify concerns, therefore, group decision-making biases are particularly relevant. Biases in group environments include groupthink, social loafing, group polarization, and escalation of commitment (see Figure 6). Groupthink [17,26] may result in an unbalanced representation of stakeholder concerns as the desire for unanimity may overcome the desire to represent individual unique concerns. Social loafing [26] may lead to biased representations of stakeholder concerns as those involved with the group may make less effort in communicating concerns than they would if asked to do so on an individual basis. Group polarization [26] may lead to more extreme representations of stakeholder concerns than if concerns were communicated on an individual basis. Escalation of commitment [26] may result in increased investment in some concerns over others, leading to an unbalanced representation of stakeholder concerns.
Biases in stakeholder participation include any biases that may impact the way the stakeholder interacts or participates with other stakeholders. These biases include awareness bias, intimidation bias, faith bias, and apathy bias (see Figure 7). Awareness bias [38] may lead to an unbalanced representation of stakeholders participating in the elicitation process. Intimidation bias [38] may result in an inaccurate representation of concerns as some stakeholders may be less likely to communicate concerns if they feel their views will not be taken into consideration by the majority. Intimidation bias may be particularly relevant in areas of the elicitation process in which stakeholders of differing authority and power dynamics interact. Faith bias [38] may lead to an unbalanced representation of concerns if stakeholders feel their perspectives will not be taken into account due to the failure of others. Apathy bias [38] may lead to a lack of stakeholder involvement if the stakeholders believe other stakeholders will perform their role for them.

3.3.4. Capturing Needs and Defining Requirements

The next activity within the stakeholder elicitation process as laid out in the SEBoK involves the capturing of stakeholders’ needs and using these needs to define requirements for a system. Biases in the capturing of needs and defining of requirements include any biases that may impact how an architect captures stakeholder needs as well as how they then transform these needs into requirements definitions. These biases may include previous experience bias and previous knowledge bias [34] (see Figure 8). Architects with previous knowledge or experience regarding the documentation of needs and definition of requirements may complete these activities through the lens of how they previously captured and defined requirements. Architects with more experience may potentially be at an advantage when attempting to complete this step in a timely manner, though they may be doing so with a narrow viewpoint if only considering previous experiences.

3.3.5. Classification of Stakeholder Requirements

The final activity within the stakeholder elicitation process as laid out in the SEBoK involves the classification or categorization of stakeholder requirements by similarity. Various classifications of stakeholder requirements are exemplified in the body of knowledge, including service or functional requirements, operational requirements, logistical requirements, and maintenance requirements, among a handful of other classification types. Constraints encountered by stakeholders during the classification process may include enterprise, business, project, design, realization, and process constraints [3]. Several biases associated with categorization are relevant to the classification of stakeholder requirements as both processes include grouping items dependent upon common themes or constructs. These biases include previous knowledge bias, previous experience bias, and anchoring (see Figure 9).
Previous knowledge bias and previous experience bias [34] can occur when systems architects utilize any prior knowledge or experience to inform decisions regarding the classification of requirements. When classifying systems architecture requirements, anchoring [23,24,28] can occur if a requirement would be more appropriately classified in a different category but has been categorized inappropriately due to initial judgments.
Results from this review answer Research Question 1 through identifying the various biases that may be present when attempting to elicit stakeholder needs and requirements as a precursor to the systems architecture process.

3.4. Systems Architecture Workshop

While biases identified through a literature review may differ from those experienced during the application of systems architecture, it was important to gain a better understanding from the applied field. Once common biases associated with activities within the SEBoK elicitation process were identified, a workshop was held in order to evaluate perceptions of biases given by those with applied experience in systems architecture. The following sections identify biases discussed by architects for each of the steps within the elicitation process. For the full list of biases identified by systems architects per elicitation step, see Figure 10.

3.4.1. Stakeholder Identification and Selection

Architects suggested three biases associated with the stakeholder identification and selection process that had not previously been identified in the literature as associated with stakeholder selection: status quo bias, apathy bias, and availability bias. Although self-promotion bias, which is associated with identification and selection, was identified in the literature, workshop participants did not mention this bias.

3.4.2. Identifying Stakeholder Needs and Requirements

Various biases were also identified by workshop participants that were not previously linked to the identification of needs and requirements, including professional bias, framing effect, groupthink, popularity bias, and confirmation bias. It can be argued that both previous experience bias and previous knowledge bias are forms of confirmation bias, as these background factors may influence architects to identify needs and requirements that confirm their individual beliefs about what should be included in a system. If an individual’s previous experiences or knowledge shape the current beliefs of that individual, which in turn impact decision-making, this could be interpreted as confirmation bias. However, it could also be argued that previous experience and knowledge may not align with an individual’s currently held beliefs, which are the foundation of confirmation bias [41]. For this paper, the concepts of previous experience, previous knowledge, and confirmation bias were examined separately. Interestingly, anchoring bias was not identified by participants as being linked to identifying stakeholder needs and requirements.

3.4.3. Collecting Stakeholder Needs and Requirements: Stakeholder Response Biases

Almost none of the biases identified by systems architects aligned with biases found to be associated with stakeholder responses in the literature. Architects identified hindsight bias, faith bias, the framing effect, groupthink, and apathy bias as biases encountered when eliciting stakeholder responses in a workshop-type setting. Popularity bias was identified both in research and by workshop participants as impactful to stakeholder responses.

3.4.4. Collecting Stakeholder Needs and Requirements: Group Environment Biases

Two biases were identified by systems architects that were previously identified in the literature to be associated with group environments: groupthink and social loafing. Group polarization and escalation of commitment were not mentioned during the workshop, though this lack of inclusion does not indicate that either of these biases is absent from group environments. Self-promotion bias, intimidation bias, and representativeness were mentioned by workshop participants as biases encountered in group environments.

3.4.5. Collecting Stakeholder Needs and Requirements: Stakeholder Participation Biases

Two biases were identified by systems architects that were previously identified in the literature to be associated with stakeholder participation: intimidation bias and apathy bias. Awareness bias and faith bias were not mentioned during the workshop, though this lack of inclusion does not indicate that these biases are absent from stakeholder participation. Workshop participants mentioned groupthink, previous knowledge bias, and previous experience bias as biases encountered in stakeholder participation.

3.4.6. Capturing Needs and Defining Requirements

Feedback from the systems architecture workshop suggested that status quo bias was the most commonly encountered bias when capturing needs and defining requirements. This suggests that architects may capture and define requirements in a fashion similar to how it has been done previously, so as not to disrupt established processes applied during this step, regardless of whether that is truly the best way to capture and define requirements. Five biases were identified by systems architects that were not previously identified in the literature to be associated with capturing needs and defining requirements: planning fallacy, ostrich effect, apathy bias, professional bias, and status quo bias.

3.4.7. Classification of Stakeholder Requirements

Two biases were identified by systems architects that were previously identified in the literature to be associated with classification: the ostrich effect and planning fallacy. Anchoring was not mentioned during the workshop, though this lack of inclusion does not indicate that this bias is absent from classification activities.
The frequency of biases mentioned per elicitation activity was generated to gain a more holistic view of architects’ perceptions of the biases encountered (see Figure 10). Each time a bias was identified by an architect, the occurrence was counted, and the number of times each bias was discussed was totaled per elicitation activity. Percentages were generated to understand which biases were most commonly discussed by architects during the workshop. Each bias could be mentioned up to eight times per elicitation step, as that was the total number of participants in the workshop.
The highest number of biases encountered fell within the stakeholder selection activity. Extra focus and attention should therefore be given to the identification and selection of stakeholders involved in a project in order to lessen the potential impact of biases that may exist. This research may aid systems architects in being aware of potential biases in stakeholder identification and selection, such as network bias or awareness bias, when choosing stakeholders for a project. Rather than being mindful of bias in general, architects can identify and avoid specific biases identified in this paper during each of the various steps associated with the elicitation of stakeholder needs and requirements. As this step is foundational in generating the stakeholder needs to be transformed into requirements, involving inappropriate stakeholders could create issues: previous research has found that an insufficient number of individuals involved in design review sessions is one of the major flaws in conventional design review approaches [11,42]. Engaging appropriate stakeholders in the early stages of a project increases the accuracy of initial and subsequent estimates due to larger amounts of data being available at an earlier point in time [43].
Previous experience bias, apathy bias, and previous knowledge bias were the most commonly encountered biases identified by systems architects across the elicitation process. Architects considered experienced and knowledgeable in the field may be highly valuable to a project, as they bring in a wealth of knowledge and expertise, though this expertise may narrow the field of view and leave the architect more susceptible to biased decision-making. It is important for architects to take these biases into consideration early, so as to either utilize bias to the project’s advantage or mitigate bias to lessen its impact on the project.
Results from this workshop inform Research Question 2 through identifying biases encountered by current systems architects as well as where in the elicitation process these biases may emerge. Though the biases identified in previous research did not differ from the responses collected during the workshop, a difference was observed in where in the elicitation process architects located biases compared to previous research. Implications of this lack of differentiation between the biases identified by architects and those found in the literature are discussed in the limitations section of this paper.

3.5. Application of Results

The findings of this paper are unique in their attempt to link biases commonly found in the literature with activities featured in the SEBoK stakeholder elicitation process. Though these biases are pre-established and often discussed in the literature, no previous research has attempted to create a list of biases that may be encountered when involved in systems architecting processes. Beyond linking biases with systems engineering elicitation activities, this research provides feedback from an application perspective on biases actually encountered by those involved in the architecting process. Previous research has shown a disconnect between academic findings and industry applications [44]. The inclusion of perspectives given by current systems architects takes this disconnect into account by differentiating between the biases that may be encountered as found in the literature and the biases that have been encountered as stated by systems architects. Implementation of results from both the literature review and the workshop into architecting activities may aid architects in understanding not only when biases may emerge during the elicitation of needs and requirements, but how these biases may impact architectural decision-making. Though the biases discussed in the workshop were collected from architects within the HMT domain, the elicitation process steps examined are not specific to HMT and can be generalized to systems engineers using the SEBoK as a guideline.
While the SEBoK does mention biases in some capacity, the link between cognitive biases and stakeholder elicitation is lacking. Only four cognitive biases are mentioned by name (i.e., confirmation bias, rankism, complacency, and optimism), and these biases are discussed within the decision-management section of the body of knowledge with no specific ties to stakeholder elicitation. Insights from this paper may be used during the stakeholder needs and requirements elicitation process laid out in the SEBoK to aid in the identification and subsequent mitigation of cognitive biases encountered by systems architects. While this work focuses specifically on the identification of biases, this can be considered a foundational step in the mitigation process as architects must first be aware of an issue in order to address it. The SEBoK may benefit from the expansion of the current stakeholder needs and requirements process to include this list of general biases that may be potentially impactful to the architecting process.

4. Limitations and Future Work

This paper identifies potential biases that may be encountered during the stakeholder elicitation process as defined in the SEBoK. One major limitation of the study is, ironically, the presence of bias during the workshop. The bias list provided to the workshop participants was considered useful for generating a discussion surrounding cognitive bias and providing examples of common biases, but participants may have felt constrained by the list, even though discussions were not limited to the biases featured in Appendix A. Biases discussed by architects in the workshop did not differ from those identified in the list provided, but did differ in where in the elicitation process they were said to emerge. Future iterations of this workshop may benefit from providing a more exhaustive appendix of biases, though this would significantly increase the length and complexity of the workshop, as over 180 biases have been identified in research [7]. Future workshops may also consider not including a list of biases at all, but this may in turn risk inadvertently limiting the discussion to only the most well-known biases, thus unintentionally creating an availability or recall bias among participants. This may result in architects overlooking potentially impactful but less common biases.
Though the biases discussed in this paper were considered independent from one another, many cognitive biases are interrelated and evaluated in groups according to likeness. Mineka and Sutton [45] identified four types of cognitive biases: attentional bias, memory bias, judgmental bias, and associative bias. Other researchers have focused on a more rigid categorization of cognitive biases, examining the following four groups: prior hypotheses with a focus on limited targets, exposure to limited alternatives, insensitivity to outcome probabilities, and illusion of manageability [46,47]. Future research may benefit from applying categorizations to the biases identified in this paper to examine which groups of interrelated biases are common during the systems architecture elicitation process.
It is important to note that biases are not necessarily positive or negative in nature but rather something to take into account and either mitigate or utilize to one’s advantage. Previous research has suggested techniques to either utilize [48,49] or mitigate [14,50,51] potential biases involved in stakeholder elicitation, though these mitigation techniques are outside the scope of this current research. Future work will focus on potential strategies and approaches to mitigate biases associated with stakeholder elicitation. Simply being aware of the possible existence of bias is not enough to overcome bias, but it is an initial step in the right direction. Systems architects should be familiarized with possible mitigation techniques to adequately navigate biases when they are inevitably encountered during the elicitation process.
Though this paper is the first of its kind to include feedback from systems architects regarding biases encountered, the sample size of participants is small. Future research will include a larger sample size of systems architects to gain a better understanding of biases that may potentially be impactful during the elicitation of stakeholder needs and requirements for human–machine interactions. The results of this paper may be useful in shaping how systems engineering processes are executed for developing HMT systems. For instance, mission engineering, a function that systems engineers can perform, could benefit from an improved understanding of requirements biases [52]. Developing digital twins for systems engineering purposes is another area that may benefit from a better understanding of requirements biases [53]. Many other applications of this paper likely exist throughout systems engineering due to the key importance of requirements elicitation and management [54].

5. Conclusions

Biases exist in almost any system that involves humans. Biases are often considered with respect to the users of systems involving HMT; however, the engineers and architects creating the system are also a significant source of bias. When designing and developing an architecture for a system, decisions should be made intentionally and deliberately. In the realm of systems architecture, this intentionality may include taking into consideration cognitive biases that may potentially impact a system. Previous research has posited that the future of human–machine collaboration will focus on systems that model, understand, and potentially replicate cognitive biases seen in humans [27]. The results from this study can inform systems architects of potential biases they may encounter during the stakeholder elicitation process and when those biases may emerge. When examining the individual steps of the SEBoK elicitation process, architects provided feedback detailing which biases they personally encountered when completing various elicitation activities. This allows for a deeper understanding of biases in an applied and practical sense within systems engineering, as this feedback is from the perspective of those currently involved in systems architecture.
This research provided readers with a list of potential biases to be mindful of during the stakeholder needs and requirements elicitation process. Biases may exist within the architecting process that have not been identified in the literature and vice versa. This analysis of the SEBoK stakeholder elicitation process, in conjunction with feedback from systems architects, contributes to a growing body of research surrounding cognitive biases in systems engineering processes.

Author Contributions

Conceptualization, T.Y., K.W. and B.M.; Data curation, T.Y.; Funding acquisition, B.M.; Methodology, T.Y., K.W. and B.M.; Supervision, K.W. and B.M.; Writing—original draft, T.Y.; Writing—review and editing, K.W., B.M., J.C. and D.V.B. All authors have read and agreed to the published version of the manuscript.

Funding

This work was sponsored by U.S. Army under Grant W15QKN-18-D-0040.

Data Availability Statement

The authors retain copyright of all materials previously published in conference proceedings. Due to the sensitive nature of this research, data are available from the authors after review by the Department of Defense.

Acknowledgments

Any opinions or findings of this work are the responsibility of the authors, and do not necessarily reflect the views of the Department of Defense or any other organizations. Approved for public release; distribution is unlimited.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Appendix A

  • Anchoring [23,24,28]—The tendency to rely too heavily, or “anchor,” on one trait or piece of information when making decisions, typically the first piece of information acquired of the relevant subject.
  • Apathy bias [38]—Stakeholders may not respond if they feel others will perform their role for them.
  • Availability bias [4,28,40]—The tendency to overestimate the likelihood of events with greater ease of retrieval (availability) in memory.
  • Awareness bias [38]—Announcing an open call for stakeholder engagement may target a biased and unbalanced group of stakeholders.
  • Confirmation bias [23]—The tendency to focus on information that affirms the individual’s beliefs and assumptions.
  • Escalation of commitment [26]—The tendency to justify increased investment in a decision, based on the cumulative prior investment, despite new evidence suggesting the decision may be wrong (some may refer to this as the sunk cost fallacy).
  • Faith bias [38]—Stakeholders may not engage if they believe that their views will not be heard due to failures on the part of others.
  • Framing effect [23]—Using an approach or description that is too narrow for the situation or issue.
  • Fundamental attribution error [35]—People blame individuals rather than the situation for negative events.
  • Group polarization [26]—Groups sometimes make more extreme (compound) decisions than the initial positions of their (individual) members.
  • Groupthink [17,26]—A mode of thinking that people engage in when they are deeply involved in a cohesive in-group, when the members’ strivings for unanimity override their motivation to realistically appraise alternative courses of action.
  • Hindsight [28]—The tendency to see past events as being predictable at the time those events happened.
  • Identification bias [38]—Purposeful selection of stakeholders using personal/organizational knowledge or unsystematic searches may result in a biased and unbalanced group of stakeholders.
  • Intimidation bias [38]—Stakeholders may be less likely to respond if they feel their views are unlikely to be heard over the views of the majority.
  • Loss aversion [23]—The tendency of individuals to prefer to avoid losses than acquire gains.
  • Network bias [38]—Asking others to suggest potential stakeholders may result in a biased and unbalanced group of stakeholders.
  • “Not invented here” syndrome [36]—A general negative attitude towards knowledge (ideas, technologies) derived from an external source.
  • Optimism bias [23]—The tendency to be overly optimistic about the outcome of planned actions, including overestimation of the frequency and size of positive events and underestimation of the frequency and size of negative ones.
  • Ostrich effect [23]—Avoiding risky or difficult situations or failed projects at the cost of learning.
  • Overconfidence [23]—Making fast and intuitive decisions when slow and deliberate decisions are necessary; individuals are overly optimistic in their initial assessment of a situation and then are slow to incorporate additional information about the situation into later assessments because of their initial overconfidence.
  • Planning fallacy [23]—The tendency to underestimate costs, schedule, and risk and overestimate benefits and opportunities.
  • Popularity bias [39]—Certain stakeholders (popular ones) may achieve very high utility values while other stakeholders (less popular ones) are ignored.
  • Previous experience bias [34]—Prior experience can make a significant impact in judgments.
  • Previous knowledge bias [34]—Prior knowledge is used to make judgments.
  • Professional bias [33]—Practitioners’ experience or expertise may impact judgments/predictions.
  • Range-frequency bias [40]—The tendency to assign less probability to the categories judged most likely and more probability to other categories.
  • Representativeness [4,23,24,25]—The tendency to irrationally attribute one characteristic to imply another.
  • Self-promotion bias [38]—Systematically searching for potential stakeholders may select only those with an online presence, producing a biased or unbalanced group of stakeholders.
  • Social loafing [26]—Group situations may reduce the motivation, level of effort, and skills employed in problem-solving compared with those that an individual would deploy when working alone.
  • Status quo [23]—The human preference for the current state of affairs; any change from the baseline is considered a loss.
  • Strategic misrepresentation [23,28,32]—The planned, systematic distortion or misstatement of facts for strategic purposes.

References

  1. Madni, A.M.; Madni, C.C. Architectural framework for exploring adaptive human-machine teaming options in simulated dynamic environments. Systems 2018, 6, 44. [Google Scholar] [CrossRef]
  2. Suzuki, S.; Harashima, F. Estimation algorithm of machine operational intention by Bayes filtering with self-organizing map. Adv. Hum.-Comput. Interact. 2012, 2012, 724587. [Google Scholar] [CrossRef]
  3. SEBoK Editorial Board. The Guide to the Systems Engineering Body of Knowledge (SEBoK); v. 2.7; Cloutier, R.J., Ed.; The Trustees of the Stevens Institute of Technology: Hoboken, NJ, USA, 2022. [Google Scholar]
  4. Kahneman, D. Thinking, Fast and Slow; Macmillan: New York, NY, USA, 2011. [Google Scholar]
  5. Payne, J.W.; Bettman, J.R.; Johnson, E.J. The Adaptive Decision Maker; Cambridge University Press: Cambridge, UK, 1993. [Google Scholar]
  6. Oppenheimer, D.M. Not so fast! (and not so frugal!): Rethinking the recognition heuristic. Cognition 2003, 90, B1–B9. [Google Scholar] [CrossRef]
  7. Ellis, G. Cognitive Biases in Visualizations; Springer: Cham, Switzerland, 2018. [Google Scholar]
  8. Freeman, R.E. Strategic Management: A Stakeholder Approach; Pitman: Boston, MA, USA, 1984. [Google Scholar]
  9. Clarkson, M.B.E. A stakeholder framework for analyzing and evaluating corporate social performance. Acad. Manag. Rev. 1995, 20, 92–117. [Google Scholar] [CrossRef]
  10. Mitchell, R.K.; Agle, B.R.; Wood, D.J. Toward a Theory of Stakeholder Identification and Salience: Defining the Principle of Who and What Really Counts. Acad. Manag. Rev. 1997, 22, 853–886. [Google Scholar] [CrossRef]
  11. Babar, M.A.; Zhu, L.; Jeffery, R. A framework for classifying and comparing software architecture evaluation methods. In Proceedings of the 2004 Australian Software Engineering Conference, Melbourne, VIC, Australia, 13–16 April 2004; pp. 309–318. [Google Scholar]
  12. Caron, F. Project planning and control: Early engagement of project stakeholders. J. Mod. Proj. Manag. 2014, 2, 85–97. [Google Scholar]
  13. Burgman, M.A. Expert Frailties in Conservation Risk Assessment and Listing Decisions; Royal Zoological Society of New South Wales: New South Wales, Australia, 2004. [Google Scholar]
  14. Hemming, V.; Burgman, M.A.; Hanea, A.M.; McBride, M.F.; Wintle, B.C. A practical guide to structured expert elicitation using the IDEA protocol. Methods Ecol. Evol. 2018, 9, 169–180. [Google Scholar] [CrossRef]
  15. Shanteau, J.; Weiss, D.J.; Thomas, R.P.; Pounds, J.C. Performance-based assessment of expertise: How to decide if someone is an expert or not. Eur. J. Oper. Res. 2002, 136, 253–263. [Google Scholar] [CrossRef]
  16. Wallsten, T.S.; Budescu, D.V.; Rapoport, A.; Zwick, R.; Forsyth, B. Measuring the vague meanings of probability terms. J. Exp. Psychol. Gen. 1986, 115, 348. [Google Scholar] [CrossRef]
  17. Janis, I. Groupthink. In A First Look at Communication Theory; McGraw-Hill: New York, NY, USA, 1991; pp. 235–246. [Google Scholar]
  18. Aspinall, W.P.; Cooke, R.M. Quantifying Scientific Uncertainty from Expert Judgment Elicitation. In Risk and Uncertainty Assessment for Natural Hazards; Rougier, J., Sparks, S., Hill, L., Eds.; Cambridge University Press: Cambridge, UK, 2013; pp. 64–99. [Google Scholar]
  19. Lorenz, J.; Rauhut, H.; Schweitzer, F.; Helbing, D. How social influence can undermine the wisdom of crowd effect. Proc. Natl. Acad. Sci. USA 2011, 108, 9020–9025. [Google Scholar] [CrossRef]
  20. Arik, S.L.; Sri, A.L. The Effect of Overconfidence and Optimism Bias on Stock Investment Decisions with Financial Literacy as Moderating Variable. Eurasia Econ. Bus. 2021, 12, 84–93. [Google Scholar]
  21. Alnifie, K.M.; Kim, C. Appraising the Manifestation of Optimism Bias and Its Impact on Human Perception of Cyber Security: A Meta Analysis. J. Inf. Secur. 2023, 14, 93–110. [Google Scholar] [CrossRef]
  22. Zhang, Z.; Li, H. Research on Human Machine Viewpoint of Architecture for C4ISR System. In Proceedings of the 2019 11th International Conference on Intelligent Human-Machine Systems and Cybernetics (IHMSC), Hangzhou, China, 24–25 August 2019; pp. 84–88. [Google Scholar] [CrossRef]
  23. Chatzipanos, P.A.; Giotis, T. Cognitive Biases as Project & Program Complexity Enhancers: The Astypalea Project; Project Management Institute: Newtown Square, PA, USA, 2014. [Google Scholar]
  24. Tversky, A.; Kahneman, D. Judgment under Uncertainty: Heuristics and Biases. Science 1974, 185, 1124–1131. [Google Scholar] [CrossRef] [PubMed]
  25. Irshad, S.; Badshah, W.; Hakam, U. Effect of Representativeness Bias on Investment Decision Making. Manag. Adm. Sci. Rev. 2016, 5, 26–30. [Google Scholar]
  26. Mannion, R.; Thompson, C. Systematic biases in group decision-making: Implications for patient safety. Int. J. Qual. Health Care 2014, 26, 606–612. [Google Scholar] [CrossRef] [PubMed]
  27. Gulati, A.; Lozano, M.A.; Lepri, B.; Oliver, N. BIASeD: Bringing Irrationality into Automated System Design. arXiv 2022, arXiv:2210.01122. [Google Scholar]
  28. Flyvbjerg, B. Top Ten Behavioral Biases in Project Management: An Overview. Proj. Manag. J. 2021, 52, 531–546. [Google Scholar] [CrossRef]
  29. Adomavicius, G.; Bockstedt, J.C.; Curley, S.P.; Zhang, J. Do recommender systems manipulate consumer preferences? A study of anchoring effects. Inf. Syst. Res. 2013, 24, 956–975. [Google Scholar] [CrossRef]
  30. Souza, P.E.U.; Carvalho Chanel, C.P.; Dehais, F.; Givigi, S. Towards human-robot interaction: A framing effect experiment. In Proceedings of the 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Budapest, Hungary, 9–12 October 2016; pp. 001929–001934. [Google Scholar]
  31. Kim, T.; Song, H. Communicating the limitations of AI: The effect of message framing and ownership on trust in artificial intelligence. Int. J. Hum. Comput. Interact. 2023, 39, 790–800. [Google Scholar] [CrossRef]
  32. Flyvbjerg, B. Quality control and due diligence in project management: Getting decisions right by taking the outside view. Int. J. Proj. Manag. 2013, 31, 760–774. [Google Scholar] [CrossRef]
  33. Enríquez-de-Salamanca, Á. Stakeholders’ manipulation of Environmental Impact Assessment. Environ. Impact Assess. Rev. 2018, 68, 10–18. [Google Scholar] [CrossRef]
  34. Das-Smaal, E.A. Biases in categorization. Adv. Psychol. 1990, 68, 349–386. [Google Scholar]
  35. Coombs, W.T.; Holladay, S.J. Unpacking the halo effect: Reputation and crisis management. J. Commun. Manag. 2006, 10, 123–137. [Google Scholar] [CrossRef]
  36. Antons, D.; Piller, F.T. Opening the black box of “not invented here”: Attitudes, decision biases, and behavioral consequences. Acad. Manag. Perspect. 2015, 29, 193–217. [Google Scholar] [CrossRef]
  37. Samset, K. Early Project Appraisal: Making the Initial Choices; Palgrave Macmillan: London, UK, 2010. [Google Scholar]
  38. Haddaway, N.R.; Kohl, C.; Rebelo da Silva, N.; Schiemann, J.; Spök, A.; Stewart, R.; Sweet, J.B.; Wilhelm, R. A framework for stakeholder engagement during systematic reviews and maps in environmental management. Environ. Evid. 2017, 6, 1–14. [Google Scholar] [CrossRef]
  39. Abdollahpouri, H. Popularity Bias in Ranking and Recommendation. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, Honolulu, HI, USA, 27–28 January 2019. [Google Scholar]
  40. O’Hagan, A. Expert Knowledge Elicitation: Subjective, but Scientific. Am. Stat. 2019, 73, 69–81. [Google Scholar] [CrossRef]
  41. Nelson, J.D.; McKenzie, C.R. Confirmation Bias. In Encyclopaedia of Medical Decision Making; Sage: Los Angeles, CA, USA, 2009; pp. 167–171. [Google Scholar]
  42. Parnas, D.L.; Weiss, D.M. Active Design Reviews: Principles and Practices. In Proceedings of the 8th International Conference on Software Engineering, London, UK, 28–30 August 1985. [Google Scholar]
  43. Zuber, L. What in the world were we thinking? Managing stakeholder expectations and engagement through transparent and collaborative project estimation. Proj. Manag. World J. 2013, 2, 1–13. [Google Scholar]
  44. Loh, G.H. Simulation differences between academia and industry: A branch prediction case study. In Proceedings of the IEEE International Symposium on Performance Analysis of Systems and Software, ISPASS 2005, Madison, WI, USA, 20–22 March 2005; pp. 21–31. [Google Scholar]
  45. Mineka, S.; Sutton, S.K. Cognitive biases and the emotional disorders. Psychol. Sci. 1992, 3, 65–69. [Google Scholar] [CrossRef]
  46. Acciarini, C.; Brunetta, F.; Boccardelli, P. Cognitive biases and decision-making strategies in times of change: A systematic literature review. Manag. Decis. 2021, 59, 638–652. [Google Scholar] [CrossRef]
  47. Das, T.K.; Teng, B.S. Cognitive biases and strategic decision processes: An integrative perspective. J. Manag. Stud. 1999, 36, 757–778. [Google Scholar] [CrossRef]
  48. Hammond, J.S.; Keeney, R.L.; Raiffa, H. Even swaps: A rational method for making trade-offs. Harv. Bus. Rev. 1998, 76, 137–150. [Google Scholar]
  49. Lahtinen, T.J.; Hämäläinen, R.P.; Jenytin, C. On preference elicitation processes which mitigate the accumulation of biases in multi-criteria decision analysis. Eur. J. Oper. Res. 2020, 282, 201–210. [Google Scholar] [CrossRef]
  50. Cooke, R.M. Experts in Uncertainty: Opinion and Subjective Probability in Science; Oxford University Press: New York, NY, USA, 1991. [Google Scholar]
  51. O’Hagan, A.; Buck, C.E.; Daneshkhah, A.; Eiser, J.R.; Garthwaite, P.H.; Jenkinson, D.J.; Rakow, T. Uncertain Judgements: Eliciting Experts’ Probabilities; John Wiley & Sons: West Sussex, UK, 2006. [Google Scholar]
  52. Van Bossuyt, D.L.; Beery, P.; O’Halloran, B.M.; Hernandez, A.; Paulo, E. The Naval Postgraduate School’s Department of Systems Engineering approach to mission engineering education through capstone projects. Systems 2019, 7, 38. [Google Scholar] [CrossRef]
  53. Bickford, J.; Van Bossuyt, D.L.; Beery, P.; Pollman, A. Operationalizing digital twins through model-based systems engineering methods. Syst. Eng. 2020, 23, 724–750. [Google Scholar] [CrossRef]
  54. Bhatia, G.; Mesmer, B. Trends in occurrences of systems engineering topics in literature. Systems 2019, 7, 28. [Google Scholar] [CrossRef]
Figure 1. General stakeholder biases.
Figure 2. Steps associated with the Stakeholder Needs and Requirements process in SEBoK.
Figure 3. Biases associated with stakeholder selection.
Figure 4. Biases associated with identifying needs and requirements.
Figure 5. Biases associated with stakeholder responses.
Figure 6. Biases associated with group environments.
Figure 7. Biases associated with stakeholder participation.
Figure 8. Biases associated with capturing needs and requirements.
Figure 9. Biases associated with classification.
Figure 10. Occurrence of biases in the elicitation process according to workshop results. Each percentage is the share of times a bias was discussed out of all bias mentions identified for that elicitation step, so the percentages within each step total 100%.
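The normalization described in the Figure 10 caption is simple to make concrete. The short Python sketch below, using hypothetical step names and tally counts rather than the actual workshop data, shows how each bias's percentage is computed within an elicitation step so that every step sums to 100%.

```python
# Illustrative sketch of the per-step normalization described in the Figure 10
# caption. The step names and tallies below are hypothetical placeholders,
# not the workshop data reported in the paper.

workshop_tallies = {
    "Identify stakeholders": {"identification bias": 4, "network bias": 3, "availability": 1},
    "Elicit needs and requirements": {"anchoring": 5, "framing effect": 2, "groupthink": 3},
}

for step, counts in workshop_tallies.items():
    total = sum(counts.values())  # all bias mentions recorded for this step
    shares = {bias: 100 * n / total for bias, n in counts.items()}
    assert abs(sum(shares.values()) - 100) < 1e-9  # each step totals 100%
    print(f"{step}: " + ", ".join(f"{bias} {pct:.1f}%" for bias, pct in shares.items()))
```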
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
