Human Factors in Systems Engineering

A special issue of Systems (ISSN 2079-8954).

Deadline for manuscript submissions: closed (15 August 2020) | Viewed by 26926

Special Issue Editors


Dr. Michael E. Miller
Guest Editor
Systems Engineering and Management, Air Force Institute of Technology, Dayton, OH 45433, USA
Interests: human factors; modeling the human element; human systems integration

Dr. Christina Rusnock
Guest Editor
Systems Engineering and Management, Air Force Institute of Technology, Dayton, OH 45433, USA
Interests: human factors; human performance modeling; human workload

Special Issue Information

Dear Colleagues,

Modeling is increasingly used to provide early insight into human physical, perceptual, and cognitive performance. These models include static models, such as cognitive maps, analytic hierarchies, and task analyses, as well as dynamic models, such as anthropometric simulations, computational human performance models, and agent-based models. Simultaneously, recent advances in model-based systems engineering (MBSE) have provided tools to bridge disciplines, facilitate static and dynamic analysis of early-stage system designs, and improve communication among diverse development teams. An important driving force behind the increasing use of modeling is the ever-present pressure to accelerate the development of increasingly complex systems, which requires improvements in rapid systems integration. Within the human domain, complexity arises from the integration of larger, geographically separated teams, the incorporation of automation, and the increasing complexity of the underlying hardware and software.

This Special Issue focuses on advances in human-centered modeling methods that address the design of human systems of increasing complexity across multiple industries. Papers are sought in the following areas:

  • Models of human and team performance
  • Approaches to modeling performance in diverse teams
  • Approaches to modeling and designing human–agent teams
  • Extensions to MBSE techniques that better represent the human element in systems
  • Trade studies that include representations of the operator
  • System modeling methods that include teams of human and artificial cognitive agents
  • Physical ergonomic or anthropometric models
  • Physiological models relating to human performance
  • Physics-based models of forces on the human body
  • Model validation and verification methods applicable to human modeling
  • Model visualization and simulation
  • Modeling human–system resiliency
  • Modeling of human–system lifecycle trades
  • Integration of MBSE models and methods with third-party human modeling tools

Dr. Michael E. Miller
Dr. Christina Rusnock
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Systems is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Human-performance modeling
  • Human-factors methods and models
  • Models for human–system integration
  • Life cycle
  • Human–system integration
  • Humans in complex systems

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (7 papers)

Editorial

3 pages, 230 KiB  
Editorial
Introduction to the Special Issue on “Human Factors in Systems Engineering”
by Michael E. Miller and Christina F. Rusnock
Systems 2020, 8(4), 50; https://doi.org/10.3390/systems8040050 - 1 Dec 2020
Viewed by 2047
Abstract
This paper summarizes the aim and the results of this Special Issue [...]

Research

19 pages, 1233 KiB  
Article
Virtual Modeling of User Populations and Formative Design Parameters
by Benjamin M. Knisely and Monifa Vaughn-Cooke
Systems 2020, 8(4), 35; https://doi.org/10.3390/systems8040035 - 3 Oct 2020
Cited by 5 | Viewed by 2831
Abstract
Human variability related to physical, cognitive, socio-demographic, and other factors can contribute to large differences in human performance. Quantifying population heterogeneity can be useful for designers wishing to evaluate design parameters such that a system design is robust to this variability. Comprehensively integrating human variability in the design process poses many challenges, such as limited access to a statistically representative population and limited data collection resources. This paper discusses two virtual population modeling approaches intended to be performed prior to in-person design validation studies to minimize these challenges by: (1) targeting recruitment of representative population strata and (2) reducing the candidate design parameters being validated in the target population. The first approach suggests the use of digital human models, virtual representations of humans that can simulate system interaction to eliminate candidate design parameters. The second approach suggests the use of existing human databases to identify relevant human characteristics for representative recruitment strata in subsequent studies. Two case studies are presented to demonstrate each approach, and the benefits and limitations of each are discussed. This paper demonstrates the benefit of modeling prior to conducting in-person human performance studies to minimize resource burden, which has significant implications for early design stages.
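As a rough illustration of the second approach (using an existing human database to define representative recruitment strata), the sketch below bins two population characteristics into tertiles and computes how much of the population falls in each combined stratum. The variables, sample data, and tertile binning are assumptions for illustration only, not the procedure used in the paper.

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in for an existing human database; the variables and
# distributions are invented for illustration.
rng = np.random.default_rng(0)
population = pd.DataFrame({
    "stature_cm": rng.normal(170, 10, 5000),      # anthropometric characteristic
    "grip_strength_kg": rng.normal(35, 8, 5000),  # physical capability
})

# Bin each characteristic into tertiles to form candidate recruitment strata.
strata = pd.DataFrame({
    col: pd.qcut(population[col], q=3, labels=["low", "mid", "high"])
    for col in population.columns
})

# Share of the population in each combined stratum; a recruitment plan could
# allocate participants to strata in roughly these proportions.
stratum_shares = strata.value_counts(normalize=True).rename("population_share")
print(stratum_shares.reset_index())
```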

22 pages, 936 KiB  
Article
Design and Validation of a Method to Characterize Human Interaction Variability
by Kailyn Cage, Monifa Vaughn-Cooke and Mark Fuge
Systems 2020, 8(3), 32; https://doi.org/10.3390/systems8030032 - 17 Sep 2020
Cited by 1 | Viewed by 3623
Abstract
Human interactions are paramount to the user experience, satisfaction, and risk of user errors. For products, anthropometry has traditionally been used in product sizing. However, structured methods that accurately map static and dynamic capabilities (e.g., functional mapping) of musculoskeletal regions for the conceptualization and redesign of product applications and use cases are limited. The present work aims to introduce and validate the effectiveness of the Interaction Variability method, which maps product components and musculoskeletal regions to determine explicit design parameters by limiting designer variation in the classification of human interaction factors. This study enrolled 16 engineering students to evaluate two series of interactions for (1) water bottle and (2) sunglasses applications, enabling assessments of method validity and designer consistency. For each interaction series, subjects identified and characterized product applications, components, and human interaction factors. Primary interactions, product mapping, and application identification achieved consensus rates between 31.25% and 100.00%, with significance (p < 0.1) observed at consensus rates of ≥75.00%. Significant levels of consistency were observed among designers for at least one measure in all phases except anthropometric mapping for the sunglasses application, indicating method effectiveness. Interaction Variability was introduced and validated in this work as a standardized approach to identify, define, and map human and product interactions, which may reduce unintended use cases and user errors in consumer populations.
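For readers unfamiliar with consensus rates of this kind, the sketch below shows one way to compute designer consensus on a single classification and test it against chance agreement. The category labels, counts, and the binomial test are illustrative assumptions, not the study's actual coding scheme or statistical analysis.

```python
from collections import Counter
from scipy.stats import binomtest

# Hypothetical classifications of one interaction by 16 designers; the labels
# and the three-category scheme are invented for illustration.
ratings = ["grasp"] * 13 + ["pinch"] * 2 + ["press"]
n_categories = 3

modal_label, modal_count = Counter(ratings).most_common(1)[0]
consensus = modal_count / len(ratings)

# Does agreement on the modal label exceed chance-level agreement (1/3)?
result = binomtest(modal_count, n=len(ratings), p=1 / n_categories,
                   alternative="greater")
print(f"consensus on '{modal_label}': {consensus:.1%}, p = {result.pvalue:.4f}")
```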

14 pages, 1298 KiB  
Article
Strategic Decision Facilitation: Supporting Critical Assumptions of the Human in Empirical Modeling of Pairwise Value Comparisons
by Joseph P. Kristbaum and Frank W. Ciarallo
Systems 2020, 8(3), 30; https://doi.org/10.3390/systems8030030 - 9 Sep 2020
Cited by 2 | Viewed by 2443
Abstract
Modeling human decision-making is difficult. Decision-makers are typically primed with unique biases that widen the confidence interval of judgment. Therefore, it is important that the human process in the system being modeled is designed to alleviate damaging biases and assumptions in an effort to increase process consistency between decision-makers. In this experiment, it is hypothesized that coupling specific decision-facilitation methods with a specific scale range will affect the consistency between decision-makers. This article presents a multiphase experiment that varies the presentation mode as well as the scale range to determine how value is assigned in subsequent pairwise comparisons of alternatives against specific requirements. When considering subject value ratings of the expected rank order of alternative subgroups (indicating strong criteria independence), results show that subjects used consistent comparison ratios regardless of the scale range. Furthermore, when comparing the subgroups of expected rank-order responses to the subgroups of biased responses, although the ratios were different, the same general trend of comparison existed within subgroups. Evidence that careful selection of the presentation mode can facilitate more consistent value comparisons between compatible decision-makers allows for the identification and adjustment of disparities due to bias and a potential lack of incremental scaling detail. Furthermore, by creating decision processes that render more consistent cognitive behavior between decision-makers, tighter confidence intervals can be obtained and critical assumptions can be validated.
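The notion of consistent comparison ratios can be made concrete with a small sketch: given value ratings of several alternatives from several decision-makers, compute each rater's pairwise ratios and see how much they vary across raters. The ratings, alternatives, and use of the coefficient of variation are assumptions for illustration, not the experiment's actual data or analysis.

```python
import numpy as np

# Hypothetical value ratings (0-100 scale) of three alternatives by four
# decision-makers; all numbers are invented for illustration.
ratings = np.array([
    [90, 60, 30],
    [80, 55, 25],
    [85, 58, 28],
    [95, 62, 33],
])

# Each rater's pairwise comparison ratios (alternative i relative to j).
i, j = np.triu_indices(ratings.shape[1], k=1)
ratios = ratings[:, i] / ratings[:, j]

# Consistency across decision-makers: coefficient of variation of each ratio.
cv = ratios.std(axis=0, ddof=1) / ratios.mean(axis=0)
for (a, b), c in zip(zip(i, j), cv):
    print(f"alternative {a + 1} vs. {b + 1}: ratio CV = {c:.2f}")
```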

15 pages, 1419 KiB  
Article
A Wargame-Augmented Knowledge Elicitation Method for the Agile Development of Novel Systems
by Stephen L. Dorton, LeeAnn R. Maryeski, Lauren Ogren, Ian T. Dykens and Adam Main
Systems 2020, 8(3), 27; https://doi.org/10.3390/systems8030027 - 12 Aug 2020
Cited by 12 | Viewed by 3855
Abstract
There are inherent difficulties in designing an effective Human–Machine Interface (HMI) for a first-of-its-kind system. Many leading cognitive research methods rely upon experts with prior experience using the system and/or some type of existing mockup or working prototype of the HMI, and neither of these resources is available for such a new system. Further, these methods are time-consuming and incompatible with more rapid and iterative systems development models (e.g., Agile/Scrum). To address these challenges, we developed a Wargame-Augmented Knowledge Elicitation (WAKE) method to identify information requirements and underlying assumptions in operator decision making concurrently with operational concepts. The WAKE method combines naturalistic observations of operator decision making in a wargaming scenario with freeze-probe queries and structured analytic techniques to identify and prioritize information requirements for a novel HMI. An overview of the method, the required apparatus, and the associated analytical techniques is provided. Outcomes, lessons learned, and topics for future research resulting from two different applications of the WAKE method are also discussed.
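As a very rough sketch of how freeze-probe responses might be turned into prioritized information requirements, the snippet below counts how often each candidate requirement is cited across probes. The response labels and the frequency-based ranking are assumptions for illustration; the article's structured analytic techniques go well beyond this.

```python
from collections import Counter

# Hypothetical freeze-probe responses: at each scenario freeze, an operator
# lists the information they were using or wished they had (labels invented).
probe_responses = [
    ["threat bearing", "own fuel state", "teammate position"],
    ["threat bearing", "rules of engagement"],
    ["teammate position", "threat bearing", "sensor coverage"],
    ["own fuel state", "threat bearing"],
]

# Rank candidate information requirements by citation frequency across probes;
# ties and final priorities would be resolved in the structured analytic review.
counts = Counter(item for response in probe_responses for item in response)
for requirement, n in counts.most_common():
    print(f"{requirement}: cited in {n} of {len(probe_responses)} probes")
```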

15 pages, 1529 KiB  
Article
Applying Control Abstraction to the Design of Human–Agent Teams
by Clifford D. Johnson, Michael E. Miller, Christina F. Rusnock and David R. Jacques
Systems 2020, 8(2), 10; https://doi.org/10.3390/systems8020010 - 12 Apr 2020
Cited by 6 | Viewed by 4530
Abstract
Levels of Automation (LOA) provide a method for describing the authority granted to automated system elements to make individual decisions. However, these levels are technology-centric and provide little insight into overall system operation. The current research discusses an alternate classification scheme, referred to as the Level of Human Control Abstraction (LHCA). LHCA is an operator-centric framework that classifies a system’s state based on the required operator inputs. The framework consists of five levels, each requiring less granularity of human control: Direct, Augmented, Parametric, Goal-Oriented, and Mission-Capable. An analysis of several existing systems was conducted. This analysis illustrates the presence of each of these levels of control and shows that many existing systems support system states that facilitate multiple LHCAs. It is suggested that as the granularity of human control is reduced, the level of required human attention and the required cognitive resources decrease. Thus, designing systems that permit the user to select among LHCAs during system control may facilitate human–machine teaming and improve the flexibility of the system.
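The five levels lend themselves to a simple enumeration. The sketch below encodes them and classifies a hypothetical system state by the inputs the operator must provide; the one-line glosses on each level and the input labels are illustrative assumptions, not definitions taken from the article.

```python
from enum import IntEnum

class LHCA(IntEnum):
    """Levels of Human Control Abstraction, ordered from most to least
    granular operator control (level names as given in the article)."""
    DIRECT = 1           # assumed gloss: operator commands individual effectors
    AUGMENTED = 2        # assumed gloss: commands shaped/stabilized by automation
    PARAMETRIC = 3       # assumed gloss: operator sets parameters (heading, speed)
    GOAL_ORIENTED = 4    # assumed gloss: operator states goals; automation acts
    MISSION_CAPABLE = 5  # assumed gloss: operator states the mission only

def classify_state(required_inputs: set[str]) -> LHCA:
    """Map a system state, described by the inputs the operator must provide,
    to a level; the input labels are hypothetical."""
    if "stick_and_throttle" in required_inputs:
        return (LHCA.AUGMENTED if "stability_augmentation" in required_inputs
                else LHCA.DIRECT)
    if "heading_altitude_speed" in required_inputs:
        return LHCA.PARAMETRIC
    if "waypoint_goals" in required_inputs:
        return LHCA.GOAL_ORIENTED
    return LHCA.MISSION_CAPABLE

print(classify_state({"heading_altitude_speed"}).name)  # PARAMETRIC
```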

17 pages, 1615 KiB  
Article
Would You Fix This Code for Me? Effects of Repair Source and Commenting on Trust in Code Repair
by Gene M. Alarcon, Charles Walter, Anthony M. Gibson, Rose F. Gamble, August Capiola, Sarah A. Jessup and Tyler J. Ryan
Systems 2020, 8(1), 8; https://doi.org/10.3390/systems8010008 - 18 Mar 2020
Cited by 9 | Viewed by 5502
Abstract
Automation and autonomous systems are quickly becoming a more ingrained aspect of modern society. The need for effective, secure computer code in a timely manner has led to the creation of automated code repair techniques to resolve issues quickly. However, the research to date has largely ignored the human factors aspects of automated code repair. The current study explored trust perceptions, reuse intentions, and trust intentions in code repair with human-generated patches versus automated code repair patches. In addition, comments in the headers were manipulated to determine the effect of the presence or absence of comments in the header of the code. Participants were 51 programmers with at least 3 years’ experience and knowledge of the C programming language. Results indicated that only repair source (human vs. automated code repair) had a significant influence on trust perceptions and trust intentions. Specifically, participants consistently reported higher levels of perceived trustworthiness, intentions to reuse, and trust intentions for human referents compared to automated code repair. No significant effects were found for comments in the headers.
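The study's 2 (repair source) × 2 (header comments) design maps naturally onto a two-way analysis of variance. The sketch below runs one on invented trust ratings purely to show the shape of such an analysis; the numbers, scale, and model are assumptions, not the study's data or statistics.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Invented trust ratings for the 2 (repair source) x 2 (header comments) design;
# values are illustrative only (e.g., a 7-point trust scale).
data = pd.DataFrame({
    "source":   ["human"] * 4 + ["automated"] * 4,
    "comments": (["present"] * 2 + ["absent"] * 2) * 2,
    "trust":    [6.1, 5.8, 6.0, 5.7, 4.9, 5.1, 4.8, 5.0],
})

# Two-way ANOVA of trust on repair source, commenting, and their interaction.
model = smf.ols("trust ~ C(source) * C(comments)", data=data).fit()
print(anova_lm(model, typ=2))
```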
