Article

Cognitive Analyses for Interface Design Using Dual N-Back Tasks for Mental Workload (MWL) Evaluation

by Nancy Ivette Arana-De las Casas 1,2,*, Jorge De la Riva-Rodríguez 1, Aide Aracely Maldonado-Macías 1,* and David Sáenz-Zamarrón 2

1 Graduate Studies and Research Division, Tecnológico Nacional de México/Instituto Tecnológico de Cd. Juárez, Cd. Juárez 32500, Chih., Mexico
2 Graduate Studies and Research Division, Tecnológico Nacional de México/Instituto Tecnológico de Cd. Cuauhtémoc, Cd. Cuauhtémoc 31500, Chih., Mexico
* Authors to whom correspondence should be addressed.
Int. J. Environ. Res. Public Health 2023, 20(2), 1184; https://doi.org/10.3390/ijerph20021184
Submission received: 26 November 2022 / Revised: 30 December 2022 / Accepted: 4 January 2023 / Published: 9 January 2023
(This article belongs to the Special Issue Mental and Physical Health for Occupational Wellness)

Abstract:
In today's manufacturing environments, human–machine systems are built with complex and advanced technology that places considerable mental workload on workers. This work aims to design and evaluate a Graphical User Interface developed to induce mental workload based on Dual N-Back tasks for further analysis of human performance. The study's contribution lies in developing proper cognitive analyses of the graphical user interface, identifying human error when the Dual N-Back tasks are presented in an interface, and seeking a better user–system interaction. Hierarchical Task Analysis and the Task Analysis Method for Error Identification were used for the cognitive analysis. Ten subjects participated voluntarily in the study, answering the NASA-TLX questionnaire at the end of each task. The NASA-TLX results determined the participants' subjective mental workload, and the ANOVA on the mean scores confirmed that the subjects were induced to different levels of mental workload (Low, Medium, and High); the cognitive analysis identified redesign opportunities for graphical user interface improvement.

1. Introduction

Today, the technologically complex and advanced human–machine systems being adopted in new manufacturing environments place considerable mental workload (MWL) on workers. For example, mental workload increases as the complexity of product assembly rises.
Nowadays, product variants are constantly changing due to customer requirements, so more information processing, interference control, and focused attention are demanded from workers [1,2,3]. Thus, such environments’ conditions negatively impact their health, causing them to experience mental problems rather than physical problems in recent times [4]. Therefore, mental workload consideration has become relevant due to its relation to human errors, accidents, deficient performance, work stress, and other adverse effects [5]. Accordingly, in cognitive ergonomics, it is crucial to understand the influences of mental workload on worker performance and conduct analyses and interventions to reduce its harmful effects [6].
However, this poses a few challenges, such as studying and defining relevant concepts of mental workload. For example, one of the most accepted definitions of mental workload explains it as the amount of cognitive work exerted on a task over time [7,8]. Likewise, it can be understood as the relationship between environmental and task-relevant demands and the internal supply of mental resources [1]. In accordance, some authors state that mental workload exists as a function of task demands and moderating variables and can be understood as a subjective experience and a physiological reaction, resulting in task-related behavior [9].
On the other hand, the use and design of interfaces remain important mental load aspects to be studied in these new work environments. The interface can be defined as the mode through which a user interacts with the human–machine system [10]. Most of the time, these interfaces are graphical, known as Graphical User Interfaces (GUI). They can be understood as information assistance systems that workers use to acquire and enter information related to their job activities, i.e., instructions, specifications, work orders, and real-time sensor process results. These activities require cognitive resources that may be mental workload-related [1].
Additionally, GUI design considers cognitive and ergonomic factors such as user memory, physical capacities, interaction preferences, and error tolerance; therefore, an ergonomic perspective is necessary to eliminate or reduce MWL. Achieving these objectives requires conducting a Cognitive Analysis (CA) in the human–system interaction design phase to minimize human error and increase operator safety [11]. The cognitive analysis also implies deep task analysis, since a well-designed interface should be easy to use and feature a screen where users can quickly identify relevant information [12]. An interface should have cognitive directness, which involves using appropriate visual cues and meaningful icons and minimizing mental transformations of data, e.g., avoiding requiring "control + shift + del + 7" to accept a statement [13].
Concerning mental workload measurement and interfaces, the Dual N-Back tasks have proven applicable to the study of mental workload. For example, a case study introduced the N-Back as a test of continuous interactive activity allowing for mental workload measurement and subject concentration [14]. A variant with an auditive stimulus known as Dual N-Back was introduced in 2003 [15].
The Dual N-Back tasks have had several uses; for example, to create a high cognitive workload and evaluate the effects of MWL on involuntary attention [16], and also to induce low, medium, and high MWL to monitor mental workload levels in office-work scenarios [17]. On the other hand, researchers used an N-Back task while measuring EEG (Electroencephalography), skin conductance, respiration, ECG (Electrocardiogram), pupil size, and eye blinks to compare measurements for a mental workload assessment [18]. Another study quantified mental workload using an EEG, functional near-infrared spectroscopy (fNIRS), and an auditive/verbal N-back task [19]. In some other research, a variant was used to examine the role of “interruptions” on qualitative and quantitative load on vigilance [20]; also, in a study using an auditory N-Back task to assess performance and cognitive states during cognitive work [21].
These studies used Dual N-back tasks to study different forms of mental workload. However, no cognitive analysis has been performed in an interface using these tasks while searching for a better user–system interaction, which is the main objective of cognitive analysis, as mentioned in different articles [22,23,24]. This work aims to design, evaluate, and validate a graphical user interface developed to create a mental workload using Dual N-Back tasks and, thus, contribute to a novel approach for the cognitive evaluation of such an interface. The study and validation of the GUI design were conducted using the NASA-TLX methodology, Hierarchical Task Analysis (HTA), and the Task Analysis Method for Error Identification (TAFEI).

1.1. Cognitive Analyses (CA)

1.1.1. Hierarchical Task Analysis (HTA)

The importance of task analysis in improving worker productivity was studied by researchers early on [25]. Later came the valuable contribution of describing a task in terms of a hierarchy of operations and plans; the result was the development of HTA as a precedent for most human-factors approaches and methods, such as human error identification, workload assessment, allocation of function, and interface design, among others [26].
HTA enables a detailed breakdown of tasks, involving the hierarchical division of the overall working labor into sub-tasks, elementary operations, and specific activities expressed in a tree diagram. This methodology has been widely used and precedes any cognitive analysis [27,28,29]. One disadvantage, however, is that it can be challenging to perform [30].
On the other hand, several studies have used hierarchical task analysis to validate their work-related tasks [28,31,32,33,34]. For example, to review the goals and related cognitive processes of the mental workload-related interface [35]. Additionally, these authors incorporated the TAFEI method to study the interactions with the subjects to identify context-specific human errors.
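An HTA decomposition can be represented as a simple tree. The following Python sketch (the task names are hypothetical placeholders, not the actual HTA of this study, which appears in Appendix A) shows one way to store and traverse such a hierarchy:

```python
# A minimal sketch of an HTA tree as nested dicts; each key is a numbered
# task and each value is a list of child sub-tasks. Task names here are
# illustrative only, not the study's real HTA.
hta = {
    "0. Perform Dual N-Back test session": [
        {"1. Start computer": []},
        {"2. Launch test interface": [
            {"2.1 Open executable": []},
            {"2.2 Select difficulty level": []},
        ]},
        {"3. Execute test": [
            {"3.1 Memorize operand positions": []},
            {"3.2 Enter arithmetic result": []},
        ]},
    ]
}

def walk(node, depth=0):
    """Depth-first traversal returning the hierarchy as indented lines."""
    lines = []
    for name, children in node.items():
        lines.append("  " * depth + name)
        for child in children:
            lines.extend(walk(child, depth + 1))
    return lines

print("\n".join(walk(hta)))
```

A traversal like this makes it straightforward to export the decomposition to the tabular chart form that HTA reports typically use.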

1.1.2. Task Analysis for Error Identification (TAFEI) in Interface Design

The methodology to design error-tolerant consumer products was developed and received the name Task Analysis for Error Identification (TAFEI) [36]. This method enables the prediction of interface errors by modeling the interaction between the user and the interface analyzed.
Furthermore, it features certain advantages, since it includes proposals for error reduction; however, it also requires a degree of skill to be performed effectively. Numerous studies have validated and used the methodology [37]. For example, one work validated the method by conducting two studies comparing TAFEI's predictions with humans' actual performance and found that it had an accuracy of at least 67% in error prediction [38]. It was also used to identify and decrease human errors while operating a meat grinder to improve system productivity [28], and to identify three possible error causes when changing a PC power supply [39]. Moreover, one study found twenty-nine possible cases of illegal transfer in a "high-pressure power post"; each transfer could be a potential accident in this type of task [27].

1.2. NASA-TLX for Mental Workload Assessment

The last methodology used was NASA-TLX, one of the most widely used subjective techniques for measuring mental workload [40]. It offers a subjective classification model to analyze workers' perceptions of task complexity. The same authors report the advantages of this technique, such as ease of use, short application time, and low cost. However, one of its disadvantages is its limited correlation between mental workload and worker performance. Additionally, its results can be affected by the respondents' individuality, as well as by bias, low sensitivity for task differentiation, errors, and task aversion during the analysis [41].
The NASA-TLX is divided into six subscales: mental demand, temporal demand, physical demand, performance, effort, and frustration. First, each subscale is rated within a range of 100 points with steps of 5 points. These ratings are then merged to obtain the task load index. The other part of the test establishes an individual weighting for these subscales by allowing subjects to compare them in pairs based on their perceived importance [42]. NASA-TLX and HTA have been used together in studies in different domains, including evaluating mental workload in human-system design for manufacturing operations [33,43], human performance in various tasks [40,42,44], and assessment of subjective MWL with the NASA-TLX and N-Back tasks. Researchers in one study obtained higher mental demand in a 3-Back task, with average NASA-TLX scores of 38.68 ± 3.82, compared to an oddball condition with average scores of 15.07 ± 1.78; these results were verified via ocular aberration measurement as a physiological mental workload measure [45]. In another study, investigators created low, medium, and high MWL by applying three variants of the Dual N-Back task to subjects in everyday office-work scenarios [17]. The N-Back task under three demand conditions was also used to determine the effect of task demand as an incentive on neurophysiological and cardiovascular effort markers [46]. Likewise, other studies used three levels of the N-Back tasks and NASA-TLX and found diverse levels of difficulty that were statistically different [47,48]. As can be seen, previous studies have applied the Dual N-Back for various purposes; however, none of them conducted a cognitive analysis of the user interfaces. Therefore, this paper contributes a cognitive analysis to evaluate an interface using this type of task.
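The standard NASA-TLX weighted (adjusted) score described above can be sketched as follows; the ratings and weights in the example are made up for illustration, not data from this study:

```python
# Sketch of the standard NASA-TLX weighted-score computation:
# each subscale's weight is the number of times it was chosen in the
# 15 pairwise comparisons, and the overall score is the weighted mean.
def nasa_tlx_score(ratings, weights):
    """Overall weighted workload: sum(rating * weight) / 15.

    ratings: dict of subscale -> raw rating (0-100, steps of 5).
    weights: dict of subscale -> pairwise-comparison wins (sum to 15).
    """
    assert sum(weights.values()) == 15, "pairwise weights must total 15"
    return sum(ratings[k] * weights[k] for k in ratings) / 15.0

# Illustrative example values (not taken from the study's data):
ratings = {"mental": 80, "physical": 10, "temporal": 60,
           "performance": 40, "effort": 70, "frustration": 50}
weights = {"mental": 5, "physical": 0, "temporal": 3,
           "performance": 2, "effort": 4, "frustration": 1}
print(nasa_tlx_score(ratings, weights))  # -> 66.0
```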

2. Materials and Methods

This study follows a transversal, quasi-experimental, and eclectic approach. It was conducted during the first semester of 2021; accordingly, COVID-19 pandemic restrictions applied, and the study was conducted remotely using Information and Communication Technologies (ICTs) and hardware owned by the subjects. This section describes the participants and materials. Additionally, the GUI design method, the procedure for cognitive interface evaluation, and the statistical data analysis are presented.

2.1. Limitations of the Study

The study limitations were COVID-19 pandemic restrictions, time, sample size, hardware availability, and software constraints.
As mentioned, the study was completed during the first semester of 2021, when all pandemic limitations were in place. As a consequence, interactions with students were, by law, only virtual. This meant subjects had to have the internet connection, hardware, and software necessary for the experimental sessions, and not all students met this requirement. All of the above reduced the sample size available to carry out the project.

2.2. Participants

Ten participants took part in the study. All of them were healthy university students (not suffering from any current illness or mental disorder) with normal vision. Six were female, and the average age was 21.3 years (SD = 1.01 years). Before the experiment, the subjects provided their informed consent to participate in the study. The investigation was conducted following the ethical guidelines approved by the TecNM/Technological Institute of Ciudad Juarez ethics committee, which considers the Mexican General Health Law regulation on health research, the Nuremberg Code, the Universal Declaration of Human Rights (Articles 1 to 5), and the American Declaration of the Rights and Duties of Man (Articles IV and XXVII). The subjects participated willingly and could suspend their participation at any time; additionally, they were not reimbursed in any way. The participants were also instructed to abstain from alcohol intake for 24 h before each experimental session. Participants were engineering students with some notions of English; however, all instructions were given in Spanish.

2.3. Materials

2.3.1. Hardware

The central computer used for the interface design was a DELL computer with an Intel(R) Core(TM) i5-8250U CPU @ 1.60 GHz/1.80 GHz, 8.00 GB of RAM, and a Windows 10 Pro operating system; the subjects connected to this main computer via the TeamViewer® software, which enables remote access and can be downloaded freely from its website for non-commercial purposes. For data analysis and the Google Meet® connection with the subjects, an ACER computer was used, with an Intel(R) Core(TM) i5-8250U CPU @ 1.60 GHz/1.80 GHz, 8.00 GB of RAM, pencil compatibility, a tactile function with ten touch points, and a Windows 10 Home operating system. Finally, for connections with the subjects via Google Meet Version 62.0.372156507 and the NASA-TLX iOS app Ver. 1.0.3, an Apple iPad Air (3rd generation) with iOS version 14.4.2 was used.

2.3.2. Software

The software used in this study for the subjective mental workload measurement was the NASA-TLX iOS app Ver. 1.0.3, developed at the Ames Research Center [49]. The software is a full computational version of its predecessors (the Windows NT and paper-and-pencil versions); the app can be downloaded from https://itunes.apple.com/us/app/nasa-tlx/id1168110608 (accessed on 19 January 2021), and data collection can only be performed through iOS equipment.

2.3.3. Methodology for GUI Design

The interface was designed using MATLAB's GUIDE tool, a user interface design environment widely used to develop GUIs [50,51]; the researchers chose it because it offers drag-and-drop features and allows programmatically generating a sequential application that uses a timer to create a fixed time base of 3 s. This is the usual period between mental load test events, such as showing random operands (1 to 10) whose positions within a 3 × 3 matrix and whose values must be memorized [17]. While an operand is displayed in the matrix (for 3 s), its position and value must be remembered.
The program then requests that the position of each operand in the 3 × 3 matrix be mentally identified, deciding whether the operand at time (n) appeared in the same position as the operand at times (n − 1), (n − 2), or (n − 3). The program can also request that an arithmetic operation (sum, subtraction, multiplication, or division) be performed under the same pattern, that is, mentally combining the value of the current operand (n) with the value of any of the operands at times (n − 1), (n − 2), and (n − 3). The mental activity performed by the user depends on the defined difficulty settings, which, following the Dual N-Back task, can be the following [17]:
  • Easy: It consists of identifying the position and adding and subtracting the operands at times (n) and (n − 1).
  • Medium: It consists of performing addition or subtraction operations on the operands at times (n) and (n − 1) or (n − 2).
  • Hard: It consists of performing one of the following operations: addition, subtraction, multiplication, or division of the operands at times (n) and (n − 2) or (n − 3).
After variable initialization, the program generates operands with their positions every 3 s at each n-time. Each n-time is then compared to the vectors of previously configured values, depending on the degree of difficulty selected. The Graphical User Interface block program diagram is shown in Figure 1.
When the number of test operations (N) is terminated, the program evaluates the user’s performance (correct/N) and terminates.
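The trial-generation and scoring logic above can be approximated in a short Python sketch (the original program is MATLAB/GUIDE; this simplified version uses addition only for the arithmetic probe and omits the 3-s timer and all GUI elements):

```python
import random

# Lags per difficulty level, following the Easy/Medium/Hard description
# above: which earlier trial the current operand is compared against.
DIFFICULTY = {"easy": [1], "medium": [1, 2], "hard": [2, 3]}

def run_trials(n_trials, level, seed=0):
    """Generate Dual N-Back trials: an operand (1-10) at a random cell of
    a 3x3 matrix each trial, plus a probe comparing it to a lagged trial.

    Returns a list of probes: (trial, lag, position_match, expected_sum).
    The arithmetic probe is simplified to addition only in this sketch.
    """
    rng = random.Random(seed)
    lags = DIFFICULTY[level]
    history = []  # (value, (row, col)) per trial
    probes = []
    for t in range(n_trials):
        value = rng.randint(1, 10)
        pos = (rng.randrange(3), rng.randrange(3))
        history.append((value, pos))
        lag = rng.choice(lags)
        if t >= lag:  # probe only once enough trials have elapsed
            past_value, past_pos = history[t - lag]
            probes.append((t, lag, pos == past_pos, value + past_value))
    return probes

def score(responses, probes):
    """Performance = correct answers / number of probes, as in the GUI's
    final screen. Each response is a (position_match, sum) tuple."""
    correct = sum(r == p[2:] for r, p in zip(responses, probes))
    return correct / len(probes)
```

A user who answers every probe correctly scores 1.0; the GUI reports the equivalent ratio as a percentage at the end of the test.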
In addition, the program makes use of GUIDE, which contains the graphical interface functions such as the MATLAB© command “uicontrol,” which can insert elements such as:
  • text. Provides static text in the figure to inform the user and indicate values.
  • popup menu. Creates a menu that displays a list of options on whether the operands at times (n) and (n − 1), (n − 2), or (n − 3) are in the same position or not. The user selects an option by pressing the mouse button on it.
  • edit. Creates an editable text field where the keyboard cursor becomes active and blinks, waiting for information to be entered. This is where the user types the result of the mental operation performed (addition, subtraction, multiplication, or division).
  • axes. Creates a Cartesian axis in the figure for images to be inserted. It is used to insert the icons shown to users to tell them what to do, for example, Memorize, Add, Subtract, Multiply, and Divide. It also indicates whether an answer should be given and whether it is correct or incorrect.
  • uitable. Creates a table component in the figure for the user interface. It is used to display the 3 × 3 matrix where the operand is positioned.
  • sound. This command plays an audio file (.wav) through the computer speaker. Audio clips play simultaneously as icons are displayed, instructing users on the activity they must perform, for example, Memorize, Add, Subtract, Multiply, or Divide.

2.4. Procedure for Interface Cognitive Evaluation

The Graphical User Interface design cognitive evaluation and validation was descriptive, analytical, quantitative, featuring a transversal design, and was undertaken in the following five phases:

2.4.1. Phase 1: Task Analysis and Hierarchical Task Analysis (HTA)

In this phase, two subjects ran the interface completely and were analyzed. The HTA was developed from this analysis following the methodology proposed by Stanton [52], which includes deciding on the aim of the analysis, establishing task objectives and performance criteria, identifying sources of task information, and obtaining data and a draft decomposition diagram.

2.4.2. Phase 2: Task Analysis for Error Identification (TAFEI)

As part of the cognitive evaluation, error identification was relevant to the interface design. Therefore, the GUI was evaluated to propose pertinent changes and improvements. Accordingly, the TAFEI method was helpful, since it provides error prediction and models the interaction between the user and the interface under analysis [50].
Two steps were followed in this phase, according to Prabaswary [53]:
  • Step 1: Create state–space diagrams (SSD), including the list of states that may occur in the interface. These states represent the behavior of the interface as a whole (artifact) and consist of a series of states the interface passes through, from a starting state to the goal state. For each series of states, there is a current state and a set of possible exits to other states.
  • Step 2: Create the transition matrix, including the detection of impossible transitions denoted by the (-) symbol, possible and desirable denoted by (L), and possible but undesirable/Illegal denoted by (I). These transitions will be further analyzed. TAFEI intends to assist the interface (artifact) design by illustrating when a state transition is possible but undesirable (Illegal, I). Consequently, designers will try to make all the illegal transitions impossible and facilitate the cooperative attempt of interface use. All possible states are entered as headers on the matrix, where the cells represent the state transitions.
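The two steps above can be sketched as a small data structure. In this Python illustration, the states and their classifications are hypothetical examples in the spirit of the study's states, not a copy of its actual transition matrix (Table 3):

```python
# Sketch of a TAFEI transition matrix: '-' = impossible, 'L' = possible
# and desirable (legal), 'I' = possible but undesirable (illegal).
# States and classifications here are illustrative only.
states = ["computer off", "computer on", "interface launched", "test running"]

# matrix[i][j] classifies the transition from state i to state j.
matrix = [
    ["-", "L", "-", "-"],  # off -> on is the only legal exit
    ["-", "-", "L", "I"],  # on -> test without launching interface: illegal
    ["-", "-", "-", "L"],  # launched -> running is legal
    ["-", "-", "-", "-"],  # goal state: no exits
]

def illegal_transitions(states, matrix):
    """List the (from, to) pairs designers should make impossible."""
    return [(states[i], states[j])
            for i, row in enumerate(matrix)
            for j, cell in enumerate(row) if cell == "I"]

print(illegal_transitions(states, matrix))
```

Enumerating the 'I' cells this way mirrors the TAFEI goal: every illegal transition found becomes a design change that makes it impossible.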

2.4.3. Phase 3: Experiment for Interface Validation of Mental Workload Using Dual N-Back Tasks

In this phase, the participants took part in six sessions (two for each mental workload level). Prior to the first session, an informed consent form received via e-mail had to be signed and returned. At the beginning of the first session, the subjects received initial information through a PowerPoint® presentation. After that, they were asked to connect to the main computer via TeamViewer® to begin customized training on the interface use and the tasks to be performed during the test. Then, they performed the assigned tasks, designated using a randomized factorial design. The subjects were identified with an "S" and a randomly assigned number from 1 to 10; T1 was used for tasks with a low mental workload, T2 for tasks with a medium one, and T3 for tasks featuring a high mental workload. The final schedule is shown in Table 1.

2.4.4. Phase 4: Mental Workload Subjective Evaluation Using NASA-TLX

As mentioned before, the MWL evaluation was conducted using the NASA-TLX iOS app, version 1.0.3®, downloaded on an iPad® Air (3rd generation). Due to the COVID-19 pandemic, the test was conducted remotely, and the subjects contacted the researchers via Google Meet®. Once the assigned task was finished, the subjects responded to the NASA-TLX.
Since the previous application was only available in English, the subjects received a translation of all the app’s terms and definitions in advance. Then, during the test, the researchers read out each screen shown to the subjects in Spanish. In addition, to improve the comprehension of the method, NASA-TLX dimensions were explained carefully before the subjects began with the pairwise comparisons and final dimension scores assignation.

2.4.5. Phase 5: Statistical Analysis

Statistical data analysis was conducted to confirm that the three levels of mental workload induced by the N-Back tasks were appropriately differentiated, using a General Linear Model ANOVA with a 95% confidence level and two factors: mental workload level and subject. The mental workload level factor had three levels (Low, Medium, and High), and the subject factor had ten levels (the participants).
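The two-factor model described above can be illustrated with a minimal numpy sketch of a two-way ANOVA without interaction (rows = subjects, columns = MWL levels); the data below are synthetic, generated only to exercise the computation, not the study's results:

```python
import numpy as np

def two_way_anova(scores):
    """Two-factor ANOVA without interaction on a subjects x levels array.

    Returns (F_levels, F_subjects): F statistics for the column factor
    (mental workload level) and the row factor (subject).
    """
    a, b = scores.shape                       # a subjects, b levels
    grand = scores.mean()
    ss_rows = b * ((scores.mean(axis=1) - grand) ** 2).sum()
    ss_cols = a * ((scores.mean(axis=0) - grand) ** 2).sum()
    ss_total = ((scores - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols     # residual sum of squares
    df_rows, df_cols = a - 1, b - 1
    df_err = df_rows * df_cols
    f_cols = (ss_cols / df_cols) / (ss_err / df_err)
    f_rows = (ss_rows / df_rows) / (ss_err / df_err)
    return f_cols, f_rows

# Synthetic NASA-TLX-like scores: 10 subjects x 3 levels, with clearly
# separated level means (20/40/60) plus small noise.
rng = np.random.default_rng(0)
scores = np.array([20.0, 40.0, 60.0]) + rng.normal(0.0, 2.0, size=(10, 3))
f_level, f_subject = two_way_anova(scores)
print(f_level, f_subject)
```

With well-separated level means, the level factor's F statistic is large (a small p-value), which is the pattern the study reports in Table 5.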

3. Results

This section presents the results related to the Graphical User Interface design, the HTA and TAFEI methodologies, the subjective mental workload examination completed with NASA-TLX, and the corresponding statistical analysis.

3.1. Results of Graphical User Interface (GUI) Design

The GUI design was performed using the MATLAB© interface GUIDE. Figure 2 shows the GUI’s initial skeleton.
The interface's main window allowed users to access the tool's functionalities. The interaction design sought to minimize the subjects' errors and operational problems.
Visual cues and icons in the main window were divided by their functionalities into three areas:
  • 3 × 3 Matrix
  • Response box
  • Operation
Figure 3 shows an example of the GUI task in which the subject is asked to subtract the operands displayed at times (n) and (n − 1).
At the end of the test, the designed interface shows two last screens. In the first one, shown in Figure 4a, users could see their performance expressed as a percentage obtained through the correct-answer ratio. In the second one, shown in Figure 4b, a graph showed performance over time (seconds): correct answers were displayed as numbered green bars in the positive part of the graph, while incorrect responses were displayed as red bars in the negative part. Although knowing their performance may have influenced the NASA-TLX answers, this limitation was not studied sufficiently to report on it accurately.

3.2. Hierarchical Task Analysis (HTA) Results

The first method applied was the HTA. The analysis broke down the main activities into smaller tasks: five subtasks and twelve sub-sub tasks resulting from such division. Using this method, sub-task five (5.0) was assigned to the execution of the test and was, thus, recognized as the most complex task since this is where the low, medium, and high mental workloads were induced via the GUI used by the participants. Likewise, this task was also considered as the one where more errors were expected to occur, especially in the high mental workload interface tasks (5.3), which demanded a higher complexity in the tasks of memorizing and calculating. Table 2 shows an excerpt of the hierarchical task breakdown; the complete HTA is reported in chart form in Appendix A, Table A1.

3.3. Task Analysis for Error Identification (TAFEI)

Once the HTA was carried out and the participants' interaction and activities in the system were clarified, human errors were identified using TAFEI. Meanwhile, the determination of the SSDs was also possible. Figure 5 shows the corresponding nine SSDs in the man–machine system during the task. These diagrams revealed potential human and technical system errors that were addressed so that they could be prevented during the experimental sessions, as all of them could jeopardize the test and be time-consuming for both participants and researchers.
Aside from the SSDs, the transition matrix for TAFEI, shown in Table 3, reveals eight illegal transitions (I). These include the transition from state 2 (computer on) to state 4 (computer mouse not working), an error that occurs when the participant moves the mouse before checking whether the software executable is available. A similar case is the illegal transition from 2 (computer on) to 6 (keyboard not working). The illegal transitions from 2 to 7 (test interface) and 8 (interface not working) can also become potential human and system errors; therefore, the researchers designed the training interface for the first session. As can be seen, all the illegal transitions can happen before the training or the actual test and correspond to HTA subtasks 4.0 and 5.0. Once these illegal transitions were identified, the researchers eliminated them and discussed them with the subjects before the tests began. Hence, such errors were absent when the actual sessions took place.

3.4. Results for Interface Validation of Mental Workload Using NASA-TLX and Statistical Analysis

Phases 3 and 4 were necessary to validate the interface using Dual N-Back tasks to analyze mental workload with NASA-TLX. The adjusted NASA-TLX results were obtained by dimension (pairwise comparison/weight) in conjunction with the overall global NASA-TLX scores. The average performance is shown as the percentage of correct responses for all participants (Table 4). The overall NASA-TLX scores fell within the corresponding ranges of Low (0–29), Medium (30–49), and High (50–100), values stated according to the data collected [54] and very similar to the values expressed in other studies [53].
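The Low/Medium/High ranges above amount to a simple classification rule over the global NASA-TLX score, which can be sketched as:

```python
# Classify an overall NASA-TLX score into the Low (0-29), Medium (30-49),
# and High (50-100) ranges used in this study's results.
def classify_mwl(score):
    if not 0 <= score <= 100:
        raise ValueError("NASA-TLX global score must be in [0, 100]")
    if score < 30:
        return "Low"
    if score < 50:
        return "Medium"
    return "High"

print(classify_mwl(66))  # -> High
```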
These results show the highest average values of the NASA-TLX dimensions at the highest mental workload level, except for physical demand. In this respect, executing Dual N-Back tasks through the GUI demands physical activity related only to computer interaction, such as mouse movement or keyboard use when typing a response. Accordingly, subjects in the sample perceived the lowest values in the physical demand dimension at all levels of MWL induced by these tasks.
As a result of the ANOVA, a significant difference was found between the mental workload levels (p = 0.001, α = 0.05) and between subjects' NASA-TLX global scores (p = 0.000, α = 0.05); the results are shown in Table 5. From these results, it can be inferred that the GUI using the Dual N-Back tasks effectively induced MWL at three levels.
Figure 6 and Figure 7 show the overall NASA-TLX results by session. The code used included the letter “S” for the Subject, the number assigned to them randomly (1 to 10), and the number of the type of mental workload test (1: Low, 2: Medium, and 3: High). Thus, “S1–3” was used to designate subject number one, performing the high mental workload task (3). Additionally, the NASA TLX scores for the “LOW” mental workload level bars are displayed in green, the scores for the “MEDIUM” mental workload level in blue color bars, and the scores for the “HIGH” mental workload level in aqua color bars. As can be observed, participants’ highest scores for NASA TLX correspond to the induced “HIGH” mental workload level. Similar results can be seen for the rest of the mental workload levels. Therefore, it can be inferred that the interface design can induce low, medium, and high MWL levels effectively.

4. Discussion

This work aimed to design and evaluate a GUI developed to create mental workload (MWL) based on Dual N-Back tasks [17]. This objective was accomplished; moreover, there is a lack of studies that evaluate interfaces using Dual N-Back tasks from a cognitive ergonomics perspective. Therefore, this paper's novelty lies in offering the cognitive evaluation of such an interface, thus increasing the knowledge of the GUI, the methodology, and the participants' performance. Additionally, the human error method and its results helped correctly identify human and system errors, as has been done in other research [27,28,32,36,39]. Another objective was to validate the design of the GUI with a cognitive approach. For this purpose, the Hierarchical Task Analysis (HTA) in its chart form helped break down the tasks into subtasks of interface interaction. This method preceded the NASA-TLX analysis, which was applied to determine the participants' MWL during the interface interaction, obtaining results similar to experiments that used interfaces and NASA-TLX [55]. The Task Analysis Method for Error Identification (TAFEI) was also applied, identifying potential human and system errors related to the state of the hardware and software. In addition, it helped identify human and system errors that could be effectively prevented and avoided.

5. Conclusions

In conclusion, the interface design using Dual N-Back tasks accomplished its purpose of inducing different levels of mental workload (Low, Medium, and High), as shown by the ANOVA results on the mean global NASA-TLX scores of the study sample. Additionally, the subjects' performance was coherent with the level of MWL induced: it was best in the Low mental workload sessions, with an average of 89.22% correct answers, compared with averages of 80% for the Medium level and 76.1% for the High level. Thus, it can be concluded that the graphical user interface (GUI) design and evaluation may be effective for studying mental workload under the stated conditions. Future studies can use this GUI with a larger sample to obtain reliable measures of mental workload by level and to support additional analyses of related performance variables, such as the number of correct answers and the response time in seconds. Finally, physiological monitoring can be incorporated into the methodology to study the effects of mental workload on participants under several conditions of interest.

Author Contributions

Conceptualization, A.A.M.-M. and N.I.A.-D.l.C.; methodology, N.I.A.-D.l.C.; software, D.S.-Z.; validation, N.I.A.-D.l.C. and D.S.-Z.; formal analysis, N.I.A.-D.l.C.; investigation, N.I.A.-D.l.C. and D.S.-Z.; resources, N.I.A.-D.l.C. and D.S.-Z.; data curation, D.S.-Z.; writing—original draft preparation, N.I.A.-D.l.C.; writing—review and editing, A.A.M.-M. and J.D.l.R.-R.; supervision, J.D.l.R.-R.; project administration, N.I.A.-D.l.C.; funding acquisition, N.I.A.-D.l.C. and D.S.-Z. All authors have read and agreed to the published version of the manuscript.

Funding

The Tecnológico Nacional de México supported this work under grant 9949.21-P, and the Mexican National Council for Science and Technology (CONACYT) supported this work under doctoral grant 46307.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of the Tecnológico Nacional de México/I.T. Cd. Juárez (protocol code DEPI-001 approved on 1 March 2021) for studies involving humans.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data used to support the findings of this study are available from the corresponding author upon request.

Acknowledgments

The researchers acknowledge the Tecnológico Nacional de México for all the help provided during the research and preparation of the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

List of sub-tasks, elementary tasks, and specific activities in the Hierarchical Task Analysis (HTA).
Table A1. Complete HTA.
Subtask | Elementary Task | Specific Activity
0. Interface validation
Plan 0: Do 1, 2, 3 in order. Do 4 in the first session and afterward when required; then finish with 5.
1. Sitting in front of the computer
Plan 1: Do 1.1 and 1.2 in any order.
1.1 Arrange the chair in position to the desk.
1.2 Arrange keyboard.
2. Locate the mouse
Plan 2: Do 2.1
2.1 Arrange the mouse.
3. Visualize the computer screen
Plan 3: Do 3.1
3.1 Locate the executable of the program corresponding to training and click.
4. Training
Plan 4: do 4.1, 4.2, 4.3, 4.4, and 4.5 in order.
4.1 Beginning of the training
4.2 Digit position training
Plan 4.2: 4 times ((Do 4.2.1 & 4.2.2) × 2, then 4.2.3, 4.2.4 & 4.2.5 in order)
4.2.1 Listen to the word: “Memorize.”
4.2.2 See 3 × 3 matrix with a number from 1 to 9. Memorize the digit and position.
4.2.3 Look at the sentence "Same Position (n)/(n − 1)?" on the computer screen.
4.2.4 Select “no” if the position of the corresponding digits is different or “Yes” if it is the same.
4.2.5 The subject receives feedback.
4.3 Addition or subtraction training, 1 N-Back (n − 1)
Plan 4.3: 4 times ((Do 4.3.1 × 2, then 4.3.2) then 4.3.3 & 4.3.4 in order)
4.3.1 Memorize.
Plan 4.3.1 Do 4.3.1.1 and 4.3.1.2 in order
4.3.1.1 Listen and memorize.
4.3.1.2 Look at the 3 × 3 matrix with a digit from 0 to 9. Memorize digits and position.
4.3.2 Look at the writing "Operation (n)/(n − 1)" on the computer screen and the operation determination (addition or subtraction)
Plan 4.3.2: Do 4.3.2.1 and 4.3.2.2 in order.
4.3.2.1 Listen and see the mathematical operation to be completed.
4.3.2.2 Do the operation mentally.
4.3.3 Answer.
Plan 4.3.3: Do 4.3.3.1 & 4.3.3.2 in order
4.3.3.1 Position the cursor in the answer box and click.
4.3.3.2 Use the keyboard to write the answer.
4.3.4 Receive Feedback in a visual way
Feedback icon if correct
Feedback icon if wrong
4.4 Addition or subtraction training, 2 N-Back (n − 2)
Plan 4.4: 4 times ((Do 4.4.1 × 3, then 4.4.2) then 4.4.3, 4.4.4 & 4.4.5 in order)
4.4.1 Memorize.
Plan 4.4.1: Do 4.4.1.1 & 4.4.1.2 in order.
4.4.1.1 Listen and memorize.
4.4.1.2 Look at the 3 × 3 matrix with a digit from 0 to 9. Memorize digits and position.
4.4.2 Look at the writing "Operation (n)/(n − 2)" on the computer screen
4.4.3 See operation determination (addition or subtraction)
Plan 4.4.3: Do 4.4.3.1 and 4.4.3.2 in order
4.4.3.1 Listen and see the operation to be completed
4.4.3.2 Do the operation mentally
4.4.4 Answer
Plan 4.4.4: Do 4.4.4.1 & 4.4.4.2 in order
4.4.4.1 Position the cursor in the answer box and click
4.4.4.2 Using the keyboard to write the answer.
4.4.5 Receive Feedback in a visual way
Feedback icon if correct
Feedback icon if wrong
4.5 Multiplication or division training, 3 N-Back (n − 3)
4.5.1 Memorize
Plan 4.5.1 Do 4.5.1.1 and 4.5.1.2 in order.
4.5.1.1 Listen and memorize.
4.5.1.2 Look at the 3 × 3 matrix with a digit from 0 to 9. Memorize digits and position.
4.5.2 See the writing "Operation (n)/(n − 3)" on the computer screen
4.5.3 See operation determination (multiplication or division).
Plan 4.5.3: Do 4.5.3.1 and 4.5.3.2 in order.
4.5.3.1 Listen and observe the operation to be completed.
4.5.3.2 Do operation mentally.
4.5.4 Answer
Plan 4.5.4: Do 4.5.4.1 & 4.5.4.2 in order.
4.5.4.1 Position the cursor in the answer box and click.
4.5.4.2 Using the keyboard, write the answer.
4.5.5 Receive Feedback in a visual way
Feedback icon if correct
Feedback icon if wrong
5 Test
Plan 5: Do 5.1, 5.2, or 5.3 as corresponding
5.1 Low Mental Workload Interface
Plan 5.1: 13 times (Do 5.1.1, 5.1.2, 5.1.3, 5.1.4, 5.1.5 in order)
5.1.1 Memorize.
Plan 5.1.1: 2 times (Do 5.1.1.1 & 5.1.1.2 in order)
5.1.1.1 Listen and memorize.
5.1.1.2 Look at the 3 × 3 matrix with a digit from 0 to 9. Memorize digits and position.
5.1.2 See the writing on the computer screen.
Plan 5.1.2: Look at 5.1.2.1 or 5.1.2.2
5.1.2.1 Look at the writing on the computer screen "Same Position (n)/(n − 1)?"
5.1.2.2 Observe the writing on the computer screen "Operation (n)/(n − 1)?"
5.1.3 See the mathematical equation determination (position, addition, or subtraction).
Plan 5.1.3: Do 5.1.3.1 and 5.1.3.2 in order.
5.1.3.1 Listen and observe the mathematical operation to be completed
5.1.3.2 Do the operation mentally.
5.1.4 Answer.
Plan 5.1.4: Do 5.1.4.1 & 5.1.4.2 in order.
5.1.4.1 Position the cursor in the answer box and click.
5.1.4.2 Select the answer or use the keyboard to write the answer as it corresponds.
5.1.5 Receive Feedback in a visual way
Feedback icon if correct
Feedback icon if wrong
5.2 Medium Mental Workload Interface
5.2.1 Memorize.
Plan 5.2.1: 2 times if (n − 1) or 3 times if (n − 2), as it corresponds (Do 5.2.1.1 & 5.2.1.2 in order)
5.2.1.1 Listen and memorize.
5.2.1.2 Look at the 3 × 3 matrix with a digit from 0 to 9. Memorize digits and position.
5.2.2 See the writing on the computer screen
Plan 5.2.2: Look 5.2.2.1 or 5.2.2.2
5.2.2.1 Look at the writing on the screen "Operation (n)/(n − 1)?"
5.2.2.2 Look at the writing on the screen "Operation (n)/(n − 2)?"
5.2.3 See mathematical operation determination (Addition or subtraction).
Plan 5.2.3: Do 5.2.3.1 and 5.2.3.2 in order
5.2.3.1 Listen and observe the mathematical operation to be completed.
5.2.3.2 Do operation mentally.
5.2.4 Answer.
Plan 5.2.4: Do 5.2.4.1 & 5.2.4.2 in order
5.2.4.1 Position the cursor in the answer box and click.
5.2.4.2 Use the keyboard to write the answer.
5.2.5 Receive Feedback in a visual way
Feedback icon if correct
Feedback icon if wrong
5.3 High Mental Workload Interface
5.3.1 Memorize.
Plan 5.3.1: 3 times if (n − 2) or 4 times if (n − 3), as it corresponds (Do 5.3.1.1 & 5.3.1.2 in order)
5.3.1.1 Listen and memorize.
5.3.1.2 Look at the 3 × 3 matrix with a digit from 0 to 9. Memorize digits and position.
5.3.2 See writing.
Plan 5.3.2: Look at 5.3.2.1 or 5.3.2.2
5.3.2.1 Look at the writing on the screen "Operation (n)/(n − 2)?"
5.3.2.2 Look at the writing on the screen "Operation (n)/(n − 3)?"
5.3.3 See mathematical operation determination (Multiplication or division).
Plan 5.3.3: Do 5.3.3.1 and 5.3.3.2 in order.
5.3.3.1 Listen and see the mathematical operation to be completed on the computer screen.
5.3.3.2 Do operation mentally.
5.3.4 Answer.
Plan 5.3.4: Do 5.3.4.1 & 5.3.4.2 in order.
5.3.4.1 Position the cursor in the answer box and click.
5.3.4.2 Using the keyboard, write the answer.
5.3.5 Receive Feedback in a visual way
Feedback icon if correct
Feedback icon if wrong
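The sub-tasks above all follow the same Dual N-Back mechanics: memorize a stream of digit-plus-position stimuli presented in a 3 × 3 matrix, then answer either a position-match question or an arithmetic question that combines the current digit with the digit n steps back. As a hedged illustration only, one block of such trials could be sketched in Python as follows; the function name `dual_nback_block`, the 50/50 question mix, and the sum-only arithmetic are assumptions for this sketch (the actual interface was built in MATLAB GUIDE and also uses subtraction, multiplication, and division).

```python
import random

def dual_nback_block(n, trials, seed=0):
    """Sketch of one Dual N-Back block: each stimulus is a digit (0-9)
    placed in one cell (0-8) of a 3x3 matrix; from trial n onward the
    program asks either "same position as n back?" or for an arithmetic
    result combining the current digit and the digit n back."""
    rng = random.Random(seed)
    history = []    # (position, digit) stimuli in presentation order
    questions = []  # (question type, correct answer), one per trial >= n
    for t in range(trials):
        pos, digit = rng.randrange(9), rng.randrange(10)
        history.append((pos, digit))
        if t >= n:
            p_old, d_old = history[t - n]
            if rng.random() < 0.5:  # assumed 50/50 mix of question types
                questions.append(("same_position", pos == p_old))
            else:
                questions.append(("sum", digit + d_old))
    return history, questions
```

Raising n lengthens the memorized window, which is how the Low, Medium, and High interfaces scale the induced workload from (n − 1) up to (n − 3).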

References

1. Bläsing, D.; Bornewasser, M. Influence of Increasing Task Complexity and Use of Informational Assistance Systems on Mental Workload. Brain Sci. 2021, 11, 102.
2. Van Acker, B.; Parmentier, D.; Conradie, P.; Van Hove, S.; Biondi, A.; Bombeke, K.; Vlerick, P.; Saldien, J. Development and validation of a behavioral video coding scheme for detecting mental workload in manual assembly. Ergonomics 2021, 64, 78–102.
3. Argyle, E.; Marinescu, A.; Wilson, M.; Lawson, G.; Sharples, S. Physiological indicators of task demand, fatigue, and cognition in future digital manufacturing environments. IJHCS 2021, 145, 102522.
4. Rössler, W. Stress, burnout, & job dissatisfaction in mental health workers. Eur. Arch. Psychiatry Clin. Neurosci. 2012, 262, 65–69.
5. Dehais, F.; Lafont, A.; Roy, R.; Fairclough, S. A Neuroergonomics Approach to Mental Workload, Engagement, and Human Performance. Front. Neurosci. 2020, 14, 268.
6. Young, M.S.; Brookhuis, K.A.; Wickens, C.D.; Hancock, P.A. State of science: Mental workload in ergonomics. Ergonomics 2015, 58, 1–17.
7. Galy, E.; Paxion, J.; Berthelon, C. Measuring mental workload with the NASA-TLX needs to examine each dimension rather than relying on the global score: An example with driving. Ergonomics 2018, 61, 517–527.
8. Albuquerque, I.; Tiwari, A.; Gagnon, J.; Lafond, D.; Parent, M.; Tremblay, S.; Falk, T. On the Analysis of EEG Features for Mental Workload Assessment During Physical Activity. In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics, Miyazaki, Japan, 7–10 October 2018; pp. 538–542.
9. Van Acker, B.; Parmentier, D.; Vlerick, P.; Saldien, J. Understanding mental workload: From a clarifying concept analysis toward an implementable framework. Cogn. Technol. Work 2018, 20, 351–365.
10. Rakhra, A.; Green, M.; Mann, D. The Influence of a User-Centered Design Focus on the Effectiveness of a User Interface for an Agricultural Machine. Agric. Sci. 2020, 11, 947–965.
11. Dai, X.; Ming, J.; Mao, T.; Yang, M.; Yang, J.; Zhang, Y.; Lu, H. A Hierarchical Task Analysis Approach for Static Human Factors Engineering Verification and Validation of Human-System Interface. In Nuclear Power Plants: Innovative Technologies for Instrumentation and Control Systems; SICPNP, Lecture Notes in Electrical Engineering; Springer: Singapore, 2019; Volume 595.
12. Hong, J.; Tai, K.; Hwang, M.; Kuo, Y.; Chen, J. Internet cognitive failure relevant to users' satisfaction with content and interface design to reflect continuance intention to use a government e-learning system. Comput. Hum. Behav. 2017, 66, 353–362.
13. Hix, D.; Hartson, R. Developing User Interfaces: Ensuring Usability Through Product & Process; John Wiley & Sons: New York, NY, USA, 1993; p. 210.
14. Kirchner, W.K. Age differences in short-term retention of rapidly changing information. J. Exp. Psychol. 1958, 55, 352–358.
15. Jaeggi, S.; Seewer, R.; Nirkko, A.; Eckstein, D.; Shroth, G.; Groner, R.; Gutbrod, K. Does excessive memory load attenuate activation in the prefrontal cortex? Load-dependent processing in single and dual tasks: A functional magnetic resonance imaging study. NeuroImage 2003, 19, 210–225.
16. Mun, S.; Whang, M.; Park, S.; Park, M. Effects of mental workload on involuntary attention: A somatosensory ERP study. Neuropsychologia 2017, 106, 7–20.
17. Cinaz, B.; Arnrich, B.; La Marca, R.; Tröster, G. Monitoring of mental workload levels during an everyday life office-work scenario. Pers. Ubiquit. Comput. 2013, 17, 229–239.
18. Hogervorst, M.; Brouwer, A.; Van Erp, J. Combining and comparing EEG, peripheral physiology, and eye-related measures for the assessment of mental workload. Front. Neurosci. 2014, 8, 322.
19. Aghajani, H.; Garbey, M.; Omurtag, A. Measuring Mental Workload with EEG + fNIRS. Front. Hum. Neurosci. 2017, 11, 359.
20. McKendrick, R.; Mehta, R.; Ayaz, H.; Scheldrup, M.; Parasuraman, R. Prefrontal Hemodynamics of Physical Activity and Environmental Complexity During Cognitive Work. Hum. Factors 2017, 59, 147–162.
21. Helton, W.; Russell, P. Rest is still best: The role of the qualitative and quantitative load of interruptions on vigilance. Hum. Factors 2017, 59, 91–100.
22. Flach, J.; Stappers, P.; Voothorst, F. Beyond Affordances: Closing the Generalization Gap Between Design and Cognitive Science. Des. Issues 2017, 33, 76–89.
23. Read, G.; Salmon, P.; Lenné, M.; Stanton, N. Designing sociotechnical systems with cognitive work analysis: Putting theory back into practice. Ergonomics 2015, 58, 822–851.
24. Naikar, N.; Elix, B. Integrated System Design: Promoting the Capacity of Sociotechnical Systems for Adaptation through Extensions of Cognitive Work Analysis. Front. Psychol. 2016, 7, 962.
25. Annett, J.; Duncan, K.D. Task Analysis and Training Design. In Proceedings of the Aston Working Conference on Learning Resources in Industrial and Commercial Education, Birmingham, UK, 5 July 1967.
26. Annett, J. Hierarchical task analysis. In Handbook of Cognitive Task Design; Hollnagel, E., Ed.; CRC Press: Erlbaum Mahwah, NJ, USA, 2003; pp. 17–35.
27. Karami, E.; Goodarzi, Z.; Rashidi, R.; Karimi, A. Assessing human errors in sensitive jobs using two methods, TAFEI and SHERPA: A case study in a high-pressure power post. J. Health Field 2020, 8, 58–69.
28. Mohammadian, M.; Choobineh, A.; Mostafavi, N.; Hashemi, N. Human errors identification in the operation of meat grinder using TAFEI technique. J. Occup. Health 2012, 1, 171–181.
29. Stanton, N.A. Hierarchical task analysis: Developments, applications, and extensions. Appl. Ergon. 2006, 37, 55–79.
30. Felipe, S.; Adams, A.; Rogers, W.; Fisk, A. Training Novices on Hierarchical Task Analysis. In Proceedings of the Human Factors and Ergonomics Society 54th Annual Meeting, San Francisco, CA, USA, 27 September–1 October 2010; pp. 2005–2009.
31. Fargnoli, M.; Lombardi, M.; Puri, D. Applying Hierarchical Task Analysis to Depict Human Safety Errors during Pesticide Use in Vineyard Cultivation. Agriculture 2019, 9, 158.
32. Barajas-Bustillos, M.A.; Maldonado, A.A.; Ortiz, M.; Armenta, O.; García-Alcaraz, J.L. Analysis of Human Error in the Task of Changing a PC Power Supply. In Ergonomía Ocupacional. Investigaciones y Soluciones; Sociedad de Ergonomistas de México, A.C.: Ciudad Juárez, Mexico, 2018; Volume 11, pp. 243–252.
33. Bustamante, N.G.; Maldonado, A.A.; Durán, A.; Ortiz, N.J.; Quiñones, A. Usability Test and Cognitive Analyses During the Task of Using Wireless Earphones. In Handbook of Research on Ergonomics and Product Design; Hernández Arellano, J.L., Maldonado Macías, A.A., Castillo Martínez, J.A., Peinado Coronado, P., Eds.; IGI Global: Hershey, PA, USA, 2018; pp. 241–263.
34. Zeilstra, M.; van Wincoop, A.; Rypkerma, J. The WASCAL-Tool: Prediction of Staffing for Train Dispatching as Part of the Design Process of Track Yards. In Human Mental Workload: Models and Applications; Longo, L., Leva, M.C., Eds.; Springer International Publishing: Berlin/Heidelberg, Germany, 2017; pp. 143–160.
35. Salmon, P.; Jenkins, D.; Stanton, N.A.; Walker, G. Hierarchical task analysis vs. cognitive work analysis: Comparison of theory, methodology, and contribution to system design. Theor. Issues Ergon. Sci. 2010, 11, 504–531.
36. Baber, C.; Stanton, N.A. Task analysis for error identification: A methodology for designing error-tolerant consumer products. Ergonomics 1994, 37, 1923–1941.
37. Salmon, P.; Stanton, N.A.; Walker, G. Human Factors Design Methods Review; Human Factors Integration Defense Technology Centre: London, UK, 2003.
38. Baber, C.; Stanton, N.A. Task analysis for error identification: Theory, method, and validation. Theor. Issues Ergon. Sci. 2002, 3, 212–227.
39. Barajas-Bustillos, M.A.; Maldonado, A.A.; Ortiz, M.; Hernández-Arellano, J.L.; García-Alcaraz, J.L. SHERPA and TAFEI, Comparison of Two Human Error Identification Techniques: A Case Study. In Ergonomía Ocupacional. Investigaciones y Soluciones; Sociedad de Ergonomistas de México, A.C.: Ciudad Juárez, Mexico, 2019; Volume 12, pp. 51–61.
40. Kearney, P.; Li, W.; Zhang, J.; Braithwaite, G.; Lei, W. Human performance assessment of a single air traffic controller conducting multiple remote tower operations. Hum. Factors Manuf. 2020, 30, 114–123.
41. Yan, S.; Tran, C.C.; Wei, Y.; Habiyaremye, J.L. Driver's mental workload prediction model based on physiological indices. JOSE 2017, 25, 476–484.
42. Guckenberger, M.; Sudol, A.; Mavris, D. Virtual Workload Measurement for Assessing Systems Utilizing Automation Technology. In Proceedings of the AIAA Scitech 2021 Forum, Virtual Event, 11–22 January 2021.
43. Bommer, S.; Fendley, M. A theoretical framework for evaluating mental workload resources in human systems design for manufacturing operations. IJIE 2018, 63, 7–17.
44. Imtiaz, M.; Munsi, A.; Nayan, D.; Dei, S. Systematic Human Error Reductions and Prediction Approach while Drilling. IJSER 2014, 5, 808–813.
45. Jiménez, R.; Cárdenas, D.; González-Anera, R.; Jiménez, J.R.; Vera, J. Measuring mental workload: Ocular astigmatism aberration as a novel objective index. Ergonomics 2018, 61, 506–516.
46. Fairclough, S.; Ewing, K. The effect of task demand and incentive on neurophysiological and cardiovascular markers of effort. Int. J. Psychophysiol. 2017, 119, 58–66.
47. Heine, T.; Lenis, G.; Reichensperger, P.; Beran, T.; Doessel, O.; Deml, B. Electrocardiographic features for the measurement of driver mental workload. Appl. Ergon. 2017, 61, 31–43.
48. Vera, J.; Jiménez, R.; García, J.A.; Cárdenas, D. Intraocular pressure is sensitive to cumulative and instantaneous mental workload. Appl. Ergon. 2017, 60, 313–319.
49. Kato, K.; Kim, R.; Mcvey, R.; Stukenborg, B.; Sharpe, M.; Guilbert, M.; Gore, B. NASA TLX EULA iOS (Version 1.0.3) [Mobile app]; NASA Ames Research Center, 2016. Apple App Store. Available online: https://itunes.apple.com/us/app/nasa-tlx/id1168110608 (accessed on 19 January 2021).
50. Chand, K.; Khosla, A. BioNES: A plug-and-play MATLAB-based tool to use NES games for multimodal biofeedback. SoftwareX 2022, 19, 101184.
51. Ninni, D.; Mendez, M.A. MODULO: A software for Multiscale Proper Orthogonal Decomposition of data. SoftwareX 2020, 12, 100622.
52. Stanton, N.A.; Salmon, P.M.; Rafferty, L.A.; Walker, G.H.; Baber, C.; Jenkins, D.P. Human Factors Methods: A Practical Guide for Engineering and Design; Ashgate Publishing: Farnham, UK, 2013; p. 656.
53. Prabaswary, A.; Basumerda, C.; Utomo, B. The Mental Workload Analysis of Staff in Study Program of Private Educational Organization. IOP Conf. Ser. Mater. Sci. Eng. 2019, 528, 012018.
54. NASA Ames Research Center. NASA Task Load Index (TLX) Manual; Computerized Version (Version 1.0); Human Performance Research Group: Mountain View, CA, USA, 1986. Available online: https://ntrs.nasa.gov/api/citations/20000021487/downloads/20000021487.pdf (accessed on 19 January 2021).
55. Knierim, M.T.; Berger, C.; Reali, P. Open-source concealed EEG data collection for brain-computer interfaces: Neural observation through OpenBCI amplifiers with around-the-ear cEEGrid electrodes. Brain-Comput. Interfaces 2021, 8, 161–179.
Figure 1. Mental Workload Program block diagram.
Figure 2. Mental Workload Graphical User Interface developed in MATLAB® GUIDE. Source: Authors.
Figure 3. Mental Workload Graphical User Interface task window in Spanish.
Figure 4. (a) Mental Workload GUI screen showing performance in percentage in Spanish; (b) Mental Workload GUI screen showing performance over the session time (rights and wrongs).
Figure 5. TAFEI method for human error analysis.
Figure 6. NASA-TLX overall results Subjects 1 to 5.
Figure 7. NASA-TLX overall results subjects 6 to 10.
Table 1. Subjects’ session schedule.
Date | 03/03/21 | 03/04/21 | 03/05/21 | 03/08/21 | 03/09/21 | 03/10/21 | 03/11/21 | 03/12/21 | 03/15/21 | 03/16/21
17:00 | S6-T1 | S3-T1 | S4-T3 | S2-T1 | S10-T1 | S3-T2 | S8-T2 | S3-T3 | S2-T3 | S5-T3
18:00 | S10-T2 | S7-T3 | S1-T1 | S7-T2 | S7-T3 | S3-T1 | S2-T1 | S8-T1 | S1-T2 | S2-T2
19:00 | S7-T1 | S5-T2 | S7-T1 | S4-T2 | S7-T2 | S9-T1 | S5-T1 | S6-T1 | S6-T3 | S4-T2
Date | 03/17/21 | 03/18/21 | 03/19/21 | 03/22/21 | 03/23/21 | 03/24/21 | 03/25/21 | 03/26/21 | 04/12/21 | 04/13/21
17:00 | S8-T3 | S6-T2 | S1-T3 | S8-T2 | S2-T3 | S9-T2 | S2-T2 | S8-T3 | S4-T3 | S1-T3
18:00 | S3-T2 | S4-T1 | S4-T1 | S9-T3 | S5-T2 | S10-T3 | S9-T3 | S6-T3 | S10-T3 | S5-T3
19:00 | S6-T2 | S10-T2 | S9-T2 | S1-T2 | S1-T1 | S10-T1 | S5-T1 | S3-T3 | S8-T1 | S9-T1
Table 2. Excerpt of the hierarchical breakdown into the list of sub-tasks, elementary tasks, and specific activities.
Subtask | Elementary Task | Specific Activity
4. Training
Plan 4: Do 4.1, 4.2, 4.3, 4.4, and 4.5 in order.
4.1 Beginning of the training | 4.1.1 Click the start button
4.2 Digit position training
Plan 4.2: 4 times ((Do 4.2.1 & 4.2.2) × 3, then 4.2.3, 4.2.4 & 4.2.5 in order)
4.2.1 Listen to the word "Memorize."
4.2.2 See the 3 × 3 matrix with a number from 1 to 9. Memorize the digit and position.
4.2.3 Observe the writing "Same Position (n)/(n − 1)?"
4.2.4 Select "No" or "Yes" as appropriate.
4.2.5 The subject receives feedback.
Table 3. Transition matrix for TAFEI.
From state (to states 1–9; L = legal transition, I = illegal transition)
1 | L
2 | L, I, I, I, I
3 | L, I
4 | L, I, I
5 | L, I
6 | L
7 | L
8 | L
9 | (none)
Table 4. NASA-TLX Results.
Dimension | Low MWL | Medium MWL | High MWL
Mental demand | 161.5 | 194.75 | 249.25
Physical demand | 19.25 | 32.25 | 24
Temporal demand | 214.5 | 176.25 | 235.75
Performance | 57.25 | 105.25 | 172.5
Effort | 99.75 | 113 | 143
Frustration level | 112.25 | 153 | 169.25
Overall mental workload score (NASA-TLX) | 44.29 | 51.63 | 61.78
Performance average (correct responses) | 89.22% | 80% | 76.1%
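The overall score row follows the standard weighted NASA-TLX procedure: each of the six subscale ratings is multiplied by the weight the participant assigned it across the 15 pairwise comparisons, and the weighted sum is divided by 15. A minimal sketch of that computation follows; the ratings and weights shown are hypothetical, not taken from this study's data.

```python
DIMENSIONS = ("mental", "physical", "temporal",
              "performance", "effort", "frustration")

def nasa_tlx_global(ratings, weights):
    """Weighted NASA-TLX global score: sum(rating * weight) / 15,
    where the six weights come from 15 pairwise comparisons."""
    assert set(ratings) == set(DIMENSIONS) == set(weights)
    assert sum(weights.values()) == 15  # 15 pairwise comparisons in total
    return sum(ratings[d] * weights[d] for d in DIMENSIONS) / 15.0

# Hypothetical single-session answers (0-100 ratings, 0-5 weights).
ratings = {"mental": 70, "physical": 10, "temporal": 60,
           "performance": 40, "effort": 55, "frustration": 45}
weights = {"mental": 5, "physical": 0, "temporal": 4,
           "performance": 2, "effort": 3, "frustration": 1}
print(round(nasa_tlx_global(ratings, weights), 2))  # 58.67
```

The raw (unweighted) TLX variant simply averages the six ratings; the weighted form reported by the computerized NASA-TLX is the one sketched here.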
Table 5. Mental Workload Statistical Analysis results.
Source | DF | Adj. SS | Adj. MS | F-Value | p-Value
MWL Level | 2 | 3083 | 1541.7 | 7.76 | 0.001
Subject | 9 | 10,306 | 1145.1 | 5.76 | 0.000
Error | 48 | 9535 | 198.6 | |
Lack of fit | 18 | 3285 | 182.5 | 0.88 | 0.608
Pure error | 30 | 6250 | 208.3 | |
Total | 59 | 22,925 | | |
DF: degrees of freedom; Adj. SS: adjusted sum of squares; Adj. MS: adjusted mean square.
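Table 5 reports a two-factor ANOVA with MWL level as the treatment and subject as a blocking factor (the 48 error degrees of freedom reflect replicated sessions). As a hedged illustration of where the F value for the MWL level effect comes from, the simpler unreplicated randomized-block computation can be sketched as follows; the scores and the function `block_anova` are invented for illustration and are not the statistical software the authors used.

```python
from itertools import product

def block_anova(scores):
    """Randomized-block ANOVA (no interaction): treatment = MWL level,
    block = subject.  scores[s][t] = response of subject s at level t.
    Returns (df, SS, MS, F) for the treatment (MWL level) effect."""
    subjects = list(scores)
    levels = list(scores[subjects[0]])
    a, b = len(levels), len(subjects)
    grand = sum(scores[s][t] for s, t in product(subjects, levels)) / (a * b)
    lvl_mean = {t: sum(scores[s][t] for s in subjects) / b for t in levels}
    sub_mean = {s: sum(scores[s][t] for t in levels) / a for s in subjects}
    ss_lvl = b * sum((m - grand) ** 2 for m in lvl_mean.values())
    ss_err = sum((scores[s][t] - lvl_mean[t] - sub_mean[s] + grand) ** 2
                 for s, t in product(subjects, levels))
    df_lvl, df_err = a - 1, (a - 1) * (b - 1)
    ms_lvl, ms_err = ss_lvl / df_lvl, ss_err / df_err
    return df_lvl, ss_lvl, ms_lvl, ms_lvl / ms_err

# Invented NASA-TLX global scores for two subjects at two levels.
scores = {"S1": {"Low": 10, "High": 20}, "S2": {"Low": 12, "High": 26}}
print(block_anova(scores))  # (1, 144.0, 144.0, 36.0)
```

Blocking on subject removes between-subject variability from the error term, which is why both MWL level and Subject appear as sources in Table 5.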
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
