Article

Research on Task Complexity Measurements in Human–Computer Interaction in Nuclear Power Plant DCS Systems Based on Emergency Operating Procedures

1 School of Nuclear Science and Technology, University of South China, Hengyang 421001, China
2 Institute of Human Factors, University of South China, Hengyang 421001, China
* Author to whom correspondence should be addressed.
Entropy 2025, 27(6), 600; https://doi.org/10.3390/e27060600
Submission received: 17 April 2025 / Revised: 24 May 2025 / Accepted: 27 May 2025 / Published: 4 June 2025

Abstract

Within the scope of digital transformation in nuclear power plants (NPPs), task complexity in human–computer interaction (HCI) has become a critical factor affecting the safe and stable operation of NPPs. This study systematically reviews and analyzes existing complexity sources and assessment methods and suggests that complexity is primarily driven by core factors such as the quantity of, variety of, and relationships between elements. By innovatively introducing Halstead’s E measure, this study constructs a quantitative model of dynamic task execution complexity (TEC), addressing the limitations of traditional entropy-based metrics in analyzing interactive processes. By combining entropy metrics and the E measure, a task complexity quantification framework is established, encompassing both the task execution and intrinsic dimensions. Specifically, Halstead’s E measure focuses on analyzing operators and operands, defining the interaction symbols between humans and interfaces to quantify TEC. Entropy metrics, on the other hand, measure task logical complexity (TLC), task scale complexity (TSC), and task information complexity (TIC) based on the intrinsic structure and scale of tasks. Finally, the weighted Euclidean norm of these four factors determines the task complexity (TC) of each step. Taking the emergency operating procedures (EOP) for a small-break loss-of-coolant accident (SLOCA) in an NPP as an example, the entropy and E metrics are used to calculate the task complexity of each step, followed by experimental validation in which NASA-TLX task load scores and step execution times are used for regression analysis. The results show that task complexity is significantly positively correlated with NASA-TLX subjective scores and task execution time, with coefficients of determination of 0.679 and 0.785, respectively. This indicates that the complexity metrics have high explanatory power and that the quantification model is effective, with practical value for improving human–computer interfaces and emergency procedures.

1. Introduction

With the accelerated digital transformation of nuclear power plants, digital control systems (DCS) have enhanced monitoring efficiency but also introduced new challenges to human–computer interaction (HCI) task complexity. Studies have indicated [1,2,3,4,5] that in emergency situations, operators are required to quickly perform critical operations through multi-layer menus, complex logical procedures, and information-dense interfaces. At such times, issues like the keyhole effect in interface information display and the distraction caused by switching between primary and secondary tasks can significantly reduce the speed and accuracy of operators’ decision-making, thereby directly affecting the safe operation of nuclear power plants. To address this issue, it is necessary to scientifically quantify the complexity of tasks involving human–computer interaction. By analyzing the actual workflow of operators, the difficulty of complex operations can be transformed into a measurable indicator that can clearly identify bottlenecks in interface design or operational procedures. For instance, improvements such as reducing unnecessary menu levels, optimizing information layout, and simplifying logical branches all need to be based on the results of complexity quantification. This research approach of problem identification, optimized design, and effect verification can provide important support for human–computer collaborative safety efforts in digitalized nuclear power plants.
The essence of task complexity stems from the dynamic coupling of multi-dimensional factors. As Xing [6,7] described, complexity is constituted by the quantity of, variety of, and relationships between basic elements. Similarly, Bedeny [8] defined task complexity as an intrinsic characteristic determined by the degree of uncertainty, the variety of task elements, and the number of coupling relationships among them. When mapped to nuclear power plant (NPP) HCI scenarios, these complexity dimensions manifest as hierarchical task structures, the quantity and diversity of interactive components, network topologies of logical relationships, and combinations of information volume and types, collectively forming the complexity landscape of HCI tasks.
Traditional evaluations of operating procedures and human–computer interfaces primarily rely on qualitative assessments. For example, the Human–Computer Interface Design Review Guide, developed by the U.S. Nuclear Regulatory Commission (NRC) [9], and the checklist for evaluating emergency procedures in nuclear power plants [10] are both qualitative tools. Although qualitative assessments can optimize user experience, they fail to identify the intrinsic indicators that affect usability. This highlights the need for quantifying complexity to assess the intrinsic factors influencing human–computer interface interactions and the quality of emergency operating procedures. Existing methods for quantifying complexity mainly follow the development paths of information theory and software science. One well-known method is McCabe’s cyclomatic complexity [11], which measures program complexity based on the number of linearly independent paths in a program’s control flow graph. However, this method is limited: it focuses only on the structural complexity of the program, ignoring its size. In contrast, Halstead’s E measure [12] considers the program’s size but overlooks the structural impacts. Park et al. [13,14,15,16] proposed a complexity measurement method for emergency procedures based on entropy theory. This method categorizes logical nodes in a program to calculate entropy values. However, using entropy theory to measure the complexity of human–computer interface interactions is challenging and often meaningless. From the perspective of interface interaction, the most significant factor affecting operational efficiency is the scale of operations. A well-designed human–computer interface should achieve operational goals with minimal steps. Therefore, the complexity of human–computer interface interactions is more aligned with Halstead’s E measure. This measure evaluates complexity from the perspectives of quantity and variety, considering operators and operands, as well as the types and frequencies of elements to assess scale and workload. Both entropy and E measures have been demonstrated to be useful for measuring task complexity. For instance, Xu et al. [17] validated the impact of different presentation styles of emergency operating procedures on personnel performance based on Park’s entropy measure. Zhang et al. [18] further extended entropy measures to aerospace tasks, verifying their correlation with operational time. Nieminen [19] used E measures and the WOOD task complexity measurement method to calculate the complexity of mobile application UI interaction tasks.
In recent years, research on task complexity measurements has expanded into other high-risk fields. For instance, in the aerospace field, Tamaskar [20] proposed a framework for measuring the complexity of aerospace systems, emphasizing modularity and coupling, which is similar to the static complexity indicators in our study. Moreover, Liu [21] developed a pilot workload measurement model based on task complexity, which is directly related to our use of NASA-TLX to evaluate subjective workload. These studies provide an important theoretical basis and methodological reference for our research.
In light of the above discussion, this study proposes a dual-perspective framework for quantifying task complexity, ranging from the intrinsic nature of the task to its execution. Building on Park’s entropy metrics [13], Halstead’s E measure is innovatively integrated to quantify task complexity in NPP DCS HCI from both static task inputs and dynamic task outputs. As shown in Figure 1, the framework evaluates task complexity from two perspectives: task ontology (based on procedures) and task execution (based on interface interactions). First, complexity is defined relative to human operators; as Xing [7] emphasized, complexity has an observer effect. The key to assessing interface complexity lies in whether it supports operators in achieving task goals efficiently. For instance, experienced operators are more familiar with the interface and emergency procedures; they can quickly identify key parameters and emergency handling pathways. Different individuals have varying perceptions of the same human–computer interface. Therefore, we believe that when measuring the complexity of human–computer interface interaction tasks, the layout of the interface should be taken into account. Our position is that if the interface effectively supports personnel in completing tasks, the interface design is adequate; the key factor affecting personnel efficiency is then the scale of interaction with the interface. The larger this scale, the longer the operation time may be, and the more intense the competition for cognitive resources among concurrent tasks.
Assuming a small-break loss-of-coolant accident (SLOCA) in the primary loop, Event-Based Emergency Operating Procedures (EOP) guide operators to interact with the DCS system. The DCS human–computer interface serves as the direct communication channel between operators and the system, receiving commands and providing feedback. Simultaneously, the DCS interacts with other plant components via controllers and sensors. Operators play a critical role in translating static procedures into dynamic actions. To reduce operator workload and enhance efficiency, task comprehensibility and interface usability are paramount. This study proposes a method that combines entropy and Halstead’s E measure to quantify task complexity and interface operation difficulty, providing theoretical support for optimizing the DCS’s interface design and EOPs. It is worth noting that this study focuses on the single-role interaction between individual operators and digital control systems. Due to practical constraints, we do not consider interaction effects among team members. The research framework is limited to the task entity and interface operations.

2. Methodology

Task complexity quantification is key to understanding the potential risks in HCI. This study proposes a multi-dimensional framework that considers element quantity, variety, and relationships. Halstead’s E measure and entropy theory are integrated to evaluate complexity from task execution and intrinsic logic dimensions. Specifically, task complexity is decomposed into four core metrics as follows:
(1) Task Execution Complexity (TEC): This quantifies the cognitive and operational load of the dynamic interactions between the operator and the interface. It is based on Halstead’s E measure, which statistically counts the types and usage frequencies of operators and operands to reflect the redundancy of interaction paths and the density of operational steps.
(2) Task Logical Complexity (TLC): This measures the diversity of logical branches in the task process. It calculates the equivalent class distribution of nodes in the control flow graph of operational steps using first-order entropy. A higher entropy value indicates greater logical complexity.
(3) Task Information Complexity (TIC): This evaluates the quantity and scale of information required for task execution. It is based on the second-order entropy calculation of the information structure diagram. A higher entropy value indicates greater information complexity.
(4) Task Scale Complexity (TSC): This describes the volume characteristics of the task itself. It quantifies the number of operational steps and their logical dependencies using second-order entropy. A higher entropy value indicates a larger scale.
Finally, a weighted Euclidean norm integrates these metrics into a comprehensive task complexity (TC) value. The calculation process is illustrated in Figure 2.

2.1. E Measurement

Halstead’s E measurement is used to calculate program complexity, measuring the “effort” or workload required by programmers during development by considering the types and frequencies of operators and operands. Program complexity is proportional to the number of distinct operators and operands that programmers need to distinguish; the more operators and operands there are, the greater the difficulty in understanding and maintaining the program. We define the complexity generated by interface operations as the complexity of actions users perform on the interface to achieve task goals, termed task execution complexity (TEC), and introduce the E measurement to quantify this complexity. The calculation formula is shown in Equation (1):
$$E = \frac{\eta_1 N_2 (N_1 + N_2) \log_2(\eta_1 + \eta_2)}{2\eta_2} \tag{1}$$
where
  • $\eta_1$ is the number of unique operators. Operators are the interactive actions executed by users to achieve task goals; in a DCS interface, these include clicks (e.g., clicking a valve icon to open an operation window), double-clicks (e.g., double-clicking an input box to activate data entry), and long presses (e.g., holding down a reset button to execute an equipment reset).
  • $\eta_2$ is the number of unique operands. Operands are the functional components or information units on the interface that are operated on, including interface elements corresponding to physical devices (e.g., the cooling water valve VP001 and the main pump 001PO) and logical components (e.g., input boxes and navigation icons).
  • $N_1$ is the total frequency of the operators.
  • $N_2$ is the total frequency of the operands.
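To make Equation (1) concrete, below is a minimal Python sketch of the E measure; the function name halstead_e is ours, and the usage line plugs in the counts later reported for step 25 in Section 3.2.

```python
import math

def halstead_e(eta1, eta2, n1, n2):
    """Halstead's E measure, Equation (1).

    eta1: number of unique operators (interactive actions)
    eta2: number of unique operands (interface components)
    n1:   total frequency of operators
    n2:   total frequency of operands
    """
    return eta1 * n2 * (n1 + n2) * math.log2(eta1 + eta2) / (2 * eta2)

# Counts reported for step 25 in Section 3.2: 5 unique operators,
# 10 unique operands, 45 operator uses, 44 operand uses.
print(round(halstead_e(5, 10, 45, 44), 3))  # -> 3824.846
```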
Thus, the E measurement reflects the workload during task execution; the more complex the interface interaction logic and the more operators and operands there are, the higher the task load. Park [13] argues that the E measurement is an absolute complexity measurement method. Although using consistent operators and operands is challenging, it is highly effective in analyzing interface task complexity, as operators and operands in interfaces are typically explicitly defined by designers. Chewar [22] compared Halstead’s E measurement and McCabe’s measurement in evaluating software psychological complexity, finding that the E measurement effectively calculates the number of psychological discriminations in software maintenance. Nieminen [19] demonstrated the effectiveness of the E measurement in measuring interface task complexity by combining the E measurement with WOOD’s task complexity calculation using data from 1460 UI interfaces.
In terms of interface operation tasks, we define them as the process by which operators interact with the interface to achieve various task objectives while performing complex tasks. As shown in Figure 3, interface operation tasks are divided into two categories: interface management tasks and main tasks. Interface management tasks are auxiliary tasks that help operators efficiently manage the interface to support the execution of main tasks, such as adjusting window layouts and navigating menus. These tasks do not directly achieve the goals but facilitate information retrieval and operation. Main tasks are task types that directly achieve nuclear safety goals through interface interaction. Based on O’Hara’s framework [4], main tasks usually include the following four sub-goals: situation assessment, monitoring and detection, response planning, and response implementation. However, this study focuses on the task types that directly interact with the interface, namely, monitoring and detection: obtaining system status information through continuous interaction with the interface, such as real-time monitoring of the reactor containment–pressure curves and checking valve-open/close statuses; and response implementation: executing specific commands through interface operations, such as clicking on a safety valve icon and entering an opening value to relieve pressure. Situation assessment and response planning primarily engage the operator’s working memory, experience, or team discussions and involve internal cognitive activities rather than direct interface interaction. Therefore, they are not included in the “Main Tasks” category in Figure 3. This definition ensures that the complexity quantification model focuses on observable and recordable interface interaction behaviors, thus aligning with the experimental design based on operation logs and execution time in Section 4. In this study, operators first clarify the task objectives through operating procedures, i.e., the main task objectives to be accomplished. To achieve these objectives, they then perform a series of interface management tasks to support the execution of the main tasks, ultimately completing response implementation or monitoring and detection tasks to achieve the overall task goals. In the task analysis framework of this study, the Hierarchical Task Analysis (HTA) method was adopted to analyze and model tasks in depth, with specific details elaborated in Section 3.
Within the framework of defining interface operation tasks and based on the actual human–computer interaction process, we extend the definitions of operators and operands. In interface interactions, users typically need to perform multiple interactive actions such as information retrieval, navigation display, and window operations. For example, when assessing the pressure and temperature of containment, operators need to review relevant data to decide whether to activate the containment spray system. Here, the operator navigates the interface to locate the window displaying the relevant information and retrieves the required information through interactive actions like clicking. We define this information retrieval action, specifically performed to complete the main task, as an operator. Similarly, when an operator uses a spray valve for pressure reduction, they first need to open the spray valve operation window and enter the desired flow values in the input boxes. In this process, the typing action is defined as an interactive operator, while the keyboard serves as the operand for this interactive action. The interactive components, actions, and their corresponding functional descriptions involved in the actual operation process are detailed in Table 1. These elements will inform the modeling of interaction diagrams in the application examples in Section 3.
Note that, in order to enhance the consistency and representativeness of the classification, we classified elements according to their functional types and differences in perceptual style. For example, valve components such as VB (borated water valves), VP (cooling water valves), and VN share the same style and operation mode, so operating them does not significantly consume the operator’s cognitive resources; accordingly, in Table 1, all valve components are classified as a single operand type. In contrast, although the controllers share the same interaction mode, their styles vary significantly, which increases perceptual complexity and requires operators to expend additional cognitive resources when performing tasks; thus, in Table 1, the three controller styles are categorized into three separate operand classes. Although we have provided clear definitions and standardized annotations for operators and operands, slight differences in semantic interpretation may remain in some boundary cases, which may cause minor deviations in the E value; however, this does not affect the identification of regression trends between variables. This classification method facilitates the construction of interaction diagrams and the calculation of the E metric.
Below is a simple example of the E-metric calculation. Suppose we operate a valve for flow control and start a pump. The interaction behavior for completing this task is shown in Figure 4. The nodes in the graph represent interactive components on the interface, i.e., operands in the E metric, while the edges represent interactive actions on the components, i.e., operators in the E metric. The direction of the edges indicates the flow of operations. The elliptical nodes represent interfaces or operation windows, which, along with the start and end nodes, are not included in the calculation but are essential for connecting operations. The dashed lines indicate that after a series of actions is completed, the system screen is not covered by a new screen, and the next series of operations starts from this system screen. The interaction diagram visually shows the interaction flow, allowing the numbers of operators and operands to be counted quickly; the calculation yields TEC = E ≈ 145.19.
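To illustrate how such a diagram yields the four counts for the halstead_e sketch above, the snippet below encodes a small interaction diagram as (source, action, target) edges; the node and action names are hypothetical and do not reproduce Figure 4 exactly, so the resulting counts are for illustration only.

```python
# Hypothetical interaction diagram: edges are (source, action, target).
# Edge labels are operators; component nodes are operands. Window nodes
# and the start/end nodes connect the flow but are excluded from counts.
edges = [
    ("start", "click", "valve_icon"),
    ("valve_icon", "click", "valve_window"),
    ("valve_window", "double_click", "input_box"),
    ("input_box", "type", "keyboard"),
    ("keyboard", "click", "confirm_button"),
    ("confirm_button", "click", "pump_icon"),
    ("pump_icon", "click", "end"),
]
excluded = {"start", "end", "valve_window"}

operators = [action for _, action, _ in edges]  # each edge is one operator use
operands = [n for s, _, t in edges for n in (s, t) if n not in excluded]

eta1, n1 = len(set(operators)), len(operators)
eta2, n2 = len(set(operands)), len(operands)
print(eta1, eta2, n1, n2)  # counts to plug into halstead_e()
```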

2.2. Entropy Measurement

Nuclear power plants utilize accident procedures to guide operators in accident handling [23]. While experienced operators can execute routine tasks without procedural guidance [24], most operators report a significant cognitive load even for moderately complex tasks such as normal reactor startups and shutdowns [25]. Therefore, in emergencies, operators must rely on procedures. The comprehensibility of procedures is crucial for operators to complete tasks successfully. Current nuclear power plant accident procedures are mainly divided into event-oriented EOP procedures and Symptom-Based Operating Procedures (SOPs). Although EOP and SOP procedures differ, their basic logical structures are similar, consisting of basic If-Then logic structures. At each decision point, operators make yes or no decisions based on the plant’s conditions, thereby determining the procedure’s direction. However, SOP procedures have more reasoning and decision points than EOP procedures. This study uses EOP emergency operating procedures for task complexity analysis.
Boring pointed out that procedure-based manual control actions may be delayed due to the complexity of procedure steps and other factors, and the amount of information that operators need to attend to is a major factor affecting the usability of procedures. Long [26] proposed that the amount of information required to execute steps is a key factor affecting procedure usability. Similarly, Macwan [27] and Peng [28] pointed out that the logical structure and step size of procedures are the main factors contributing to procedure complexity. Building on these findings, we use the concept of entropy to measure task complexity based on emergency operating procedures. The entropy concept was originally introduced by physicist Clausius to describe the degree of disorder or chaos in a system. Later, Shannon further developed the concept in information theory, where Shannon entropy [29] is used to measure the average information content of an information source. A higher entropy indicates greater uncertainty, while a lower entropy indicates more predictability. Shannon provided the mathematical definition of entropy in Equation (2):
$$H = -\sum_{i=1}^{N} P(A_i) \log_2 P(A_i) \tag{2}$$
where
  • $H$ represents information entropy.
  • $N$ is the number of information sources.
  • $A_i$ represents the $i$-th information source.
  • $P(A_i)$ represents the probability of the $i$-th information source occurring.
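A one-function sketch of Equation (2) is given below; it is written to take class sizes (counts) rather than probabilities, since that is how the equivalence-class tables in this section are read off. The function name is ours.

```python
import math

def shannon_entropy(counts):
    """Equation (2): H = -sum(p_i * log2 p_i), with p_i = count_i / total."""
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts)

# Uniform case: 11 singleton classes -> log2(11)
print(round(shannon_entropy([1] * 11), 3))  # 3.459
```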
The concept of entropy has been applied in various fields. In software engineering, entropy has been used to evaluate software complexity. Davis [30], Lew [31], and others have validated the applicability of entropy as a complexity metric and provided a theoretical foundation for quantifying complexity. Zhang [18] used entropy theory to measure the operational complexity of spaceflight and experimentally validated the effectiveness of this measure. Mowshowitz [32] proposed a graph complexity measurement method based on entropy. The author introduced two types of entropy for measuring graph complexity: first-order entropy and second-order entropy.
To explain the characteristics and calculation methods of first-order and second-order entropy, we use Davis’s software complexity measure [30] as an example, as shown in Figure 5. This figure illustrates two program control graphs. First, we calculate the first-order entropy by classifying nodes based on their in-degree and out-degree. If multiple nodes have the same in-degree and out-degree, they can be grouped into the same equivalence class. Graphs containing many nodes within the same equivalence class generally exhibit lower entropy values. This indicates that when the control logic has a certain regularity, the program’s context is easier to understand, and this regularity is quantified by first-order entropy. Based on this classification method, Table 2 identifies the equivalence classes for graphs Figure 5a,b.
As can be seen from Table 2, the nodes in Figure 5a can be divided into four categories, and the probability of each category of nodes can be obtained. For example, the probabilities of the Type-I nodes and the Type-II nodes are 1/6 and 4/6, respectively. Then, by substituting N = 4 and P(Ai) into Formula (2), the first-order entropy of Figure 5a can be calculated.
$$H'_a = -\sum_{i=1}^{4} P(A_i) \log_2 P(A_i) = 1.664$$
Similarly, the first-order entropy of Figure 5b is calculated as $H'_b = 2.128$. As expected, $H'_b$ is greater than $H'_a$ because Figure 5a is more regular than Figure 5b. The first-order entropy mainly measures the simple diversity in the system: the higher the diversity, the more node-type classes there are, and the higher the first-order entropy value.
The calculation method for second-order entropy is similar. Second-order entropy is based on the neighborhood characteristics of nodes, taking into account the properties of their one-hop neighbors. If two nodes have the same neighboring nodes within one hop, they are considered to belong to the same class. This classification approach is more suitable for analyzing the global complexity of systems, especially in graphs or networks with nested relationships or complex interactions. Additionally, as the size of the graph increases, the number of classes also increases, as the structural complexity of the graph typically becomes more intricate. Thus, second-order entropy further incorporates contextual relationships, representing the amount of information required to understand the graph. The second-order entropy classification for graph (a) is shown in Table 3.
Table 3 shows the classification scheme required for calculating the second-order entropy. The second-order entropies calculated according to this classification scheme are $H''_a = 2.236$ and $H''_b = 2.521$. The second-order entropy of graph (b) is higher than its first-order entropy because the node structure of graph (b) is more complex; in particular, its nested structure makes the neighborhood characteristics of its nodes more complex than those of graph (a).
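The two classification rules can be sketched as follows, assuming a directed graph given as an edge list and reusing the shannon_entropy helper from above; treating a node’s one-hop neighborhood as its (predecessor set, successor set) pair is our reading of the second-order rule, not a definitive implementation.

```python
from collections import Counter

def first_order_entropy(edges, nodes):
    """Group nodes by (in-degree, out-degree), then apply Equation (2)."""
    key = {v: (sum(1 for _, t in edges if t == v),
               sum(1 for s, _ in edges if s == v)) for v in nodes}
    return shannon_entropy(list(Counter(key.values()).values()))

def second_order_entropy(edges, nodes):
    """Group nodes by their one-hop neighbors (predecessors, successors)."""
    key = {v: (frozenset(s for s, t in edges if t == v),
               frozenset(t for s, t in edges if s == v)) for v in nodes}
    return shannon_entropy(list(Counter(key.values()).values()))
```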
We use information structure diagrams to measure the information complexity of the operating procedures. The second-order entropy of the information structure diagram is used to quantify its complexity. The reason for using second-order entropy is that its classification method more effectively captures the scale of the graph, thereby measuring the complexity of the information structure diagram. We classify the information types in nuclear power plant operating procedures into control information (for example, switch statuses and alarms) and process variables (for example, temperature and pressure trends). An example of an information structure diagram is shown in Figure 6.
In the information structure diagram, the bottom-level nodes represent information types, such as Boolean values for switch and alarm statuses and continuous variables for reactor temperature and containment pressure. Based on this, second-order entropy is used to calculate the complexity of the graph, thereby measuring the amount of information in the procedure. For example, in Figure 6, the second-order entropy is calculated as H′′ = 2.807.

3. Case Study

In this section, we analyze the small-break loss-of-coolant accident (SLOCA) in the primary coolant loop of a nuclear power plant. Based on the accident procedure and interface operations, we calculate the E value and entropy value. We extract step 25 from the SLOCA recovery procedure for detailed calculations. Table 4 shows part of the procedure, and the complete list of task steps is provided in Appendix A.

3.1. Hierarchical Task Analysis

In the task analysis framework of this study, Hierarchical Task Analysis is used to systematically deconstruct the emergency operating procedures in the context of a small-break loss-of-coolant accident. HTA helps break down the tasks and sub-goals and clarify the operators’ objectives and information needs at various stages, thereby identifying the critical nodes of cognitive load within the task sequence. Compared with the previous applications of HTA, which were mainly used for developing training manuals and interface process design [33,34], this study further integrates the HTA output with quantitative complexity measures to quantify the operational complexity of each task step. The improvement of this method lies in the fact that HTA is no longer merely used for structured task descriptions but serves as the input basis for quantitative analysis. Specifically, HTA clarifies the behavioral boundaries and information units of each task step, providing fundamental semantic support for the statistics of operands/operators and conditional branch counts in complexity calculations. Furthermore, by applying HTA, the textual task descriptions in the emergency operating procedures are linked with the actual executions on the human–computer interface, ensuring these two dimensions of tasks are no longer disconnected.
We decompose the task such that the top level represents the task goal, and the tasks set to achieve the goal are called sub-tasks, which are broken down until they are sufficiently detailed. Taking step 25 as an example, as shown in Figure 7, the top level is the task goal, the second and third levels are sub-tasks, and the bottom level consists of interface operation tasks. According to Annett’s research [35], in HTA, the unit of analysis is the operation defined by the goal. These operations are activated by input actions and concluded by feedback. Within the scope of HTA, the goal is defined as the system state desired by humans, the task is the specific method to achieve the goal, and the operation is the behavioral unit executed to achieve the goal. As shown in Figure 7, the task goal is to establish normal feed-water flow, which requires four sub-tasks. Accomplishing these sub-tasks requires operators to interact with the DCS human–computer interface. Interface operation tasks constitute the bottom level of the tasks, and they can be further divided into specific execution goals and operations. For example, step 3 in Figure 7 is analyzed using HTA, as shown in Figure 8.
In the interface operation task shown in Figure 8, the top layer of the figure is the task objective. To achieve this objective, a series of sub-tasks needs to be executed, such as navigating to the system window and configuring the operation window. The bottom layer consists of the specific operations executed to complete these sub-tasks. In this way, we have decomposed a task in the procedure into its smallest components, which helps us understand the complexity factors in the task process and facilitates the construction of the operation structure diagram and the interface interaction diagram.
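As a data-structure sketch, such an HTA decomposition can be represented as a nested dictionary whose leaves are the interface operations counted later by the E metric; the goal and sub-task labels below are paraphrased from Figures 7 and 8 and are partly hypothetical.

```python
# Nested-dict sketch of an HTA tree (labels paraphrased/hypothetical).
hta = {
    "goal": "Establish normal charging and maintain the pressurizer level",
    "sub_tasks": [
        {"goal": "Navigate to the system window",
         "operations": ["click navigation icon", "select system display"]},
        {"goal": "Configure the operation window",
         "operations": ["click valve icon", "double-click input box",
                        "type flow value", "click confirm"]},
    ],
}

def leaf_operations(node):
    """Collect the bottom-level operations, i.e., the units the E metric counts."""
    ops = list(node.get("operations", []))
    for sub in node.get("sub_tasks", []):
        ops.extend(leaf_operations(sub))
    return ops

print(leaf_operations(hta))
```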

3.2. Complexity Measurement

We use the methods described in Section 2 to calculate the complexity, using the E metric to measure task execution complexity and entropy to measure the task’s intrinsic complexity. Based on step 25 and the task analysis in Section 3.1, we construct the operation structure diagram, information structure diagram, and interface interaction diagram. We then calculate the E value and entropy values from these diagrams.
First, we execute the relevant operations for step 25 on the nuclear power plant DCS simulator, record the interface operations, and analyze the task and instrument information required for execution. We then construct the operation structure diagram and information structure diagram for step 25, as shown in Figure 9, and the interface interaction diagram for step 25, part C, as shown in Figure 10. Take the fifth item of step 2 (the task of adjusting the refueling flow rate), shown in Figure 11, as an illustration. In the figure, the oval nodes represent the human–computer interface, and the square nodes represent the interaction objects, such as buttons and the mouse. The edges of the graph represent interaction behaviors, such as clicking and double-clicking. For example, when performing the task of adjusting the refueling flow rate, the operator must first enter the reactor coolant system display and then click the refueling valve button to enter the refueling valve operation window. At this point, the operator can double-click the input box in the operation window and enter the corresponding value using the keyboard. Finally, by clicking to confirm the execution, the entire task of adjusting the refueling valve flow rate is completed. This diagram clearly displays the operators and operands involved throughout the task. By counting the number and types of operators and operands, the TEC value can be calculated. Finally, we use entropy to calculate the entropy values of the operation structure and information structure diagrams. To construct the information structure diagram, we first analyze the operation information, as shown in Table 5. The operation information is classified into process variables (P), such as changes in the pressurizer water level and primary loop pressure, and Boolean values (B), such as switch and start/stop statuses. Operators must also understand equipment control information, such as the names, quantities, and types of control switches; the safety injection reset button, for example, is classified as type B.
Figure 9a displays the information structure diagram, with the top-level node as the root node, the bottom-level nodes as data types, and the middle layer as component information. The information structure diagram represents the amount of information required to execute each step, and the second-order entropy of the information structure diagram is used to measure the task information complexity. Figure 9b presents the operation structure diagram, which captures the procedural logic. The first-order entropy of this diagram assesses task logical complexity, while the second-order entropy evaluates task scale complexity.
We use second-order entropy to calculate the size of the information structure diagram, which reflects the task information complexity (TIC) of the step. First, we classify the nodes in Figure 9a using the method described in Section 2.2 and substitute the classification results into Equation (2). The calculation result is:
$$S_{TIC} = -\sum_{i=1}^{19} p(A_i) \log_2 p(A_i) = -\left[ 18 \left( \frac{1}{23} \log_2 \frac{1}{23} \right) + \frac{5}{23} \log_2 \frac{5}{23} \right] = 4.019$$
Similarly, we use first-order entropy to calculate the task logical complexity (TLC). We classify the nodes in Figure 9b using the method described in Section 2.2 and substitute the probabilities of different classes into Equation (2). The calculation result is:
$$S_{TLC} = -\sum_{i=1}^{5} p(A_i) \log_2 p(A_i) = -\left[ 2 \left( \frac{1}{11} \log_2 \frac{1}{11} \right) + 2 \left( \frac{2}{11} \log_2 \frac{2}{11} \right) + \frac{4}{11} \log_2 \frac{4}{11} \right] = 2.054$$
We use the second-order entropy calculation method described in Section 2.2 to classify the nodes in Figure 9b and substitute the classification results into Equation (2) to calculate the second-order entropy of Figure 9b. The size of the action control diagram, i.e., the size of Figure 9b, is used to measure the task scale complexity (TSC). The calculation result is:
$$S_{TSC} = -\sum_{i=1}^{11} p(A_i) \log_2 p(A_i) = -11 \left( \frac{1}{11} \log_2 \frac{1}{11} \right) = 3.459$$
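Reading the class sizes directly off the three calculations above, the shannon_entropy helper sketched in Section 2.2 reproduces each value:

```python
# Class-size multisets taken from the calculations above (Figure 9).
print(round(shannon_entropy([1] * 18 + [5]), 3))   # S_TIC = 4.019
print(round(shannon_entropy([1, 1, 2, 2, 4]), 3))  # S_TLC = 2.054
print(round(shannon_entropy([1] * 11), 3))         # S_TSC = 3.459
```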
From the action control diagram in Figure 10, we count the number of unique operators, unique operands, total operator frequency, and total operand frequency and substitute these values into Equation (1) to calculate the TEC value:
$$S_{TEC} = \frac{\eta_1 N_2 (N_1 + N_2) \log_2(\eta_1 + \eta_2)}{2\eta_2} = \frac{5 \times 44 \times (45 + 44) \times \log_2(5 + 10)}{2 \times 10} = 3824.846$$
Based on these values, we use the Euclidean norm to determine the final task complexity (TC) value. The TC value formula is as follows:
$$TC = \sqrt{(\omega_1 S_{TIC})^2 + (\omega_2 S_{TLC})^2 + (\omega_3 S_{TSC})^2 + (\omega_4 S_{TEC})^2}$$
where $\omega_1$, $\omega_2$, $\omega_3$, and $\omega_4$ are the weighting factors. We will use factor analysis to calculate these weights in the following section.
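A direct transcription of the TC formula follows; note that the four sub-metrics must first be normalized (as in Table 6) before the weights are applied, and the function name is ours.

```python
import math

def task_complexity(s_tic, s_tlc, s_tsc, s_tec, weights):
    """Weighted Euclidean norm of the four normalized sub-metrics."""
    w1, w2, w3, w4 = weights
    return math.sqrt((w1 * s_tic) ** 2 + (w2 * s_tlc) ** 2 +
                     (w3 * s_tsc) ** 2 + (w4 * s_tec) ** 2)

# e.g., with the weights derived from factor analysis in the text below:
# task_complexity(s1, s2, s3, s4, (0.2587, 0.2327, 0.2554, 0.2532))
```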
We take the SLOCA accident as an example, selecting 30 steps from the SLOCA procedure as the analysis objects. We calculate the TIC, TLC, and TSC for each step and simulate the operations on the DCS simulator. We record the interaction actions with the simulator during the operation process and count the operators and operands for each step to calculate the TEC value. Since the calculation results have different dimensions, we normalize the data, as shown in Table 6.
Prior to conducting factor analysis, we first conducted the KMO and Bartlett’s tests on the data. All statistical analyses were conducted using the SPSS data analysis software (version R27.0.1.0). The results of the data analysis are as follows: The KMO value is 0.844, and Bartlett’s sphericity test yields p < 0.001. A KMO value greater than 0.6 indicates that the data are suitable for factor analysis, and Bartlett’s sphericity test further confirms the data’s suitability. Through factor analysis, we extracted one main factor, with a rotated variance explanation rate of 78.995%, indicating that this factor has good explanatory power for the indicator data. In the factor loading coefficient matrix, all indicators have communality values higher than 0.4, indicating strong correlations between the indicators and the factor. We used linear combination coefficients to calculate the weights. First, we calculated the linear combination coefficients by dividing the loading coefficients by the square root of the corresponding eigenvalues. Next, we calculated the comprehensive score coefficients by multiplying the linear combination coefficients by the variance explanation rate and summing them, then dividing by the cumulative variance explanation rate. Finally, we normalized the comprehensive score coefficients to obtain the weight values for each indicator. After the calculations, the weights for TIC, TLC, TSC, and TEC are 25.87%, 23.27%, 25.54%, and 25.32%, respectively.
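The weight derivation just described can be sketched in a few lines of NumPy. The loading matrix, eigenvalue, and variance explanation rate below are hypothetical stand-ins (chosen only so the output lands near the reported weights), since the paper’s loading coefficients are not reproduced here.

```python
import numpy as np

# Hypothetical loadings: 4 indicators (TIC, TLC, TSC, TEC) x 1 factor.
loadings = np.array([[0.90], [0.81], [0.89], [0.88]])
eigenvalues = np.array([3.16])        # one extracted factor (hypothetical)
var_explained = np.array([0.78995])   # rotated variance explanation rate

# Linear combination coefficients: loadings / sqrt(eigenvalue).
lin = loadings / np.sqrt(eigenvalues)
# Comprehensive score coefficients: weight by the variance explanation
# rate, then divide by the cumulative variance explanation rate.
score = (lin * var_explained).sum(axis=1) / var_explained.sum()
# Normalize the score coefficients to obtain the final weights.
weights = score / score.sum()
print(weights.round(4))
```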

4. Experimental Validation

To validate the effectiveness of the measurement method, we hypothesize that task complexity is positively correlated with task execution time and workload: as task complexity increases, task execution time and workload should also increase. To test this, we compare the subjective scale evaluation results with the average execution time of the steps and analyze them in conjunction with the STC value. In the experiment, we recruited six participants, three males and three females, all graduate students from our laboratory. All participants had accumulated simulator operation experience through weekly experiments over the past 2–3 years. Before the experiment, we provided detailed explanations of the experiment’s purpose, process, and precautions to ensure that the participants fully understood the experiment. The experiment was conducted in six sessions, with one participant performing the experiment in each session. After completing the tasks, each participant filled out the NASA-TLX scale to subjectively evaluate the workload. We also recorded the simulator operation logs and video data during the experiment to statistically analyze the operation time for each step. This approach allows us to measure task execution time and workload more accurately, thereby validating the effectiveness of the task complexity measurement.

4.1. Comparison of Task Complexity and NASA-TLX Scores

Subjective evaluation techniques have developed significantly over the past few decades, with several methods proposed, most of which focus on workload assessment [36,37]. Here, we selected NASA-TLX as the subjective evaluation method. Hill [38] compared four subjective workload rating scales across four dimensions: sensitivity, operator acceptance, resource requirements, and special procedures. The authors found that NASA-TLX and the OW scale performed better in terms of sensitivity and operator acceptance, with NASA-TLX having the highest user acceptance. Additionally, NASA-TLX provides more detailed and diagnostic data.
The NASA-TLX scale measures workload across six dimensions: mental demand, physical demand, temporal demand, performance, effort, and frustration. The weighted average score across these dimensions is used to evaluate the participants’ workload. First, participants determine the relative importance of the six dimensions to assign weights to each dimension. Then, the weighted sum of the participant’s ratings for each dimension is calculated to obtain the workload score for each step. The average workload score for each step is calculated by averaging the scores of the six participants. Table 7 presents the average workload scores and task complexity values for each step.
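The weighted scoring just described can be sketched as follows; the ratings and pairwise choices in the usage lines are invented for illustration only.

```python
from itertools import combinations

DIMS = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def tlx_score(ratings, pairwise_winners):
    """Weighted NASA-TLX: weights from 15 pairwise comparisons, then a
    weighted average of the 0-100 dimension ratings."""
    tally = {d: 0 for d in DIMS}
    for winner in pairwise_winners:  # one winner per dimension pair
        tally[winner] += 1
    return sum(tally[d] / 15 * ratings[d] for d in DIMS)

# Hypothetical ratings and pairwise choices for one participant and step:
ratings = {"mental": 70, "physical": 30, "temporal": 55,
           "performance": 40, "effort": 65, "frustration": 35}
winners = (["mental"] * 5 + ["effort"] * 4 + ["temporal"] * 3 +
           ["performance"] * 2 + ["frustration"] * 1)
assert len(winners) == len(list(combinations(DIMS, 2)))  # 15 comparisons
print(round(tlx_score(ratings, winners), 1))
```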
First, we conducted an error analysis of the NASA-TLX scores, with the results shown in Table 8. We used SPSS software (version R27.0.1.0) to calculate the mean, standard deviation, and 95% confidence interval of the results. The data in Table 8 show that the standard deviation of the NASA-TLX scores is 11.61, indicating some fluctuation around the mean but an overall relatively concentrated distribution. This may suggest that the factors affecting the score are relatively stable. In contrast, the standard deviation of the operation time reaches 34.49, a relatively large value, indicating a high degree of dispersion around the mean for operation time. However, the overall experimental data demonstrate an acceptable degree of consistency and stability.
To verify the reliability of the scores, we performed an Intraclass Correlation Coefficient (ICC) analysis [39]. The ICC is a statistical indicator used to measure data reliability and consistency. Typically, an ICC value greater than 0.75 indicates high consistency, a value between 0.40 and 0.75 indicates moderate consistency, and a value below 0.40 indicates poor consistency. The calculated ICC value is 0.804, indicating that the NASA-TLX scores are reliable.
We used SPSS (version R27.0.1.0) to perform linear regression analysis on the STC values and NASA-TLX scores, as shown in Figure 12. Table 9 shows the analysis of variance (ANOVA) results. From Figure 12, we can see that task complexity is positively correlated with the NASA-TLX scores, with a coefficient of determination R² = 0.679. The model has a good fit, and the regression coefficients are statistically significant (p < 0.001). The regression equation is NASA-TLX Score = 23.160·STC + 35.459. The ANOVA results show that the model is significant (F(1, 28) = 59.281, p < 0.001).
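The regression itself is a one-call affair with SciPy; the three data pairs below are placeholders, not values from Table 7.

```python
from scipy import stats

# The per-step STC values and mean NASA-TLX scores live in Table 7 and are
# not reproduced here; the three pairs below are placeholders only.
stc = [0.42, 0.77, 1.35]
tlx = [45.1, 52.8, 66.9]

res = stats.linregress(stc, tlx)
print(f"slope={res.slope:.3f}, intercept={res.intercept:.3f}, "
      f"R2={res.rvalue**2:.3f}, p={res.pvalue:.4g}")
# The paper reports NASA-TLX Score = 23.160*STC + 35.459 with R2 = 0.679.
```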

4.2. Comparison of Task Complexity and Operation Time

We used the SLOCA emergency accident operating procedure, extracting 30 operation steps, mainly divided into reactor shutdown procedures and cooldown and depressurization procedures. We defined the time for each step as the period from when the operator starts reading the step’s instructions to when all sub-tasks are completed. The final time is the average operation time of the six participants, as different participants have varying levels of experience, reaction speed, and knowledge. Table 10 shows the operation time and task complexity for each step. The regression analysis of the operation time and task complexity is shown in Figure 13, and the ANOVA results are shown in Table 11.
From Figure 13, we can see that the model has a good fit, with a coefficient of determination R² = 0.785. Task complexity is significantly positively correlated with operation time, indicating that task execution time is significantly influenced by task complexity: as task complexity increases, the task execution time also increases. The ANOVA results show that the model is significant (F(1, 28) = 96.706, p < 0.001), and the regression coefficients are statistically significant (p = 0.004, p < 0.001). The regression equation is Operation Time = 74.016·STC − 25.685.

4.3. Analysis and Discussion

In this study, we constructed a quantitative framework for measuring task complexity in human–computer interactions based on Halstead’s E measure and task entropy metrics. The validity of this framework, as well as the feasibility of complexity calculation, was verified through experiments. During the experimental phase, college students with an engineering background were recruited to participate in simulated process tests to preliminarily assess the explanatory power of the quantified results for operational performance indicators. From the experimental data, it is evident that the operation time and STC values exhibit a strong linear relationship (R2 = 0.785), indicating that task complexity has a direct and significant impact on execution time. The fit between NASA-TLX scores and STC values is not as good as that of the operation time. This may be because the STC values are objective measures of task complexity. Higher task complexity requires more operations and logical judgments, directly leading to longer operation times. Unlike subjective scores, the operation time is not influenced by individual perceptions. Therefore, we believe that the stronger fit between operation time and STC values is reasonable. However, it is important to note that the participants in this experiment were graduate students majoring in nuclear engineering whose professional experience is significantly lower than that of actual nuclear power plant operators. Studies have shown [40,41] that in human–computer interaction tasks with a high cognitive load, experience and professional knowledge are significant variables affecting decision-making speed and accuracy. Compared with experts, students have a lower level of experience and knowledge, so they allocate more cognitive resources to task processing when making decisions, resulting in greater delays between actions. Consequently, they may exhibit higher workload or error rates in some tasks. Therefore, there are errors in the actual task execution time and load ratings, which, in turn, affect the validity of the model.
Subjective scores reflect individuals’ perceptions of task complexity, which can be influenced by factors such as experience, skill level, and psychological state. Therefore, different individuals may perceive the same task as having different levels of complexity. For example, more experienced operators may require less detailed descriptions of operating procedures, while less experienced operators may prefer more detailed descriptions. For instance, in the task of opening spray valves for cooldown and depressurization, less experienced operators may expend cognitive resources to determine which system the spray valves belong to and what their symbols are. They may prefer descriptions such as “Open 001VP and 002VP for cooldown and depressurization.” Therefore, even steps with low STC values may impose a higher workload on less experienced operators. According to Rasmussen’s Skill–Rule–Knowledge (SRK) theory [42], when individuals are in a skill-based behavior state, their actions are highly automated, consuming fewer cognitive resources, such as continuously opening valves, starting pumps, or stopping pumps. However, when individuals face ambiguous scenarios or complex problems, they must rely on long-term memory for reasoning, entering a knowledge-based behavior state that requires significant cognitive resources. For example, in the step “Check if safety injection is needed; if needed, manually activate it; if not, execute procedure ES-0.1,” operators must make decisions based on the system’s status and their experience. Even if the step has low logical and information complexity, it may still impose a high cognitive load on operators.
The efficiency of operators in performing tasks, namely interacting with the interface, largely depends on the quality of the interface design. The total time for operators to perform interface tasks is the sum of information search time and operation execution time. A well-designed interface with effective information distribution and superior operation logic can significantly reduce the operator’s workload and interaction time. Complex interfaces and operation logic increase the information search time and the number of operation steps, leading to higher TEC values, longer task execution times, and an increased workload. Therefore, the quality of interface design is a major factor contributing to variations in operation time data.

5. Summary and Prospects

This study focuses on the complexity of human–computer interaction tasks in the digital control system (DCS) of nuclear power plants and proposes a multi-dimensional quantitative model that integrates dynamic interaction and static analysis. Through theoretical development, methodological improvements, and experimental validation, the mechanisms by which task complexity affects operational efficiency and cognitive load are revealed, and its practical application value in interface optimization is explored.
The core of the task complexity proposed in this study lies in the nonlinear coupling of static structure and dynamic interaction. As discussed in Section 2, traditional entropy measurement methods can effectively evaluate static complexity, but they are limited in quantifying dynamic interaction processes. For example, the task in step 25 involves five types of operators and ten types of operands. The cognitive load amplification caused by redundant interface navigation paths and operator reuse cannot be fully explained by static entropy values. To address this, Halstead’s E metric is introduced to dynamically quantify the interaction load between operators and operands. Experimental results demonstrate that the task execution complexity (TEC) value is significantly correlated with the task execution time, validating the independent contribution of dynamic interaction complexity.
From a practical perspective, this model provides a quantitative optimization pathway for the design of human–computer interfaces in nuclear power plants. For instance, high TEC values can be reduced by merging redundant operation windows, thereby improving operational efficiency and reducing the task execution time. Simultaneously, optimizing the entropy of information structure diagrams (TIC) can be achieved by modularizing information layouts, such as grouping nineteen types of nodes by function, thereby reducing the cognitive load. These measures align with Endsley’s [43] situation awareness theory, which emphasizes that simplifying information retrieval and operation paths enhances operators’ real-time perception of system states. Additionally, a task complexity grading based on task complexity (TC) values can guide targeted training design, such as increasing the simulation frequency for high-complexity steps or introducing decision flowchart aids.
However, this study has certain limitations. First, the experimental participants were trainees, and their skill levels may differ from those of professional nuclear power plant operators, potentially leading to prediction bias in the model. Future work could optimize the experiments by recruiting experienced operators and comparing their NASA-TLX scores and operation times with those of student participants, which could lead to the establishment of an experience-based correction factor for the TC values. Alternatively, complexity quantification tools could be embedded in routine training to analyze how the relationship between TC values and operational error rates evolves with increasing experience. Second, the complexity quantification framework currently focuses on modeling the human–computer interaction process of a single operator and does not yet cover the “team complexity” factors that arise under multi-role collaboration. In the actual emergency operations of nuclear power plants, multiple roles need to collaborate to complete tasks within stringent time constraints, and the quality of information transfer, communication styles, and task load distribution among personnel have a profound impact on overall operational performance [44,45]. Therefore, a team performance model could be established in the future, incorporating indicators such as the quality of team information transfer and the completeness of information sharing. Third, the weight coefficients are determined through static factor analysis and do not consider dynamic changes across task phases, such as the potential amplification of TEC effects under high-pressure conditions during early accident stages. Future work should validate the model’s robustness in real-world scenarios and explore dynamic weight adjustment mechanisms, such as adaptive algorithms based on real-time workload feedback. Moreover, while this method has the potential to be extended to fields like aviation control and chemical systems, its generalizability requires verification through cross-domain experiments.
In summary, this study integrates Halstead’s E metric and entropy theory to construct a task complexity quantification framework that encompasses dynamic and static dimensions. Its theoretical value lies in improving the methodological system for complexity analysis, while its practical significance lies in providing actionable optimization tools for human factors engineering in nuclear power. Future work will focus on iterating the dynamic model, validating it across multiple scenarios, and expanding the research into team collaboration complexity to enhance the method’s applicability in high-risk industrial systems.

Author Contributions

Conceptualization, E.P. and L.D.; methodology, E.P. and L.D.; software, E.P.; validation, E.P. and L.D.; formal analysis, E.P.; investigation, E.P.; resources, L.D.; data curation, L.D.; writing—original draft preparation, E.P.; writing—review and editing, L.D.; visualization, E.P.; supervision, L.D.; project administration, L.D.; funding acquisition, L.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported in part by the Hunan Provincial Natural Science Foundation under the project titled “Study on High-Risk Tight-Coupled Industrial System Human Error Analysis” (Project No. 2025JJ70172).

Institutional Review Board Statement

Since the results of this study are intended solely for academic discussion, and the study does not involve psychological or physiological experiments; does not involve personal privacy, health information, or biometric data; does not involve risky experimental environments; and will not have any direct or potential impact on participants or society, ethical review is not required.

Data Availability Statement

Data are contained within this article.

Conflicts of Interest

The authors declare they have no potential conflicts of interest with respect to the research, authorship, or publication of this paper.

Appendix A

Table A1. Example SLOCA Emergency Operating Procedure.
Step | Task Content
1 | Confirm reactor trip
2 | Check if the safety injection is activated
3 | Confirm the isolation of feed-water
4 | Confirm the operation of the auxiliary feed-water pump
5 | Confirm the operation of the following equipment
6 | Check whether the main steam pipeline should be isolated
7 | Confirm that no containment spray is required
8 | Confirm the safety injection flow rate
9 | Confirm that the total flow rate of the auxiliary feed-water is normal
10 | Check the pressurizer relief valve and spray valve
11 | Check whether the main feed-water pump has stopped
12 | Check whether the secondary side of the steam generator has a fault
13 | Check whether the heat-transfer tubes of the steam generator are ruptured
14 | Check that the following pressures are normal
15 | Check whether the secondary side pressure of the intact steam generator should be reduced to the system pressure
16 | Check whether the main system is intact
17 | Check the state of the pressurizer relief valve and its isolation valve
18 | Check whether the safety injection can be terminated
19 | Safety injection reset, containment ventilation isolation reset
20 | Stop one high-pressure safety injection pump
21 | Depressurize the main system to refill the pressurizer
22 | Check whether one main pump should be started
23 | Check whether one safety injection pump can be stopped
24 | Check whether normal charging can be established
25 | Establish normal charging and maintain the pressurizer water level
26 | Check the operation status of the main pump
27 | Confirm that no more safety injection flow is required
28 | Check whether the safety injection tank should be isolated
29 | Continuously cool and depressurize the main system
30 | Continuously monitor the main system pressure and the pressurizer water level

References

1. O’Hara, J.M.; Higgins, J.C.; Brown, W.S.; Fink, R.; Persensky, J.; Lewis, P.; Kramer, J.; Szabo, A.; Boggi, M.A. Human Factors Considerations with Respect to Emerging Technology in Nuclear Power Plants; US Nuclear Regulatory Commission: Washington, DC, USA, 2008.
2. Ulrich, T.A.; Boring, R.L. Example user centered design process for a digital control system in a nuclear power plant. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Perth, Australia, 2–4 December 2013; Sage Publications: Los Angeles, CA, USA, 2013; Volume 57, pp. 1727–1731.
3. Zou, Y.; Zhang, L.; Dai, L.; Li, P.; Qing, T. Human reliability analysis for digitized nuclear power plants: Case study on the LingAo II nuclear power plant. Nucl. Eng. Technol. 2017, 49, 335–341.
4. O’Hara, J.M. The Effects of Interface Management Tasks on Crew Performance and Safety in Complex, Computer-Based Systems; US Nuclear Regulatory Commission, Office of Nuclear Regulatory Research: Rockville, MD, USA, 2002.
5. Woods, D.D. On taking human performance seriously in risk analysis: Comments on Dougherty. Reliab. Eng. Syst. Saf. 1990, 29, 375–381.
6. Xing, J.; Manning, C.A. Complexity and Automation Displays of Air Traffic Control: Literature Review and Analysis; Office of Aerospace Medicine: Washington, DC, USA, 2005.
7. Xing, J. Information complexity in air traffic control displays. In Proceedings of the Human-Computer Interaction. HCI Applications and Services: 12th International Conference, HCI International 2007, Beijing, China, 22–27 July 2007; Part IV; Springer: Berlin/Heidelberg, Germany; pp. 797–806.
8. Bedny, G.Z.; Karwowski, W.; Bedny, I.S. Complexity evaluation of computer-based tasks. Int. J. Hum.-Comput. Interact. 2012, 28, 236–257.
9. O’Hara, J.M.; Fleger, S. Human-System Interface Design Review Guidelines; Brookhaven National Laboratory (BNL): Upton, NY, USA, 2020.
10. Brune, R.L.; Weinstein, M. Checklist for Evaluating Emergency Procedures Used in Nuclear Power Plants; Division of Program Development and Appraisal, Office of Inspection and Enforcement: Thousand Oaks, CA, USA, 1981; Volume 88.
11. McCabe, T. A complexity measure. IEEE Trans. Softw. Eng. 1976, SE-2, 308–320.
12. Halstead, M.H. Elements of Software Science; Operating and Programming Systems Series; Elsevier Science Inc.: Amsterdam, The Netherlands, 1977.
13. Park, J.; Jung, W.; Ha, J. Development of the step complexity measure for emergency operating procedures using entropy concepts. Reliab. Eng. Syst. Saf. 2001, 71, 115–130.
14. Park, J.; Jung, W. A study on the validity of a task complexity measure for emergency operating procedures of nuclear power plants—Comparing with a subjective workload. IEEE Trans. Nucl. Sci. 2006, 53, 2962–2970.
15. Park, J.; Jung, W. A study on the validity of a task complexity measure for emergency operating procedures of nuclear power plants—Comparing task complexity scores with two sets of operator response time data obtained under a simulated SGTR. Reliab. Eng. Syst. Saf. 2008, 93, 557–566.
16. Park, J.; Cho, S. Investigating the effect of task complexities on the response time of human operators to perform the emergency tasks of nuclear power plants. Ann. Nucl. Energy 2010, 37, 1160–1171.
17. Xu, S.; Li, Z.; Song, F.; Luo, W.; Zhao, Q.; Salvendy, G. Influence of step complexity and presentation style on step performance of computerized emergency operating procedures. Reliab. Eng. Syst. Saf. 2009, 94, 670–674.
18. Zhang, Y.; Li, Z.; Wu, B.; Wu, S. A spaceflight operation complexity measure and its experimental validation. Int. J. Ind. Ergon. 2009, 39, 756–765.
19. Nieminen, S. Task Complexity Analysis: A Mobile Application Case Study. Master’s Thesis, Aalto University, Espoo, Finland, 2022. Available online: https://urn.fi/URN:NBN:fi:aalto-202206194037 (accessed on 20 August 2024).
20. Tamaskar, S.; Neema, K.; DeLaurentis, D. Framework for measuring complexity of aerospace systems. Res. Eng. Des. 2014, 25, 125–137.
21. Wang, Z.; Liu, S.; Wanyan, X.; Dang, Y.; Chen, X.; Zhang, X. Pilot workload measurement model based on task complexity analysis. Int. J. Ind. Ergon. 2024, 104, 103637.
22. Chewar, C.M.; McCrickard, D.S.; Ndiwalana, A.; North, C.; Pryor, J.; Tessendorf, D. Secondary task display attributes: Optimizing visualizations for cognitive task suitability and interference avoidance. In Proceedings of the ACM International Conference Proceeding Series, Minneapolis, MN, USA, 20–25 April 2002; pp. 165–171. Available online: https://doi.org/10.5555/509740.509766 (accessed on 20 August 2024).
23. Husseiny, A.A.; Sabri, Z.A.; Packer, D.; Holmes, J.W.; Keith Adams, S.; Rodriguez, R.J. Operating procedure automation to enhance safety of nuclear power plants. Nucl. Eng. Des. 1989, 110, 277–297.
24. Chang, S.H.; Choi, S.S.; Park, J.K.; Heo, G.; Kim, H.G. Development of an advanced human–machine interface for next generation nuclear power plants. Reliab. Eng. Syst. Saf. 1999, 64, 109–126.
25. Ross, M.A.; Iwaki, K.; Makino, M.; Miyake, M. Control Room Design and Automation in the Advanced BWR (ABWR); IEEE Service Center: Piscataway, NJ, USA, 1990.
26. Long, A. Computerized operator decision aids. Nucl. Saf. 1984, 25. Available online: https://www.osti.gov/biblio/6188148 (accessed on 20 August 2024).
27. Macwan, A.; Mosleh, A. A methodology for modeling operator errors of commission in probabilistic risk assessment. Reliab. Eng. Syst. Saf. 1994, 45, 139–157.
28. Peng, C.-C.; Hwang, S.-L. The design of an emergency operating procedure in process control systems: A case study of a refrigeration system in an ammonia plant. Ergonomics 1994, 37, 689–702.
29. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423.
30. Davis, J.S.; LeBlanc, R.J. A study of the applicability of complexity measures. IEEE Trans. Softw. Eng. 1988, 14, 1366–1372.
31. Lew, K.S.; Dillon, T.S.; Forward, K.E. Software complexity and its impact on software reliability. IEEE Trans. Softw. Eng. 1988, 14, 1645–1655.
32. Mowshowitz, A. Entropy and the complexity of graphs: I. An index of the relative complexity of a graph. Bull. Math. Biophys. 1968, 30, 175–204.
33. Samia, A.; Tahani, A.; Haduth, L.; Asma Abd, A.; Rafig, A. Task analysis in human-computer interaction: A comparison between four task analysis techniques. AlQalam J. Med. Appl. Sci. 2023, 7, 296–307.
34. Stanton, N.A. Hierarchical task analysis: Developments, applications, and extensions. Appl. Ergon. 2006, 37, 55–79.
35. Annett, J. Hierarchical task analysis. In Handbook of Cognitive Task Design; CRC Press: Boca Raton, FL, USA, 2003; pp. 17–36.
36. Nygren, T. Psychometric properties of subjective workload measurement techniques: Implications for their use in the assessment of perceived mental workload. Hum. Factors 1991, 33, 17–33.
37. Hendy, K.C.; Hamilton, K.M.; Landry, L.N. Measuring subjective workload: When is one scale better than many? Hum. Factors 1993, 35, 579–601.
38. Hill, S.G.; Iavecchia, H.P.; Byers, J.C.; Bittner, A.C., Jr.; Zaklade, A.L.; Christ, R.E. Comparison of four subjective workload rating scales. Hum. Factors 1992, 34, 429–439.
39. Bartko, J.J. On various intraclass correlation reliability coefficients. Psychol. Bull. 1976, 83, 762.
40. Shobe, K.K.; Fiore, S.M. Similarity and priority of the submarine officer of the deck: Assessing knowledge structures. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Denver, CO, USA, 13–17 October 2003; Sage Publications: Los Angeles, CA, USA; Volume 47, pp. 297–301.
41. Johnson, R.R.; Stone, B.T.; Miranda, C.M.; Vila, B.; James, L.; James, S.M.; Rubio, R.F.; Berka, C. Identifying psychophysiological indices of expert vs. novice performance in deadly force judgment and decision making. Front. Hum. Neurosci. 2014, 8, 512.
42. Rasmussen, J. Skills, rules, and knowledge; signals, signs, and symbols, and other distinctions in human performance models. IEEE Trans. Syst. Man Cybern. 1983, SMC-13, 257–266.
43. Endsley, M.R. Toward a theory of situation awareness in dynamic systems. Hum. Factors 1995, 37, 32–64.
44. Chung, Y.H.; Yoon, W.C.; Min, D. A model-based framework for the analysis of team communication in nuclear power plants. Reliab. Eng. Syst. Saf. 2009, 94, 1030–1040.
45. Hwang, S.-L.; Liang, G.-F.; Lin, J.-T.; Yau, Y.-J.; Yenn, T.-C.; Hsu, C.-C.; Chuang, C.-F. A real-time warning model for teamwork performance and system safety in nuclear power plants. Saf. Sci. 2009, 47, 425–435.
Figure 1. Nuclear power plant DCS human–computer interaction framework.
Figure 2. Task complexity calculation process in human–computer interactions.
Figure 3. Classification of interface operation tasks.
Figure 4. Interaction diagram example.
Figure 5. Example diagram of graph entropy calculation.
Figure 6. Information structure diagram example.
Figure 7. Hierarchical Task Analysis of step 25.
Figure 8. Hierarchical analysis of interface operation tasks.
Figure 9. Operation structure and information structure diagrams.
Figure 10. Interaction diagram.
Figure 11. Step 25e, interaction action diagram example.
Figure 12. Linear regression graph of NASA-TLX scores and STC values.
Figure 13. Linear regression graph of operation time and STC values.
Table 1. Definitions of interactive component actions.
Interaction Component | Abbreviation | Interactive Action | Interactive Functionality Description
Valve | V | Click | Clicking the valve opens the operation window to perform related tasks
Pump | P | Click | Clicking the pump opens the operation window to perform related tasks
Controller | Ckg | Click | Clicking the controller opens the operation window to perform related tasks
Controller | Cku | Click |
Controller | Ckc | Click |
Navigation Icons | NI | Click | Click the navigation window icon to enter the system screen after powering on
Shortcut Navigation Icons | SNI | Click | The quick navigation icon enables fast screen transitions within the system with a single click
Execute Button | EB | Click | Confirm the execution of the operation by clicking
Exit Button | EXB | Click | Exit the window by clicking
Input Box | IB | Double-click | Select the window by double-clicking to enter data
Keyboard | Kbd | Input | Enter data by typing
Information Panels | IP | Check | Retrieve information by visual inspection
Operation Window | OW | – | A secondary window superimposed over the system interface for the operation of related components
System Window | SW | – | System screen
Reset Button | RB | Long press | Perform the reset operation of the related components by long-pressing
Switch Button | SB | Long press | Perform the on/off action of the related components by long-pressing
M/A Button | MAB | Click | Toggle between automatic and manual modes for the component
Signal Indicator Light | SL | Check | Determine system status by checking the signal lights
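For readers unfamiliar with the E measure, Table 1’s interactive actions (Click, Double-click, Input, Check, Long press) play the role of Halstead operators, while the components and data items they act on play the role of operands. A minimal Python sketch of Halstead’s classical formulas [12] follows; the example counts are hypothetical and do not correspond to any particular procedure step.

    from math import log2

    def halstead_effort(n1, n2, N1, N2):
        """Halstead's E measure from operator/operand counts:
        n1, n2 = distinct operators / operands; N1, N2 = total occurrences."""
        vocabulary = n1 + n2                 # n = n1 + n2
        length = N1 + N2                     # N = N1 + N2
        volume = length * log2(vocabulary)   # V = N * log2(n)
        difficulty = (n1 / 2.0) * (N2 / n2)  # D = (n1 / 2) * (N2 / n2)
        return difficulty * volume           # E = D * V

    # Hypothetical step: 3 distinct actions used 6 times in total, acting on
    # 4 distinct components/data items referenced 7 times in total.
    print(round(halstead_effort(n1=3, n2=4, N1=6, N2=7), 1))  # ~95.8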
Table 2. First-order entropy classification of graphs.
Graph (a) In | Graph (a) Out | Graph (a) Node(s) | Class | Graph (b) In | Graph (b) Out | Graph (b) Node(s)
0 | 2 | a | I | 0 | 2 | a
1 | 1 | b, c, d, e | II | 1 | 1 | b, d, e
2 | 2 | d | III | 1 | 2 | c
2 | 0 | g | IV | 2 | 1 | f
– | – | – | V | 2 | 0 | g
Table 3. Second-order entropy classification of graphs.
Graph (a) Node(s) | Graph (a) Neighbor Nodes | Class | Graph (b) Node(s) | Graph (b) Neighbor Nodes
a | b, c | I | a | b, c
b, c | a, d | II | b | a, g
d | b, c, e, f | III | c | a, d, c
e, f | d, g | IV | d, e | c, f
g | e, f | V | f | d, e, g
– | – | VI | g | b, f
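Tables 2 and 3 partition the nodes of two example graphs into first-order classes (by in/out-degree) and second-order classes (by neighborhood); the corresponding graph entropy is the Shannon entropy of the resulting class-size distribution [29,32]. A minimal Python sketch, using graph (a)’s first-order class sizes from Table 2:

    from collections import Counter
    from math import log2

    def partition_entropy(class_sizes):
        """Shannon entropy of a node partition: H = -sum(p_i * log2(p_i)),
        where p_i = |class i| / n (Mowshowitz-style graph entropy)."""
        n = sum(class_sizes)
        return -sum(c / n * log2(c / n) for c in class_sizes)

    def first_order_classes(in_degrees, out_degrees):
        """Group nodes by their (in-degree, out-degree) pair, as in Table 2;
        returns a mapping {(in, out): class size}."""
        return Counter(zip(in_degrees, out_degrees))

    # Graph (a) of Table 2: first-order class sizes 1, 4, 1, 1.
    print(round(partition_entropy([1, 4, 1, 1]), 3))  # ~1.664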
Table 4. Operation steps of some examples.
Action or Expected Response | Unintended Response
Step 23: Check if a train of safety injection pumps can be stopped
a. Check the safety injection pumps – running | a. Execute Step 24
b. Determine the sub-cooling at the reactor core outlet required for pump shutdown according to Table XX |
c. Reactor core outlet sub-cooling – greater than the sub-cooling required by Table XX | c. Execute Step 27
d. Pressurizer water level – greater than 2.0 m | d. Do not stop the safety injection pump and return to Step 21
e. Stop a train of safety injection pumps |
f. Return to Step 23 a |
Step 24: Check if normal charging can be established
a. Check the following items: | a. Execute Step 27
  - All safety injection pumps are stopped |
  - Centrifugal charging pump: one in operation and one on standby |
b. Reactor core outlet sub-cooling – greater than 20 °C | b. If the reactor core outlet sub-cooling is less than 20 °C, start a high-pressure safety injection pump
c. Pressurizer water level – greater than 2 m | c. Return to Step 21
Step 25: Establish normal charging and maintain the pressurizer water level
a. Place the RRA to RCV flow control valve in manual |
b. Check if the safety injection has been reset | b. Manually reset the safety injection
c. Manually fully open the charging valve |
d. Check the status of the following valves: | d. Restore the valves to the correct status
  - Minimum flow valve of the charging pump – open |
  - Charging line isolation valve – open |
  - Boron injection isolation valve – closed |
e. Adjust the charging flow to maintain the pressurizer water level |
Table 5. Step 25 operation information.
Task Information | Operator Actions | Components and Information
1. Place the flow control valve for RRA to RCV in manual mode. | 1. Switch the valve to manual. | Valve: Data type B
2. Check if the safety injection has been reset. | 2. Confirm that the safety-injection signal is cut off. | Signal indicator light: Data type B
3. Manually fully open the charging valve. | 3.1 Set the charging valve to manual mode. 3.2 Adjust the charging-flow data. | Operating mode: Data type B; Valve: Data type B
4. Check the status of the following valves. | 4.1 Check that the minimum-flow pipeline isolation valve of the charging pump is in the open state. 4.2 Check that the charging-line isolation valve is in the open state. 4.3 Check that the boric-acid injection isolation valve is in the closed state. | Valve: Data type B; Valve: Data type B; Valve: Data type B
5. Adjust the charging flow to maintain the pressurizer water level. | 5. Adjust the flow of the charging-flow valve and constantly monitor the pressurizer water level. | Charging flow: Data type P (process variable); Pressurizer water level: Data type P (process variable)
Table 6. Calculation results of step task complexity.
Step | TIC | TLC | TSC | TEC | Step | TIC | TLC | TSC | TEC
1 | 0.566 | 0.590 | 1.991 | 0.464 | 16 | 1.006 | 1.928 | 1.991 | 0.521
2 | 0.201 | 0.690 | 1.406 | 0.173 | 17 | 2.164 | 1.928 | 2.396 | 1.953
3 | 1.668 | 2.874 | 2.724 | 1.568 | 18 | 3.417 | 2.500 | 4.308 | 2.715
4 | 0.820 | 0.983 | 1.298 | 0.588 | 19 | 0.566 | 0.590 | 1.991 | 0.775
5 | 1.556 | 1.928 | 2.530 | 1.376 | 20 | 1.931 | 2.518 | 2.141 | 1.145
6 | 1.668 | 2.500 | 2.413 | 1.769 | 21 | 2.371 | 2.591 | 2.413 | 2.820
7 | 1.899 | 3.021 | 2.044 | 0.641 | 22 | 2.558 | 3.121 | 2.860 | 1.385
8 | 2.371 | 2.874 | 1.991 | 1.067 | 23 | 2.558 | 2.545 | 2.234 | 2.347
9 | 0.000 | 0.978 | 1.296 | 0.051 | 24 | 2.371 | 1.398 | 2.640 | 1.037
10 | 2.001 | 1.398 | 1.991 | 0.558 | 25 | 3.322 | 3.121 | 4.434 | 3.523
11 | 2.371 | 2.152 | 2.413 | 2.102 | 26 | 0.566 | 0.590 | 0.000 | 1.124
12 | 0.891 | 1.928 | 2.389 | 0.508 | 27 | 2.164 | 2.152 | 1.991 | 0.096
13 | 2.371 | 0.859 | 2.234 | 1.174 | 28 | 1.668 | 0.000 | 2.530 | 1.357
14 | 1.931 | 1.398 | 2.141 | 1.441 | 29 | 2.371 | 0.768 | 1.860 | 0.793
15 | 4.468 | 3.948 | 4.807 | 3.971 | 30 | 1.006 | 0.590 | 0.196 | 0.000
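The per-step task complexity (the STC values reported in Tables 7 and 10) combines the four sub-measures of Table 6 through a weighted Euclidean norm. The Python sketch below illustrates only that combination step; the equal weights are placeholders rather than the factor-analysis coefficients fitted in this study, so the printed value will not match Table 7.

    import numpy as np

    def step_task_complexity(tic, tlc, tsc, tec, weights):
        """Weighted Euclidean norm of the four sub-measures:
        TC = sqrt(sum_i w_i * x_i^2)."""
        x = np.array([tic, tlc, tsc, tec])
        w = np.array(weights)
        return float(np.sqrt(np.sum(w * x**2)))

    # Step 1 of Table 6 with illustrative equal weights; these placeholders
    # will not reproduce the fitted STC values of Table 7.
    print(round(step_task_complexity(0.566, 0.590, 1.991, 0.464,
                                     (0.25, 0.25, 0.25, 0.25)), 3))  # 1.101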
Table 7. NASA-TLX scores and task complexities of steps.
Step | NASA-TLX | STC | Step | NASA-TLX | STC
1 | 59.86 | 0.56 | 16 | 48.22 | 0.74
2 | 39.55 | 0.40 | 17 | 59.28 | 1.06
3 | 65.50 | 1.13 | 18 | 72.10 | 1.67
4 | 41.07 | 0.49 | 19 | 41.67 | 0.58
5 | 44.85 | 0.95 | 20 | 58.25 | 0.99
6 | 42.58 | 1.05 | 21 | 64.52 | 1.28
7 | 53.28 | 1.02 | 22 | 60.05 | 1.27
8 | 71.10 | 1.07 | 23 | 72.52 | 1.21
9 | 44.17 | 0.40 | 24 | 62.58 | 1.00
10 | 52.32 | 0.81 | 25 | 76.93 | 1.83
11 | 69.32 | 1.14 | 26 | 43.48 | 0.35
12 | 59.90 | 0.80 | 27 | 56.52 | 0.91
13 | 63.97 | 0.91 | 28 | 62.53 | 0.85
14 | 57.07 | 0.89 | 29 | 58.02 | 0.82
15 | 80.55 | 2.17 | 30 | 40.88 | 0.30
Table 8. Error analysis of NASA-TLX scores and operation time.
Index | Mean | Standard Deviation | Lower Limit of 95% Confidence Interval | Upper Limit of 95% Confidence Interval
NASA-TLX score | 57.42 | 11.61 | 53.08 | 61.75
Operation time (s) | 44.47 | 34.48 | 31.59 | 57.35
Table 9. Analysis of variance results for NASA-TLX scores and STC.
Model | Degrees of Freedom | F | Significance | R | R²
Regression | 1 | 59.281 | <0.001 | 0.824 | 0.678
Residual | 28 | | | |
Total | 29 | | | |
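The ANOVA results in Table 9 can be checked directly against the data in Table 7: regressing the NASA-TLX scores on the STC values with scipy.stats.linregress should reproduce R ≈ 0.824 and R² ≈ 0.678 up to rounding, and the operation-time regression of Tables 10 and 11 can be verified in the same way.

    from scipy import stats

    # STC and NASA-TLX values for steps 1-30, transcribed from Table 7.
    stc = [0.56, 0.40, 1.13, 0.49, 0.95, 1.05, 1.02, 1.07, 0.40, 0.81,
           1.14, 0.80, 0.91, 0.89, 2.17, 0.74, 1.06, 1.67, 0.58, 0.99,
           1.28, 1.27, 1.21, 1.00, 1.83, 0.35, 0.91, 0.85, 0.82, 0.30]
    tlx = [59.86, 39.55, 65.50, 41.07, 44.85, 42.58, 53.28, 71.10, 44.17,
           52.32, 69.32, 59.90, 63.97, 57.07, 80.55, 48.22, 59.28, 72.10,
           41.67, 58.25, 64.52, 60.05, 72.52, 62.58, 76.93, 43.48, 56.52,
           62.53, 58.02, 40.88]

    res = stats.linregress(stc, tlx)  # ordinary least squares, one predictor
    print(f"slope={res.slope:.2f}, intercept={res.intercept:.2f}")
    print(f"R={res.rvalue:.3f}, R^2={res.rvalue**2:.3f}")  # ~0.824 / ~0.678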
Table 10. Operation time and task complexity of steps.
Step | Operation Time (s) | STC | Step | Operation Time (s) | STC
1 | 14.23 | 0.56 | 16 | 22.55 | 0.74
2 | 8.20 | 0.40 | 17 | 26.74 | 1.06
3 | 36.65 | 1.13 | 18 | 74.33 | 1.67
4 | 18.74 | 0.49 | 19 | 21.48 | 0.58
5 | 34.86 | 0.95 | 20 | 66.34 | 0.99
6 | 44.64 | 1.05 | 21 | 67.45 | 1.28
7 | 20.12 | 1.02 | 22 | 48.96 | 1.27
8 | 69.34 | 1.07 | 23 | 83.44 | 1.21
9 | 15.67 | 0.40 | 24 | 48.69 | 1.00
10 | 28.74 | 0.81 | 25 | 135.59 | 1.83
11 | 63.41 | 1.14 | 26 | 18.86 | 0.35
12 | 16.40 | 0.80 | 27 | 21.44 | 0.91
13 | 49.48 | 0.91 | 28 | 59.97 | 0.85
14 | 14.56 | 0.89 | 29 | 43.45 | 0.82
15 | 150.89 | 2.17 | 30 | 9.00 | 0.30
Table 11. Analysis of variance results for operation time and STC.
Model | Degrees of Freedom | F | Significance | R | R²
Regression | 1 | 96.706 | <0.001 | 0.880 | 0.785
Residual | 28 | | | |
Total | 29 | | | |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
