Next Article in Journal
A Multi-Criteria Decision-Making Approach Integrated with Machine Learning for Energy Resource Supply
Previous Article in Journal
Vulnerability in Bank–Asset Bipartite Network Systems: Evidence from the Chinese Banking Sector
 
 
Font Type:
Arial Georgia Verdana
Font Size:
Aa Aa Aa
Line Spacing:
Column Width:
Background:
Article

Human Factor Risk Analysis (HFRA) Based on an Integrated Perspective of Socio-Technical Systems and Safety Information Cognition

1
School of Public Administration, Central South University, Changsha 410083, China
2
School of Safety Engineering, Jiangxi University of Science and Technology, Nanchang 330000, China
3
School of Economics and Trade, Hunan University, Changsha 410083, China
*
Author to whom correspondence should be addressed.
Systems 2026, 14(2), 199; https://doi.org/10.3390/systems14020199
Submission received: 5 January 2026 / Revised: 6 February 2026 / Accepted: 9 February 2026 / Published: 12 February 2026
(This article belongs to the Section Systems Theory and Methodology)

Abstract

Unsafe behavior remains a dominant contributor to accidents in complex socio-technical systems (STSs), yet it is still frequently interpreted as an individual-level information failure. This study argues that unsafe behavior is more accurately understood as a systemic outcome shaped by multi-level technological, organizational, and environmental conditions. To address this gap, an integrated human factor risk analysis framework is proposed by combining the STS perspective with safety information cognition (SIC) theory. The framework conceptualizes unsafe behavior as the result of risk transmission through safety information flows, linking system-level risk sources to individual perception, cognition, decision-making, and action. Within this perspective, human factor risk does not arise directly from individual error, but from deficiencies and asymmetries in the generation, transmission, and utilization of safety-related information embedded in the STS. Based on this conceptualization, a system-oriented human factor risk analysis (HRFA) approach is developed to support the identification, assessment, and control of unsafe behaviors across both accident scenarios and operational contexts. The framework is applied to road transportation of dangerous goods in China, a typical high-risk STS. The application results demonstrate that the proposed approach can effectively distinguish the comprehensive risk characteristics of different unsafe behaviors and reveal their underlying systemic causes. This study contributes to systems thinking in safety governance by shifting the analytical focus from individual behavior correction to upstream system conditions and information processes. The proposed framework provides a transferable approach for understanding and managing human factor risk in complex STSs and offers practical implications for proactive, system-oriented safety governance.

1. Introduction

Humans are the core of the social–technical system (STS), who perform their duties and jointly affect the system safety. However, in terms of safety behavior, unsafe behavior is always the dominant factor of accidents whether compared with cognition or organizational behavior [1,2]. Therefore, the optimization of safety behavior means the absolute improvement of system safety. Despite the continuous progress of science and technology, the continuous improvement of the economic level, and the gradual improvement of safety laws and regulations and safety management systems, the total number of human accidents is still relatively high [3]. On one hand, the state’s current development stage affects the situation of production safety [4,5]; on the other hand, the gap between the safety behavior and the requirements of national laws and regulations, enterprise management systems, and on-site safety technical measures is large, so safety work is often in a passive situation [6]. Therefore, trying to find ways to solve the problem is imperative.
Existing research on human factor risk has developed along two relatively independent strands. Human risk analysis (HRA) focuses on estimating and reducing individual error probability, and organization-oriented HRFA emphasizes the influence of organizational structures, management practices, and safety culture on human behavior. Traditional HRA methods primarily conceptualize unsafe behavior as a consequence of individual cognitive failure or execution error [7,8]. By modeling human behavior through error probabilities, task sequences, or cognitive modes, HRA provides useful tools for quantifying human error in well-defined operational contexts, such as the method of Human Error Rate Prediction [9], the Cognitive Reliability and Error Analysis Method [10], and so on. However, these methods often rely on simplified assumptions regarding task stability, information completeness, and rational decision-making. As a result, they face inherent limitations in explaining unsafe behaviors that emerge under dynamic, uncertain, and tightly coupled socio-technical conditions, where individual actions are strongly shaped by organizational constraints, technological interfaces, and information environments.
In contrast, organization-oriented HRFA approaches extend the analytical focus beyond individuals by incorporating the organizational climate [11], safety culture [12], management systems [13], and regulatory frameworks [14]. These studies have demonstrated that unsafe behavior is frequently rooted in latent organizational deficiencies rather than isolated operator errors. Nevertheless, many organization-oriented approaches remain descriptive or macro-level in nature, and often lack a clear analytical mechanism to explain how organizational and technical conditions are translated into concrete individual actions. Consequently, the causal pathway from system-level conditions to micro-level behavior remains insufficiently specified.
This methodological fragmentation reflects a broader theoretical gap in the literature: existing approaches tend to address either where behavioral risks originate within the system (organizational or structural analyses) or how individuals cognitively process information and execute actions (cognitive or reliability-based analyses), but rarely both in an integrated manner. As a result, unsafe behavior is either treated as an individual failure detached from the system context, or as a structural outcome without a clear behavioral transmission mechanism.
The STS perspective offers a robust framework for identifying system-level risk sources across technological, organizational, and environmental dimensions. However, STS-based accident models typically stop short of explaining how these macro-level conditions are translated into individual behavior at the operational level [15,16,17]. Conversely, safety information cognition (SIC) theory provides a detailed account of how individuals perceive, interpret, and utilize safety-related information, but it often abstracts away from the organizational and technological conditions that shape information availability and quality.
This study argues that integrating STS and SIC is both theoretically necessary and analytically complementary. STS theory explains the structural origins of behavioral risk, while SIC theory explains the cognitive and informational transmission of such risk to individual behavior. By linking system-level conditions to micro-level unsafe behavior through safety information flows, the proposed integration addresses a critical gap in existing human factor research: the lack of a systematic mechanism that connects socio-technical risk sources with individual safety behavior.
Based on this integration, the present study moves beyond traditional method-oriented comparisons and proposes a system-oriented human factor risk analysis framework that explicitly incorporates technological, organizational, and environmental risk sources, as well as their transmission through safety information cognition processes. The remaining content of the paper is as follows: theoretical foundations are developed in Section 2; the methodology and method design are proposed in Section 3; the method application is conducted in Section 4; and a conclusion and discussion are presented in Section 5.

2. Theoretical Foundation and Theoretical Framework

2.1. Theoretical Foundation

2.1.1. STS Theory

In complex high-risk domains, accidents and unsafe behaviors rarely arise from isolated technical failures or individual mistakes. Instead, they emerge from dynamic interactions among human actors, organizational arrangements, technological components, and institutional environments. The STS perspective provides a comprehensive framework for understanding these interactions by emphasizing that system safety is determined by the joint functioning of social and technical elements rather than by any single factor alone [18,19].
From this perspective, an operational system is composed of two interdependent subsystems: a social subsystem and a technical subsystem. The social subsystem includes individual cognition and behavior, organizational structures, management practices, regulatory institutions, and the safety culture. The technical subsystem consists of physical equipment, operational technologies, work processes, technical procedures, and information systems. These subsystems operate within specific environmental and task contexts characterized by uncertainty, time pressure, and complexity. System performance and safety outcomes depend on how well these elements are coordinated and aligned over time.
A fundamental assumption of the STS perspective is that safety is an emergent property of the system. Safety does not reside inherently in individuals, technologies, or formal rules. Rather, it emerges from the continuous interaction among people, organizations, and technologies during system operation [15,20]. Even when individual components perform as designed, system-level risk may still increase if coordination among subsystems is inadequate. For example, unclear institutional rules, insufficient organizational oversight, poorly designed technical interfaces, or delayed feedback mechanisms may collectively shape conditions in which unsafe behaviors become more likely, even among experienced and well-trained personnel.
The STS perspective has played a critical role in shifting safety research away from linear cause–effect explanations that focus on individual error. Instead of treating accidents as the direct result of frontline operator mistakes, this perspective emphasizes the influence of higher-level organizational and institutional conditions on individual behavior. Decisions made at regulatory, managerial, and design levels constrain and shape the range of actions available to individuals at the operational level. As a result, unsafe behavior is often a downstream manifestation of latent system deficiencies rather than a purely individual failure.
Within the STS, individual safety behavior should therefore be understood as the behavioral output of the system under specific informational, organizational, and technical conditions. Operators do not act in isolation; their decisions are influenced by the availability, accuracy, and timeliness of safety-related information, the clarity and enforceability of rules and procedures, the effectiveness of supervision and feedback, and the design of technological systems with which they interact. When these system-level conditions are misaligned or poorly designed, the likelihood of unsafe behavior increases regardless of individual intentions or awareness.
Consequently, improving safety performance from an STS perspective requires interventions that extend beyond individual-level behavior modification. Effective safety governance must address institutional arrangements, organizational management, technological design, and information flows simultaneously. By situating individual behavior within the broader system context, the STS perspective provides a necessary theoretical foundation for subsequent HFRA.

2.1.2. SIC Theory and Safety Information Risk Flow

With the advent of the era of information and big data, information has become a buzzword, which can eliminate the uncertainty in production and life [21], thereby providing a basis for decision-making. However, a lack and asymmetry of information will lead to the emergence of social problems, which are the result of improper information processing. In the field of safety, the lack and asymmetry of safety information will become the main causation of accidents [22]. In the past, scholars have linked safety information processing with human behavior from the perspective of individuals, such as Wickens C’s information processing model [23] and Wu’s “7-6-5-4” model [24], both of which involve the internal processing of safety information, namely cognitive processes. With the development of society, the amount of information is increasing and the information is becoming more and more complicated, so a pure cognitive process is not enough for individuals who need to face the status quo. For example, when a safety evaluation agency conducts a safety pre-evaluation on a project, it must identify the potential hazards and harmful factors of the evaluation object based on the collected data, and determine its compliance with the laws and regulations, rules, standards, and norms of safety production, so as to predict the possibility and severity of the accident and propose scientific, reasonable, and feasible safety countermeasures [25]. Therefore, as far as this work is concerned, even if there is a division of labor and cooperation between members, safety cannot be accomplished by only internal processing of information, as external processing of information is an indispensable part. Here, the internal processing of information refers to a series of psychological behaviors of the individual’s perception and cognition of information; the external processing of information refers to a series of external behaviors of the acquisition, analysis, and utilization of information. 
Therefore, starting from the social–technical system, this work summarizes the safety information processing process as “safety information flow—the acquisition of safety information—safety perception—the analysis of safety information—safety cognition—the utilization of safety information—safety behavior”. All the acquired and perceived safety information is part of the safety information flow, which finally produces safety behavior under the comprehensive interaction of internal processing and external processing.
Risk is the uncertainty of future events and results, while safety risk is a comprehensive representation of the possibility of an accident and the severity of its consequences. Based on its origin, the risk was divided into natural risk, technical risk, social risk, political risk, economic risk, cultural risk, behavioral risk, etc. [26]. Therefore, safety behaviors also have risks, and their comprehensive representation is the key conclusion that needs to be determined. Due to the possibility of information asymmetry, the information flow is accompanied by the generation of risk flow [27,28]. Therefore, in the process of the safety information processing, a safety information risk flow, as shown in Figure 1, is generated, which is driven by the requirement of safety information processing, safety theories, safety technologies, safety methods, etc., transforming risks of safety information flow into risks of internal safety information processing and risks of external safety information processing, and ultimately manifesting as safety behavior risk and system safety risk.
Due to the impact of the environment, the information itself, and personal factors, the risk of a certain link during safety information processing is objective [1]. It can be seen from Figure 1 that there are seven elements to an asymmetric model of the safety information cognition process, including “safety information flow: the acquisition of safety information”, “the acquisition of safety information: safety perception”, “safety perception: the analysis of safety information”, “the analysis of safety information: safety cognition”, “safety cognition: the utilization of safety information”, “the utilization of safety information: safety behavior”, and “safety behavior: safety information flow”. As a result, the adjacent links influence each other, and the risk out of control of the former link will increase the risk of the latter link, thus leading to the trend of a domino effect and the occurrence of unsafe behaviors. Furthermore, safety information risk is not only related to the correctness of its processing; untimely safety information processing will also increase safety behavior risks. Leveson classified the causes of accidents from the perspective of execution defects, mainly including the following aspects: unidentified hazards, inappropriate, ineffective, or missing control actions for identified hazards, communication flaws, inadequate actuator operation, and time lags [15]. Therefore, unsafe behavior is the direct manifestation of the risk consequences of information processing links. In a word, the risk of losing control of one or more links in the process will lead to the occurrence of unsafe behavior, and reducing the risk of safety behavior will be a fundamental way to prevent the human accident.

2.2. Theoretical Framework

Within complex STS, safety behavior does not arise solely from individual choice or cognitive capacity. Instead, it is shaped by multiple sources of risk originating from higher-level system conditions. From a socio-technical perspective, risks influencing individual safety behavior are generated across technological, environmental, and organizational levels, and are subsequently transmitted to the micro level through information, control, and interaction mechanisms [29,30]. Understanding these system-level sources of risk is therefore essential for explaining why unsafe behaviors occur and for identifying effective intervention points; the theoretical framework is shown in Figure 2.
It is important to emphasize that the integration of the STS perspective and SIC theory in this study is not a simple conceptual juxtaposition, but a mechanism-level coupling established through the concept of safety information risk flow. Existing STS-based models, such as STAMP, have highlighted cross-level control and constraint failures, while information-flow-based accident models have revealed the role of information distortion and delay in accident causation. However, these approaches primarily function as accident explanation models rather than behavior-oriented risk quantification frameworks.
In the proposed HFRA framework, safety information risk flow serves as an explicit analytical bridge that connects system-level risk sources with micro-level unsafe behavior through a structured transmission chain consisting of information generation, acquisition, perception, analysis, cognition, utilization, and execution. This transmission chain is not only descriptive but operationalized in the subsequent risk identification, probability estimation, and risk control stages. As a result, system-level technological, organizational, and environmental risks are not treated merely as background conditions, but are analytically mapped to specific information-processing asymmetry modes and corresponding behavioral risk mechanisms. This mechanism-level linkage distinguishes the present integration from parallel multi-perspective explanations.
Technological systems constitute a primary source of safety behavior risk at the micro level. Equipment design, interface layout, automation logic, and feedback mechanisms directly influence how individuals perceive system states and respond to operational demands. Poorly designed human–machine interfaces, ambiguous signals, delayed alarms, or excessive automation complexity may increase the cognitive workload, reduce situation awareness, and constrain effective decision-making. In such conditions, unsafe behavior may emerge not from individual negligence, but from rational adaptation to technological constraints. Moreover, technical failures or degradation, even when partial or latent, may distort safety information and create mismatches between perceived and actual system states, thereby increasing the likelihood of erroneous actions.
The operational environment represents another critical source of risk affecting individual safety behavior. Physical environmental factors such as noise, lighting, temperature, weather conditions, and spatial constraints can impair perception, attention, and motor performance. At the same time, task environments characterized by time pressure, a high workload, uncertainty, or dynamic change impose additional cognitive and emotional stress on individuals. These environmental conditions influence how safety information is acquired and processed, often limiting the individual’s capacity to accurately assess risks and select appropriate actions. Consequently, unsafe behavior may arise as an adaptive response to environmental stressors rather than as a deliberate violation of safety rules.
Organizational structures and management practices constitute a fundamental source of safety behavior risk at the micro level. Organizational factors such as unclear responsibilities, inconsistent procedures, insufficient training, inadequate supervision, and misaligned incentive systems shape the context in which individuals interpret and prioritize safety information. When organizational rules are ambiguous, conflicting, or weakly enforced, individuals may face competing goals between safety and productivity. In such situations, unsafe behavior may reflect organizational signals rather than individual intent. Furthermore, deficiencies in communication, coordination, and feedback mechanisms may lead to information asymmetry, delayed risk awareness, and fragmented decision-making, thereby amplifying behavioral risk.
Importantly, technological, environmental, and organizational sources of risk do not operate independently. Instead, they interact dynamically within the STS, generating compounded effects at the micro level. Organizational decisions influence technology design and deployment; technological systems shape how environmental conditions are monitored and controlled; environmental disturbances may expose organizational and technical vulnerabilities. These interactions are mediated through safety information flows, which transmit system-level conditions to individual cognition and behavior. As a result, micro-level unsafe behavior should be understood as the endpoint of multi-level risk transmission rather than as an isolated phenomenon.
From this perspective, micro-level safety behavior risk is not an inherent property of individuals, but a systemic outcome shaped by technological, environmental, and organizational conditions. Effective risk governance therefore requires identifying and managing these upstream risk sources rather than focusing solely on individual behavior modification. By explicitly linking system-level risk sources to micro-level safety behavior, this framework provides a theoretical foundation for subsequent human factor risk analysis and for the development of targeted safety behavior optimization strategies.

3. Methodology and Method Design

3.1. Methodology

The potential hazard, safety risk, and accident show a one-way linear relationship, and thus accidents can be prevented as long as one link of a potential hazard or safety risk is eliminated [26]. However, many potential hazards have objective existence. Therefore, managing risks effectively has become the key to prevent accidents. The task of risk management is to determine the risks existing in the production and operation of enterprises through risk analysis, to formulate risk control and management measures, and to prevent accidents or reduce losses [26]. The safety behavior risk, unsafe behavior, and accident also present a one-way linear relationship. The prevention of unsafe behavior before was based on safety management, the construction of a safety culture, and other aspects, and there are few quantitative methods to prevent the occurrence of unsafe behavior. Therefore, this work puts forward a HRFA procedure, as shown in Figure 3.
The method is developed based on the definition of behavioral safety risk. The safety risk is the comprehensive representation of the possibility and consequences/severity of accidents [31]. Therefore, safety behavior risk as the prerequisites of the safety risk, which can be understood as the comprehensive representation of the possibility and consequences/severity of accidents caused by the unsafe behavior. The procedure of safety behavior risk management is introduced as follows:
Step 1: Human factor risk identification
This work can determine the risk factors leading to hazardous events, involving humans, technology, the environment, and management, which can be done by referring to the potential hazards and accident types of similar risky works or using appropriate causal analysis methods [32]. The purpose of HFRA is to improve system safety, which can be applied to prevent an accident, promote the safety operation of a risk work, and determine the comprehensive importance of unsafe behaviors for a certain accident and a certain risk work according to different safety demands. Therefore, the first task is to determine whether the research object is a certain accident, a certain risk work, or a combination of both. On this basis, the risk factors can be summarized through causal analysis methods such as a fishbone diagram (FD), fault tree analysis (FTA), accident investigation report analysis, etc. The set of unsafe behaviors involved is represented by the following formula:
U b = { U 1 , U 2 , , U n , U n + 1 , , U k }
Step 2: Human factor risk assessment
The purpose of risk assessment is to predict the risk level of unsafe behavior. According to the different assessment objects, this step is divided into the following three types:
Step 2.1: Human factor risk assessment (accident)
When the purpose is to prevent an accident, the probability of the accident— P n caused by the unsafe behavior— U n needs to be determined. The common methods to link the cause with the accident and to calculate the probability of the accident include FTA and event tree analysis (ETA). In the past, scholars combined fuzzy mathematics (FM) and FTA to calculate the probability of the top event and used it for safety assessment, which improved the probability accuracy of the top event of the FTA [33]. Formulas (2) and (3) are respectively the risk function of safety behavior and the consequence calculation formula of an accident caused by the unsafe behavior.
R = f ( P n , C )
C = P n · H n + P n · M n + P n · E n
where R is the risk level of the unsafe behavior, C is the total loss of the accident, P n is the fuzzy probability of the accident caused by unsafe behavior, H n is the loss of casualties caused by the accident, M n is the economic loss caused by the accident, and E n is the loss of environmental damage caused by the accident.
Step 2.2: Human factor risk assessment (risk activities)
When the purpose is to optimize risk activities, the set of probabilities of accidents— P as shown in Formula 4 caused by the unsafe behavior— U n needs to be determined. An unsafe behavior may lead to different accidents in different situations of risk activities; for example, fatigue driving may hit pedestrians or rush down the valley. Accordingly, this risk assessment calculates the probability of accidents and losses differently from the above, as shown in Formulas (5) and (6).
P = { P n 1 , P n 2 , , P n m }
P n = i = 1 m P n i
C = i = 1 m ( P n i · H i + P n i · M i + P n i · E i ) m
where P n is the probability of accidents due to unsafe behavior, C is the possible loss of the accident, P n i is the fuzzy probability of an accident caused by unsafe behavior, H i is the loss of casualties caused by the accident, M i is the economic loss caused by the accident, and E i is the loss of environmental damage caused by the accident.
Step 2.3: Human factor risk assessment (combination)
The comprehensive importance of safety behavior is not only reflected in the risk of causing an accident but also in the risk of affecting the safety operation of risk activities. For example, the risk of an accident caused by an unsafe behavior is high, but the risk of an activity is low; or there is a great risk that an unsafe action will affect a risk activity, but it is not the primary goal in terms of preventing an accident. Therefore, the two assessment results are combined, and the maximum risk level of unsafe behavior is the comprehensive risk, as shown in Formula (7).
R C n = m a x ( R A n , R W n )
where R C n is the comprehensive risk level of the unsafe behavior, R A n is the risk level of the unsafe behavior for an accident, and R W n is the risk level of the unsafe behavior for a risk work.
Step 3: Risk control
According to the safety information risk flow as shown in Figure 1, there are seven phases of information asymmetry during safety information processing, and the prerequisites of unsafe behaviors can be determined through the causation analysis of information asymmetry, as shown in Table 1. As a result, safety behavior risk control will be conducted through safety information risk reduction: the reduction in negative effects in the process of safety information processing will reduce the possibility of unsafe behaviors, according to factors including human, environment, and safety information in Table 1.

3.2. Method Design

This work will be carried out through a combination of the following methods: accident investigation reports analysis (AIRA), fishbone diagram (FD), fault tree analysis (FTA), event tree analysis (ETA), fuzzy mathematics (FM), and risk matrix (RM). As two objects of accident and risk work are involved in the safety behavior risk assessment, different methods will be adopted when studying different objects, as shown in Figure 4. The selection of specific analytical tools is also methodologically motivated. FTA is employed to support structured decomposition and logical aggregation of multi-causal behavioral risk factors. FM is introduced to handle linguistic assessments and expert judgment uncertainty where precise statistical frequencies are unavailable. The RM is used as a decision-translation layer that converts analytical results into operationally interpretable risk categories. The combined toolchain emphasizes structural traceability, uncertainty tolerance, and governance interpretability rather than purely statistical prediction accuracy.
(1) When assessing the risk of an accident caused by unsafe behavior, the cause of the accident should first be summarized through FD, and then we calculate the probability of each cause of an event in combination with FM, and calculate the probability of the accident through FTA based on this, and finally obtain the assessment results through RM.
(2) When assessing the risk of accidents in a risk work caused by unsafe behaviors, the accident paths should first be summarized through ETA under the analysis of FD, and then we calculate the probability of each cause of an event in combination with FM, and calculate the probability of an accident through ETA based on this, and finally obtain the assessment results through RM.
The above methods are introduced as follows:
(1) AIRA: Through the summary analysis of the relevant accident investigation reports, the types of accidents that may occur in the risk work and the losses caused by the related accidents can be determined, which has a fundamental role in the subsequent use of FD and the prediction of accident consequences.
(2) FD: Starting from a specific accident, the analysis branches into aspects such as the operator, safety management, environment, materials, and methods, and develops each branch in depth to systematize and organize the complex causes. The causes of the accident are finally displayed layer by layer in a fishbone diagram [26].
(3) FTA: It is a directed logic tree that describes the occurrence of events from results to causes. Starting from the result (the top event), a deductive analysis is conducted to determine the intermediate events, the basic events, and their logical relationships. After the probability of each basic event is determined, the minimum cut sets of the fault tree are found, and the occurrence probability of the top event is computed from the probability products of the minimum cut sets by the inclusion–exclusion principle, as shown in Formula (8).
g = \sum_{r=1}^{N_G} \prod_{x_i \in G_r} q_i - \sum_{1 \le r < s \le N_G} \prod_{x_i \in G_r \cup G_s} q_i + \cdots + (-1)^{N_G - 1} \prod_{x_i \in G_1 \cup \cdots \cup G_{N_G}} q_i    (8)
where r and s are indices over the minimum cut sets; N_G is the number of minimum cut sets in the system; x_i is the i-th basic event; G_r is the r-th minimum cut set; and q_i is the occurrence probability of the i-th basic event.
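As an illustrative sketch (the function name and the two-cut-set example are ours, not from the paper), Formula (8) can be evaluated by summing, with alternating signs, the probability products over unions of minimum cut sets:

```python
from itertools import combinations
from math import prod

def top_event_probability(cut_sets, q):
    """Occurrence probability of the top event via inclusion-exclusion
    over the minimum cut sets, as in Formula (8).

    cut_sets: list of sets of basic-event indices (the G_r)
    q: dict mapping basic-event index -> occurrence probability q_i
    """
    g = 0.0
    for r in range(1, len(cut_sets) + 1):
        sign = (-1) ** (r - 1)
        for combo in combinations(cut_sets, r):
            union = set().union(*combo)  # basic events in G_r, G_s, ... combined
            g += sign * prod(q[i] for i in union)
    return g

# hypothetical example: G1 = {x1, x2}, G2 = {x2, x3}
g = top_event_probability([{1, 2}, {2, 3}], {1: 0.1, 2: 0.2, 3: 0.3})
```

For this example, g = 0.1 × 0.2 + 0.2 × 0.3 − 0.1 × 0.2 × 0.3 = 0.074.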
(4) ETA: It is an analysis process from cause to result, which starts from a certain unsafe behavior (the initiating event) in risky work and describes all the paths leading to each accident [26]. By multiplying the probabilities of the cause events along each path, the probability of each path is obtained. The sum of the path probabilities is the probability that the unsafe behavior causes an accident in risky work.
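The ETA path aggregation described above can be sketched as follows; the function name and the branch probabilities in the example are hypothetical:

```python
from math import prod

def eta_accident_probability(paths):
    """Probability that the initiating unsafe behavior leads to an accident:
    for each accident path, multiply the probabilities of the cause events
    along the path, then sum over all accident paths."""
    return sum(prod(path) for path in paths)

# hypothetical event tree: two accident paths after the initiating event
p_accident = eta_accident_probability([[0.3, 0.5], [0.7, 0.1]])
```

Here the two accident paths contribute 0.3 × 0.5 + 0.7 × 0.1 = 0.22.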
(5) FM: Since the probability of each cause event is vague and difficult to quantify directly, FM is needed to deal with this problem [34]. In the process,
I. Firstly, define the linguistic terms (tone operators), including very low (VL), low (L), relatively low (RL), medium (M), relatively high (RH), high (H), and very high (VH), and determine the corresponding membership functions on the domain [0,1], as shown in Figure 5 [35].
II. Secondly, the natural language description of each cause of an event is carried out by experts, which is transformed into the comprehensive fuzzy number through Formula (9):
\tilde{P}_j = \sum_{i=1}^{n} w_i \times \tilde{P}_{ij}    (9)
where P ~ j is the comprehensive fuzzy number of event j; n is the number of experts; w i is the weight of expert i; and P ~ i j is the fuzzy number of event j given by expert i.
III. Third, the maximum and minimum set method is used to convert the fuzzy number into a fuzzy probability score [36], involving Formulas (10)–(13):
f_{max}(x) = \begin{cases} x, & 0 \le x \le 1 \\ 0, & \text{otherwise} \end{cases}    (10)
f_{min}(x) = \begin{cases} 1 - x, & 0 \le x \le 1 \\ 0, & \text{otherwise} \end{cases}    (11)
F_{MR} = \sup_x [ f_M(x) \wedge f_{max}(x) ]    (12)
F_{ML} = \sup_x [ f_M(x) \wedge f_{min}(x) ]    (13)
where F_{MR} is the right utility score and F_{ML} is the left utility score of the fuzzy number, f_M(x) is the membership function of the comprehensive fuzzy number, and the supremum is taken over the pointwise minimum of the two membership functions.
IV. Finally, calculate the failure probability F according to the following empirical formulas, including Formulas (14)–(16) [37].
F_M = (F_{MR} + 1 - F_{ML}) / 2    (14)
F = \begin{cases} 1/10^{k}, & F_M \ne 0 \\ 0, & F_M = 0 \end{cases}    (15)
k = ((1 - F_M)/F_M)^{1/3} \times 2.301    (16)
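The I–IV procedure above can be sketched in code. The trapezoidal (a, b, c, d) representation of the fuzzy numbers and the two-expert example inputs are our assumptions for illustration; the conversion step reuses the utility scores from the worked X42 example in Section 4.3:

```python
def aggregate_fuzzy(weights, fuzzy_numbers):
    """Formula (9): weighted synthesis of expert fuzzy numbers.
    Each fuzzy number is assumed trapezoidal, given as (a, b, c, d);
    the weighted sum is taken vertex-wise with weights summing to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return tuple(sum(w * fn[k] for w, fn in zip(weights, fuzzy_numbers))
                 for k in range(4))

def failure_probability(F_MR, F_ML):
    """Formulas (14)-(16): convert the right/left utility scores into
    a failure probability F."""
    FM = (F_MR + 1 - F_ML) / 2
    if FM == 0:
        return 0.0
    k = ((1 - FM) / FM) ** (1 / 3) * 2.301
    return 1 / 10 ** k

# two hypothetical, equally weighted experts
agg = aggregate_fuzzy([0.5, 0.5], [(0.0, 0.1, 0.1, 0.2), (0.2, 0.3, 0.3, 0.4)])
# utility scores reported for X42 in the application section
p = failure_probability(0.4, 0.72)
```

With F_MR = 0.4 and F_ML = 0.72, the conversion yields a probability close to the 0.001349 reported for X42.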
To improve methodological transparency, additional analysis was conducted on uncertainty propagation and parameter robustness in the fuzzy mathematics-based probability conversion process. In the fuzzy number synthesis stage (Formula (9)), expert judgments are aggregated through weighted averaging. To examine the influence of the weight distribution, a perturbation analysis was performed by varying expert weights within a ±20% range while maintaining normalization constraints. Each probability value was proportionally perturbed within ±20% to examine the robustness of the evaluation results. Under the −20% scenario, the four key unsafe-behavior probabilities decreased to 0.00130, 0.02399, 0.00623, and 0.02399, respectively; under the +20% scenario, they increased to 0.00195, 0.03599, 0.00934, and 0.03599. Despite these variations, the relative ranking of risk magnitude remained unchanged, and no cross-level shifts occurred in the corresponding risk matrix classifications. This indicates that the prioritization results are structurally stable with respect to moderate parameter uncertainty.
Regarding uncertainty transfer, the fuzzy membership intervals are preserved through the left–right utility score transformation (Formulas (10)–(13)), which maps fuzzy intervals into bounded utility ranges before probability conversion. Therefore, uncertainty is not collapsed at the aggregation stage but structurally propagated through interval-based transformation. The final probability estimates represent central tendency values within these bounded intervals, supporting stable comparative risk ranking rather than precise point prediction.
For the empirical conversion coefficient used in Formula (16), the parameter form follows established fuzzy reliability conversion practice reported in prior fuzzy reliability modeling studies. To test its stability, coefficient perturbation analysis was conducted by adjusting the multiplier term within a ±15% range. The recalculated probabilities showed proportional numerical variation but did not change the minimum cut set structure, dominant event ranking, or final risk level classification in the risk matrix. This result suggests that the empirical coefficient mainly acts as a scale transformation factor rather than a structural determinant of risk ordering, and the evaluation conclusions remain robust under reasonable parameter variation.
(6) RM: RM is a method used to output the assessment results, which contains risk levels composed of different accident possibilities and different consequences. The risk level is the basis for taking risk management measures. The safety behavior risk matrix shown in Figure 6 is constructed by referring to previous standards for accident probability rating and consequence severity rating [38]. It divides safety behavior risk into seven levels, from low to high: risk-free, relatively low risk, low risk, medium risk, relatively high risk, high risk, and major risk, as shown in Table 2.
To examine the robustness of the RM classification results, a sensitivity analysis was conducted on the probability–consequence interval settings. Specifically, the probability interval thresholds were perturbed within a ±15% range, and the consequence severity interval boundaries were adjusted by one classification level upward and downward. The recalculated risk levels of the main unsafe behaviors were then compared with the baseline results.
The analysis shows that for high-risk behaviors such as overspeed driving and fatigue driving, the comprehensive risk level classification remains stable under interval perturbations, with no cross-level downgrading observed. Medium-to-high risk behaviors exhibit limited sensitivity near boundary regions, but their relative ranking order remains unchanged. This indicates that the RM-based classification in this study is structurally robust and that the prioritization of key unsafe behaviors is not driven by narrow parameter settings. The sensitivity results support the reliability of the risk grading outcomes for decision-support purposes.

4. Application to the Road Transportation of Dangerous Goods in China

4.1. Application Background

To demonstrate the applicability and explanatory power of the proposed human factor risk analysis framework, this study applies the method to the road transportation of dangerous goods in China. This domain represents a typical high-risk socio-technical system characterized by strong coupling between human behavior, technical systems, organizational management, regulatory oversight, and complex operational environments. Accidents in dangerous goods transportation often lead to severe consequences, including fires, explosions, toxic releases, and large-scale social and environmental impacts, making effective risk governance a critical priority.
In the context of road transportation of dangerous goods, safety performance depends heavily on micro-level human behavior, particularly the actions of drivers, loading and unloading personnel, inspectors, and safety managers. These behaviors are shaped by multiple system-level factors, such as vehicle and equipment conditions, transportation routes and traffic environments, enterprise management practices, regulatory requirements, and emergency response arrangements. At the same time, the transportation process is highly dynamic, involving continuous information exchange related to the vehicle status, cargo characteristics, road conditions, weather, and regulatory constraints. Any deficiencies in the generation, transmission, or interpretation of safety-related information may rapidly propagate through the system and manifest as unsafe behavior.
China’s dangerous goods transportation sector provides a representative and analytically valuable context for applying the proposed method. On the one hand, the sector operates under a comprehensive regulatory framework and large-scale industrial demand, resulting in complex organizational and technical arrangements. On the other hand, accident investigation reports indicate that human unsafe behavior remains a dominant contributing factor in many incidents, suggesting that traditional control measures focusing solely on compliance and technical standards are insufficient. This combination of strong institutional control and persistent behavioral risk makes the sector particularly suitable for examining how socio-technical conditions and safety information risk flow jointly shape micro-level safety behavior.
By applying the proposed human factor risk analysis method to this domain, this study aims to illustrate how system-level technological, organizational, and environmental risks are transmitted through safety information processes and ultimately influence individual behavior. The application serves not only to validate the feasibility of the method, but also to demonstrate its potential for identifying critical behavioral risk points and informing targeted safety behavior optimization strategies in complex socio-technical systems.

4.2. Analysis of Accident Investigation Reports

Based on GB6441-1986 [39] and the Regulation on the Administration of Road Transportation of Dangerous Goods for the definition of dangerous goods, the types of accidents occurring in the road transportation of dangerous goods can be generally summarized as burning, fire, explosion, and poisoning. To ensure transparency and reduce sampling bias, the accident report dataset used in this study was constructed based on explicit selection criteria. The 76 dangerous goods road transportation accident investigation reports were collected from the official database of the Dangerous Chemicals Logistics Branch of the China Federation of Logistics and Purchasing, covering the period from 2016 to 2023. The reports include cases from multiple provinces across eastern, central, and western China, involving highway, urban, and regional transport scenarios.
The inclusion criteria were as follows: (1) the accident must be officially investigated and publicly reported; (2) the report must contain relatively complete descriptions of the accident process, causation factors, and responsibility attribution; (3) the accident type must fall within the regulatory definition of dangerous goods road transportation; and (4) the report must contain identifiable human behavioral or operational factors. Duplicate reports, brief notices without causal analysis, and cases with missing core information were excluded.
Based on the analysis of the accident investigation reports, it is concluded that road transportation accidents of dangerous goods occur as follows: (1) cracking or breach of storage devices such as tanks, compartments, or gas cylinders is caused by the driver's mistake or violation, defects of the vehicle itself, or collision with other vehicles, and the dangerous goods spill out; (2) the vehicle spontaneously combusts; (3) a chemical reaction occurs between the storage device and the dangerous goods, and the dangerous goods spill out; (4) the storage device exhibits corrosion, cracking, sagging, bulging, thinning, abnormal hardness, illegal welding, or poor welding defects, and dangerous goods spill out; (5) safety risks in the parking lot cause the vehicle to ignite or explode; (6) loading and unloading personnel handle the dangerous goods violently; (7) wrong packaging by the packager leads to the burning of dangerous substances.

4.3. The Analysis of FD and Probability Calculation of Event

The secondary accident caused by a traffic accident is taken as an example; its causation is analyzed by FD (as shown in Figure 7), and the basic events of the relevant accidents are summarized (as shown in Table 3). All basic events are described by the experts using the linguistic terms (very low, low, relatively low, medium, relatively high, high, and very high), and the probability of each cause event is calculated. The weights of the three expert groups, calculated by the analytic hierarchy process (AHP), are 0.2328, 0.3534, and 0.4138, respectively.
Expert judgment was used to support fuzzy probability estimation of basic events. A panel of 20 experts participated in the evaluation process. The experts were drawn from three groups: dangerous goods transportation enterprise safety managers, regulatory and inspection personnel, and academic researchers in safety engineering and risk analysis. Their average professional experience exceeded 12 years, and all had direct experience in accident investigation, safety assessment, or operational risk management related to hazardous materials transportation.
To reduce individual bias and improve evaluation reliability, a structured two-round consultation process similar to a Delphi procedure was adopted. In the first round, experts independently provided linguistic probability assessments. Aggregated results and distribution summaries were then fed back anonymously, and experts were invited to revise their judgments in the second round. Convergence of opinions improved in the second round, and no extreme outlier judgments remained.
Expert weights were determined using the AHP based on experience level, domain relevance, and professional qualification. A consistency ratio (CR) check was conducted for the pairwise comparison matrix, and the CR value was below the accepted threshold of 0.1, indicating acceptable consistency of the weighting structure.
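A minimal sketch of the CR check, assuming Saaty's standard random index table; the pairwise comparison matrix shown is hypothetical, not the one used in the study:

```python
import numpy as np

# Saaty's random index (RI) for matrix orders 1-5
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}

def consistency_ratio(A):
    """CR = CI / RI for a pairwise comparison matrix A,
    with CI = (lambda_max - n) / (n - 1)."""
    n = A.shape[0]
    lam_max = max(np.linalg.eigvals(A).real)
    ci = (lam_max - n) / (n - 1)
    return ci / RI[n]

# hypothetical, perfectly consistent 3x3 matrix (a_ik = a_ij * a_jk)
A = np.array([[1.0, 2.0, 4.0],
              [0.5, 1.0, 2.0],
              [0.25, 0.5, 1.0]])
cr = consistency_ratio(A)  # near 0 here; accepted when CR < 0.1
```

For a perfectly consistent matrix, lambda_max equals n and CR is essentially zero; any CR below 0.1 is conventionally accepted.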
The probability calculation of the basic events takes X42 as an example; its linguistic assessments are medium, relatively low, and low. As a result, its comprehensive membership function is
f(x) = 0.2328 f_M(x) + 0.3534 f_{RL}(x) + 0.4138 f_L(x)
The result is
f(x) = \begin{cases} (x - 0.21)/0.1, & x \in (0.21, 0.31) \\ 1, & x \in [0.31, 0.34] \\ (0.44 - x)/0.1, & x \in (0.34, 0.44) \\ 0, & \text{otherwise} \end{cases}
Therefore, through Formulas (10)–(13), the following is obtained: F_{MR} = 0.4 and F_{ML} = 0.72; then F_M = 0.34 and k = 2.87, and the probability of X42 is 0.001349 according to Formula (15). All the basic events and their fuzzy probabilities shown in Table 3 are thus obtained.
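The X42 computation can be verified numerically. The following sketch approximates the suprema in Formulas (12) and (13) on a dense grid, so small deviations from the rounded values in the text are expected:

```python
import numpy as np

def f_comp(x):
    """Comprehensive membership function of X42 (piecewise form above)."""
    return np.where((x > 0.21) & (x < 0.31), (x - 0.21) / 0.1,
           np.where((x >= 0.31) & (x <= 0.34), 1.0,
           np.where((x > 0.34) & (x < 0.44), (0.44 - x) / 0.1, 0.0)))

x = np.linspace(0.0, 1.0, 200001)                    # dense grid on the domain [0, 1]
F_MR = float(np.max(np.minimum(f_comp(x), x)))       # right utility score, Formula (12)
F_ML = float(np.max(np.minimum(f_comp(x), 1 - x)))   # left utility score, Formula (13)
FM = (F_MR + 1 - F_ML) / 2                           # Formula (14)
k = ((1 - FM) / FM) ** (1 / 3) * 2.301               # Formula (16)
F = 10.0 ** (-k)                                     # Formula (15): probability of X42
```

The script reproduces F_MR ≈ 0.40 and F_ML ≈ 0.72, and a probability close to the reported 0.001349; the residual difference comes from rounding F_ML to two decimals in the text.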

4.4. Results of Human Factor Risk Assessment

Due to the dynamic nature of the road transport of dangerous goods, the consequences of accidents are related to the concentration of people, property, and assets exposed to the accident. Moreover, road transportation of dangerous goods is characterized by severe harm. As a result, this paper assumes that the loss caused by accidents exceeds one million RMB.
In the baseline assessment, this study assumes that the economic loss associated with dangerous goods transportation accidents exceeds one million RMB, reflecting the typical severity level reported in major accident investigation records. To examine the influence of this assumption on risk classification results, a scenario sensitivity analysis was conducted by adjusting the loss parameter to three levels: 0.5 million RMB, 1 million RMB, and 3 million RMB. The recalculated risk levels show that while absolute risk scores vary proportionally with consequence magnitude, the relative ranking and high-risk classification of key unsafe behaviors—such as overspeed driving and fatigue driving—remain unchanged across scenarios. Only behaviors located near risk level boundaries show limited classification fluctuation by one level. This result indicates that the main conclusions regarding priority unsafe behaviors are not driven by a single loss assumption and are robust under reasonable consequence interval variation.
Taking the unsafe behaviors of drivers as the object, the risk assessment of safety behaviors is carried out. According to the above analysis, in the process of dangerous goods transportation, the driver plays an important role in the dynamic transportation, and the unsafe behavior of the driver will lead to fire, explosion, and poisoning; the occurrence probability, risk level, and comprehensive risk level are shown in Table 4.

4.5. Control Measures of Risky Behavior

It can be seen from the above results that all unsafe behaviors should be controlled according to their comprehensive risk levels. The obtained risk levels span levels VI and VII; that is, large-scale, high-frequency unsafe behaviors need to be identified across the entire socio-technical system, and large-scale, high-frequency, high-impact unsafe behaviors require timely, comprehensive, and regular rectification. Based on this, the causes of the above unsafe behaviors are analyzed, and corresponding optimization measures are formulated and implemented, as shown in Table 5.
Unlike generic safety management recommendations, the proposed HFRA framework enables targeted intervention design by mapping unsafe behaviors to specific safety information asymmetry modes and processing-stage failure points. Based on the safety information risk flow structure, intervention strategies can be aligned with distinct information-processing stages rather than applied uniformly at the behavioral level.
For risks concentrated in the safety information acquisition stage, such as drivers receiving incomplete or delayed road and vehicle status information, system-specific interventions include deployment of integrated vehicle–road–cloud monitoring platforms, real-time telematics feedback systems, and automated hazard alert interfaces. These technologies directly reduce acquisition asymmetry by improving information timeliness and completeness.
For risks located in the safety information analysis stage, where drivers misinterpret or insufficiently analyze available safety signals, targeted measures include intelligent decision-support systems, in-vehicle risk prediction dashboards, and AI-assisted anomaly detection prompts. Organizationally, this stage also requires standardized interpretation protocols and scenario-based cognitive training to improve analytical consistency under time pressure.
For safety cognition and decision stages affected by cognitive overload and judgment bias, model-driven interventions include adaptive human–machine interface redesign, workload-sensitive alert prioritization, and dynamic information filtering mechanisms to reduce cognitive burden. At the organizational level, shift scheduling optimization and fatigue-risk-informed dispatch systems are recommended.
For execution-stage asymmetry, where correct decisions fail to translate into safe action, interventions include closed-loop supervision systems, real-time behavioral compliance monitoring, and automated enforcement triggers. These measures directly address the execution gap identified in the information-to-action transition node.
By structuring interventions along the information risk transmission chain, the proposed framework supports traceable and stage-specific mitigation strategies. This model-driven intervention mapping demonstrates that the HFRA results do not merely indicate which behaviors are risky, but also identify where and how system-level controls should be strengthened.

4.6. Comparative Validation with Traditional HFRA Approaches

To further examine the analytical value of the proposed STS–SIC integrated HFRA framework, a structured comparative analysis was conducted against traditional HRA-oriented approaches represented by CREAM-type context-based reliability assessment and probability-focused HRA methods. Rather than fully re-implementing alternative models, which require different input structures and experimental data, this study performs a comparative re-interpretation of the same accident dataset under different analytical logics and compares the resulting risk insights.
Under traditional HRA-oriented analysis, unsafe behaviors such as overspeed driving and fatigue driving are primarily interpreted as high-risk actions due to an elevated human error probability under adverse performance-shaping factors (e.g., workload, time pressure, environmental stress). The analytical output mainly emphasizes operator reliability reduction and recommends individual-level control measures such as training, supervision, and procedural enforcement.
In contrast, the STS–SIC integrated HFRA framework produces a multi-layer risk decomposition result. In addition to identifying high-risk behaviors, the method further traces their upstream risk sources across organizational supervision gaps, information acquisition asymmetry, real-time monitoring deficiencies, and institutional control weaknesses. For example, in the present case, overspeed driving is not only classified as high-risk behavior but is also analytically linked to insufficient real-time safety supervision, weak information feedback loops, and incomplete enterprise safety regulation constraints. These upstream contributors are explicitly represented in the information asymmetry modes and control mapping table.
Comparatively, traditional HRA methods provide behavior-level reliability estimation, while the proposed framework provides cross-level causal traceability and governance-oriented risk mapping. This difference leads to distinct intervention implications: HRA-oriented results mainly support operator-focused reliability improvement, whereas the STS–SIC HFRA results support system-level intervention prioritization across supervision, information systems, and organizational controls.
Therefore, the advantage of the proposed method lies not in replacing classical HRA probability estimation, but in extending behavioral risk assessment toward system-level explanatory depth and governance actionability, especially in complex socio-technical operational environments where unsafe behavior is strongly shaped by information and organizational conditions.

5. Conclusions and Discussion

5.1. Key Findings

This study develops an integrated human factor risk analysis framework by combining the socio-technical systems perspective with safety information cognition theory, aiming to explain the generation and control of micro-level unsafe behavior in complex systems. The findings demonstrate that unsafe behavior should not be interpreted as a purely individual failure. Rather, it emerges from the interaction of technological, organizational, and environmental conditions, which are transmitted to the behavioral level through safety information flows and cognitive processes.
By explicitly linking system-level risk sources to individual behavior via safety information processing, this study clarifies how deficiencies in information acquisition, perception, analysis, cognition, utilization, and execution collectively shape safety behavior risk. This perspective shifts the analytical focus from isolated behavioral outcomes to the upstream conditions that constrain and influence individual decision-making and action.
The application to road transportation of dangerous goods in China shows that the proposed method can effectively distinguish the risk characteristics of different unsafe behaviors across accident scenarios and operational contexts. The results indicate that behaviors such as overspeed driving and fatigue driving exhibit consistently high comprehensive risk levels, reflecting not only their accident-inducing potential but also their systemic roots in supervision, information feedback, and organizational management. These findings confirm the practical value of the framework for identifying critical behavioral risk points and supporting targeted risk governance.

5.2. Theoretical and Practical Contributions

This research contributes to the literature by offering a systematic integration of socio-technical systems theory and safety information cognition. Existing human factor studies often emphasize either macro-level structures or micro-level cognition in isolation. By introducing safety information risk flow as a connecting mechanism, this study provides a coherent explanation of how system conditions are translated into behavioral outcomes. The analytical focus is extended beyond observable unsafe acts to include the informational and organizational preconditions of behavior. This expansion enables the identification of latent deficiencies in regulations, management practices, technological design, and information transmission that are typically overlooked in traditional human reliability analysis.
From a methodological perspective, this study proposes a safety behavior risk management approach that combines qualitative causal analysis with quantitative risk assessment. The integration of accident-oriented and task-oriented assessments allows unsafe behaviors to be evaluated from multiple risk dimensions, supporting more refined prioritization of governance interventions. The framework also aligns with contemporary safety governance concepts that emphasize proactive risk management and individual safety initiative rather than reactive error correction.
Compared with existing approaches, the proposed HFRA framework introduces several additional analytical dimensions. Traditional STS+HRA combinations and methods such as CREAM mainly focus on human reliability under contextual performance conditions, while SICHFA emphasizes cognitive information processing at the individual level. In contrast, the proposed framework simultaneously incorporates system-level risk source decomposition across technological, organizational, and environmental dimensions, explicit safety information risk flow modeling as a cross-level transmission mechanism, and behavior-oriented risk quantification through combined accident-oriented and task-oriented assessment structures.
The framework differs from STAMP in that it is designed not only for post hoc accident causation analysis but also for ex ante behavioral risk evaluation and prioritization. It differs from CREAM in that it embeds organizational and environmental risk sources into a structured information-risk transmission chain rather than treating context primarily as performance-shaping factors. It differs from SICHFA by extending safety information cognition analysis from individual-level cognitive error diagnosis to system-level information risk governance and quantitative risk grading. Therefore, the proposed HFRA approach is particularly applicable in operational contexts that require both cross-level causal tracing and quantitative prioritization of unsafe behaviors for proactive risk control.
Moreover, beyond producing risk scores and rankings, the proposed HFRA framework improves decision support in two specific ways compared with conventional practice. First, traditional unsafe-behavior control decisions are typically based on accident frequency statistics or general experience-based judgment, which mainly indicate which behaviors are risky but provide limited insight into why they are risky within the system structure. In contrast, the HFRA framework links each high-risk behavior to specific information-processing failure nodes and cross-level risk sources, enabling cause-oriented rather than symptom-oriented intervention design. Second, conventional HRA-oriented assessment usually outputs error probability estimates at the operator level, which mainly support training and supervision decisions. The HFRA results extend decision support to the system and organizational level by identifying where information acquisition, analysis, cognition, or execution asymmetries dominate. This allows decision-makers to choose targeted interventions such as supervision platform enhancement, decision-support interface redesign, or information feedback restructuring, rather than relying only on generic behavioral control measures.

5.3. Limitations

Despite its contributions, this study has several limitations that warrant consideration. The implementation of the method relies partly on expert judgment, particularly in the estimation of fuzzy probabilities and risk levels. Variations in expert experience and contextual understanding may influence assessment outcomes, which introduces uncertainty into the results. The empirical application focuses on a single high-risk domain. Although road transportation of dangerous goods represents a typical socio-technical system, differences in organizational structures, regulatory environments, and information architectures across sectors may affect the generalizability of the findings.
In addition, the analysis is primarily based on accident investigation reports and expert evaluations. Dynamic operational data, real-time behavioral monitoring data, and digital system logs are not fully incorporated, which limits the ability to capture the temporal evolution of safety behavior risk. The study also concentrates on risk identification and control strategy formulation, while the long-term effectiveness of these measures and their feedback effects on system behavior are not empirically examined.
Finally, the proposed framework is most applicable to complex operational environments with strong information mediation and organizational coupling. Its direct applicability to highly automated systems or purely individual-task settings may be limited without methodological adaptation.

5.4. Future Research Directions

Future research may deepen the theoretical framework by incorporating complexity and resilience perspectives to better capture nonlinear interactions, cascading effects, and adaptive feedback mechanisms within safety information risk flows. Such an extension would support a transition from static risk assessment toward dynamic risk governance.
Methodological development could benefit from integrating digital technologies such as big data analytics, intelligent sensing, and artificial intelligence. The inclusion of real-time operational and behavioral data would enhance the accuracy and responsiveness of human factor risk assessment, while reducing dependence on subjective judgment. Broader application of the framework across multiple high-risk domains, including energy systems, chemical production, urban transportation, and emergency management, would provide opportunities for comparative analysis and validation of its general applicability.
From a governance perspective, future studies may focus on embedding HFRA into routine safety management processes by establishing closed-loop mechanisms that link assessment, intervention, feedback, and continuous optimization. Such an approach would enable HFRA to function not only as an analytical tool but also as a core component of systemic safety governance.
Overall, the optimization of safety behavior remains a persistent challenge in increasingly complex socio-technical systems. This study offers a foundational step toward a more integrated and system-oriented understanding of human factor risk, while highlighting avenues for continued theoretical refinement and practical advancement.

Author Contributions

Conceptualization, C.X. and Y.M.; methodology, C.X.; formal analysis, C.X.; investigation, Y.M.; writing—original draft preparation, C.X.; writing—review and editing, Y.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Chen, Y.; Feng, W.; Jiang, Z.; Duan, L.; Cheng, S. An accident causation model based on safety information cognition and its application. Reliab. Eng. Syst. Saf. 2021, 207, 107363.
  2. Wu, C.; Huang, L. A new accident causation model based on information flow and its application in Tianjin Port fire and explosion accident. Reliab. Eng. Syst. Saf. 2019, 182, 73–85.
  3. Shi, W.; Jiang, F.; Zheng, Q.; Cui, J. Analysis and Control of Human Error. In Proceedings of the ISMSSE 2011, Beijing, China, 21–23 September 2011.
  4. Feng, Q.; Chen, H. The safety-level gap between China and the US in view of the interaction between coal production and safety management. Saf. Sci. 2013, 54, 80–86.
  5. Zaoshui, H.; Jiao, C. Analysis of the coordination balance degree between the economic development and the production safety. J. Saf. Environ. 2013, 13, 261–265.
  6. Guo, J.P.; Chen, H.W.; Zhao, J.N. Analysis of safety execution force based on interpretation structure model. China Saf. Sci. J. 2009, 3, 79–83.
  7. Xie, X.; Guo, D. Human factors risk assessment and management: Process safety in engineering. Process Saf. Environ. Prot. 2018, 113, 467–482.
  8. Musharraf, M.; Khan, F.; Veitch, B.; Mackinnon, S.; Imtiaz, S. Human Factor Risk Assessment During Emergency Condition in Harsh Environment. In Proceedings of the ASME 2013 32nd International Conference on Ocean, Offshore and Arctic Engineering, Nantes, France, 9–14 June 2013.
  9. Swain, A.D.; Guttmann, H.E. Handbook of Human Reliability Analysis with Emphasis on Nuclear Power Plant Applications; NUREG/CR-1278; USNRC: Rockville, MD, USA, 1983.
  10. Marseguerra, M.; Zio, E.; Librizzi, M. Quantitative developments in the cognitive reliability and error analysis method (CREAM) for the assessment of human performance. Ann. Nucl. Energy 2006, 33, 894–910.
  11. Biggs, A.T.; Jameson, J.; Seech, T.R.; Markwald, R.; Paight, C.; Russell, D.W. Safety climate and fatigue have differential impacts on safety issues. J. Saf. Res. 2025, 92, 142–147.
  12. Curti, S.; Gallo, M.; Nocilla, M.R.; Montagnani, A.; Mattioli, S.; Gnoni, M.G.; De Merich, D. Safety culture maturity models in occupational safety and health: An updated scoping review. Saf. Sci. 2025, 192, 107003.
  13. Goncalves, A.; Dutra, A.; Mussi, C.C. Occupational risks and health and safety management strategies in the port sector: A systematic literature review. Saf. Sci. 2025, 184, 106767.
  14. Fan, D.; Yeung, A.C.L.; Yiu, D.W.; Lo, C.K.Y. Safety regulation enforcement and production safety: The role of penalties and voluntary safety management systems. Int. J. Prod. Econ. 2022, 248, 108481.
  15. Leveson, N. A new accident model for engineering safer systems. Saf. Sci. 2004, 42, 237–270.
  16. Ge, J.; Zhang, Y.; Xu, K.; Li, J.; Yao, X.; Wu, C.; Li, S.; Yan, F.; Zhang, J.; Xu, Q. A new accident causation theory based on systems thinking and its systemic accident analysis method of work systems. Process Saf. Environ. Prot. 2022, 158, 644–660.
  17. Guo, S.; Feng, W.; Zhang, G.; Wen, Y. Evolutionary Game Analysis of Government–Enterprise Collaboration in Coping with Natech Risks. Systems 2024, 12, 275.
  18. Subedi, A.; Bucelli, M.; Paltrinieri, N. Reframing safety barriers as socio-technical systems: A review of the hydrogen sector. Saf. Sci. 2025, 192, 106995.
  19. Mohsendokht, M.; Li, H.; Kontovas, C.; Chang, C.-H.; Qu, Z.; Yang, Z. Systemic risk analysis of complex socio-technical systems from the safety-II perspective. Reliab. Eng. Syst. Saf. 2026, 270, 112200.
  20. Zhang, G.; Feng, W.; Lei, Y.; Wang, S. Generation and evolution mechanism of systemic risk (SR) induced by extreme precipitation in Chinese Urban system: A case study of Zhengzhou “7 20” incident. Int. J. Disaster Risk Reduct. 2022, 83, 103401.
  21. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423.
  22. Luo, T.; Wu, C. Safety information cognition: A new methodology of safety science in urgent need to be established. J. Clean. Prod. 2019, 209, 1182–1194.
  23. Wickens, C. Engineering Psychology and Human Performance; HarperCollins Publishers: New York, NY, USA, 1984.
  24. Wu, C. Construction of universal model on safety information cognition and its enlightenment. J. Saf. Sci. Technol. 2017, 13, 5–11.
  25. Xu, Z. Safety System Engineering; China Machine Press: Beijing, China, 2007.
  26. Luo, Y. Risk Analysis and Safety Evaluation; Chemical Industry Press: Beijing, China, 2009.
  27. Qiu, Y. Study on Risk Transfer and Control in Supply Chain. Ph.D. Thesis, Wuhan University of Technology, Wuhan, China, 2010.
  28. Hu, Y.; Sun, Y. Research on Channel Benefit of Fresh Agricultural Products Based on Risk Flow. Stat. Decis. Mak. 2018, 34, 62–66.
  29. Roque, G.; Nascimento, J.; Souza, R.; Alves, C.; Araujo, J. Trust requirements in sociotechnical systems: A systematic literature review. Inf. Softw. Technol. 2025, 186, 107796.
  30. Polojaervi, D.; Palmer, E.; Dunford, C. A systematic literature review of sociotechnical systems in systems engineering. Syst. Eng. 2023, 26, 482–504.
  31. Rausand, M. Risk Assessment: Theory, Methods, and Applications; John Wiley & Sons: New York, NY, USA, 2014.
  32. Teng, K.Y.; Thekdi, S.A.; Lambert, J.H. Identification and evaluation of priorities in the business process of a risk or safety organization. Reliab. Eng. Syst. Saf. 2012, 99, 74–86.
  33. Luo, T.; Wu, C.; Duan, L. Fishbone diagram and risk matrix analysis method and its application in safety assessment of natural gas spherical tank. J. Clean. Prod. 2018, 174, 296–304.
  34. Yang, L.; Gao, Y.; Lin, W. Principle and Application of Fuzzy Mathematics, 5th ed.; South China University of Technology Press: Guangzhou, China, 2011.
  35. Shi, S.; Jiang, B.; Meng, X. Assessment of gas and dust explosion in coal mines by means of fuzzy fault tree analysis. Int. J. Min. Sci. Technol. 2018, 28, 991–998.
  36. Chen, S.-J.; Hwang, C.L. Fuzzy Multiple Attribute Decision Making: Methods and Applications; Springer: Berlin/Heidelberg, Germany, 1992.
  37. Onisawa, T. An application of fuzzy concepts to modelling of reliability analysis. Fuzzy Sets Syst. 1990, 37, 267–286.
  38. Zhu, Q.; Kuang, X.; Shen, Y. A review of risk matrix methods and applications. Eng. Sci. 2003, 5, 89–94.
  39. GB 6441-1986; Classification and Coding of Production Accident. Standardization Administration of China: Beijing, China, 1986.
Figure 1. Safety information risk flow.
Figure 2. Theoretical framework.
Figure 3. The HFRA procedure.
Figure 4. The HFRA model.
Figure 5. Membership functions under different tone operators.
Figure 6. The safety behavior risk matrix.
Figure 7. Analysis of FD in secondary accidents caused by traffic accidents.
Table 1. Information asymmetry phases and their causation.
Phase of Information Asymmetry | Causation
Safety information flow | Misunderstanding; lack of information; lacking or insufficient communication during safety information flow.
Acquisition of safety information | Lack of access to safety information; unclear acquisition target; poor acquisition environment; unsuitable acquisition method.
Safety perception | Poor physiological and psychological state; poor perception environment; complex and fuzzy safety information.
Analysis of safety information | Unsuitable analysis method; errors in association, synthesis, prediction, or evaluation; lack of proficiency in analytical methods; lack of safety information analysis technology.
Safety cognition | Defective safety knowledge structure; poor physiological and psychological state; poor cognitive environment; lack of reasoning and learning ability.
Utilization of safety information | Unclear purpose of safety information; unsuitable safety decision method; indecisiveness in safety decisions.
Safety behavior | Lack of motivation for safe execution; insufficient capacity for safe operations; poor physiological and psychological state; poor execution environment.
Table 2. Division of risk levels and their descriptions.
Risk Level | Description
I (Risk-free) | Unsafe behavior does not occur and no action is required.
II (Relatively low risk) | There is a tendency toward unsafe behavior, which can be managed according to specific conditions.
III (Low risk) | Unsafe behaviors may occur, requiring regular management.
IV (Medium risk) | A small amount of low-frequency unsafe behavior occurs, which requires detailed analysis and targeted management.
V (Relatively high risk) | Larger-scale or higher-frequency unsafe behavior occurs, which necessitates identifying problems across the entire micro-system and implementing optimization measures in a timely and effective manner.
VI (High risk) | Large-scale and high-frequency unsafe behaviors occur, which necessitate identifying problems across the entire socio-technical system and implementing optimization measures in a timely and effective manner.
VII (Major risk) | Large-scale, high-frequency, and high-impact unsafe behaviors occur, which require timely, overall, and periodic rectification.
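As a hedged illustration of how a seven-level scale like Table 2 can be operationalized, the sketch below maps a probability and a consequence severity to a risk label through a matrix lookup in the spirit of the safety behavior risk matrix (Figure 6). The probability bands, severity indices, and matrix entries are illustrative assumptions, not the paper's calibration.

```python
# Illustrative risk-matrix lookup for the seven levels of Table 2.
# All cut-offs and matrix entries below are hypothetical placeholders.

RISK_LEVELS = ["I", "II", "III", "IV", "V", "VI", "VII"]

# Rows: consequence severity index (0 = negligible .. 3 = catastrophic);
# columns: probability band index (0 = rare .. 3 = frequent).
RISK_MATRIX = [
    [0, 1, 2, 3],
    [1, 2, 3, 4],
    [2, 3, 4, 5],
    [3, 4, 5, 6],
]

def probability_band(p: float) -> int:
    """Bucket an occurrence probability into one of four bands (assumed cut-offs)."""
    for band, cutoff in enumerate((1e-7, 1e-5, 1e-4)):
        if p < cutoff:
            return band
    return 3

def risk_level(p: float, severity: int) -> str:
    """Look up the risk label for a probability and a severity index."""
    return RISK_LEVELS[RISK_MATRIX[severity][probability_band(p)]]

print(risk_level(9.6e-4, 3))  # prints VII under these assumed bands
```

Under these assumed bands, a frequent high-severity event lands in level VII and a rare negligible one in level I; a real deployment would calibrate both the bands and the matrix against the system's accident statistics.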
Table 3. Basic events and their probabilities.
Serial No. | Basic Event | Probability | Serial No. | Basic Event | Probability
X1 | Illegal overtaking | 0.001622 | X26 | Heavy fog while driving | 0.000335
X2 | Overspeed driving | 0.029991 | X27 | Strong wind while driving | 0.000704
X3 | Fatigue driving | 0.007782 | X28 | Dark environment when driving | 0.001622
X4 | Illegal parking | 0.029991 | X29 | Slippery road | 0.000335
X5 | The driver is in poor physical condition | 0.000704 | X30 | Uneven road | 0.000139
X6 | The driver has an unsafe mentality | 0.000335 | X31 | Narrow road | 0.001622
X7 | Visual/hearing defects | 0.000139 | X32 | Hot weather while driving | 0.007782
X8 | Drivers lack professional qualifications | 0.000335 | X33 | Poor welding of storage device | 0.000335
X9 | The driver received the wrong information | 0.000223 | X34 | Cracked storage device | 0.000335
X10 | Wrong packaging procedure was executed | 0.007047 | X35 | The tank can react chemically with dangerous substances | 0.000335
X11 | Packaging worker is trusting to luck | 0.001622 | X36 | Vehicle line ageing | 0.000335
X12 | Perform abnormal loading and unloading procedures | 0.000704 | X37 | Short circuit of vehicle line | 0.000223
X13 | Violent loading and unloading | 0.020917 | X38 | Poor connection of vehicle lines | 0.000223
X14 | Check result is wrong | 0.000704 | X39 | Ground of vehicle lines | 0.000223
X15 | Violation of normal inspection procedures | 0.003466 | X40 | Irregular parking in parking lot | 0.007047
X16 | Lack of safety inspection | 0.003466 | X41 | Potential fire hazard in parking lot | 0.001622
X17 | Lack of cargo attendants | 0.000335 | X42 | Parking lot safety regulations are not perfect | 0.001349
X18 | Cargo attendant is trusting to luck | 0.007047 | X43 | Inadequate parking safety management | 0.001622
X19 | Failure of brake device | 0.000335 | X44 | Drivers lack real-time safety supervision | 0.029991
X20 | Failure of power system | 0.000335 | X45 | Enterprise safety regulations are not perfect | 0.001622
X21 | Steering wheel out of order | 0.000335 | X46 | Lack of corporate safety education | 0.003466
X22 | Tyre wear | 0.000223 | X47 | Lack or failure of emergency facilities | 0.007047
X23 | Wear deformation of frame | 0.000223 | X48 | Enterprise emergency plan is not sound | 0.007047
X24 | Heavy rain while driving | 0.007782 | X49 | Lack of emergency training | 0.003466
X25 | Smog while driving | 0.000335 | X50 | Lack of emergency drills | 0.029991
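For context on how basic-event probabilities such as those in Table 3 propagate to consequence probabilities of the kind reported in Table 4, the fragment below shows the standard fault-tree gate formulas for statistically independent events. The grouping of X5–X8 under a single OR gate is a hypothetical illustration, not the paper's actual fault tree.

```python
from math import prod

# Standard fault-tree gate combinations for independent basic events.

def or_gate(probs):
    """P(at least one input event occurs) = 1 - product of (1 - p_i)."""
    return 1.0 - prod(1.0 - p for p in probs)

def and_gate(probs):
    """P(all input events occur) = product of p_i."""
    return prod(probs)

# Hypothetical sub-tree: driver-state events X5-X8 feeding one OR gate.
driver_state = or_gate([0.000704, 0.000335, 0.000139, 0.000335])
print(f"{driver_state:.6f}")
```

For rare events the OR-gate result is close to the simple sum of the inputs, which is why the higher-order correction terms barely matter at the probability magnitudes in Table 3.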
Table 4. Risk level of unsafe behaviors.
Unsafe Behavior | Consequence | Probability | Risk Level | Comprehensive Risk Level
Illegal overtaking | Fire | 4.4 × 10^−6 | VI | VI
 | Explosion | 2.2 × 10^−7 | V |
 | Poisoning | 8.4 × 10^−5 | VI |
Overspeed driving | Fire | 9.6 × 10^−4 | VII | VII
 | Explosion | 4.8 × 10^−5 | VI |
 | Poisoning | 1.5 × 10^−3 | VII |
Fatigue driving | Fire | 7.0 × 10^−5 | VI | VII
 | Explosion | 3.5 × 10^−6 | VI |
 | Poisoning | 4.0 × 10^−4 | VII |
Illegal parking | Fire | 2.5 × 10^−6 | VI | VI
 | Explosion | 1.2 × 10^−7 | V |
 | Poisoning | 6.1 × 10^−5 | VI |
Error behavior of drivers | Fire | 1.2 × 10^−6 | VI | VI
 | Explosion | 6.0 × 10^−8 | IV |
 | Poisoning | 3.6 × 10^−5 | VI |
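In the rows of Table 4, the comprehensive risk level coincides with the most severe of the per-consequence levels (for example, fatigue driving: VI, VI, VII gives VII). Assuming that rule holds in general, it can be sketched as:

```python
# Sketch: comprehensive risk level taken as the most severe per-consequence
# level, an assumption read off the rows of Table 4 rather than stated rule.

LEVEL_ORDER = ["I", "II", "III", "IV", "V", "VI", "VII"]

def comprehensive_level(levels):
    """Return the most severe Roman-numeral level among the inputs."""
    return max(levels, key=LEVEL_ORDER.index)

print(comprehensive_level(["VI", "VI", "VII"]))  # prints VII (fatigue driving)
print(comprehensive_level(["VI", "IV", "VI"]))   # prints VI (error behavior)
```

Taking the maximum is a conservative aggregation choice: it flags a behavior by its worst credible outcome rather than averaging severe and mild consequences together.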
Table 5. Risk behaviors and relevant control measures.

Unsafe behavior: Illegal overtaking
  Dominant information risk stage: information acquisition and cognition
  Key asymmetry pattern identified by HFRA: real-time supervision information gap; risk perception underestimation
  Targeted technical measures: vehicle telematics; lane-change risk detection; overtaking hazard alerts
  Targeted organizational measures: real-time supervision platform; dynamic violation feedback; targeted risk-awareness training

Unsafe behavior: Overspeed driving
  Dominant information risk stage: information acquisition and utilization
  Key asymmetry pattern identified by HFRA: speed-limit and supervision signal delay; weak control feedback
  Targeted technical measures: intelligent speed warning; geo-fence speed control; dynamic speed-limit push
  Targeted organizational measures: centralized speed monitoring; graded compliance assessment system

Unsafe behavior: Fatigue driving
  Dominant information risk stage: safety cognition and analysis
  Key asymmetry pattern identified by HFRA: cognitive overload; self-state misjudgment
  Targeted technical measures: AI fatigue detection; physiological monitoring; adaptive alert escalation
  Targeted organizational measures: fatigue-risk scheduling; mandatory rest enforcement; fatigue reporting mechanism

Unsafe behavior: Illegal parking
  Dominant information risk stage: information acquisition and decision
  Key asymmetry pattern identified by HFRA: safe parking information asymmetry; decision ambiguity
  Targeted technical measures: smart dangerous goods parking guidance; hazard-zone auto warning
  Targeted organizational measures: dedicated dangerous goods parking allocation; standardized parking decision protocol

Unsafe behavior: Error behavior of drivers
  Dominant information risk stage: information analysis and execution
  Key asymmetry pattern identified by HFRA: signal misinterpretation; information overload; decision–action gap
  Targeted technical measures: decision-support dashboard; risk visualization interface; operation confirmation prompts
  Targeted organizational measures: scenario-based cognitive training; double-check operating procedure; supervision reinforcement

Share and Cite

Xiong, C.; Ma, Y. Human Factor Risk Analysis (HFRA) Based on an Integrated Perspective of Socio-Technical Systems and Safety Information Cognition. Systems 2026, 14, 199. https://doi.org/10.3390/systems14020199