Classification and Quantification of Human Error in Manufacturing: A Case Study in Complex Manual Assembly

Abstract: Manual assembly operations are sensitive to human errors that can diminish the quality of final products. This paper presents an application of human reliability analysis in a realistic manufacturing context to identify where and why manual assembly errors occur. The SHERPA and HEART techniques were used to perform the human reliability analysis. Three critical tasks were selected for analysis based on quality records: (1) installation of three types of brackets using fasteners, (2) fixation of a data cable to the assembly structure using cushioned loop clamps and (3) installation of cap covers to protect inlets. The error modes identified with SHERPA were: 36 action errors, nine selection errors, eight information retrieval errors and six checking errors. According to HEART, the highest human error probabilities were associated with assembly parts sensitive to geometry-related errors (brackets and cushioned loop clamps). The study showed that perceptually engaging assembly instructions seem to offer the highest potential for error reduction and performance improvement. Other identified areas of action were the improvement of the inspection process and the provision of better tracking and better feedback to workers. The implementation of assembly guidance systems could potentially benefit workers' performance and decrease assembly errors. The paper also discusses potential strategies for the reduction of errors, including new technological approaches.


Introduction
An efficient and reliable assembly process is a critical aspect of manufacturing, ensuring that the final product meets the required quality level. Engineers usually consider several variables when selecting an appropriate assembly system, among them flexibility, productivity, product variants and production volume [1]. Figure 1 shows the relationship between some of these variables and the level of automation in the assembly system. The development of industrial robotics has induced a significant increase in the automation and productivity of manufacturing processes, including assembly [2]. However, in manufacturing domains where product complexity and variety present particular challenges, manual work remains a viable alternative. This is the case in manufacturing domains such as consumer electronics [3], aerospace manufacturing [4,5], combustion engine assembly [6], automotive manufacturing [7,8] and the production of industrial machines and tools [9,10]. For manual assembly to yield a final product of the appropriate level of quality, several operations must be executed properly, for example: selecting, handling and fitting parts; checking; applying force; and retrieving and analyzing information. Unfortunately, these operations are susceptible to human error and represent potential sources of defects. Swain and Guttmann [11] define human errors as "any member of a set of human actions that exceed some limit of acceptability, i.e., an out-of-tolerance action, where the limits of tolerable performance are defined by the system". Common error-related defects in manual assembly processes include loose connections, missing components, the installation of wrong components and the improper application of force to fasteners.

With the introduction of Industry 4.0, manufacturing will experience an increase in product customization under the conditions of highly flexible (large-scale) production [13]. According to Thoben et al. [14], this mass customization is expected to escalate production complexity and could increase the demands on human operators' skills. In this context, optimizing the working system is necessary to minimize human error. Identifying and understanding the factors that affect assemblers' performance is the first step in the implementation of effective strategies. Bubb [15] argues that human reliability is crucial to improving quality in the manufacturing sector. However, human reliability research has mostly focused on safety, while less attention has been paid to the manufacturing sector and to manual assembly specifically [16]. To close this gap, this paper applies classic human reliability analysis (HRA) techniques to a realistic case study of complex manual assembly. It assumes that some HRA techniques, developed and mostly used in safety-critical domains, can also be useful in the analysis of human error in the manual assembly context. The study's main goal is to understand where and why manual assembly errors occur by identifying the most common error modes, evaluating them, and determining the factors that influence the assemblers' performances. The paper also discusses potential strategies for the reduction of errors, including new technological approaches.
The research is articulated around a case study of complex manual assembly in a manufacturing setting (described in detail later in the paper). This case study is intended to provide new insights into the human-related effects of assembly processes, which might lead to new work design solutions and generate new research questions. Additionally, previous results of human reliability analysis may be critically questioned. The case study method is recognized as having an impact on the generalization of theoretical and empirical findings, and it offers the possibility of studying a defined situation in great detail [17]. This methodological approach is especially suitable when researchers require a holistic, in-depth investigation that goes beyond the study of isolated variables. Examining the manual assembly context under study is integral to understanding the interactions within it, and it favors the collection of data in real settings [18]. In the present paper, a complex manual assembly is defined as an assembly task that requires specialized knowledge and skills for the worker to be able to complete the task. Additionally, the assembled object is composed of a high number of parts with a high number of possible choices among them, while the geometry of the assembled object has several symmetric planes [19]. The cognitive demands on the assembler, as a consequence of information perception and information processing, are relatively high [20]. The conceptual debate around definitions of complexity is beyond the scope of this article and is the focus of different scientific teams; we therefore invite the reader to consult the literature on this subject [21][22][23].

Some Considerations about Manual Assembly
According to Swift and Booker [24], manual assembly "involves the composition of previously manufactured components or sub-assemblies into a complete product or unit of a product, primarily performed by human operators using their inherent dexterity, skill and judgment". Richardson et al. [19] describe manual assembly as a spatial problem-solving activity that requires that workers build a mental model to understand and manipulate spatial information. The quality of the information contained in work instructions, the way this information is presented and how the worker interacts with this information are particularly important in manual assembly processes [25][26][27][28]. To take full advantage of the assemblers' cognitive abilities, work instructions should clearly and unambiguously describe which components to use and how they should be assembled [27]. One accepted principle is that these instructions must be presented in such a way that anyone can understand them and conduct assembly accordingly [29]. This way, work instructions can help to reduce the assemblers' cognitive workloads, particularly by minimizing dynamic complexity.
In modern manual assembly processes, work instructions are generally provided electronically and presented on a computer screen, as text supported by visual information [25,28]. However, according to Mattsson et al. [30], instructions need to be more perceptual, meaning that richer and more immediate sensory inputs should be provided to the assembler. Using three-dimensional models in work instructions can show the assembly process in a more realistic, accurate and intuitive fashion. Such model-based instructions (MBI) may present multiple views and easy-to-follow assembly procedures [31,32]. Recent technologies, such as augmented reality (AR), seem to promise even better means of delivering work instructions to assemblers. In a recent literature review, AR was identified as one of the two most promising technologies to support humans in manufacturing contexts, along with collaborative robots [33]. However, applications of AR are still in development and the technology has not reached full maturity [34,35].

Choosing the right parts during assembly requires a certain amount of information processing and decision making, which is why the use of kitting systems has been explored as a strategy to minimize this cognitive load [36]. Parts kitting is a logistics technique used to deliver groups of parts together in one or more kit containers [37]. Generally, kits are prepared in a warehouse and delivered to the assembly line, to specific workstations, according to the production schedule. When kits are prepared properly, parts are supposed to be easily available, checked, and prepositioned so that they can be removed rapidly from the container [38]. According to Brolin [25], a kit can be considered a "carrier of information" for assembly, meaning that work instructions are embedded in the kit itself. Medbo [39] argues that appropriately structured kits can support assemblers and even facilitate learning. According to Caputo et al. [37], kitting systems provide the opportunity for in-process quality control and additional checks, both in the kitting room and at the workstation. Kitting can therefore reduce the risk of a wrong part being assembled or of a part being omitted by providing direct feedback to the assembler. Past research suggests that kitting may yield better quality and productivity when compared to other parts feeding policies [25,39,40].
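As a simple illustration, the kind of in-process check that kitting enables can be sketched as a comparison between the contents of a prepared kit and the workstation's bill of materials, so that a missing or extra part is flagged before assembly starts. The part names and quantities below are hypothetical, not taken from the case study.

```python
from collections import Counter

def check_kit(kit: list[str], bom: dict[str, int]) -> dict[str, int]:
    """Return part -> discrepancy (negative = missing, positive = extra)."""
    counted = Counter(kit)
    parts = set(bom) | set(counted)
    return {p: counted.get(p, 0) - bom.get(p, 0)
            for p in parts if counted.get(p, 0) != bom.get(p, 0)}

# Hypothetical bill of materials for one workstation
bom = {"bracket": 3, "loop clamp": 2, "cap cover": 4}
# A kit that was picked with one bracket too few
kit = ["bracket", "bracket", "loop clamp", "loop clamp"] + ["cap cover"] * 4
print(check_kit(kit, bom))  # → {'bracket': -1}
```

A check like this could run in the kitting room or at the workstation; an empty result means the kit matches the bill of materials exactly.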
It has been acknowledged that errors cannot be entirely eliminated because they are considered to be a normal consequence of human variability [41]. Thus, during manual assembly, inspection is required to verify that a product is free of defects before it is transferred to the next level of assembly or shipped to the customer [42]. In this way, inspection allows the system to recover from human error. Historically, workers have performed these inspections visually. However, the limitations of humans as visual inspectors have long been recognized [43,44]. For this reason, automated visual inspection (AVI) has long been used in the manufacturing industry for quality control and monitoring [45,46]. As gains in computational power yield enhanced image acquisition, processing and analysis capabilities, automation replaces human visual inspection more and more often [42,47]. For example, a robot can take pictures of the final product to detect deviations and nonconformities. Such a system can validate the quality of final assembly based on its ability to perform optical characteristics recognition, such as detecting whether a specific component is absent. However, even though machine vision has been used in quality inspection for several years, it seems that technological challenges remain. According to Sobel [48] "Many of the advances we take for granted in modern computing-ubiquitous connectivity, unlimited data storage in the cloud, insights drawn from massive unstructured data sets-have yet to be applied systematically to the factory floor in general and to computer vision specifically". One drawback of centralized automated inspection at the end of the assembly line is that the identification of nonconformities is made too late and the cost of reworking may be considerable. Lean approaches seek to ensure the quality of the assembled object before it leaves the workstation to minimize costs and delays by eliminating waste [49]. 
As machine vision continues to evolve, it would be reasonable to expect that automated visual inspection might become sufficiently flexible to come to the assembly workstation. Thus, a centralized automated inspection would shift to an in-process, ubiquitous automated inspection carried out with a smart assistance system. These systems could combine, for example, wearable augmented reality with automated visual inspection [50,51] or use collaborative robots for automated visual inspection [52,53].

Human Reliability Analysis: Origins and Applications in Assembly Systems
According to Embrey [54], the essence of human reliability analysis (HRA) is the prediction and mitigation of error to optimize safety, reliability and productivity. HRA's main focus is the identification of sources of human error and the quantification of the likelihood of such errors [55]. Over the years, the discipline of HRA has proposed several techniques [56,57], which stem from the need to quantify human error for probabilistic safety analysis (PSA) in the nuclear sector [58,59]. Other industries where HRA has been applied include aerospace, offshore oil extraction and chemical processing, all of them safety-critical domains [60,61]. Human reliability analysis represents the intersection of reliability engineering and human factors research [62]. Reliability engineering seeks to predict overall system reliability based on system configuration and probabilistic estimates of component failures [63]. According to Boring et al. [55], human factors research provides the empirical basis to support predicting human performance in HRA. In practical terms, human performance is predicted by calculating human error probability (HEP). Mathematically, human error is treated as the failure of a technical component. However, some HRA techniques focus on human error identification rather than the calculation of human error probabilities [64,65].
Failure mode and effects analysis (FMEA) is a well-established reliability engineering technique used to assess vulnerabilities in a system proactively [66]. It has been used in manufacturing and manual work [67]. Quality function deployment (QFD), a quality research technique, has also been used to redesign product functionality to reduce complexity in assembly [68]. Despite these examples, a recent literature review by Pasquale et al. [16] concluded that: "a prospective analysis of human reliability in the manual assembly systems until now has been neglected in the literature and few papers investigated the range of human error and the type of errors that occur in this field". However, this conclusion is based on a rather small number of relevant documents, i.e., 20 peer-reviewed papers published in English between 2005 and 2017. Authors from manufacturing powerhouses like Germany, France and Japan, who published in their native languages, were not considered. While non-exhaustive, Table 1 shows additional developments and applications of HRA to manufacturing and manual assembly.
Broadly speaking, there are two main approaches to human reliability in manual assembly: the development of context-specific techniques [37,69,70] and the application of classic HRA techniques with or without modifications [6,15,71]. Context-specific methods are expected to provide more precision in the calculation of HEP, but they are more resource-intensive. For example, methods time and quality measurement (MTQM), developed in Germany, requires in-depth knowledge of predetermined motion time systems (PMTS) and substantial investments of training time and money [72]. Little information about context-specific techniques is available, which hinders replication of the analysis process. On the other hand, classic HRA techniques like THERP (technique for human error rate prediction), HEART (human error assessment and reduction technique) or SHERPA (systematic human error reduction and prediction approach) are well represented in the literature and enough information is available to carry out an analysis. Furthermore, the importance of obtaining a precise final value of HEP depends on the intended final use of this numerical value. Kern and Refflinghaus [73] used assembly-specific databases as part of MTQM to calculate precise estimates of HEP. This is necessary because HEP values are used as part of a production planning tool to conduct cost-benefit analysis. More recently, Torres et al. [74] proposed an intervention framework for the analysis of human error in manual assembly. This framework is based on the use of well-known HRA techniques and is in line with the idea proposed by Bubb [15], who suggests using HRA to identify error modes and calculate human error probabilities (HEP) in manual assembly. The framework has two modules. Module 1 discusses the selection of critical tasks to be analyzed (based on data, statistics and initial solutions). Module 2 is a modification of classic human reliability analysis techniques. No application was reported.
No [67] Conference paper A novel technique developed specifically to identify manual assembly errors is proposed: "Assembly FMEA".
Assembly defect levels are related to assembly complexity, which can be estimated using "Design for Assembly" (DFA) time penalties. Hence, Assembly FMEA uses a series of DFA-related questions to elicit potential assembly defects.
No [15] Journal paper A reflection is conducted on how HRA can help to improve quality in manual assembly through the use of literature and examples.
The article describes the major tenets of HRA and its relationship with quality in production. THERP (technique for human error rate prediction) is used in a case study from manual electronic assembly.

Yes [75] Thesis Dissertation
A methodology is proposed to support engineers and technicians in the selection of the best error-proofing actions during product development. This is intended to minimize errors from the design phase.
The proposed methodology is based on historical data and the FMEA technique. It offers a list of 36 error-proofing approaches to consider. The selection of the best approaches is based on cost calculations and the impact on quality. The methodology is based on Toyota's production system. Application reported in a mixed production assembly of a three-wheeled motorcycle.
No [6] Conference paper A "novel" mixed methodology is proposed to analyze quality issues related to human errors in engine assembly.
The methodology combines a modified version of CREAM (cognitive reliability and error analysis method) with FTA (fault tree analysis). Application reported on an assembly line of automobile engines.
No [69] Journal paper An assembly planning method is developed: MTQM (methods time and quality measurement) that allows the calculation of human error probabilities linked to predetermined motion times.
The taxonomy of error types was harmonized with nomenclature from MTM. Specific human error probabilities are based on data from German automotive manufacturing. Application reported in the automotive industry [72].
Yes [76] Conference paper A human reliability model for assembly line manual workers is developed based on statistical analysis of personal factors.
The model was built using Cox proportional-hazards regression. Nine factors were evaluated in 120 assembly line operators using psychometric tests. Factors included in the model were: stress, motivation, memory, and personality. The model was developed in an electronic assembly.
No [77] Thesis Dissertation The objective was to capture the structure of human errors in a production line and to describe the frequency in which these errors occur. Principles of Resilience Engineering were also explored.
A detailed analysis of error types and error probabilities; the findings are intended to be incorporated into future planning and development operations. A system theory model was used to understand specific human behaviors and their adaptation to disturbance variables. Application at an engine production facility.
No [37] Conference paper A quantitative model is developed to assess errors in kitting processes for assembly line feeding. The model allows quantifying costs of error-related quality issues.
Event trees are adopted to keep track of unwanted events and error correction opportunities during the entire logistic process, starting from material picking in the warehouse to kit delivery at workstations and parts assembly. An application example is included.
No [70] Journal paper A new human reliability analysis (HRA) method is presented: The simulator for human error probability analysis (SHERPA).
A theoretical framework is provided, and human reliability is estimated as a function of the performed task, the performance shaping factors (PSF) and the time worked.

Materials and Methods
We focused on one case study of complex mechanical manual assembly. The case study was a realistic case distilled from the actual case, which is confidential. Several data collection and analysis methods were used to support the human error analysis process. These included searching quality records to select critical tasks based on statistical analysis; familiarization with and analysis of work instructions from the manufacturing execution system (MES); field observations of task execution; and unstructured interviews and focus group meetings with line supervisors, quality specialists and assemblers. This methodological approach is known as mixed-methods [78]. The whole process was led and supervised by experienced engineers and researchers in ergonomics and manufacturing. Participation was completely voluntary and the whole study received approval from École de technologie supérieure's research ethics committee (7 November 2018).

Human Error Analysis Process
The five steps of the human error analysis process used in this study are presented in Figure 2. Although these steps are presented sequentially, following a logical order, the approach employed in the field was iterative and holistic. Thus, the diagram in Figure 2 should be seen as a simplification of the actual analysis process associated with the case study. The techniques included in the human error analysis process (HTA, SHERPA and HEART) were selected according to operational criteria from the literature (Holroyd and Bell, 2009; Lyons, 2009) and considerations crucial to the manufacturing context: time, simplicity, availability of information, level of validation and analyst-oriented techniques. Further information about HTA [79], SHERPA [80] and HEART [81] can be found in the literature. A brief description of each of the five steps in Figure 2 is presented subsequently.

Figure 2. Human error analysis process used in the study. HTA = hierarchical task analysis; SHERPA = systematic human error reduction and prediction approach; HEART = human error assessment and reduction technique; GTT = generic task type; EPC = error producing condition; APOA = assessed proportion of effect; HEP = human error probability.

Step 1: Selection of Critical Tasks
The selection of the tasks to be analyzed was based on data from the company's records. To this end, quality records were explored for the 36-month period prior to the study. The objective was to find a group of tasks that were a frequent source of error-related quality issues. Firstly, descriptive statistics were used to identify the assembly parts that represented a high proportion of quality issues. Secondly, the rate of quality issues per 10 thousand labor hours was used to select the specific assembly line and workstations on which the research should focus.
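The screening logic of this step can be sketched as follows. The part names, issue counts and labor hours below are hypothetical placeholders, not the company's data; the sketch only shows the two computations described above (a Pareto-style count per part family, and a rate per 10,000 labor hours).

```python
from collections import Counter

# Hypothetical quality records: one entry per error-related quality issue
issues = ["bracket", "loop clamp", "cap cover", "bracket", "bolt", "bracket",
          "loop clamp", "cap cover", "cap cover", "bracket"]

# Descriptive statistics: which part families dominate the quality issues
counts = Counter(issues)
total = sum(counts.values())
for part, n in counts.most_common():
    print(f"{part}: {n} issues ({100 * n / total:.0f}%)")

# Rate of quality issues per 10,000 labor hours, used to rank lines/workstations
def issue_rate_per_10k(n_issues: int, labor_hours: float) -> float:
    return n_issues / labor_hours * 10_000

print(issue_rate_per_10k(42, 120_000))  # → 3.5
```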

Step 2: Task Description
Once the tasks to be analyzed were chosen, each of them was broken down into the subtasks required to achieve the overall goal of the task. This was achieved using the hierarchical task analysis technique developed by Annett [79]. The description of the tasks was based on the study of work procedures, as described in the manufacturing execution system (MES). Additionally, a total of 40 h of field observations supported this and other steps within the human error analysis process. Five assemblers were observed during an entire workday (eight hours) at their respective workstations. However, these field observations focused primarily on the execution of tasks. The main goal was to understand the structure and sequencing of tasks and subtasks in a real context (work as done).
No personal data about the assemblers was collected. The assembly line and workstations to be observed were identified based on results from the analysis of quality records.
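The hierarchical breakdown produced by HTA can be represented as a tree of goals and subgoals, whose bottom-level nodes feed the error identification in Step 3. The fragment below is a hypothetical sketch of part of the bracket-installation task, not the actual work procedure.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """A node in a hierarchical task analysis (HTA) tree."""
    goal: str
    subtasks: list["Task"] = field(default_factory=list)

    def bottom_level(self) -> list["Task"]:
        """Bottom-level subtasks are the ones later mapped to SHERPA error modes."""
        if not self.subtasks:
            return [self]
        return [leaf for t in self.subtasks for leaf in t.bottom_level()]

# Hypothetical fragment of the bracket-installation task
hta = Task("Install bracket", [
    Task("Retrieve instructions from MES"),
    Task("Select bracket and fasteners", [
        Task("Pick bracket from bin"),
        Task("Pick fasteners from bin"),
    ]),
    Task("Position and fasten bracket"),
])

print([t.goal for t in hta.bottom_level()])
```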

Step 3: Identification of Human Errors
SHERPA (systematic human error reduction and prediction approach) was used to identify the human error modes associated with the tasks analyzed [82]. For this, each bottom-level subtask from the HTA was associated with one or more error modes proposed in the SHERPA taxonomy of errors. The error modes are described, and remedial strategies are identified. The information is then compiled and presented in tabular form. In this study, a focus group composed of line supervisors, quality specialists and experienced assemblers provided the necessary information for the process of classifying human errors. Three meetings of approximately 1.5 h each were carried out with the focus group.
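The tabular output of this step can be sketched as follows. The subtasks, error descriptions and remedial strategies below are invented for illustration; the error-mode codes follow the taxonomy commonly reproduced in the SHERPA literature (A = action, C = checking, R = information retrieval, S = selection).

```python
from collections import Counter

# Illustrative SHERPA-style records (hypothetical content)
sherpa_records = [
    {"subtask": "Pick bracket from bin",
     "error_mode": "S2 - Wrong selection made",
     "description": "A geometrically similar bracket is picked",
     "remedy": "Label bins; add a picture of the correct part to instructions"},
    {"subtask": "Position and fasten bracket",
     "error_mode": "A5 - Misalignment",
     "description": "Bracket installed in a symmetric but wrong orientation",
     "remedy": "Add an orientation cue to the drawing"},
    {"subtask": "Verify installation",
     "error_mode": "C1 - Check omitted",
     "description": "Final visual check skipped under time pressure",
     "remedy": "Make the check a confirmable MES step"},
]

# Tally error modes per category (first letter of the code), the kind of
# summary reported in the study's results
by_category = Counter(rec["error_mode"][0] for rec in sherpa_records)
print(by_category)  # Counter({'S': 1, 'A': 1, 'C': 1})
```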

Step 4: Quantification of Human Error
The quantitative analysis in this step was performed based on HEART (human error assessment and reduction technique). HEART contains three main elements, which are the following:

• Generic task type (GTT): the analyst specifies which of the eight generic task types proposed in HEART best matches the task under analysis and determines the nominal HEP for the task based on the mean HEP value associated with the corresponding GTT.
• Error producing condition (EPC): these are conditions that may influence human reliability; mathematically, they represent modification weights for the nominal HEP. HEART proposes a table with a list of 40 EPCs and their relative maximum effect on performance in the form of a numerical value (EPCs are also known as performance shaping factors or PSFs).
• Calculation method: the method for calculating the human error probability evaluates the EPC weights based on their relative importance to each other in the task context. In this manner, an assessed proportion of affect (APOA) is obtained for each EPC. The final HEP is then calculated as shown in Equations (1) and (2):

WF_j = ((EPC_j − 1) × APOA_j) + 1    (1)

HEP = GTT_i × ∏_j WF_j    (2)

where WF_j = weighting effect for the jth EPC; EPC_j = error-producing condition for the jth condition; APOA_j = assessed proportion of 'affect' for that condition; HEP = final human error probability; GTT_i = central value of the HEP (distribution) that is associated with a task i; GTT = generic task type.
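Equations (1) and (2) can be sketched in a few lines of code. The nominal HEP, EPC maximum effects and APOA values below are illustrative placeholders, not values from the case study, and the cap at 1.0 reflects the convention that a probability cannot exceed one.

```python
from math import prod

def heart_hep(nominal_hep: float, epcs: list[tuple[float, float]]) -> float:
    """HEART calculation: nominal HEP of the generic task type multiplied by
    one weighting factor per error-producing condition (Equations (1)-(2)).

    epcs: list of (max_effect, apoa) pairs, where max_effect is the EPC's
    maximum effect from the HEART table and apoa is the assessed proportion
    of affect (0..1) judged by the analyst.
    """
    weights = ((effect - 1.0) * apoa + 1.0 for effect, apoa in epcs)  # Eq. (1)
    return min(nominal_hep * prod(weights), 1.0)                      # Eq. (2)

# Illustrative values only: nominal HEP 0.02 and two EPCs,
# e.g. shortage of time (x11, APOA 0.4) and poor feedback (x4, APOA 0.5)
hep = heart_hep(0.02, [(11, 0.4), (4, 0.5)])
print(hep)  # → 0.25
```

Here the two weighting factors are (11 − 1) × 0.4 + 1 = 5.0 and (4 − 1) × 0.5 + 1 = 2.5, so the final HEP is 0.02 × 5.0 × 2.5 = 0.25.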

Description of the Case Study
The case consists of a mechanical assembly worker installing various components to the main structure of the object being assembled. Instructions are delivered by the manufacturing execution system (MES) and displayed on a computer terminal at the workbench (PC stationary screen), which is located approximately 2 m from the assembly position. Instructions consist of text and 2D drawings providing basic visual descriptions of assembly steps. Parts are stored in bins at the workbench. The worker moves around the assembly position at the center of the workstation as needed. Figure 3 shows the schematic representation of the assembly workstation's layout and the general workflow. The assembly line consists of six workstations with similar characteristics to the workstation shown in Figure 3. The main assembly structure is transferred from one workstation to the next using a dolly. The height and direction of the main structure of assembly can be modified by the worker to gain access more easily.

Selection and Description of Critical Tasks
Numerous manual assembly tasks are performed on the assembly line, but for the purpose of the case study presented in this paper, three basic tasks were selected as critical tasks. These tasks are related to assembly parts that represent a substantial proportion of the company's quality issues. Results showed that, during the 36-month period prior to the study, 67% of all wrongly installed parts were associated with brackets (27%), loop clamps (25%) and bolts (15%). Similarly, 65% of all missing parts were associated with cap covers (40%), brackets (13%) and loop clamps (12%). Following this, the three basic critical tasks selected were:

1. Install three brackets at specific locations according to the work procedure.
2. Secure a data cable to the main structure with three cushioned loop clamps.
3. Install four cap covers to protect access points to the structure.
To provide a comparative idea of the different cycle times, these were estimated using MTM-1 (method-time measurement), a predetermined motion time system (PMTS) [83]. The estimated cycle times for the tasks analyzed were 155 s, 51 s and 28 s, respectively. Basic cycle times calculated with a PMTS are often used as part of different models developed for complexity assessment in manual assembly operations [84,85]. However, the cycle times calculated here are not part of any formal complexity assessment of the assembly tasks under study and must be interpreted with that caveat in mind, even if, to some extent, they might express differences in complexity between the tasks analyzed. Brackets are secured with nuts and bolts, while cushioned loop clamps are attached to the main structure with tie wraps. Torque is applied with a wrench according to specifications provided in the assembly instructions. Parts used in the assembly tasks are presented in Figure 4. A check for missing parts is carried out by the worker at the end of each task cycle. The selected tasks can be considered representative, to some extent, of the manual assembly process under study.
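As background, a PMTS such as MTM-1 derives basic cycle times by summing tabulated motion element times expressed in time measurement units (1 TMU = 0.036 s). The sketch below illustrates the mechanics only; the motion elements and their TMU values are hypothetical, not taken from the study's actual MTM-1 analysis.

```python
# Hypothetical MTM-1-style summation: element times in TMUs are summed and
# converted to seconds (1 TMU = 0.036 s). Values are illustrative placeholders.

TMU_TO_SECONDS = 0.036

def cycle_time_seconds(elements):
    """elements: list of (motion description, time in TMUs) pairs."""
    return sum(tmu for _, tmu in elements) * TMU_TO_SECONDS

bracket_elements = [
    ("Reach to bin", 15.0),              # hypothetical TMU values
    ("Grasp bracket", 9.1),
    ("Move bracket to structure", 16.9),
    ("Position bracket", 25.3),
]
t = cycle_time_seconds(bracket_elements)  # basic time for this element list
```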
An example of a 2D drawing provided to the assembler in the work instructions of task No. 1 (install three brackets) is shown in Figure 5. Work instructions are displayed on a 19-inch stationary PC monitor. The computer's input devices are a mouse and a keyboard, both connected by cables to the PC. Figure 6 shows bracket A installed and secured with two bolts to the assembly structure. Similarly, Figure 7 shows the data cable and the cap covers installed on the assembled object. The data cable was secured to the structure using cushioned loop clamps and tie wraps. Cushioned loop clamps were used to avoid damage to the data cable in an operational context. Cap covers were installed to protect different inlets.

Figure 8 shows the breakdown of the critical basic task No. 1 (install brackets). The HTA diagram is presented as an extract of the diagrams developed for the breakdown of each of the critical tasks under analysis. Subtasks 1.1–1.4 repeat for each of the brackets A, B and C, representing 12 subtasks. For the sake of diagram simplification, repetitions are not included.

Identification and Description of Error Modes
HTA and SHERPA techniques were applied to each task under evaluation, yielding a total of 59 identified error modes. The installation of brackets accounted for the highest number of error modes with 34 (58%), followed by the installation of cushioned loop clamps with 19 error modes (32%) and the installation of cap covers with six error modes (10%). We found errors in four of the five categories proposed by the SHERPA taxonomy: action, retrieval, checking and selection. The absence of communication errors could be explained by the limited amount of teamwork in the context under study, where the necessary information is usually supplied to the worker electronically by the MES. For the sake of simplification and to avoid repeated tables, only an extract of the SHERPA output table associated with critical task No. 1 (install brackets) is presented in Table 2. Similar tables were obtained from the SHERPA analysis of the other critical tasks under study. The consolidated results of the error modes obtained during the SHERPA analysis are presented in Table 3. Estimated cycle times are also presented at the end of Table 3 for comparison purposes. A more detailed description of each error mode is presented subsequently.

Table 2 (extract). SHERPA analysis of critical task No. 1 (install brackets):

1.2
Consequence: A different bracket installed. A missing bracket in the assembly.
Recovery: Step 3.
Remedial measures: Clearly identifiable and labeled bins or validation with a barcode system. Introduction of a kitting feeding system.

1.3 Pick fasteners from bins at the workstation
Error modes: S2: Wrong selection made. A8: Operation omitted.
Description: Selection of a different fastener. Fasteners not picked.
Consequence: A different fastener installed, e.g., a shorter bolt of the same caliber. A missing fastener in the assembly.
Recovery: Step 1.4.
Remedial measures: Better tracking and guidance during the assembly process. Introduction of a kitting feeding system.

1.4 Fix the bracket in proper position
Description: Bracket installed in the wrong direction. Bracket installed in the wrong position (wrong holes).
Consequence: A misplaced bracket installed on the assembly, not according to design specifications.
Recovery: Step 3.
Remedial measures: Access to more realistic work instructions directly at or near the assembly place.

Description: Torque applied but out of specifications. Torque not applied.
Consequence: Loose fasteners or fasteners too tight. Damage to nuts and bolts.
Recovery: Step 3.
Remedial measures: Torque wrench and error-proofing system. Better tracking and guidance during the assembly process.

Description: Omission to check previously installed parts. Check not done thoroughly.
Consequence: An undetected quality issue which crosses the recovery barrier.
Recovery: None.
Remedial measures: Avoid distractions and interruptions. In situ visual automated inspection.

From Table 3: estimated cycle times were 155 s, 51 s and 28 s for the three tasks, respectively. In Table 3, the symbol "-" indicates that no error mode of the associated category (row) was found during the analysis of the task under consideration (column).
A3 Operation in the wrong direction: The six errors in this category were identified in the installation of brackets and the installation of cushioned loop clamps (secure data cable). All these parts must follow a specific spatial orientation according to drawings shown in the assembly instructions.
A4 Too little/much operation: Fasteners must be tightened by applying torque with a specific amount of force. An operation is considered out of specifications if the fastener is too loose or too tight. Five bolts are required to install brackets, which accounts for the five error modes in this category.
A5 Misalignment: Like operation in the wrong direction (A3), misalignment is especially prevalent during the installation of brackets and cushioned loop clamps. A bracket can be fastened to one of several holes. It is, therefore, possible to install a bracket in a shifted position from the one specified on the instructions. Cushioned loop clamps are also subject to this kind of error mode because the correct position of the cushioned loop clamps could be hard to determine. Six error modes have been identified in this category.
A8 Operation omitted: Workers can omit to install brackets, bolts, cushioned loop clamps, tie-wraps or cap covers. They can also omit to apply the required force to fasteners. These multiple causes explain why this error mode is the most prevalent with 19 counts.
R2 Wrong information obtained: This error mode covers instances in which the work instructions were incorrectly assimilated. The installation of each bracket requires the retrieval of specific spatial information. All cushioned loop clamps are considered part of the same subassembly and information is captured once. In total, four error modes were counted.
R3 Information retrieval incomplete: Like wrong information obtained (R2), this kind of error is related to work instructions. In this case, the worker interpreted the work instructions correctly but only read part of them. Four error modes were identified in the same circumstances as for R2.

C1 Check omitted: Checking is considered a normal part of the assembly cycle, as opposed to an inspection activity. Three error modes were identified, one for the verification expected at the end of each task.

C2 Check incomplete: One or more component(s) were not checked during the verification at the end of a task cycle. Incomplete checking is equivalent to a partial omission of the check. There are three error modes in this category, the same as for C1.
S2 Wrong selection made: The assembler must obtain the required part from a bin at the workbench. Parts are often similar, which could lead the worker to pick the wrong one: several bracket, cushioned loop clamp and fastener variants can be found in the bins. Nine error modes were counted.

Human Error Probabilities Calculation
A human error probability (HEP) was calculated for the incorrect installation of each of four types of parts: brackets, cushioned loop clamps, cap covers and the generic part fasteners. The calculations followed the guidelines provided by Williams [81] in the original article describing the HEART technique. The generic task type assigned to each part installation was "routine, highly practiced, rapid task involving relatively low level of skill", which corresponds to task type E in the HEART nomenclature. The generic error probability for this task type is 0.02. The number of error producing conditions (EPCs, also known as performance shaping factors or PSFs) was limited to no more than five for each of the parts analyzed. Although HEART provides a list of 40 error producing conditions, Boring [86] acknowledges that three to nine PSFs are generally sufficient to arrive at a screening value in the quantification phase. For the sake of simplification and to avoid repeated calculations, only an extract of the HEART calculation table for the critical task of installing a bracket is presented in Table 4. The consolidated results of the HEART analysis are presented in Table 5.
The first and second columns of Table 5 show the identified EPC and the proposed multiplier according to HEART analysis. The percentages in the other columns represent the contribution of each EPC to the probability of failure for each component. Table 5's bottom row shows the total human error probabilities.
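As an illustration of how such per-EPC contribution columns can be produced, the sketch below apportions shares according to each EPC's assessed effect, (EPC − 1) × APOA, relative to the total. Both this apportioning rule and the numbers used are assumptions for illustration, not the actual scheme or values behind Tables 4 and 5.

```python
# Hedged sketch: apportioning each EPC's contribution to a part's failure
# probability. The rule (share of the assessed effect (EPC - 1) * APOA) and
# all numeric values are illustrative assumptions, not the study's data.

def epc_contributions(epcs):
    """epcs: dict of EPC name -> (multiplier, APOA). Returns name -> % share."""
    effects = {name: (epc - 1.0) * apoa for name, (epc, apoa) in epcs.items()}
    total = sum(effects.values())
    return {name: 100.0 * effect / total for name, effect in effects.items()}

shares = epc_contributions({
    "Means of conveying spatial information": (8.0, 0.8),  # hypothetical APOA
    "Distractions and interruptions": (5.0, 0.4),          # hypothetical APOA
})
```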

Discussion
SHERPA analysis shows that brackets and cushioned loop clamps are particularly sensitive to operation in the wrong direction (A3) and to misalignment (A5). These error modes can be considered geometry-related since the proper installation of these parts requires some spatial abilities. Information errors such as the acquisition of the wrong information (R2) and incomplete information retrieval (R3), also identified during SHERPA analysis, can be considered precursors to geometry-related errors because of the impact of the information system on the assembler's cognitive load [25]. Simultaneously, HEART results show that brackets and cushioned loop clamps had the highest human error probabilities (0.62 and 0.47 respectively) compared to fasteners (0.29) and cap covers (0.09). Interestingly, the lack of a means of conveying spatial and functional information to operators in a form that they can readily assimilate is a major error producing condition (EPC) in the HEP calculation. This EPC was the single most important in the HEART analysis (×8 multiplier) and represented 49% and 42% of the total contribution to human error probabilities for brackets and cushioned loop clamps respectively. The results obtained by using HEART were consistent with results from an empirical study in truck engine assembly where the potential error reduction obtained as a consequence of changing the information system was in the order of 40% [28].
It is important to highlight that the work instructions were provided as text and 2D drawing sheets on a stationary PC display, which has been found to be less effective than 3D instructions when spatial abilities are required [87]. It has been acknowledged that work instructions are often deficient or underused in final assembly [25,26]. Another point of concern is how the assembler interacts with the work instructions. The computer screen displaying the instructions in the case study is located on the workbench. However, the worker is frequently at some distance from the screen during the actual assembly. Instructions may not be easily readable from a distance, nor may the worker be able to make out details on the drawings. This demands frequent trips from the assembly position to the workbench to retrieve information. Larger screens could help, although the worker's movement around the assembly position might still frequently leave the screens out of sight. This situation was reported by Mayrhofer et al. [35]: a large overhead screen was installed on an aircraft part assembly line, but the worker had to move to retrieve the information because the screen was not visible from all angles and working positions. Nor do wide screens solve the need to move to the workbench to interact with the work instructions (validate, close instructions, go to the following instruction, etc.), which is done primarily with a mouse and keyboard. Optimizing the content of work instructions, their display and the interaction process between worker and instructions seems to be paramount to complex assembly performance.
As the SHERPA analysis shows, two major error modes are associated with the systematic application of force to fasteners during assembly: the application of too little/much force (A4) and the omission of the operation (A8). In both cases, specifications are not met, and the integrity of the assembly can be compromised. Results from HEART show that the most important contributors to the human error probability of failure in this context are distractions and task interruptions (34%) and poor, ambiguous or ill-matched system feedback (30%). The latter refers to the fact that the system provides no direct feedback to the assembler when a bolt is missing, too tight or too loose: the responsibility for verification falls to the assembler. A torque wrench can provide direct feedback while tightening, but this does not solve the issue of missing fasteners and introduces calibration/certification challenges [88]. The worker may also forget to check the applied torque value. Feeding parts to the assembly line in kits constitutes a more traditional way to avoid errors of omission (A8), as the omitted part will remain in the kitting container, thus providing direct feedback to the worker. The number of parts provided by the kitting system should match the needs of the associated task, so that there are neither missing nor superfluous parts in the kitting box or rack (depending on the size of parts). As explained in the literature review, kitting systems also ease the process of selecting parts from bins, thus decreasing the probability of the wrong selection made (S2) error mode. Wireless-enabled tightening systems that improve torque application reliability provide further support to the assembler [89]. Error-proofing fastening tools have been successfully used in assembly since the late 1980s, when power tools equipped with electronic sensors to control torque became available [90].
Artificial intelligence and machine vision can further develop the potential of these error-proofing systems.
The checks performed by the assembler at the end of each cycle can be treated as a low-key form of inspection because the search for nonconformity is more casual than during active inspection, where a person is directed to inspect specific items of equipment, usually using specific written procedures like checklists [11]. Error modes C1 (check omitted) and C2 (check incomplete) constitute failures of these low-key inspections. While checks performed by the assembler act as a barrier to the propagation of errors, this type of barrier can be sensitive to attention failures and interruptions and to the well-known limitations of visual inspection [43,91]. Further along the assembly line, in-process active inspection is carried out by other assemblers following specific instructions, sometimes in teams to provide human redundancy, but these measures are susceptible to the same shortcomings, which is why automated visual inspection is often used at the end of the assembly process. It would be reasonable to expect automated visual inspection to become more flexible as computer vision improves. For this reason, the deployment of automated visual inspection at the assembly workstation itself should be explored. This would result in a shift from centralized automated inspection to in-process, ubiquitous automated inspection carried out with a smart assistance system.
According to HEART, distractions and interruptions affect the probability of human error during the installation of all the parts under study to various degrees, i.e., from 16% for brackets to 42% for cap covers. Attention failures have been associated with many skill-based errors in the Rasmussen SRK model [92], such as the breakdown of visual scan patterns, the inadvertent activation of controls and the disordering of steps in a procedure [93,94]. Distractions draw attention away from the task, potentially causing confusion and increasing the likelihood of task failure. Either way, the potential impact on performance during manual assembly must be considered. Kolbeinsson et al. [95] provide guidelines for the design of interruption systems that minimize errors and delays during assembly.
Criticism of human reliability analysis (HRA) has focused on the accuracy of human error probability computations [62] and on the assumption that human errors and machine failures can be treated similarly [59] despite extensive variability among humans. Further, most of the research that supports HRA techniques was conducted in the nuclear sector, including the validation of HRA itself [96]. For these reasons, some authors support the use of human error identification (HEI) techniques like SHERPA but exclude the quantitative aspects of HRA [97]. In the present study, human error probabilities calculated with HEART are not intended to be used in probabilistic safety analysis (PSA) but they can be used for comparisons. The analysis shows that the installation of brackets has the highest probability of failure (0.62) compared with cushioned loop clamps (0.47), fasteners (0.29) or cap covers (0.09), which suggests an explanation for the differences between the number of quality issues associated with each of the parts. The probabilities calculated in this study are roughly similar to those reported by Richards [98] for failure of properly installing an oil cap during aircraft maintenance (0.48-0.78). However, in both cases, the numerical values of human error probabilities should be treated as tools for human error prevention rather than for prediction, i.e., task planning of novel assembly/maintenance operations. Quantification allows the importance of the various factors affecting assembler performance to be compared but it could lack precision since most of the HRA techniques were not specifically developed for the manufacturing context. We argue that, despite its limitations, the quantification process constitutes a useful guide for the analyst during the search for solutions and, further, an important complement to the identification of human error modes.

Limitations of the Study and Future Research
Even though the authors took care to represent the real context upon which the case was built as accurately as possible, it is often difficult to represent such complexity simply. Possible generalizations must be seen in the light of the characteristics of the context in which the study was carried out [99]. Besides, the study focused mainly on factors directly associated with the execution of the assembly tasks by the workers at their respective workstations. However, several other factors associated with work organization can also have an impact on human performance, and they should be addressed in future studies on manual assembly. These include workload variations due to fluctuations in production demands, overtime allocation practices and the arrangement of working hours, among others. Similarly, personal factors like the workers' training level and experience affect human performance but were out of the scope of the study. Even though the HEART technique considers some of these factors, they have low multiplier values, which explains why we chose to exclude them from this analysis. Future research on complex manual assembly should explore these organizational and individual factors using different approaches as a complement to human reliability analysis (HRA). Furthermore, the relationship between the complexity of manual assembly operations, cycle times (both basic and operational) and human error probabilities needs to be further examined in an integrative manner.

Conclusions
The paper shows how human reliability analysis (HRA) can be effectively applied in a manufacturing context. SHERPA and HEART, both classic HRA methods initially developed for safety-critical domains, allowed the identification of potential errors and the evaluation of the factors contributing to these errors. From the three critical assembly tasks initially selected, a total of 23 subtasks and 59 error modes were identified with SHERPA: 36 action errors, nine selection errors, eight information retrieval errors and six checking errors. According to HEART, the highest human error probabilities were associated with assembly parts sensitive to geometry-related errors (brackets and cushioned loop clamps). The computed values of human error probabilities should be interpreted with caution because their greatest value lies in preventing and managing human error rather than in prediction. Nevertheless, HEART supports the analyst by providing a list of proven factors that can affect human performance and whose influence can be compared through their relative impact on human error probabilities. This process can shed light on the best possible strategies to improve assembler performance. For example, the study shows that when spatial abilities are required, the probability of errors can be reduced by providing assembly instructions that engage the worker's senses more effectively. This supports the idea, already discussed in the literature, that assembly work instructions have a major impact on dynamic complexity and should be optimized to decrease the cognitive workload.
The study also identified enhanced inspection processes, better operations tracking, better feedback to the worker and the reduction of distractions and interruptions as ways to reduce error probabilities and improve overall assembly performance. Although some of these can be achieved by traditional means (e.g., kitting system, poka-yoke, human redundancy, etc.), technology can also decrease workers' cognitive load and reduce the dynamic complexity of assembly. With the development of human support technologies within the framework of the Operator 4.0 [33,100], the integration of several technologies into assembly guidance systems could benefit workers and system performance. Questions remain regarding the proper selection and deployment of these technologies, particularly in relation to utility, usability, risks and practical acceptability [101]. Humans must remain at the heart of the reflection. Ergonomic analysis of the work systems, including human errors and reliability analysis, is a necessary step along the way.

Institutional Review Board Statement:
The study was approved by the Ethics Committee of École de technologie supérieure (reference number H20181004 and date of approval 7 November 2018).

Data Availability Statement:
The data related to the study are subject to a confidentiality agreement and therefore are not publicly available.