Article

Assessing System-Wide Safety Readiness for Successful Human–Robot Collaboration Adoption

1 Centre for Industrial Management/Traffic and Infrastructure, KU Leuven, 3001 Leuven, Belgium
2 Core Lab ROB, Flanders Make@KU Leuven, 3001 Heverlee, Belgium
3 Division Robotics, Automation and Mechatronics, KU Leuven, 3001 Leuven, Belgium
* Author to whom correspondence should be addressed.
Safety 2022, 8(3), 48; https://doi.org/10.3390/safety8030048
Submission received: 29 April 2022 / Revised: 10 June 2022 / Accepted: 20 June 2022 / Published: 1 July 2022

Abstract
Despite their undisputed potential, the uptake of collaborative robots (cobots) remains below expectations. Cobots are used differently from conventional industrial robots. The current safety focus of collaborative workspaces is predominantly on the technological design; additional factors also need to be considered to cope with the emerging risks associated with complex systems. Cobot technologies are characterized by an inherent tradeoff between safety and efficiency. They introduce new, emergent risks to organizations and can have psychosocial impacts on workers. This leads to a confusing body of information and an apparent contradiction about cobot safety. Combined with a lack of safety knowledge, this impedes the introduction of cobots. A multi-step methodology was used, including a literature review and conceptual modeling. This article argues for the need for a system-wide safety awareness readiness assessment in the consideration phase of cobot implementation to alleviate the knowledge deficit and confusion. This work will benefit both researchers and practitioners. In addition, it defends the appropriateness of a maturity grid model for a readiness assessment tool. The building blocks for an easy-to-use and practically applicable tool are proposed, as well as an agenda for the next steps.

1. Introduction

Digital technologies are considered to be the main driver of progress in the manufacturing industry [1]. Industrial collaborative robots (cobots) are one such digital technology [2] and are important accelerators for industrial innovation and growth [3]. The implementation of these new digital technologies to transform business models and achieve industrial performance is associated with Industry 4.0, which focuses primarily on the development and integration of new technologies, such as robotics, to improve industrial performance. The upcoming Industry 5.0 advocates for the use of technology and innovation in harmony with social principles while also emphasizing sustainability and workers’ welfare [4,5]. Industry 5.0 implies increased collaboration between humans and smart systems, as exemplified by cobots.
Scholars, robotics associations, and regulatory and normative bodies have proposed many definitions of what a (collaborative) robot is, and the lines between robots and cobots are increasingly blurred. For this work, we simply define a cobot as a robot that works alongside a human operator in a shared industrial workspace (for a more technical definition, we refer to the ISO 15066 standard [6]). We refer to recent state-of-the-art reviews for more details on human–robot collaboration (HRC) techniques, challenges, and opportunities [7,8,9]. Cobots have important advantages: they are more flexible, can be more easily reprogrammed for new tasks, are mobile, and, in general, cost less than conventional robots. Notwithstanding their undisputed potential as important drivers for innovation and growth, the forecasted steep increase in the deployment of cobots has remained below expectations [10,11,12]. A recent examination of the literature investigated the adoption factors for cobots and identified safety as an important barrier to cobot adoption [13]. Besides safety, the other factors include poor acceptance by users, ill-prepared receptiveness and readiness of organizations, a lack of balanced cost–benefit considerations, and a lack of knowledge. Knowledge deficits were considered the most significant barrier by [14] and were further broken down into (i) cobots and the technology in general, (ii) potential applications and ways of using cobots, (iii) reference cases, (iv) understanding of safety in general, (v) understanding and lack of safety legislation and unclarity of safety standards and metrics, (vi) risk (re-)assessment, and (vii) ease of deployment. The slower-than-expected uptake of collaborative systems is also attributed to a lack of confidence and knowledge with respect to ensuring safety [15,16] and is related to operators’ perceptions of cobots as unsafe [17].
In conventional industrial robot applications, robots and humans are separated to ensure safety, both through physical barriers and through the sequencing of tasks. Collaborative robotic systems, in contrast, are designed to interact with human operators. Built-in control systems and sensors are used to avoid collisions, reduce their impact, and inherently assure safety. Both the context and the way cobots are used differ completely from those of conventional industrial robots. Specific recommended safety guidelines for cobots were introduced in 2016 in the ISO/TS 15066 Technical Specification [6], with a strong focus on assuring safety separation, limiting kinetic energy in cobot applications with intentional contact, and defining force thresholds for different body areas. Regardless of these guidelines and standards for robotic devices, safety for HRC is still considered an important challenge [18]. So far, safety for collaborative applications and cobot regulation has predominantly focused on the technological aspects of the cobot system in order to safeguard the physical safety of the operator [18,19]. The changed context in which cobots operate creates new risks that are not only related to the safety of a stand-alone cobot application. There is a need for adaptable systems that take into account both human behavior and the various system components [8]. Additionally, psychosocial factors, interaction failures, and enterprise-level factors need to be considered to cope with the emerging risks from a system-wide perspective [18,19,20,21]. The review of Mukherjee et al. shows how machine learning and reinforcement learning are used to study robot learning strategies, including psychological safety aspects such as trust [8]. Both ‘Safety requirements for robot systems in an industrial environment’ (ISO 10218-1 and -2) [22,23] and ISO/TS 15066 [6] (adaptations for cobots), which are currently being revised, still primarily focus on controlling machine safety to safeguard the physical safety of the operator.
Practitioners are confronted with conflicting information and an apparent contradiction about the safety of cobots [13]. When adhering to a narrow view of safety for HRC that focuses on machine safety, a deceptive sense of safety can emerge, since risk factors beyond machine safety can be present. At the same time, within this narrow machine safety focus, cobots can be seen as too safe, such as when productivity is hampered by cobot interruptions due to limiting force thresholds or safety regulations. From a human-centered perspective that includes system-wide dimensions beyond machine safety, cobots can be considered less safe on some levels because of newly introduced hazards. The confusion resulting from this tension, combined with a lack of knowledge or uncertainty about how to safely operate collaborative robots, contributes to the relatively low cobot adoption rate [13]. When coping with both existing and emerging risks from a system-wide perspective, cobot applications can only be safely operated when certain safety boundaries are not exceeded. The successful design of work systems including cobots starts before a decision to invest is made [24].
From the foregoing, it can be concluded that system-wide safety is an important driver for the implementation of cobots, as also concluded by previous articles [20,25,26]. System-wide safety and the changed context in which cobots are used explain the apparent contradiction in cobot safety. As will be argued in Section 3.2.2, practitioners need to be aware of and understand the dynamic organizational space in which cobots are deployed.
To raise safety awareness and knowledge, we propose building an accessible and easy-to-use diagnostic self-assessment tool for the consideration phase of cobot implementation. We define the term “Safety Readiness” as the degree of preparedness of an organization to deploy a cobot from the perspective of system-wide safety awareness and knowledge. In this article, the words ‘hazard’ and ‘risk’ are used interchangeably. We propose the use of a maturity grid model to evaluate the preparedness of an organization’s hazard or risk identification capabilities with respect to the deployment of a cobot. Maturity models generally have the advantage of creating an awareness of the topic being analyzed and can assist in the assessment of capabilities [27]. Maturity grids use a descriptive framework to illustrate levels of maturity in a simple and textual manner and are typically used for self-assessment [28]. Finally, this article introduces the building blocks of a system-wide Cobot Safety Readiness Assessment Tool (CSRAT) to be used in the consideration phase of the cobot acquisition process.
Three specific research questions will be addressed in this article:
  • RQ1: What are the arguments that explain the necessity of assessing system-wide safety readiness in the consideration phase of cobot implementation?
  • RQ2: How can maturity models be used to assess readiness?
  • RQ3: What are the building blocks for a Cobot Safety Readiness Assessment Tool?
This work is aimed at guiding organizations in the consideration phase of cobot acquisition and at reducing safety concerns that result from knowledge deficiency and confusion about safety. Further, in contrast to most scientific research, which deals with specific aspects of safety in HRC [26], we provide an integrated and holistic vision of the safety challenges. This article thus serves both a scientific and a practical purpose. The scientific relevance lies in applying the generally recognized maturity model approach to a specific application domain and in the systemic approach to safety risk awareness for cobots. To the best of our knowledge, this has not been done before. The proposed framework and building blocks will be further developed and applied to concrete industrial cases in a next step.
The remainder of the article is organized as follows. In Section 2, the methodology is briefly explained. Section 3 examines the context, clarifies the scope, and offers a critical review in three parts. The first part develops the arguments for the need to consider safety in the consideration phase. In the second part, the classification of cobot risk factors from a system-wide perspective is presented and the dynamic nature of cobot safety is discussed. The third part provides an overview of the various maturity models, their main properties and principles for development, and the application of these models for readiness assessment. Section 4 argues for using a grid maturity model and proposes the building blocks for a CSRAT, the theoretical and managerial implications, and the next planned steps. The limitations of this research are discussed in Section 5. Finally, Section 6 summarizes the main conclusions.

2. Materials and Methods

A multi-step methodology was used, comprising a literature review and conceptual modeling. To answer the research questions, distinct development phases were carried out, each with its respective analysis topics and methods, as illustrated in Figure 1.
The analysis for answering the first and second research questions is based on a qualitative analysis of peer-reviewed journal and conference papers, as well as the authors’ own previous research. This type of literature review is a ‘narrative review’ and is used to summarize different primary studies and to include the authors’ own experiences, existing theories, and models [29]. To answer the third research question, insights from the previous analyses are used to propose the building blocks for a model structure of a practically applicable tool for cobot safety readiness based on the maturity grid roadmap phases suggested by Maier et al. [30].

3. Context, Scoping, and Critical Review

3.1. The Need for Safety Awareness Readiness in the Consideration Phase

Earlier, safety readiness was defined as the degree of preparedness to deploy a cobot from the perspective of safety awareness and knowledge. The preparedness to deploy takes shape before the actual acquisition and installation of a cobot application. It is assumed that, at that moment, a first general idea of the collaborative process and tasks to be robotized (type of application, positioning, stationary or mobile use, idea of payload and reach) has formed and that there is an existing cobot technology on the market for this application.
To the best of our knowledge, the exact process that an organization goes through when contemplating the purchase and possible deployment of a cobot has not yet been studied in detail, and it is often based on ad hoc business decisions. The ISO/IEC/IEEE 15288 standard is a technical standard in systems engineering that establishes a common framework of process descriptions for describing the life cycles of human-made systems. It defines six life cycle stages: concept, development, production, utilization, support, and retirement [31]. These six stages inspired Saenz et al. [32] to develop a cycle model for industrial HRC applications. This HRC cycle model also consists of six phases: inspiration and technology scan; requirement specification; design; implementation; verification/validation; operation. In the first phase (inspiration and technology scan), stakeholders look for information and inspiration from other available solutions and cases, including how safety was handled, in order to help them make an informed decision [32]. Assessing the safety requirements stipulated in directives and standards is part of the second phase (requirement specification), while the obligatory risk assessment takes place in the third phase (design) [32]. Other authors have defined the life cycle phases of a cobot application in line with those for traditional robot applications, ranging from setup, commissioning, production, and decommissioning to dismantling [33]. The following example illustrates the application of the HRC life cycle model by Saenz et al. [32]. An industrial consumer goods manufacturer has started to investigate the automation of feeding packaging wrappers on a packaging line. The collaborative process consists of an operator working side by side with a cobot positioned at the start of the line. The cobot’s task is to fill the wrapper feed, while the operator inspects the quality. Referring to Saenz’s HRC life cycle model [32], the scope of this article is concerned with the first and second phases. We label the entirety of these first two phases as the ‘consideration phase’ of a cobot acquisition process. The other four phases are, for simplicity, referred to as the ‘implementation phase’ (see Figure 2).
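To make this scoping explicit, the following minimal Python sketch encodes the six phases of the HRC cycle model of Saenz et al. [32] and the two-way grouping adopted in this article; the enum and variable names are ours and purely illustrative.

```python
from enum import Enum

class HRCPhase(Enum):
    """Six phases of the HRC life cycle model of Saenz et al. [32]."""
    INSPIRATION_AND_TECHNOLOGY_SCAN = 1
    REQUIREMENT_SPECIFICATION = 2
    DESIGN = 3
    IMPLEMENTATION = 4
    VERIFICATION_VALIDATION = 5
    OPERATION = 6

# Grouping used in this article: the first two phases form the
# 'consideration phase'; the remaining four are summarized as the
# 'implementation phase'.
CONSIDERATION_PHASE = {
    HRCPhase.INSPIRATION_AND_TECHNOLOGY_SCAN,
    HRCPhase.REQUIREMENT_SPECIFICATION,
}
IMPLEMENTATION_PHASE = set(HRCPhase) - CONSIDERATION_PHASE
```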
In the consideration phase, the cobot is not yet in use, and there is no lived experience or history of safety accidents, especially in the case of a first cobot acquisition. The operator’s perception of the level of danger and the level of comfort when interacting with a cobot is described as perceived safety [34]. Regardless of the actual level of safety, humans can feel unsafe when working with cobots [34,35]. With no historical safety data available and with elements of uncertainty among the different stakeholders in the pre-implementation phase, the safety of cobots as perceived by operators who have yet to start working with one must be considered as well. Perceived safety is a domain that, as opposed to technical issues, has been largely neglected [34,35].

3.2. The Need for System-Wide Cobot Safety

3.2.1. Cobot Risk Factors Identified from a System-Wide Perspective

Based on a literature review spanning the last decade, Berx et al. identified five risk factor classes in HRC, each consisting of several sub-classes [19] (Figure 3). A socio-technical perspective was used to demonstrate the system-wide nature of the combination of these five classes.
Technology. The technological risk factors, in short, have to do with both the cobot-specific technology (hardware and software) and the underlying system technology [19], often resulting in a focus on mechanical safety, physical separation, and energy management. In addition to the potential safety issues already described earlier in this article, we also consider cybersecurity here. The latter concerns the protection of a business system from hackers who bypass network security to steal and/or misuse data. Hackers could take control of the cobot’s controls and potentially endanger the operator; alternatively, insufficient system capacity for data processing could slow down the cobot’s response, creating another potential risk.
Human. The human factor can be considered to be an essential link between the management of traditional risk and of emerging risk in complex systems [21]. However, the effects of cobots on humans are still underexposed [36]. The human-related risk factors are summarized in three subgroups: psychosocial risks, mental strain (cognitive under- or overload), and physical workload. Included in the psychosocial subgroup are trust-related risk factors (potential risk associated with the acceptance of the cobot and reflected in overreliance or distrust) and work-related stress (due to the robot’s characteristics and/or movements or fear of job loss) [19].
Collaborative workspace and application. This class includes risk factors related to access control (the processes that determine who can enter the collaborative workspace), the layout of the workspace (such as the placement of equipment or the presence of dangerous objects and substances), and maintenance (of the cobot itself or for maintenance workers when working in the collaborative workspace) [19]. The safe positioning of the cobot in the collaborative workspace, including dynamic collision avoidance strategies and motion planning algorithms, was reviewed and discussed in detail by Chemweno et al. [20]. Risks to operators manifest themselves in the workspace, whether or not they are influenced by one of the other dimensions. The collaborative workspace cannot exist without the human, technological, enterprise, or external dimensions, and all five classes interact.
Enterprise. Risk factors related to decisions at the enterprise level for which neither the worker nor the cobot is directly responsible are represented in this class. This class includes ethical factors (social acceptance, privacy, use of artificial intelligence) [19]. In addition, risk factors related to organizational strategy are classified in this group and include suboptimal training for working with a cobot [19]. The consequences related to the division of labor between workers and cobots also fall into this group. The corporate vision regarding the degree of adaptation of the existing safety rules as a result of the cobot’s introduction can also play a role. An additional factor is employee participation. Although more research is still needed [37], there is growing evidence for the assumption that increased worker participation enables HRC [38] and positively affects workers’ wellbeing [39] and cobot acceptance [40].
External. Risk factors that refer to circumstances and events that arise outside the company and over which the company has no direct influence are included in this category. There are, for example, regulations governing safety (directives, legislation). A potential risk factor is the vagueness of the legislation, which makes it difficult to translate it into concrete measures at the company level. Environmental factors include weather conditions that can affect the workspace (e.g., incident sunlight that can heat up electronic cobot components or affect the efficiency of sensors).
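As an illustration only, the five classes and their sub-classes can be captured in a simple data structure. The Python sketch below paraphrases the sub-class labels from the text above; the variable name is ours.

```python
# Illustrative encoding of the five system-wide risk factor classes for HRC
# identified by Berx et al. [19]; sub-class labels paraphrase the text above.
COBOT_RISK_TAXONOMY = {
    "Technology": [
        "cobot-specific hardware and software",
        "underlying system technology",
        "cybersecurity",
    ],
    "Human": [
        "psychosocial risks (trust, work-related stress)",
        "mental strain (cognitive under- or overload)",
        "physical workload",
    ],
    "Collaborative workspace and application": [
        "access control",
        "workspace layout",
        "maintenance",
    ],
    "Enterprise": [
        "ethical factors (social acceptance, privacy, AI)",
        "organizational strategy (e.g., training)",
        "division of labor between workers and cobots",
        "adaptation of existing safety rules",
        "employee participation",
    ],
    "External": [
        "regulation (directives, legislation)",
        "environmental factors (e.g., incident sunlight)",
    ],
}
```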

3.2.2. Cobot Safety Is Dynamic and Not Limited to the Consideration Phase

Safety is not absolute, but an emergent property of the dynamic interactions of system components with each other and their environment [18]. Organizations deploying cobots must be aware of how to safely deploy cobots in the dynamic organizational space throughout their life cycle. This view of safety contrasts with the focus on static safe outcomes, which can lead to a deceptive sense of safety. Based on our previous work, we have identified three key elements that characterize this dynamic nature of cobot safety: the degree of knowledge about new risks, the degree of confidence in automation, and the tradeoff between safety and efficiency. We believe that these three elements, in particular, contribute to safety readiness. Other authors have identified similar characteristics, albeit from a slightly different perspective. For example, amongst the conditions for dealing with safety issues, Grahn et al. identified the need to take all factors affecting safety into account, the need to ensure sufficient user acceptance, and the need to be sufficiently cost-effective to make cobots commercially interesting [17].
Knowledge about risks. The degree of knowledge of new risks influences the control that organizations will have when coping with emerging circumstances. Cobot applications were only introduced a decade ago and are still subject to fast-paced development. In safety, the typical learning cycle is based on the fact that unexpected technical and interaction failures with new technologies and processes lead to design updates and an improved operational understanding. Due to the lack of historical safety data on cobot applications, organizations will discover only some of the safety problems or system integration barriers during operations. This learning curve can be characterized in line with the three traditional ages of occupational health and safety: a technical age, a human factors age, and a management systems age [41]. These ages are considered to complement, not replace, each other. Since then, new ages in safety science have been suggested [42,43], adding the need to additionally consider adaptation and integration [43]. Several scholars have explained that the current research focus concerning cobots is predominantly on mechanical hazards and techno-centric solutions [18,44,45], in line with the first age of safety. This includes inherently safe design, safety separation controls, power- and force-limiting controls, and reciprocal human–robot awareness technologies. Whereas technological controls are well researched, several scholars report a lack of attention to human factors concerning robot applications [19,46,47]. Although mechanical hazards from controlled set-ups can be accurately predicted, it is much harder to understand interaction failures, maladaptation of decision making, and possible psychosocial effects. Although there is growing attention to these human factors [48], there is still a need to gain a better understanding of these risks from what is known as the second age of safety. Typically, these are not risks that can be controlled with one-size-fits-all solutions; rather, they require safety management frameworks to enable the monitoring, analysis, and management of these risks. This extends the safety focus to the issues corresponding with the third age of safety, the age of safety management systems, and the fourth and fifth ages of safety, which reflect contemporary safety thinking [43]. Contemporary safety frameworks are increasingly influenced by a view of safety as a potential to create system resilience to changing circumstances, or the ability to respond, monitor, learn, and anticipate [49]. This contrasts with a belief that safety is created merely by being compliant with technical specifications and static procedures [50]. This shifts the focus from only controlling safe outcomes to the potential to create an understanding of the adaptive capacity of systems and their interactions, fighting analytical reductionism [51,52,53].
Trust. Trust in automation influences operator acceptance or rejection of cobot technologies and results from several psychosocial factors, as established by previous research [19,36,54]. Trust factors include robot performance, robot reliance, individual differences, and human–robot collaboration [55], but they are also influenced by societal elements [46] and contextual factors [56]. Perceived safety is also identified as an important design aspect that promotes trust in industrial HRC [57,58]. Long-term trust can only emerge when a worker expects the cobot application to positively influence their situation. Resistance to change, increased workload, fear of job loss, and other negative consequences, even when just perceived, can negatively influence trust. At the human–robot collaboration level, trust loss can suddenly result from automation surprises [59] or after incidents. An accumulation of small errors can even produce a longer-lasting impact, whereas performance consistency positively influences trust in robots [55]. At the operator level, trust is influenced by technology literacy and skills [60], as well as by training and system knowledge [57]. Despite the diversity of psychosocial factors involved, trust in automation can be regarded as the main driver for psychosocial wellbeing and has been proven to be useful in understanding human–system performance in complex work environments [61]. Consequently, awareness of the level of trust displayed by an operator is an important requirement for safe cobot applications. This is in line with Parasuraman’s assertion that individual personality dimensions affect people’s tendency to embrace the use of new technologies [62]. While optimism and innovativeness function as mental enablers, discomfort (“A perceived lack of control over technology and a feeling of being overwhelmed by it” [62]) and insecurity (“Distrust of technology and skepticism about its ability to work properly” [62]) function as mental inhibitors of acceptance of new technologies. Both discomfort and insecurity can be linked with safety. Hence, communication and training strategies about technological and task transformations, as well as simulation and experimentation with cobots in the pre-implementation phase, are strongly encouraged to foster building trust in their integrated safety functions.
Safety–efficiency tradeoff. Simply put, the safety–efficiency tradeoff means weighing productivity against safety. Although the goal of most cobot systems is to increase productivity, safety can put a limit on efficiency. In cobot applications, stringent safety limits, such as speed or payload reductions, can decrease productivity. Frequently exceeded safety separation thresholds and safety emergency stops can decrease profitability. Examples where the introduction of cobots paradoxically leads to lower productivity have been described [63], and the slower-than-expected uptake of cobot technology has been attributed to safety requirements [12]. In a recent study, company managers expressed their concern about balancing worker safety with avoiding disruption of the robot’s work [64]. While some scholars argue that safety might never be fully guaranteed from a human-centric perspective, others may consider cobots overly safe when productivity is hindered [13]. Creating an ultra-safe system without taking productivity into account is fairly simple. The more important issue for organizations using cobots is to define an acceptable margin for increased productivity while safety can still be managed.
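To make the tradeoff tangible, the back-of-the-envelope Python sketch below discounts an ideal cycle rate by hypothetical safety-induced losses; all numbers are invented for illustration and do not come from the cited studies.

```python
# Hypothetical example: productivity impact of safety measures on a cobot cell.
ideal_cycles_per_hour = 120      # throughput at full speed with no stops
speed_reduction_factor = 0.80    # e.g., speed limited near the operator
stops_per_hour = 6               # safety separation threshold exceeded
seconds_lost_per_stop = 45       # stop, reset, and restart time

productive_seconds = 3600 - stops_per_hour * seconds_lost_per_stop
effective_cycles = (ideal_cycles_per_hour * speed_reduction_factor
                    * productive_seconds / 3600)
print(f"Effective throughput: {effective_cycles:.0f} cycles/hour")
# -> about 89 cycles/hour, a ~26% drop relative to the ideal rate
```

Under these assumed numbers, safety measures cost roughly a quarter of the ideal throughput; defining whether such a margin is acceptable is precisely the tradeoff organizations must weigh.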
These three key elements play a role not only during the consideration phase, but also during the complete life cycle of a cobot. The use of the proposed CSRAT is, however, aimed at supporting practitioners in the consideration phase of cobot acquisition. The purchase of a cobot is an ad hoc decision based on the best knowledge available at the time. The awareness and knowledge about safety-related aspects are represented in the risk factor classification in Section 3.2.1 and already reflect the ability to identify (new) system-wide risk factors, including trust. Certain ingredients of the safety–efficiency tradeoff (for example, those related to the safety component) are present in the risk factor classification, but will only be fully expressed in the implementation phase when the cobot is in use.

3.3. Maturity Models

Based on dictionary definitions, the term ‘maturity’ is usually described from two perspectives [65]. On the one hand, there is a reference to something or someone that has reached a state of completeness, and on the other hand, there is a reference to the process or evolution of bringing something to maturity or the desired final state [30,66]. Becker et al. defined maturity models as follows: “A maturity model consists of a sequence of maturity levels for a class of objects. It represents an anticipated, desired, or typical evolution path of these objects shaped as discrete stages. Typically, these objects are organizations or processes” [67].
Since its first development in the early 1970s, the concept of maturity models has evolved and is now widely applied in several management domains [27,68,69]. Today, both public and private organizations use maturity models to understand the uncertainty caused by the new challenges of digital transformation [70]. They are especially suitable when the technology is in an emerging phase and only a few observable implementations are available on which to base the decision to invest in the technology [71,72]. Maturity models can generate awareness of the analyzed topic, and they can help to provide guidance for a systematic approach to improvement, ensure a certain quality, and assess one’s own capabilities [27]. These models usually imply a continuous improvement process; the next level is more sophisticated (i.e., closer to the target level) than the previous one [73,74,75,76,77].
Despite the considerable demand for maturity models and their practical relevance, there is also academic criticism, especially regarding methodological shortcomings [70]. Several scholars have studied the expressed shortcomings, suggested greater methodological rigor in the maturity model development process, and proposed design principles and guidelines [30,67,72,78,79].

3.3.1. Different Types of Maturity Models

According to the literature, the purpose of a maturity model can be directed at achieving descriptive, prescriptive, or comparative results [30,68,78,79]. A descriptive model describes the as-is situation by providing an assessment of a particular process, with no suggestion for improving maturity.
A prescriptive model indicates how desirable future levels of maturity can be reached (i.e., by enabling a roadmap for improvement). Comparative models are used to benchmark maturity across different organizations and industries. Depending on their purpose, this means that maturity models can be used as an assessment tool [75], for benchmarking or detecting best practices [80], and to provide guidelines for improvement [80,81]. Maturity models can also be used as tools for determining capabilities and readiness levels [82].
There is no uniform approach to maturity assessment models [75], and some authors have attempted to classify the different types of models. A classification of maturity models was proposed by Fraser et al. in 2002 by differentiating between three types of models: “maturity grids, hybrids and Likert-type questionnaires, and CMM [Capability Maturity Models]-like models” [83]. Schmitt et al. defined the following model categories to be considered by SMEs to guide Industry 4.0 projects: roadmaps, readiness models, maturity models, frameworks, process models, and guidelines [77]. Moultrie et al. distinguished between two general approaches for the development of maturity-based models: maturity grids and capability maturity models [84].
Maturity grids originated in the late 1970s in the field of quality control, and they define a number of levels of maturity for processes in a given topic area [30,83]. Fraser et al. described grid models as follows: “The principal idea of the maturity grid is that it describes in a few phrases, the typical behaviour exhibited by a firm at a number of levels of ‘maturity’, for each of several aspects of the area under study” [83]. Maturity grids are descriptive and do not prescribe what a particular process should look like; rather, they identify characteristics for developing high-performance processes and focus on scoring specific statements of good practice [28,30]. Grid models are comparable with Likert-scale questionnaires with anchor phrases [83]. A Likert scale is typically based on explicitly described ‘best-case’ or top-level statements, with performance scored on a scale from 1 to n (e.g., ranging from strongly disagree to strongly agree), and it can be regarded as a grid model element [75,83]. Although maturity grids have not received much attention in the academic literature since their conception [27,30], they were studied in depth by Maier et al. [30,85] and by Moultrie et al. [86] in the last decade and have been applied by other scholars [87,88,89].
CMMs are related to grid models and were originally applied to the principles of process maturity for the management of software projects in the 1990s. Since then, CMMs have been applied to assess the maturity of processes, projects, and organizations [30,90]. CMMs have gradually expanded beyond their initial focus on software development to include systems engineering processes, for which they are labeled capability maturity model integration (CMMI) [82,88]. CMMs are evolutionary tools that aim to assess and improve capabilities (i.e., skills or competencies) in order to reach business or process excellence [91], and they typically have five levels: initial, managed, defined, quantitatively managed, and optimizing.
In contrast to CMM-like models, maturity grids are less complex and are considered less sophisticated and formal [72,82,83]. The main difference from CMM-like models is that grid models typically contain text descriptions for each maturity level. The use of text descriptions to define and illustrate maturity levels is considered one of the advantages of maturity grids, as it allows for easier understanding [92].
Specifically, in the realm of Industry 4.0 and digital maturity, several authors use the terms maturity and readiness side by side [74,93,94,95]. Schumacher et al. regarded readiness models as a synonym for maturity models when their goal is to capture the starting point and allow the development process to be initialized. A readiness assessment can, therefore, be regarded as taking place before engaging in the maturing process. Whereas a maturity assessment aims to capture the as-is state during the maturing process [74], a readiness assessment aims to analyze the preparedness level of the conditions or capabilities needed to reach a particular goal [76].

3.3.2. Maturity Grid Model Properties, Design Process, and Guidelines

Maier et al. proposed a roadmap for the development of a maturity grid model consisting of four phases to offer guidance and to address the criticism of maturity models expressed earlier [30]. The intended audience, the purpose of the assessment, the scope, and the success criteria need to be determined in the first phase. The second phase concerns the development of the model and defines the architecture of the maturity grid. In this phase, the process areas (or dimensions), the maturity levels, and the cell descriptions are formulated. Defining the mechanism to administer the maturity grid is also part of this phase (on paper, electronically, group-based, individually, etc.). The evaluation of the grid is the purpose of the third phase. This is an iterative refinement and testing process that seeks feedback on the requirements outlined in the first phase. Keeping the grid up to date and assuring continued accuracy and relevance is the subject of the fourth (maintenance) phase [30].

3.3.3. Safety-Related Maturity Models

After having discussed maturity models in general, this section briefly looks into safety- and risk-management-related maturity models. Santos-Neto and Costa performed a systematic literature review in 2019 and classified 409 maturity models into 14 domains or performance areas [68]. Nearly 40% of the classified papers feature IT/IS management, whereas risk management accounts for circa 7%. A thorough review of risk maturity models was performed by Cienfuegos in 2013. This analysis reveals that risk maturity models have been developed since the late 1990s and have mostly adapted the CMM principles [88]. This is confirmed by a similar analysis carried out by Kaassis and Badri [96]. Both papers mention risk identification as one of the main activities of risk management [88,96]. However, when this activity should take place is not addressed. The examples of measurement given refer to the number of hazards uncovered and the number of incident reports. The latter is only possible when the application is already operational and is not feasible in the consideration phase. During the development of a fuzzy model to assess enterprise risk maturity in the construction industry in China, important best practices were identified [97]; risk identification, analysis, and response feature in second place on that list. Other safety-related models include an occupational safety and health capability maturity grid for design in the construction industry [98] and a maturity model for safety performance measurement [99].
Many maturity models have been developed within the realm of Industry 4.0 [73,74,100], and although some specifically mention human–machine interaction as a required capability [101] or cobots as one of the aspects that influence readiness for 4.0 [102], they do not specifically address safety.

4. Discussion

4.1. Appropriateness of a Maturity Grid Model as a Methodological Starting Point

The review in Section 3 shows that the logic of the maturity model is well suited for diagnosing readiness. In addition, it shows that a maturity model is useful for a highly innovative domain such as cobots. Typically, in maturity models, there is a notion of an ‘end state’, i.e., a desirable end state (of excellence or compliance) to achieve. Defining a higher and final state of maturity may not be desirable or may vary depending on circumstances [71]. In a fast-changing technological environment, what may be considered as mature today might be outdated tomorrow [103]. In addition, this notion of an end state is not well suited for safety, since ‘absolute safety’ does not exist. We, therefore, adhere to the distinction made by Schumacher et al. between readiness and maturity, since we are looking at the assessment of readiness before the cobot is in use and, thus, before the maturing process has started [74]. In line with this train of thought, we believe that the use of the term ‘readiness’ is appropriate to assess the state of preparedness of the capabilities or dimensions that promote safety for cobots. The term ‘readiness’ is also linked to the principles of technology readiness levels (TRLs) [30,77]. The TRL scale was established as a metric of the readiness of the technology itself and to provide a method for consecutively increasing the maturity of that specific technology [104]. Our purpose, however, is not to assess the readiness of the cobot technology itself.
Less complex descriptive maturity grid models are well suited as a basis for the CSRAT model. The use of text descriptions for each maturity level is especially advantageous because it facilitates understanding [92]. Since, to the best of our knowledge, there is no existing maturity model for measuring safety readiness for cobots, we need to build a new one.

4.2. Proposal of the Building Blocks of the Cobot Safety Readiness Tool (CSRAT)

For the development of the CSRAT, the grid process roadmap of Maier et al. [30], who also called for a more rigorous approach to maturity grid development, will be adopted. The translation of this roadmap into the proposed building blocks of the CSRAT is illustrated in Figure 4. To achieve the goals of this article, the focus is on the activities described in phases 1 and 2 of Maier’s roadmap, which are included in the ‘Prepare’ and ‘Iterative Model Development’ sections of Figure 4.

4.2.1. Prepare

Identify need. We argued for the need for a readiness model and the suitability of a maturity grid in Section 3 and Section 4.1.
Specify audience. The audience is composed of the different stakeholders who participate in the assessment [30]. This tool is aimed at both an external and an internal audience. As Mettler suggests, a model should also consider the specific needs of a management-oriented audience, a technology-oriented audience, or both [72]. The final composition of the stakeholder group should be determined by the organization applying the CSRAT, as long as the multidisciplinary aspect (management- versus technology-oriented) is respected. The external audience is generally more technology-oriented (system integrators, consultants) and uses the CSRAT to guide organizations through the cobot procurement process. The internal audience’s management-oriented stakeholders (e.g., innovation and safety managers) are concerned with making informed decisions about the implementation of the cobot. The internal technology-oriented audience can be represented by the robot application engineers and designers, with a focus on technical specifications.
Define objective. The objective of the CSRAT is to provide an easy-to-use and practically applicable tool for assessing cobot safety readiness.
Clarify scope. The scope of the tool is restricted to the assessment of safety readiness in the consideration stage of cobot implementation.
Administration mechanism. The delivery mechanism will be a self-assessment without the need for external support. Descriptive maturity models are well suited for self-assessment purposes [28,90]. Self-assessment tools are also a commonly used evaluation method in human–robot interaction studies, and while they can provide very useful information, an important drawback is their validation [105].
Define success criteria. Usability and usefulness are suggested as general model requirements by Maier et al. [30]. Usability is about the understanding of the concepts and language used, and usefulness is about the perceived degree of helpfulness in stimulating learning. Success is defined as the ability of the tool to provide understandable and valuable information and increase awareness about system-wide risk factors.

4.2.2. Model Development

Select process area (or dimension). According to Fraser, a process area is organized by common characteristics that specify a set of key practices for achieving a series of objectives [83]. A literature review can be used as a technique to determine the process areas (or dimensions) of a maturity (grid) model [30,66,106]. The five risk factor classes described in Section 3.2.1 are based on a literature review. In Section 3.2.2, it was argued that cobot safety is a dynamic property along the complete life cycle of a cobot and is characterized by three key elements. The use of the CSRAT, however, is aimed at supporting practitioners in the consideration phase of cobot implementation. Following the general purpose of maturity grid models, the CSRAT is a diagnostic and descriptive tool for the as-is situation. The dynamic aspects of cobot safety once installed are beyond the scope of this article. Consequently, the five classes of risk factors described in Section 3.2.1 will form the process areas or proposed maturity grid dimensions of the CSRAT model. Awareness and knowledge of these risk factors will be instrumental for practitioners by guiding them in the consideration phase.
Maturity levels. The formulation of the maturity levels and rating scales for each dimension is an important step in the development of a maturity grid. According to Maier et al., “Levels need to be distinct, well defined, and need to show a logical progression as clear definition eases interpretation of results” [30]. The number of levels typically varies between four and five [28]. For our purpose, we propose the use of a four-level grid. One mechanism for formulating the levels is, as suggested by Maier et al., to first define the extremes (the worst and best situations) and then the characteristics of the other stages [30]. In our view, these levels represent the measure of safety readiness, which we defined earlier as the degree of preparedness of an organization to deploy a cobot from a system-wide safety awareness and knowledge perspective. As an indicator of safety awareness and knowledge, we propose the extent to which safety management frameworks and tools (summarized as ‘tools’) for identifying and mitigating risks are known and adapted to cobots. We propose the following four levels:
  • Level 1: No idea about cobot risk factors;
  • Level 2: Aware of the risk factors for cobots, but not sure how to cope with them and which safety management frameworks and tools to use;
  • Level 3: Previous safety management frameworks and tools are in place and can be used, but adaption is needed to work with cobots;
  • Level 4: Tools are either newly installed or adapted to working with cobots.
In addition to using the CSRAT to determine readiness levels, we also propose adding a yes/no question about the availability of the internally needed knowledge and, in the event of a negative response, a follow-up question about whether it is known where to find this expertise:
  • We have the right knowledge in-house (Yes/No);
  • If no: We know where to find this knowledge externally (Yes/No).
These questions increase the usability of the results of the CSRAT and provide the organization with additional information about the use of a cobot.
Cell text. The maturity grid is composed of a cell for each maturity level per sub-dimension. Each cell provides a descriptive text on the characteristic extent of knowledge and application of a safety management tool or activity for that particular maturity level. These texts are also known as an ‘anchored scale’ [30] or ‘anchor phrases’ [86]. The cell text is used in a descriptive manner to illustrate a progressive maturity from no awareness of that particular risk factor (level 1), to awareness (level 2), to existing safety management tools that still need adaptation (level 3), to tools that already include the risk factor (level 4). The cell text for levels 3 and 4 will include examples of some relevant tools to consider. For instance, in the human dimension, for the psychosocial sub-dimension, these could be occupational health and safety surveys, satisfaction or worker commitment surveys, trust measures, or investigations of job demands and resources (see Table 1 as an illustration). Workload surveys can be used to investigate risk factors related to cognitive ergonomics. Physical-ergonomics-related risks can be evaluated by performing a postural analysis (RULA index) [107]. In the enterprise dimension, for the sub-dimension of safety strategies, this could be the number of hours of training for using cobots. The definition of all levels will be evaluated by the practitioners’ audience in a later step (this is outside of the scope of this article). The use of written evaluation criteria has the benefit of providing clearer and more objective alternatives for the respondents in comparison to Likert scales, thus raising awareness and knowledge of the risk factors and avoiding the need for external consultants [99]. Thus, the respondent ‘scores’ the maturity level by choosing the best-fitting description.
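To illustrate how these building blocks could fit together, the following Python sketch models one sub-dimension of the proposed grid under the four levels defined above. The class name and the abbreviated anchor texts are ours and are not the final descriptors, which remain to be defined (see Section 4.4).

```python
from dataclasses import dataclass

@dataclass
class GridCell:
    dimension: str       # one of the five risk factor classes (Section 3.2.1)
    sub_dimension: str   # e.g., 'psychosocial' within the human dimension
    level: int           # readiness level, 1 (lowest) to 4 (highest)
    anchor_text: str     # descriptive 'anchor phrase' shown to the respondent

# Example cells for a single sub-dimension (abbreviated, illustrative texts).
psychosocial_cells = [
    GridCell("Human", "psychosocial", 1,
             "No idea that psychosocial risk factors apply to cobots."),
    GridCell("Human", "psychosocial", 2,
             "Aware of psychosocial risks, but unsure which tools to use."),
    GridCell("Human", "psychosocial", 3,
             "OHS, commitment, or trust surveys exist but need adaptation."),
    GridCell("Human", "psychosocial", 4,
             "Surveys and trust measures are adapted to cobot work."),
]

def score(cells: list[GridCell], chosen_level: int) -> GridCell:
    """The respondent 'scores' by choosing the best-fitting description."""
    return next(cell for cell in cells if cell.level == chosen_level)

# The supplementary yes/no questions on in-house knowledge (see above) would
# be recorded alongside the chosen level for each sub-dimension.
```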

4.3. Theoretical and Managerial Implications

The outcome of this research will be beneficial to both academic researchers and industry practitioners.
The contribution of this work to researchers is the application of a maturity-grid-based methodology for assessing safety awareness and knowledge of industrial cobots. To the best of our knowledge, this is the first time that this approach has been used. This work further contributes by including a system-wide view of HRC that combines technical aspects with human and organizational factors in the proposed assessment tool. A lack of consideration of these factors in HRC has already been pointed out in previous research [108,109].
From a managerial perspective, this study provides decision makers with valuable insights for enhancing their safety strategy, for instance by pointing out the need to add new risk factors to existing safety assessment procedures when implementing a cobot. Practitioners benefit from having an easy-to-use assessment tool for the consideration stage of cobot acquisition. They can use the proposed CSRAT as a starting point for exploring cobot technology at an early stage and from a broader perspective, beyond the technological risk factors. The proposed tool constitutes a self-assessment support, enabling practitioners to identify gaps in their knowledge of risk factors and the practices required to safely deploy a cobot. In addition, the grid could be used from a perspective of continuous learning to improve practices. Furthermore, practitioners avoid the risk of losing a competitive advantage by refraining from investing in potentially innovative and beneficial technologies because of a lack of knowledge.

4.4. Next Steps

First, all of the grid text descriptors must be defined. Then, the initial version of the CSRAT should be discussed and iteratively adjusted based on actual experience according to the model building blocks in Figure 4. Both the external experts and the industry practitioners, as previously defined in the target audiences, will be involved in this process. An evaluation by experts and practitioners will test the utility and content quality of this type of tool. The first field tests have already been planned.

5. Limitations

The proposed CSRAT is static and focuses on the consideration phase, which limits the extensibility of the tool. Once a cobot is in operation, change takes place, and dynamic shifts can occur: changes in underlying information systems may affect the safety of a cobot, or operator trust may change if unplanned or unexpected events or incidents occur. Since risks can be dynamic, safety management frameworks not only need the capability to capture critical events, but also need to anticipate and capture potential events. In addition to methodological flaws, maturity models have been criticized for being static and not well suited for capturing change [103]. One possible way to address this criticism is to add an extra question when level 4 is marked in the grid. This question could examine whether or not regular monitoring of possible changes in cobot safety is included in the safety management processes.
Avenues for further research also include the quantification of the information gathered by the tool across different stakeholders within an organization and looking into the prioritization of the various dimensions and sub-dimensions.

6. Summary and Conclusions

To allow collaborative robots to achieve their full potential in industrial applications, there is a need to address the safety component early on in the decision process. This need is explained by knowledge deficiencies and confusion about safety.
To raise awareness and knowledge about cobot safety in the consideration phase, we propose the building of a self-assessment tool. We defend the use of a maturity grid model as a methodological starting point for this safety readiness assessment tool. Maturity grid models can have a descriptive purpose in order to determine and evaluate the current state of readiness of dimensions and sub-dimensions of a model according to the appropriate levels [95]. This article proposes the building blocks of a system-wide Cobot Safety Readiness Tool (CSRAT), suggests the next steps, and discusses some limitations. This article contributes to the creation of awareness of safety risks from a holistic or system-wide perspective and supports the decision making concerning the acquisition of a cobot.

Author Contributions

Conceptualization, N.B. and A.A.; methodology, N.B. and A.A.; validation, W.D. and L.P.; writing—original draft preparation, N.B.; writing—review and editing, A.A., W.D., and L.P.; supervision, W.D. and L.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Paradigms 4.0 project of the Fonds Wetenschappelijk Onderzoek (FWO), grant number S006018N.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. De Carolis, A.; Macchi, M.; Negri, E.; Terzi, S. Guiding manufacturing companies towards digitalization a methodology for supporting manufacturing companies in defining their digitalization roadmap. In Proceedings of the 2017 International Conference on Engineering, Technology and Innovation (ICE/ITMC), Madeira, Portugal, 27–29 June 2017; pp. 487–495. [Google Scholar]
  2. Kamble, S.S.; Gunasekaran, A.; Gawankar, S.A. Sustainable Industry 4.0 framework: A systematic literature review identifying the current trends and future perspectives. Process Saf. Environ. Prot. 2018, 117, 408–425. [Google Scholar] [CrossRef]
  3. euRobotics SPARC Robotics 2020 Multi-Annual Roadmap For Robotics in Europe; EU-Robotics AISBL: The Hauge, The Netherlands, 2016.
  4. Breque, M.; De Nul, L.; Petridis, A. Industry 5.0 towards a Sustainable, Human-Centric and Resilient European Industry; Publications Office of the European Union: Brussels, Belgium, 2021. [Google Scholar]
  5. Bednar, P.M.; Welch, C. Socio-Technical Perspectives on Smart Working: Creating Meaningful and Sustainable Systems. Inf. Syst. Front. 2020, 22, 281–298. [Google Scholar] [CrossRef] [Green Version]
  6. ISO ISO/TS 15066:2016—Robots and Robotic Devices—Collaborative Robots. Available online: https://www.iso.org/standard/62996.html (accessed on 21 October 2021).
  7. Inkulu, A.K.; Bahubalendruni, M.V.A.R.; Dara, A.; SankaranarayanaSamy, K. Challenges and opportunities in human robot collaboration context of Industry 4.0—A state of the art review. Ind. Robot Int. J. Robot. Res. Appl. 2022, 49, 226–239. [Google Scholar] [CrossRef]
  8. Mukherjee, D.; Gupta, K.; Chang, L.H.; Najjaran, H. A Survey of Robot Learning Strategies for Human-Robot Collaboration in Industrial Settings. Robot. Comput. Integr. Manuf. 2022, 73, 102231. [Google Scholar] [CrossRef]
  9. Vicentini, F. Collaborative Robotics: A Survey. J. Mech. Des. 2021, 143, 040802. [Google Scholar] [CrossRef]
  10. Vanderborght, B. Unlocking the Potential of Industrial Human-Robot Collaboration for Economy and Society; Publications Office of the European Union: Brussels, Belgium, 2019. [Google Scholar]
  11. Vogt, G. International Federation of Robotics Editorial World Robotics 2020. World Robot. Rep. 2020, 5–9. Available online: https://ifr.org/img/worldrobotics/Editorial_WR_2020_Industrial_Robots.pdf (accessed on 20 April 2022).
  12. Saenz, J.; Elkmann, N.; Gibaru, O.; Neto, P. Survey of methods for design of collaborative robotics applications—Why safety is a barrier to more widespread robotics uptake. In Proceedings of the 2018 4th International Conference on Mechatronics and Robotics Engineering, Valenciennes, France, 7–11 February 2018; ACM: New York, NY, USA, 2018; pp. 95–101. [Google Scholar]
  13. Berx, N.; Decré, W.; Pintelon, L. Examining the Role of Safety in the Low Adoption Rate of Collaborative Robots. Procedia CIRP 2022, 106, 51–57. [Google Scholar] [CrossRef]
  14. Aaltonen, I.; Salmi, T. Experiences and expectations of collaborative robots in industry and academia: Barriers and development needs. Procedia Manuf. 2019, 38, 1151–1158. [Google Scholar] [CrossRef]
  15. Dominguez, E. Engineering a Safe Collaborative Application. In The 21st Century Industrial Robot: When Tools Become Collaborators; Aldinhas Ferreira, M.I., Fletcher, S.R., Eds.; Intelligent Systems, Control and Automation: Science and Engineering; Springer International Publishing: Cham, Switzerland, 2022; Volume 81, pp. 173–189. ISBN 978-3-030-78512-3. [Google Scholar]
  16. Matheson, E.; Minto, R.; Zampieri, E.G.G.; Faccio, M.; Rosati, G. Human–Robot Collaboration in Manufacturing Applications: A Review. Robotics 2019, 8, 100. [Google Scholar] [CrossRef] [Green Version]
  17. Grahn, S.; Johansson, K.; Eriksson, Y. Safety Assessment Strategy for Collaborative Robot Installations. In Robots Operating in Hazardous Environments; InTech: Rijeka, Croatia, 2017. [Google Scholar]
  18. Adriaensen, A.; Costantino, F.; Di Gravio, G.; Patriarca, R. Teaming with industrial cobots: A socio-technical perspective on safety analysis. Hum. Factors Ergon. Manuf. Serv. Ind. 2021, 32, 173–198. [Google Scholar] [CrossRef]
  19. Berx, N.; Decré, W.; Morag, I.; Chemweno, P.; Pintelon, L. Identification and classification of risk factors for human-robot collaboration from a system-wide perspective. Comput. Ind. Eng. 2022, 163, 107827. [Google Scholar] [CrossRef]
  20. Chemweno, P.; Pintelon, L.; Decre, W. Orienting safety assurance with outcomes of hazard analysis and risk assessment: A review of the ISO 15066 standard for collaborative robot systems. Saf. Sci. 2020, 129, 104832. [Google Scholar] [CrossRef]
  21. Brocal, F.; González, C.; Komljenovic, D.; Katina, P.F.; Sebastián, M.A. Emerging Risk Management in Industry 4.0: An Approach to Improve Organizational and Human Performance in the Complex Systems. Complexity 2019, 2019, 2089763. [Google Scholar] [CrossRef] [Green Version]
  22. International Organization for Standardization (ISO). Draft International Standard ISO/DIS 10218-1 Robotics—Safety Requirements for Robot Systems in an Industrial Environment—Part 1: Robots; International Organization for Standardization: Geneva, Switzerland, 2020. [Google Scholar]
  23. International Organization for Standardization (ISO). Draft International Standard ISO/DIS 10218-2 Robotics—Safety Requirements for Robot Systems in an Industrial Environment—Part 2: Robot Systems, Robot Applications and Robot Cells Integration; International Organization for Standardization: Geneva, Switzerland, 2020. [Google Scholar]
  24. Kadir, B.A.; Broberg, O.; Souza da Conceição, C. Designing Human-Robot Collaborations in Industry 4.0: Explorative Case Studies. In Proceedings of the International Design Conference, Dubrovnik, Croatia, 21–24 May 2018; Volume 2, pp. 601–610. [Google Scholar]
  25. Guiochet, J.; Machin, M.; Waeselynck, H. Safety-critical advanced robots: A survey. Rob. Auton. Syst. 2017, 94, 43–52. [Google Scholar] [CrossRef] [Green Version]
  26. Gualtieri, L.; Rauch, E.; Vidoni, R. Development and validation of guidelines for safety in human-robot collaborative assembly systems. Comput. Ind. Eng. 2022, 163, 107801. [Google Scholar] [CrossRef]
  27. Wendler, R. The maturity of maturity model research: A systematic mapping study. Inf. Softw. Technol. 2012, 54, 1317–1339. [Google Scholar] [CrossRef]
  28. van Dyk, L. The Development of a Telemedicine Service Maturity Model. Ph.D. Thesis, Stellenbosch University, Stellenbosch, South Africa, 2013. [Google Scholar]
  29. Dochy, F. A guide for writing scholarly articles or reviews. Educ. Res. Rev. 2006, 4, 1–21. [Google Scholar] [CrossRef]
  30. Maier, A.M.; Moultrie, J.; Clarkson, P.J. Assessing Organizational Capabilities: Reviewing and Guiding the Development of Maturity Grids. IEEE Trans. Eng. Manag. 2012, 59, 138–159. [Google Scholar] [CrossRef]
  31. IEEE. Adoption of ISO/IEC 15288:2002, Systems Engineering-System Life Cycle Processes; The Institute of Electrical and Electronics Engineers, Inc.: New York, NY, USA, 2005.
  32. Saenz, J.; Fassi, I.; Prange-Lasonder, G.B.; Valori, M.; Bidard, C.; Lassen, A.B.; Bessler-Etten, J. COVR Toolkit—Supporting safety of interactive robotics applications. In Proceedings of the 2021 IEEE 2nd International Conference on Human-Machine Systems (ICHMS), Magdeburg, Germany, 8–10 September 2021; pp. 1–6. [Google Scholar]
  33. Matthias, B.; Kock, S.; Jerregard, H.; Kallman, M.; Lundberg, I. Safety of collaborative industrial robots: Certification possibilities for a collaborative assembly robot concept. In Proceedings of the 2011 IEEE International Symposium on Assembly and Manufacturing (ISAM), Tampere, Finland, 25–27 May 2011; pp. 1–6. [Google Scholar]
  34. Bartneck, C.; Kulić, D.; Croft, E.; Zoghbi, S. Measurement Instruments for the Anthropomorphism, Animacy, Likeability, Perceived Intelligence, and Perceived Safety of Robots. Int. J. Soc. Robot. 2009, 1, 71–81. [Google Scholar] [CrossRef] [Green Version]
  35. You, S.; Kim, J.-H.; Lee, S.; Kamat, V.; Robert, L.P. Enhancing perceived safety in human–robot collaborative construction using immersive virtual environments. Autom. Constr. 2018, 96, 161–170. [Google Scholar] [CrossRef]
  36. Baltrusch, S.J.; Krause, F.; de Vries, A.W.; van Dijk, W.; de Looze, M.P. What about the Human in Human Robot Collaboration? A literature review on HRC’s effects on aspects of job quality. Ergonomics 2021, 65, 719–740. [Google Scholar] [CrossRef] [PubMed]
  37. Carayon, P.; Hancock, P.; Leveson, N.; Noy, I.; Sznelwar, L.; van Hootegem, G. Advancing a sociotechnical systems approach to workplace safety—Developing the conceptual framework. Ergonomics 2015, 58, 548–564. [Google Scholar] [CrossRef] [PubMed]
  38. Charalambous, G.; Fletcher, S.; Webb, P. Development of a Human Factors Roadmap for the Successful Implementation of Industrial Human-Robot Collaboration. In Advances in Ergonomics of Manufacturing: Managing the Enterprise of the Future; Springer: Cham, Switzerland, 2016; pp. 195–206. [Google Scholar]
  39. Kadir, B.A.; Broberg, O. Human well-being and system performance in the transition to industry 4.0. Int. J. Ind. Ergon. 2020, 76, 102936. [Google Scholar] [CrossRef]
  40. Kopp, T.; Baumgartner, M.; Kinkel, S. Success factors for introducing industrial human-robot interaction in practice: An empirically driven framework. Int. J. Adv. Manuf. Technol. 2021, 112, 685–704. [Google Scholar] [CrossRef]
  41. Hale, A.; Hovden, J. Management and culture: The third age of safety. A review of approaches to organizational aspects of safety, health and environment. In Occupational Injury; CRC Press: Boca Raton, FL, USA, 1998; p. 38. ISBN 9780429218033. [Google Scholar]
  42. Jain, A.; Leka, S.; Zwetsloot, G.I.J.M. Work, Health, Safety and Well-Being: Current State of the Art. In Managing Health, Safety and Well-Being. Aligning Perspectives on Health, Safety and Well-Being; Springer: Dordrecht, The Netherlands, 2018; pp. 1–31. [Google Scholar]
  43. Borys, D.; Else, D.; Leggett, S. The fifth age of safety: The adaptive age. J. Health Serv. Res. Policy 2009, 1, 19–27. [Google Scholar]
  44. Margherita, E.G.; Braccini, A.M. Socio-technical perspectives in the Fourth Industrial Revolution—Analysing the three main visions: Industry 4.0, the socially sustainable factory of Operator 4.0 and Industry 5.0. In Proceedings of the 7th International Workshop on Socio-Technical Perspective in IS Development (STPIS 2021), Trento, Italy, 8 April 2021. [Google Scholar]
  45. Nayernia, H.; Bahemia, H.; Papagiannidis, S. A systematic review of the implementation of industry 4.0 from the organisational perspective. Int. J. Prod. Res. 2021, 1–32. [Google Scholar] [CrossRef]
  46. Martinetti, A.; Chemweno, P.K.; Nizamis, K.; Fosch-Villaronga, E. Redefining Safety in Light of Human-Robot Interaction: A Critical Review of Current Standards and Regulations. Front. Chem. Eng. 2021, 3, 1–12. [Google Scholar] [CrossRef]
  47. Sgarbossa, F.; Grosse, E.H.; Neumann, W.P.; Battini, D.; Glock, C.H. Human factors in production and logistics systems of the future. Annu. Rev. Control 2020, 49, 295–305. [Google Scholar] [CrossRef]
  48. Cardoso, A.; Colim, A.; Bicho, E.; Braga, A.C.; Menozzi, M.; Arezes, P. Ergonomics and Human Factors as a Requirement to Implement Safer Collaborative Robotic Workstations: A Literature Review. Safety 2021, 7, 71. [Google Scholar] [CrossRef]
  49. Hollnagel, E. The Four Cornerstones of Resilience Engineering. In Resilience Engineering Perspectives, Volume 2: Preparation and Restoration; Nemeth, C.P., Hollnagel, E., Dekker, S., Eds.; CRC Press: Boca Raton, FL, USA, 2009; pp. 117–134. [Google Scholar]
  50. Hale, A.; Borys, D. Working to rule or working safely? Part 2: The management of safety rules and procedures. Saf. Sci. 2013, 55, 222–231. [Google Scholar] [CrossRef] [Green Version]
  51. Dekker, S. Foundation of Safety Science of Understanding Accidents and Disasters; Routledge: London, UK, 2019; ISBN 9781138481787. [Google Scholar]
  52. Leveson, N.G. Engineering a Safer World Systems Thinking Applied to Safety; The MIT Press: Cambridge, MA, USA, 2011; ISBN 978-0-262-01662-9. [Google Scholar]
  53. Patriarca, R.; Bergström, J.; Di Gravio, G.; Costantino, F. Resilience engineering: Current status of the research and future challenges. Saf. Sci. 2018, 102, 79–100. [Google Scholar] [CrossRef]
  54. Eimontaite, I.; Fletcher, S. Preliminary development of the Psychological Factors Assessment Framework for manufacturing human-robot collaboration. Indust. SMEs 2020, 105–144. [Google Scholar]
  55. Kim, W.; Kim, N.; Lyons, J.B.; Nam, C.S. Factors affecting trust in high-vulnerability human-robot interaction contexts: A structural equation modelling approach. Appl. Ergon. 2020, 85, 103056. [Google Scholar] [CrossRef] [PubMed]
  56. Weiss, A.; Wortmeier, A.-K.; Kubicek, B. Cobots in Industry 4.0: A Roadmap for Future Practice Studies on Human–Robot Collaboration. IEEE Trans. Hum. Mach. Syst. 2021, 51, 335–345. [Google Scholar] [CrossRef]
  57. Charalambous, G.; Fletcher, S.R. Trust in Industrial Human—Robot Collaboration. In The 21st Century Industrial Robot: When Tools Become Collaborators; Springer International Publishing: New York, NY, USA, 2021; pp. 87–103. ISBN 9783030785130. [Google Scholar]
  58. Akalin, N.; Kristoffersson, A.; Loutfi, A. Do you feel safe with your robot? Factors influencing perceived safety in human-robot interaction based on subjective and objective measures. Int. J. Hum. Comput. Stud. 2022, 158, 102744. [Google Scholar] [CrossRef]
  59. Sarter, N.B.; Woods, D.D. How in the world did we ever get into that mode? Mode error and awareness in supervisory control. Hum. Factors 1995, 37, 5–19. [Google Scholar] [CrossRef]
  60. Michaelis, J.E.; Siebert-Evenstone, A.; Shaffer, D.W.; Mutlu, B. Collaborative or Simply Uncaged? Understanding Human-Cobot Interactions in Automation. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; ACM: New York, NY, USA, 2020; pp. 1–12. [Google Scholar]
  61. Parasuraman, R.; Sheridan, T.B.; Wickens, C.D. Situation Awareness, Mental Workload, and Trust in Automation: Viable, Empirically Supported Cognitive Engineering Constructs. J. Cogn. Eng. Decis. Mak. 2008, 2, 140–160. [Google Scholar] [CrossRef]
  62. Parasuraman, A. Technology Readiness Index (TRI). J. Serv. Res. 2000, 2, 307–320. [Google Scholar] [CrossRef]
  63. Liu, C.; Tomizuka, M. Control in a safe set: Addressing safety in human-robot interactions. ASME Dyn. Syst. Control Conf. DSCC 2014, 2014, 3. [Google Scholar] [CrossRef]
  64. Simões, A.C.; Lucas Soares, A.; Barros, A.C. Drivers Impacting Cobots Adoption in Manufacturing Context: A Qualitative Study. In Lecture Notes in Mechanical Engineering; Springer: Cham, Switzerland, 2019; Volume 1, pp. 203–212. ISBN 9783030187156. [Google Scholar]
  65. Oxford English Dictionary: Maturity. Available online: https://www-oed-com.kuleuven.e-bronnen.be/view/Entry/115126?redirectedFrom=maturity#eid (accessed on 26 January 2022).
  66. Lahrmann, G.; Marx, F.; Mettler, T.; Winter, R.; Wortmann, F. Inductive Design of Maturity Models: Applying the Rasch Algorithm for Design Science Research. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2011; Volume 6629, pp. 176–191. ISBN 9783642206320. [Google Scholar]
  67. Becker, J.; Knackstedt, R.; Pöppelbuß, J. Developing Maturity Models for IT Management. Bus. Inf. Syst. Eng. 2009, 1, 213–222. [Google Scholar] [CrossRef]
  68. dos Santos-Neto, J.B.S.; Costa, A.P.C.S. Enterprise maturity models: A systematic literature review. Enterp. Inf. Syst. 2019, 13, 719–769. [Google Scholar] [CrossRef]
  69. Pereira, R.; Serrano, J. A review of methods used on IT maturity models development: A systematic literature review and a critical analysis. J. Inf. Technol. 2020, 35, 161–178. [Google Scholar] [CrossRef]
  70. Mettler, T.; Ballester, O. Maturity Models in Information Systems: A Review and Extension of Existing Guidelines. Proc. Int. Conf. Inf. Syst. 2021, 1–16. [Google Scholar]
  71. Normann Andersen, K.; Lee, J.; Mettler, T.; Moon, M.J. Ten Misunderstandings about Maturity Models. In Proceedings of the 21st Annual International Conference on Digital Government Research, Omaha, NE, USA, 9–11 June 2021; ACM: New York, NY, USA, 2020; pp. 261–266. [Google Scholar]
  72. Mettler, T. Thinking in Terms of Design Decisions When Developing Maturity Models. Int. J. Strateg. Decis. Sci. 2010, 1, 76–87. [Google Scholar] [CrossRef] [Green Version]
  73. Mittal, S.; Khan, M.A.; Romero, D.; Wuest, T. A critical review of smart manufacturing & Industry 4.0 maturity models: Implications for small and medium-sized enterprises (SMEs). J. Manuf. Syst. 2018, 49, 194–214. [Google Scholar] [CrossRef]
  74. Schumacher, A.; Erol, S.; Sihn, W. A Maturity Model for Assessing Industry 4.0 Readiness and Maturity of Manufacturing Enterprises. Procedia CIRP 2016, 52, 161–166. [Google Scholar] [CrossRef]
  75. Mettler, T. Maturity assessment models: A design science research approach. Int. J. Soc. Syst. Sci. 2011, 3, 81. [Google Scholar] [CrossRef] [Green Version]
  76. Mittal, S.; Khan, M.A.; Romero, D.; Wuest, T. Building Blocks for Adopting Smart Manufacturing. Procedia Manuf. 2019, 34, 978–985. [Google Scholar] [CrossRef]
  77. Schmitt, P.; Schmitt, J.; Engelmann, B. Evaluation of proceedings for SMEs to conduct I4.0 projects. Procedia CIRP 2019, 86, 257–263. [Google Scholar] [CrossRef]
  78. de Bruin, T.; Rosemann, M.; Freeze, R.; Kulkarni, U. Understanding the main phases of developing a maturity assessment model. In Proceedings of the Australasian Conference on Information Systems (ACIS), Sydney, Australia, 30 November–2 December 2005. [Google Scholar]
  79. Pöppelbuß, J.; Röglinger, M. What makes a useful maturity model? A framework of general design principles for maturity models and its demonstration in business process management. In Proceedings of the 19th European Conference on Information Systems (ECIS 2011), Helsinki, Finland, 9–11 June 2011. [Google Scholar]
  80. Lasrado, L.A.; Vatrapu, R.; Andersen, K.N. A set theoretical approach to maturity models: Guidelines and demonstration. In Proceedings of the 2016 International Conference on Information Systems, ICIS 2016: Digital Innovation at the Crossroads, Dublin, Ireland, 11–14 December 2016; Association for Information Systems. AIS Electronic Library (AISeL): Atlanta, GA, USA, 2016; Volume 37, pp. 1–20. [Google Scholar]
  81. Rohrbeck, R. Corporate Foresight towards a Maturity Model for the Future Orientation of a Firm; Springer: Berlin/Heidelberg, Germany, 2012; ISBN 9783790827613. [Google Scholar]
  82. De Carolis, A.; Macchi, M.; Kulvatunyou, B.; Brundage, M.P.; Terzi, S. Maturity Models and Tools for Enabling Smart Manufacturing Systems: Comparison and Reflections for Future Developments. In IFIP Advances in Information and Communication Technology; Springer: Cham, Switzerland, 2017; Volume 517, pp. 23–35. ISBN 9783319729046. [Google Scholar]
  83. Fraser, P.; Moultrie, J.; Gregory, M. The use of maturity models/grids as a tool in assessing product development capability. In Proceedings of the IEEE International Engineering Management Conference, Cambridge, UK, 18–20 August 2002; Volume 1, pp. 244–249. [Google Scholar]
  84. Moultrie, J.; Clarkson, P.J.; Probert, D. Development of a Design Audit Tool for SMEs. J. Prod. Innov. Manag. 2007, 24, 335–368. [Google Scholar] [CrossRef]
  85. Maier, A.M.; Eckert, C.M.; John Clarkson, P. Identifying requirements for communication support: A maturity grid-inspired approach. Expert Syst. Appl. 2006, 31, 663–672. [Google Scholar] [CrossRef]
  86. Moultrie, J.; Sutcliffe, L.; Maier, A. A maturity grid assessment tool for environmentally conscious design in the medical device industry. J. Clean. Prod. 2016, 122, 252–265. [Google Scholar] [CrossRef] [Green Version]
  87. Campos, T.L.R.; da Silva, F.F.; de Oliveira, K.B.; de Oliveira, O.J. Maturity grid to evaluate and improve environmental management in industrial companies. Clean Technol. Environ. Policy 2020, 22, 1485–1497. [Google Scholar] [CrossRef]
  88. Cienfuegos, I.J. Developing a Risk Maturity Model for Dutch Municipalities. Ph.D. Thesis, University of Twente, Enschede, The Netherlands, 2013. [Google Scholar]
  89. Golev, A.; Corder, G.D.; Giurco, D.P. Barriers to Industrial Symbiosis: Insights from the Use of a Maturity Grid. J. Ind. Ecol. 2015, 19, 141–153. [Google Scholar] [CrossRef]
  90. Mullaly, M. Longitudinal Analysis of Project Management Maturity. Proj. Manag. J. 2006, 37, 62–73. [Google Scholar] [CrossRef]
  91. Van Looy, A.; De Backer, M.; Poels, G.; Snoeck, M. Choosing the right business process maturity model. Inf. Manag. 2013, 50, 466–488. [Google Scholar] [CrossRef]
  92. Galvez, D.; Enjolras, M.; Camargo, M.; Boly, V.; Claire, J. Firm Readiness Level for Innovation Projects: A New Decision-Making Tool for Innovation Managers. Adm. Sci. 2018, 8, 6. [Google Scholar] [CrossRef]
  93. Jones, M.D.; Hutcheson, S.; Camba, J.D. Past, present, and future barriers to digital transformation in manufacturing: A review. J. Manuf. Syst. 2021, 60, 936–948. [Google Scholar] [CrossRef]
  94. De Carolis, A.; Macchi, M.; Negri, E.; Terzi, S. A Maturity Model for Assessing the Digital Readiness of Manufacturing Companies. In IFIP Advances in Information and Communication Technology; Springer: Cham, Switzerland, 2017; Volume 513, pp. 13–20. ISBN 9783319669229. [Google Scholar]
  95. Zoubek, M.; Simon, M. A Framework for a Logistics 4.0 Maturity Model with a Specification for Internal Logistics. MM Sci. J. 2021, 2021, 4264–4274. [Google Scholar] [CrossRef]
  96. Kaassis, B.; Badri, A. Development of a Preliminary Model for Evaluating Occupational Health and Safety Risk Management Maturity in Small and Medium-Sized Enterprises. Safety 2018, 4, 5. [Google Scholar] [CrossRef] [Green Version]
  97. Zhao, X.; Hwang, B.-G.; Low, S.P. Developing Fuzzy Enterprise Risk Management Maturity Model for Construction Firms. J. Constr. Eng. Manag. 2013, 139, 1179–1189. [Google Scholar] [CrossRef]
  98. Poghosyan, A.; Manu, P.; Mahamadu, A.-M.; Akinade, O.; Mahdjoubi, L.; Gibb, A.; Behm, M. A web-based design for occupational safety and health capability maturity indicator. Saf. Sci. 2020, 122, 104516. [Google Scholar] [CrossRef]
  99. Jääskeläinen, A.; Tappura, S.; Pirhonen, J. Maturity Analysis of Safety Performance Measurement. In Advances in Intelligent Systems and Computing; Springer: Cham, Switzerland, 2020; Volume 1026, pp. 529–535. ISBN 9783030279271. [Google Scholar]
  100. Elibal, K.; Özceylan, E. Comparing industry 4.0 maturity models in the perspective of TQM principles using Fuzzy MCDM methods. Technol. Forecast. Soc. Change 2022, 175, 121379. [Google Scholar] [CrossRef]
  101. Maisiri, W.; van Dyk, L. Industry 4.0 Competence Maturity Model Design Requirements: A Systematic Mapping Review. In Proceedings of the 2020 IFEES World Engineering Education Forum—Global Engineering Deans Council (WEEF-GEDC), Cape Town, South Africa, 16–19 November 2020; pp. 1–6. [Google Scholar]
  102. Chonsawat, N.; Sopadang, A. Defining SMEs’ 4.0 Readiness Indicators. Appl. Sci. 2020, 10, 8998. [Google Scholar] [CrossRef]
  103. Forsgren, N.; Humble, J.; Kim, G. Accelerate: Building and Scaling High Performing Technology Organizations; IT Revolution Publishing: Portland, OR, USA, 2018; ISBN 978-1942788331. [Google Scholar]
  104. Deutsch, C.; Meneghini, C.; Mermut, O.; Lefort, M. Measuring Technology Readiness to improve Innovation Management. In Proceedings of the XXI ISPIM Conference, Bilbao, Spain, 6–9 June 2010; INO: Québec, Canada, 2010; pp. 1–11. [Google Scholar]
  105. Bethel, C.L.; Henkel, Z.; Baugus, K. Conducting Studies in Human-Robot Interaction. In Human-Robot Interaction Evaluation Methods and Their Standardization; Springer: Cham, Switzerland, 2020; pp. 91–124. [Google Scholar]
  106. Mettler, T.; Rohner, P. Situational maturity models as instrumental artifacts for organizational design. In Proceedings of the 4th International Conference on Design Science Research in Information Systems and Technology—DESRIST’09, Philadelphia, PA, USA, 7–8 May 2009; ACM Press: New York, NY, USA, 2009; p. 1. [Google Scholar]
  107. Gualtieri, L.; Palomba, I.; Merati, F.A.; Rauch, E.; Vidoni, R. Design of Human-Centered Collaborative Assembly Workstations for the Improvement of Operators’ Physical Ergonomics and Production Efficiency: A Case Study. Sustainability 2020, 12, 3606. [Google Scholar] [CrossRef]
  108. Neumann, W.P.; Winkelhaus, S.; Grosse, E.H.; Glock, C.H. Industry 4.0 and the human factor—A systems framework and analysis methodology for successful development. Int. J. Prod. Econ. 2021, 233, 107992. [Google Scholar] [CrossRef]
  109. Peruzzini, M.; Grandi, F.; Pellicciari, M. Exploring the potential of Operator 4.0 interface and monitoring. Comput. Ind. Eng. 2020, 139, 105600. [Google Scholar] [CrossRef]
Figure 1. Research purpose, topics, and methods.
Figure 2. Cobot consideration and implementation phase, adapted from the HRC life cycle model of Saenz et al. [32].
Figure 3. Risk factors classified into five classes and 13 sub-classes (based on Berx et al., 2022 [19]).
Figure 4. Model building blocks (adapted from Maier et al. (2012) [30]).
Table 1. Illustration of the CSRAT for one grid sub-dimension.

Maturity grid dimension: HUMAN. Sub-dimension: Psychosocial.

Description: Psychosocial risk factors are a combination of psychological and social factors. Psychosocial risks can arise from poor work design and organization or from a poor social context of work. They may result in negative psychological, physical, and social outcomes, such as work-related stress, burnout, or depression. In human–robot collaboration, trust between the human and the cobot is known to influence the safety of the operator.

Level 1 (no idea about cobot risk factors): We are not aware of how psychosocial factors can act as a hazard when humans collaborate with a cobot.

Level 2 (aware of the risk factors for cobots, but unsure how to cope with them or which tools to use): We know that psychosocial factors can influence the safety of the operator, but we do not know exactly what the specific drivers are or how to identify them.

Level 3 (tools are known and in place, but not adapted for cobots): We know that psychosocial factors can influence the safety of the operator, but we have not yet added this aspect to our safety management tools (e.g., occupational health and safety surveys, satisfaction or worker commitment surveys, trust measures, or job demands and resources investigations).

Level 4 (tools are adapted for cobots): We know that psychosocial factors can influence the safety of the operator and have already included this aspect in our safety management tools (e.g., occupational health and safety surveys, satisfaction or worker commitment surveys, trust measures, or job demands and resources investigations).

Scoring: for each sub-dimension, the respondent records the chosen level (1–4) and indicates whether the right knowledge is available (in-house: Y/N; if not in-house, know where to find it: Y/N).
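For practitioners who want to prototype the assessment digitally, the sketch below shows one possible machine-readable encoding of a grid sub-dimension such as the one in Table 1. It is a minimal illustration under our own assumptions: the names (MaturityLevel, SubDimension, assess) are hypothetical and do not correspond to any published implementation of the tool.

```python
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class MaturityLevel:
    """One cell of the maturity grid: a level number and its descriptor."""
    number: int        # 1 = "no idea" ... 4 = "tools adapted for cobots"
    label: str         # short column header, e.g., "Aware, no tools"
    description: str   # full statement the assessor recognizes as applicable


@dataclass
class SubDimension:
    """One CSRAT grid row: a dimension, a sub-dimension, and its four levels."""
    dimension: str     # e.g., "HUMAN"
    name: str          # e.g., "Psychosocial"
    description: str
    levels: list[MaturityLevel] = field(default_factory=list)
    chosen_level: int | None = None          # filled in during self-assessment
    knowledge_in_house: bool | None = None   # "we have the right knowledge" (Y/N)
    know_where_to_find: bool | None = None   # if not in-house, know where to find it

    def assess(self, level: int, in_house: bool, can_locate: bool) -> None:
        """Record the self-assessed readiness score for this sub-dimension."""
        if level not in {lv.number for lv in self.levels}:
            raise ValueError(f"unknown maturity level: {level}")
        self.chosen_level = level
        self.knowledge_in_house = in_house
        self.know_where_to_find = can_locate


# Example: the HUMAN / Psychosocial row of Table 1, scored at level 2
# ("aware of psychosocial risk factors, but no tools in place yet").
psychosocial = SubDimension(
    dimension="HUMAN",
    name="Psychosocial",
    description=(
        "Psychosocial risk factors arising from work design, work "
        "organization, and the social context of work."
    ),
    levels=[
        MaturityLevel(1, "No idea", "Not aware of psychosocial hazards in HRC."),
        MaturityLevel(2, "Aware, no tools", "Aware, but drivers are unknown."),
        MaturityLevel(3, "Aware, tools in place", "Tools exist, not cobot-adapted."),
        MaturityLevel(4, "Tools adapted for cobots", "Tools cover cobot work."),
    ],
)
psychosocial.assess(level=2, in_house=False, can_locate=True)
```

Recording the chosen level together with the two knowledge flags mirrors the scoring columns of the grid, so filled-in objects could later be aggregated per dimension into an overall readiness profile.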
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
