Article

A Framework of Indicators for Assessing Team Performance of Human–Robot Collaboration in Construction Projects

by Guodong Zhang 1,2, Xiaowei Luo 2, Lei Zhang 3, Wei Li 4, Wen Wang 2,5 and Qiming Li 1,*

1 Department of Construction and Real Estate, School of Civil Engineering, Southeast University, Nanjing 211189, China
2 Department of Architecture and Civil Engineering, City University of Hong Kong, Tat Chee Ave., Kowloon Tong, Kowloon, Hong Kong
3 Research Center of Smart City, Nanjing Tech University, Nanjing 211816, China
4 School of Civil Engineering, Nanjing Forestry University, Nanjing 210037, China
5 School of Naval Architecture, Ocean & Civil Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
* Author to whom correspondence should be addressed.
Buildings 2025, 15(15), 2734; https://doi.org/10.3390/buildings15152734
Submission received: 27 June 2025 / Revised: 17 July 2025 / Accepted: 23 July 2025 / Published: 2 August 2025
(This article belongs to the Section Construction Management, and Computers & Digitization)

Abstract

The construction industry has been troubled by skilled-labor shortages and safety accidents in recent years. An increasing number of robots are therefore being introduced to undertake dangerous and repetitive jobs, so that human workers can concentrate on higher-value, creative problem-solving tasks. Although human–robot collaboration (HRC) shows great potential, most existing evaluation methods still focus on the performance of either the human or the robot alone, and systematic indicators for the HRC team as a whole remain insufficient. To fill this research gap, the present study constructs a comprehensive evaluation framework for HRC team performance in construction projects. First, a detailed literature review is carried out, and three theories are integrated to derive a preliminary set of 33 indicators. An expert questionnaire survey (N = 15) is then used to revise and empirically verify the model. The survey yielded a Cronbach’s alpha of 0.916, indicating excellent internal consistency. The indicators rated highest in importance were task completion time (µ = 4.53) and dynamic separation distance (µ = 4.47) on a 5-point scale. Eight indicators were excluded because their mean importance ratings fell below the 3.0 threshold, leaving a framework of five main dimensions and 25 concrete indicators. Finally, an AHP-TOPSIS method is used to evaluate HRC team performance. The AHP analysis reveals that experts prioritize Safety (weight = 0.2708) over Productivity (weight = 0.2327), establishing a safety-first principle for successful HRC deployment. The framework is demonstrated through a case study of a human–robot plastering team, whose performance was rated as fair. This shows that the framework can help practitioners identify the strengths and weaknesses of HRC team performance and develop targeted improvement strategies. Furthermore, the framework offers construction managers a scientific basis for decisions on robot deployment and team assignment, thus promoting safer, more efficient, and more creative HRC in construction projects.

1. Introduction

Compared with the manufacturing sector, construction activities are challenged by labor-intensive jobs, labor shortages, hazardous work environments, and occupational diseases. To address these problems, new technologies such as robotics and automation, artificial intelligence (AI), and information and communication technology (ICT) have been applied to construction projects in recent decades to improve efficiency, productivity, safety, and sustainability [1,2,3,4].
Following the successful verification of large-scale industrialized and robotized prefabrication of system houses in the Japanese market, Shimizu began research and development of on-site construction robots in the 1970s [5], which can be regarded as the earliest attempt at construction robotics. Nowadays, driven by global research efforts, more and more Single-Task Construction Robots (STCRs) are being developed for easy deployment on construction sites to handle repetitive, dangerous, and physically demanding tasks [5,6,7,8,9,10]. Applications of intelligent machines and robots on construction sites are promising.
However, STCRs still have notable limitations, such as an inability to cooperate well with workers or to complete complex tasks independently; to date, robots have therefore been designed mainly to assist human workers [11]. Generally speaking, robots have been designed and developed around human requirements, but autonomous robots need to become more intelligent to tackle complex issues such as uncertainty and unpredictable situations [12]. The concept of human–robot collaboration (HRC) in construction presents a promising avenue for elevating the effectiveness and competitiveness of the industry by leveraging the unique strengths of both human workers and robots: HRC combines the benefits of human experience with sophisticated robotic technical performance.
Research indicates that HRC has the potential to significantly enhance system performance and efficiency in construction projects [13]. However, successful HRC requires careful consideration of human factors and their roles in the collaborative process due to their different characteristics (see Table 1).
The use of robots in construction promises to be an ideal solution for improving productivity, quality, and safety in construction projects. The team is the basic unit of a construction project, and this research aims to provide a better understanding of the potential of AI and construction technologies to enhance construction project performance.
Assessing team performance in HRC settings presents several challenges. Traditional performance metrics often focus on single dimensions and do not account for the unique dynamics of HRC in the construction industry. The complexity of construction tasks, the need to integrate humans and robots, and the dynamic nature of construction sites make it difficult to evaluate team performance comprehensively.
This study aims to systematically develop a framework of indicators for HRC team performance in construction environments, together with an evaluation model based on the Analytic Hierarchy Process (AHP) and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). The framework is intended to support decision-making on deploying HRC systems effectively in construction projects and to provide a reference for the design of construction robots. The proposed framework considers performance indicators systematically across dimensions such as productivity, safety, creativity, and satisfaction, which correspond to the goals of project management and team management.

2. Literature Review

This section discusses the development background of HRC in the construction field and the performance assessment approaches commonly used in the research.

2.1. HRC in Construction

HRC is a category of human–robot interaction and is defined as the interaction between humans and robots working together in the same workspace to accomplish complex tasks for shared goals [14,15]. In HRC, humans and robots collaborate through physical contact or contactless interaction to complete complex tasks, forming a dynamic system for task accomplishment in various environments. The objective of HRC is to increase the performance of construction activities by improving productivity and reducing the physical and cognitive workload on workers [16].
HRC represents the highest level of human–robot interaction (HRI), compared with human–robot coexistence and human–robot cooperation [15,17]. From the perspective of the distance between humans and robots, HRC can be differentiated into two primary types: physical contact-based collaboration and non-physical contact-based collaboration [17]. This classification is fundamental to understanding HRC in construction projects. Both types have unique characteristics and require different strategies to optimize team performance. Physical collaboration involves direct interaction between humans and robots on construction sites, where they may jointly handle materials or tools [18,19,20,21,22]. In non-physical scenarios, humans and robots work together without physical contact; instead, robots are teleoperated through digital devices, such as drones and mobile inspection robots [21,23,24,25,26,27]. For these two types of HRC tasks, construction robots are also classified by their working environments as on-site and off-site robots [9], or by their spatial positions as ground robots and aerial robots [26], which are the types most often researched in the literature [18,19,20,21].
Although construction robots have become highly intelligent, humans remain necessary on construction sites. The collaboration of humans and robots is crucial because site environments are complex and dynamically changing. Humans excel at adaptability, problem-solving, decision-making, and coordination [28], which are essential for handling unpredictable events on sites. Most robots are either preprogrammed or teleoperated for specific tasks [29] and lack the ability to adapt to complex site conditions without human workers. Moreover, fully autonomous robots can be costly, making HRC a more economically viable option. By combining human flexibility with robotic efficiency, construction projects can achieve higher levels of productivity and safety.
The application of robotic systems in these areas helps to maintain high standards of workmanship while freeing up human workers for more complex problem-solving tasks. Robots are changing the role of human workers on sites across multiple dimensions, including interaction, behaviors, and attitudes [30]. Firstly, the complexity of construction activities and the repetitive tasks assigned to robots make the interactions within human–robot construction teams highly intricate and multiplex [31]. Park et al. [32] proposed a natural language interaction framework for intuitive communication between human workers and robots in construction, demonstrating its potential in a drywall installation case study. Teleoperation is another mode of interaction, allowing workers to control robots from a distance in hazardous on-site environments [33]. Human workers can significantly reduce task difficulty and improve efficiency and safety in construction teleoperation [34]. Secondly, the application of intelligent robots in construction activities can influence or alter human behaviors, leading to increased efficiency and reduced physical contact. For example, separating the work areas of humans and robots can increase workers’ safety awareness, thus improving overall team safety performance [35]. Furthermore, the capabilities of construction robots can influence workers’ attitudes, such as acceptance [36] and trust [37], which may in turn raise or lower human–robot team performance relative to human–human teams.
Despite the application of construction robots on sites, there is little literature on the comprehensive performance of HRC from the perspective of the whole team. Most research focuses on individual human worker performance, neglecting the outcomes of integrated human–robot teams. Addressing this gap is essential for optimizing HRC and achieving superior performance in construction environments.

2.2. Performance of HRC

While previous research has focused on robotic technology development and productivity improvement [38,39], the multi-dimensional nature of HRC performance remains under-explored. These studies pay little attention to the interaction between humans and robots during task execution, and research on HRC performance has been neglected because productivity is not equivalent to performance. It is important to consider HRC when designing robots, as this can promote a safer, healthier, and more efficient task environment and improve the overall performance of HRC.
Evaluation indicators of HRC team performance can help evaluate the output of HRC teams on construction sites. Scholars have developed indicators for assessing the performance of HRC tasks in areas such as manufacturing and space exploration. Most are designed for specific tasks and work procedures [40], and only a few are general indicators [41,42]. General indicators commonly include task completion time, task completion reliability, resource utilization, mean time between failures, and mental workload [43,44,45]. These indicators can be used to measure the performance of most HRC tasks and can be categorized into human performance indicators, robot performance indicators, and human–robot team performance indicators.

2.2.1. Human Performance

Research on human performance in HRC focuses on the safety, productivity, and physical ergonomics of individual workers [46]. In addition, human performance is considered from organizational, human cognitive, and technological perspectives in inspection tasks [47]. Humans have the capabilities of decision-making, adaptability, flexibility, and creativity. In HRC scenarios, humans can make high-level decisions based on complex information, adapt to unexpected changes, modify robot actions, and innovatively solve problems that robots cannot handle autonomously.
In organizational management, human performance is defined as the actions, behaviors, and outputs that employees engage in or bring about that contribute to the achievement of organizational goals [48]. Researchers believe that human performance should include task performance, contextual performance, and adaptive performance, according to the job performance model [49]. Task performance refers to the expected value of what a human accomplishes in the tasks, and it depends on task-related behaviors [50]. Contextual performance refers to the psychological-level performance of humans based on their work environment, such as their willingness to work with robots [51]. Adaptive performance is the human’s adjustment to work situations, such as the uncertainty and complexity of facing robots [52]. The indicators that measure these dimensions include accuracy of mental models [44], mental computation [44], trust [44], situational awareness [44], human reliability [43], time for production [53], and workload [54]. The accuracy of mental models is influenced by the comprehensiveness and simplicity of the human–robot interaction interface and the compatibility of robots [55,56]. Mental computation depends on the cognitive workload required for operators to complete tasks [57]. For example, a task may require the memory and cognitive ability of operators to perceive the surrounding environment in real time [55]. It is necessary both to maximize productivity and to reduce the cognitive load on operators, as high stress can affect human capabilities and thus team performance. Situational awareness is measured by monitoring task progress and sensitivity to task dynamics [58]. Human reliability is the probability of human error when humans operate robots [59]. Time for production is the time during which a human is fully engaged in tasks. Workload is the amount of work that humans undertake, either physiologically or physically [60].
In addition, indicators of human safety are used to assess risks to humans when working with robots, such as the positions of robots relative to humans, which are applicable to HRC tasks in high-risk environments [61]. Humans work more efficiently when the environment and the goals to be accomplished are consistent with their safety [62].

2.2.2. Robot Performance

Research on robot performance focuses on the functional reliability of robot components, including observation, interpretation, planning, execution, and communication [47]. Observation involves the robot’s ability to accurately perceive its human partners and environment through sensors, cameras, and other input devices. Interpretation is the capacity of robots to process and understand the collected data and recognize objects. Planning is the ability to generate appropriate action plans for specific tasks or goals. Execution is the capability of robots to perform physical tasks accurately and efficiently, such as moving bricks. Communication is the ability to effectively exchange information with human operators and other robots, ensuring coordinated and collaborative task execution. At the same time, some indicators are commonly used to represent the performance of robots, such as robot self-awareness, neglect tolerance, robot attention demand, fan out, and interaction effort. Robot self-awareness denotes the ability of a robot to perceive itself; higher robot self-awareness means a lower cognitive workload for the human workers operating the robot. The indicators for robot self-awareness are usually autonomous operation time, autonomy, and the task success rate of robots [63]. Neglect tolerance is the degree to which a robot’s ability to perform a task declines over time when it is ignored by human workers [64]. Robot attention demand measures the task time a human worker spends interacting with a robot [65]. Fan out measures how many robots with similar functions a human worker can effectively interact with at the same time [41,66]. Interaction effort is the time of interaction between human workers and robots, which is inversely proportional to robot attention demand [67]. With the development of communication technology and artificial intelligence, robots can communicate with humans in real time more quickly and recognize information from humans more accurately [68,69,70].

2.2.3. HRC Team Performance

Research on HRC team performance examines the collective output and efficiency of human–robot collaboration. A traditional human-centered team consists of a leader and team members, often in a one-leader-to-one-member or one-leader-to-many-members relationship. The leader guides and leads team members to achieve goals, and team members work under the supervision of the leader [71]. Commonly, the leader takes charge of planning, organizing, leading, and controlling, the four major functions of modern management, instead of working alongside team members directly. In an HRC team, human workers may become the leader, with robots as the team members. Robots have the characteristics of team members, such as following the schedule to achieve goals, working as a team, and communication skills [72]. However, due to the limitations of current technologies, the abilities of robots remain inferior to those of human workers, and the leader in an HRC team cannot be entirely freed from direct work as in a human–human team. The leader in an HRC team not only plans, organizes, leads, and controls the robots, but also assists the robots in completing a certain amount of work. Specific performance indicators are therefore required to measure the team performance of HRC. Studies of HRC team performance have focused on task assignment, execution, and completeness to measure the overall performance of an autonomous human–robot team (HRT) [73,74,75,76]. Task difficulty represents the mental workload of a specific task and depends on recognition accuracy, scenario coverage, critical time ratio, and robustness [77]. Recognition accuracy is the ability of robots to accurately match input and output. Scenario coverage is the ratio of the scenarios in which robots are used to all scenarios. Critical time ratio is the ratio of the time that robots spend in critical situations to the total human–robot interaction time [78]. Robustness measures the ability of the HRC team to adapt to tasks and environments during execution [79]. In addition, task success rate, automation rate, team efficiency, and team cohesion are used as indicators [80,81,82].

2.2.4. Summary of HRC Performance

Designing a performance measurement framework for HRC-related processes is a complex and challenging task. Because systematic research on HRC is lacking, reference must be made to performance management in construction project management. In project management, the concept of performance measurement has received a considerable amount of attention, as it is a critical activity that organizations must perform in order to achieve their strategic goals, especially for those operating in the construction industry, where both organizational and project goals need to be met [83]. Performance measurement in construction has focused on three levels, (1) industry, (2) corporate, and (3) project [84], with emphasis placed on key performance indicators (KPIs) and performance measurement systems [85]. Research on HRC spans the three levels of human, robot, and team, but comprehensive consideration at the team level is lacking in the existing literature. Although some literature studies construction project management from the perspective of team performance, covering aspects such as productivity, quality, and team satisfaction [86], it focuses on human-centered teams rather than HRTs. Research on HRC team performance is therefore significant.
Moreover, structural engineering domains provide valuable perspectives for enhancing HRC team performance in construction. For instance, recent studies on structural safety and adaptability under uncertainty, such as material property inhomogeneity in ductile iron systems [87], design solutions for structural sections under complex loading [88], and damage-control mechanisms in composite connections [89], offer rigorous methods to quantify safety, quality, and resilience. These methodologies, while developed in structural mechanics, may inform future extensions of HRC to tasks such as robotic assembly, structural jointing, or adaptive construction under dynamic site conditions.

2.3. Knowledge Gap and Point of Departure

Although interest in HRC is rising, important gaps remain in understanding and evaluating HRC team performance at both the practical and theoretical levels. This section identifies the key gaps in current HRC research that justify the need for a new framework. It marks the conceptual starting point of the study and identifies shortcomings in both practical application and theoretical understanding.
At the practical level, contractors and site managers lack a clear, objective way to judge whether HRC improves safety, productivity, or quality on construction sites. Without a well-defined indicator system, contractors hesitate to invest in construction robots, and early pilot projects struggle to show concrete value. A site-specific, multi-dimensional set of performance indicators would give practitioners a clear evaluation criterion to diagnose weak spots and build a credible case for wider deployment.
At the theoretical level, three gaps stand out. First, most past research uses either qualitative or quantitative methods alone, without bringing the two together: many studies rely on qualitative observations or subjective measures on the one hand, or quantitative task metrics on the other. In addition, most HRC research still centers on new technology or on isolated human factors and pays insufficient attention to how people and robots work as a team. This fragmented approach prevents a full understanding of how effective HRC teams are. Second, most studies examine HRC performance from only a single theory or academic dimension, which cannot fully reflect construction tasks. HRC on construction sites covers many issues, such as physical ergonomics, robot reliability, mental workload, and team collaboration. No single framework can explain all these factors: a pure human-factors model may overlook technical robustness, while a robotics-centered view may ignore teamwork. It is necessary to build performance indicators that combine several theories, such as safety science, human-factors engineering, and team collaboration. Leveraging multiple theories allows the construction of a robust, multi-dimensional indicator framework that reflects the full range of factors influencing HRC team performance on sites. Third, few studies have focused on team-level performance in construction. Conditions on-site change frequently, posing unique challenges that are not present in controlled manufacturing or laboratory settings. Performance indicators borrowed from other domains (such as factory automation or military robotics) capture only part of what matters on sites. Construction teams must continuously adapt to ensure safety and productivity amid shifting site constraints, a need that single-dimensional indicators often overlook. In sum, there is a clear need to broaden HRC performance research by shifting focus from technology-centric indicators to holistic, performance-oriented evaluation and by developing team-level indicators tailored to construction. Addressing these gaps is essential for advancing both the theory and practice of HRC in construction.
To respond to these gaps, this study systematically develops a comprehensive framework of indicators for assessing HRC team performance in construction projects. Unlike prior work, this study draws on multiple theoretical perspectives in constructing indicators and targets team-level performance on dynamic construction sites. Furthermore, this study employs AHP to establish a weighting scheme, evaluates HRC team performance based on this weighted framework and TOPSIS, and validates its practical utility in a construction case study, providing an analysis tool for practitioners. This approach enables a more complete understanding of how HRTs function and succeed in construction, providing a foundation for both academic and practical improvement in construction HRC.

3. Research Methodology

The primary objective of this research is to develop an indicator framework that can assess HRC team performance in construction projects. The methodology follows a systematic approach, illustrated in Figure 1, which includes the development of a conceptual framework, identification of performance issues, and the translation of these issues into specific indicators.
Firstly, this paper mapped the landscape of HRC performance through an extensive literature review. Guided by cognitive load theory, the technology acceptance model, and team role theory, common constructs were extracted and clustered into five provisional dimensions: productivity, safety, quality, flexibility, and creativity. Secondly, HRC team performance indicators were identified through a comprehensive literature review that gathered and analyzed existing research and theories related to HRC team performance; the sources include academic papers from major databases, such as Web of Science, Science Direct, Scopus, and ASCE, as well as international standards. Based on this review, a conceptual framework for HRC team performance was established and initial HRC team performance indicators were identified. Thirdly, the proposed indicator system was validated using data collected through questionnaires, and statistical analysis was used to determine the final indicator system. Fourthly, a comprehensive evaluation model was constructed using the AHP-TOPSIS method: AHP was employed to determine the relative importance of the indicators, and TOPSIS, as a multi-criteria decision-making method, integrates the AHP weights to calculate the performance score of a real human–robot plastering team on a construction project. Finally, the evaluation framework was analyzed for theoretical value and practical utility.

4. Conceptual Framework of HRC Team Performance

HRC in construction projects is becoming increasingly significant due to the need for enhanced productivity, safety, and quality [90,91]. These are not only widely recognized as key goals for HRC applications in construction projects [92], but are also related to the practical needs and constraints of managing the built environment [93]. In traditional construction projects, human–human teams accomplish tasks by pursuing goals such as progress, quality, safety, and cost [94]. In HRC scenarios, the dimensions of performance expand with the addition of intelligent machines, so these goals also need to be interpreted more broadly. Therefore, HRC team performance in construction projects must be evaluated across multiple dimensions to capture the full scope of human–robot work.
In constructing the conceptual model, the performance dimensions of productivity, safety, and quality were retained because each captures a distinct way in which HRC reshapes construction work. Productivity matters because collaborative robots assume high-load, repetitive, or precision tasks while humans coordinate, supervise, and intervene at constraint points [95]; output depends on the fluency of this collaboration. Safety must be reconsidered once workers and robots share a dynamic workplace rather than being physically segregated. In this situation, risk becomes time-varying, mediated by perception–response cycles, and highly sensitive to workers’ understanding of robot motion intent [96]. Quality extends beyond the traditional understanding of construction quality: robotic precision only translates into durable results when integrated with human preparation, finishing, and inspection under fluctuating workloads and trust conditions [97]. In addition, the introduction of intelligent and reprogrammable robots into dynamic site conditions exposes two additional performance needs that are not well captured by traditional indicators. First, construction work conditions change daily, so the value of HRC depends on how rapidly the HRC team can reconfigure tasks, recover from disruptions, and sustain operational continuity when conditions deviate from plan; this emergent capability is framed here as flexibility paired with reliability under change [92]. Second, when robots assume repetitive, hazardous, or high-precision tasks, human workers gain cognitive and temporal capacity. Projects benefit when this released capacity is engaged in on-site problem solving, process improvement, and innovation, captured here as the dimension of team creativity [98,99]. Together, flexibility and team creativity distinguish HRC deployments that merely automate existing routines from those that unlock adaptive and innovative value in construction operations. These two HRC-specific dimensions therefore extend the traditional project performance triad and are essential to a complete conceptual account of HRC team performance.
Overall, the proposed conceptual framework defines five key performance dimensions: productivity, safety, quality, flexibility, and team creativity. These dimensions emerged from a careful integration of three theoretical foundations: Cognitive Load Theory (CLT) [100], the Technology Acceptance Model (TAM) [101], and Team Role Theory (TRT) [101].
The five performance dimensions in our framework (productivity, safety, quality, flexibility, and team creativity) are deductively justified from three complementary theoretical lenses—Cognitive Load Theory (CLT), the Technology Acceptance Model (TAM), and Team Role Theory (TRT)—when human–robot collaboration (HRC) is examined as a socio-technical production system in dynamic site conditions. Mapping these theories onto an Input–Process–Output (IPO) logic clarifies why each dimension must be represented to capture HRC team performance. This integration provides conceptual coverage across (1) human information-processing capacity and error susceptibility (supported by CLT), (2) perceived usefulness, usability, reliability, and behavioral intention to use robotic technology (supported by TAM), and (3) role differentiation, creativity, and coordination dynamics in mixed teams (supported by TRT). The CLT helps assess the mental workload of human workers during collaboration, which is critical for optimizing the human–robot interface and ensuring worker well-being. The TAM theory provides a lens for understanding how workers accept and use new technology, which helps in evaluating the robot’s usability and the workers’ willingness to collaborate—key factors influencing team performance. The TRT allows humans and robots to be viewed as a collaborative team, analyzing their role allocation, coordination, and communication. This moves beyond the traditional view of a robot as a mere tool.
By integrating these three theories, the framework ensures that performance is understood at the human level (cognitive and behavioral factors), robot level (technological and task factors), and team level (collaborative factors). In addition, to systematically evaluate HRC team performance, this paper organizes the framework using an IPO model across three levels (human, robot, team), which has been used in performance research [102]. In this study, the IPO model conceptualizes how antecedent conditions (inputs) and interactive behaviors (processes) lead to outcomes (outputs). Inputs are the conditions present before collaboration, such as human workers’ operating skills and mental state evaluated through the CLT to avoid overload, the robot’s features and capabilities evaluated through the TAM, and team composition based on the TRT. Processes are the interactions and behaviors that occur as humans and robots work together, such as the human’s decision-making and attention management under the CLT, the robot’s task execution and adaptability under the TAM, and the team’s coordination based on the TRT. Outputs are the resulting performance outcomes valued by the project, measured across multiple dimensions. Figure 2 illustrates a three-level, three-stage, five-dimension framework, showing how inputs at the human, robot, and team levels feed into collaborative processes and yield performance outputs. The resulting five dimensions reflect the expanded performance goals for HRC teams, each justified by specific theoretical insights and the unique demands of human–robot collaboration in construction, as follows:
HRC productivity is maximized when technology is readily adopted and each team member (human or robot) focuses on the tasks they perform best, a synergy well-supported by the CLT and TAM. The CLT shows that reassigning high workload tasks to robots prevents human cognitive overload and preserves decision speed. The TAM indicates that actual productivity gains can be achieved only when workers perceive robots as useful and easy to deploy. Building on these theoretical expectations, this paper defines productivity to include not only task speed and the input–output ratio, which can be equated to schedule and cost, but also the degree of coordination between humans and robots, as well as reductions in physical and cognitive workload through shared task execution. Studies have shown that HRC can significantly enhance construction productivity by enabling robots to handle repetitive or hazardous tasks, allowing human workers to focus on more complex activities [13,103].
Compared with traditional construction projects, HRC has brought about changes in technology, management, and cognition, leading to a transformation of the concept of safety. Safety is undergoing a deep transformation from passive reaction to proactive response [104], from spatial isolation to human–robot coexistence [105], and from rule-based operation to cognitive collaboration [106]. From the technology perspective, safety is no longer centered on physical isolation because collaborative robots are equipped with advanced perception, active learning, and decision-making capabilities; instead, safety relies on the safety awareness and intelligent reaction of collaborative robots in response to their surroundings. In terms of management, traditional construction sites relied on clearly defined boundaries between workers and machines, whereas in HRC scenarios humans and robots collaborate within shared spaces. In this situation, safety shifts from keeping distance to adaptive interaction and collaboration. From the cognitive perspective, traditional machines are tools with clear rules and predictable behavior, requiring workers to have only basic skills and knowledge; the relationship between workers and machines is linear and low in cognitive workload, and safety relies more on physical protection and worker experience. With collaborative robots, safety depends on how workers understand, predict, and cooperate with the behavior of robots, which increases the cognitive load of workers [107,108]. The situational awareness and technical ability of workers become a critical part of safety [109]. The concept of safety is no longer based on rule constraints, but rather on a cognitive collaborative process between human workers and robots. Safety in HRC is likewise theory-coupled. The CLT indicates that elevated cognitive load and divided attention undermine situational awareness, motivating dynamic monitoring indicators such as real-time separation, hazard response, and human awareness measures. The TAM contributes by linking perceived reliability and trust in robot safety functions to worker risk-taking and intervention behavior. The TRT implies that stop-authority must be distributed across team roles; thus, coordinated hazard response time and risk-assessment updating become key measures. In sum, the CLT, TAM, and TRT are all crucial for HRC safety.
In traditional construction projects, quality focuses on technical accuracy and standard compliance. In HRC, the physical and psychological health of human workers also becomes an important part of quality, captured by the concept of job quality [110,111]. Robotic systems are expected not only to improve the precision of tasks but also to reduce the physical stress and cognitive burden on workers [46]. The CLT shows that cognitive overload, fatigue, and attention switching degrade inspection accuracy. The TAM reinforces this by highlighting that worker satisfaction and comfort are critical for technology adoption [112]. The TRT demonstrates that complementary roles can raise first-pass yield and reduce rework in teams. The quality dimension thus encompasses indicators such as physical and cognitive workload and overall collaboration satisfaction, reflecting the goal of a healthier, less stressful work process for humans in HRC.
In addition, flexibility has become a key factor in HRC performance [113], because the construction environment is complex and variable, with constantly changing tasks, processes, and scenarios [114,115]. Unlike the relatively static work environment and processes of the manufacturing industry, construction HRC systems must adapt to changing conditions in real time, so flexibility is a key criterion for team performance. The CLT highlights the cognitive burden of changeovers and the need for schema transfer when tasks shift. The TAM is also relevant: for a robot to be useful across varied scenarios, it must be perceived as flexible and easy to adjust. The TRT informs this dimension by showing that a well-coordinated team can adjust who takes on subtasks when conditions shift, improving adaptive capacity. Therefore, this paper operationalizes flexibility through indicators that capture uptime, reconfiguration time, environmental adaptation, stability under stress, and robustness against disturbances.
The last important dimension of HRC is team creativity, which emerges from the collaboration itself [116,117]. The CLT predicts that when repetitive and high-workload activities are automated, humans can recover cognitive bandwidth for creative thinking and opportunistic problem solving. The TAM suggests that positive perceptions of robotic usefulness encourage experimentation with new robot-enabled workflows. The TRT indicates that heterogeneous teams spark creativity because each member brings unique capabilities. The robot’s precision and data processing, combined with the human’s experiential knowledge and intuition, can lead to creative synergies that neither could achieve alone. When robots take over repetitive, hazardous, or very precise tasks, such as continuous welding, bricklaying, or heavy lifting, human workers are freed from routine physical work and mental load [118]. They can then focus on higher-level activities like planning the next work steps, adjusting site logistics when deliveries change, or exploiting the robot’s new abilities [119]. In this context, creativity emerges as on-site innovation and real-time problem solving by the human–robot team; for example, workers can modify a robot’s path to avoid an unexpected obstacle. The TRT also emphasizes the importance of an open, communicative team climate. Thus, the value of HRC lies not only in higher efficiency but also in a rapid feedback cycle in which workers observe robot performance, propose improvements, and gradually reshape construction practice.
Considering these evolving expectations, this study proposes a framework comprising five dimensions (productivity, safety, quality, flexibility, and team creativity), which provides a comprehensive foundation for evaluating HRC team performance in the context of construction projects.

5. Identification of Indicators of Different Dimensions of HRC Team Performance

To operationalize the conceptual framework introduced in Section 4, the present study establishes a set of measurable indicators that span five performance dimensions of HRC in construction projects.
The indicator set presented in this section was established through a three-stage sourcing process that integrates conceptual robustness with empirical and practical relevance. The first stage is theoretical construction. Starting from the Cognitive Load Theory, Technology Acceptance Model, and Team Role Theory introduced in Section 4, we deduced the latent performance constructs that must be operationalized, such as productivity, proactive safety, adaptive capacity, and creativity. The second stage is literature synthesis. A structured scan of peer-reviewed articles identified indicators that recur across studies of construction robotics and human factors. This step ensured empirical recurrence and terminological consistency with the extant body of knowledge. The third stage is expert consultation. Discussions with a panel of seven domain experts (two HRC-related researchers, three construction project managers, and two robot engineers) were used to assess practical feasibility and site-level data availability.
Only indicators that passed all three gates were retained. The final catalog comprises 33 indicators, distributed across five performance dimensions (see Table 2). The following subsections present each dimension in turn, preserving the original indicators while clarifying their roles.

5.1. Productivity Indicators

Productivity is the foremost dimension considered when assessing whether HRC brings value on site, yet a single speed-oriented variable rarely reveals the full picture. In the present framework, five complementary indicators were retained, namely task completion time, production capacity, the human–robot ratio, the human–robot time ratio, and collaboration efficiency.
Task completion time shows how long one task takes from the start to final acceptance, so it reflects schedule control. Production capacity records the physical output produced per unit time and adds a throughput view. The human–robot ratio describes how many workers and robots are assigned to the task, while the human–robot time ratio compares the actual labor hours each side puts in. Collaboration efficiency relates total output to the combined inputs of people and robots, giving an overall resource picture. These five indicators together make it possible to see whether faster progress comes from real process improvement or simply from adding more manpower or machines, and whether extra robots truly raise throughput in proportion.
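As a concrete illustration of the ratio-type productivity indicators, the following worked forms are one plausible operationalization of the verbal definitions above; they are illustrative assumptions, and the measurement formulas listed in Table 2 remain authoritative.
Human–robot ratio $= N_{\text{workers}} / N_{\text{robots}}$
Human–robot time ratio $= T_{\text{human labor hours}} / T_{\text{robot operating hours}}$
Collaboration efficiency $= \text{Total output} / (T_{\text{human labor hours}} + T_{\text{robot operating hours}})$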

5.2. Safety Indicators

Safety is the basic precondition for any human–robot collaboration; without protective measures, productivity gains hold little practical meaning. In this study, nine indicators are retained, giving a complete and practical view of safety performance.
At the physical level, the first group of indicators considers the real-time distance that the robot maintains from workers and the peak forces recorded when contact occurs, ensuring conformance with the ISO/TS 15066:2016 [152] limits. The second group of indicators captures responsive capability, that is, how quickly the robot moves to a safe state after a hazard is detected and how reliably it can identify hazards on its own. Safety also depends on workflow design and operator awareness. Safe collaboration efficiency shows how much productive time is lost because the robot stops for safety. The risk-assessment update frequency indicates whether formal reviews keep pace with changing site conditions. Situation awareness level, trust degree, and human error rate focus on the human side, showing how well workers understand the robot, how much they trust it, and how often misunderstandings lead to mistakes.
Taken together, these nine indicators connect hardware compliance, control responsiveness, procedure management, and human factors.

5.3. Task and Collaboration Quality Indicators

Quality indicates whether work delivered by the human–robot team meets the required specification and whether the collaboration feels trustworthy and comfortable. Approved co-executed tasks and overall effectiveness report the share of jobs that pass inspection on the first attempt, while rework incidents reveal hidden defects that require correction. Confidence in collaboration quality records the degree of reliability perceived by workers, and collaboration satisfaction offers a broader verdict on the cooperative experience. Physical workload, cognitive workload, and perceived stress level describe the muscular effort, mental effort, and psychological stress borne by workers. Worker collaboration experience captures the intuitiveness of the interface, whereas intention recognition measures how often the robot correctly understands human commands or gestures at the first attempt. Together, these nine indicators present a rounded view of technical application, task quality performance, and worker well-being during HRC.

5.4. Flexibility and Reliability Indicators

Construction sites change quickly and robots must keep running in complex and dangerous environments. Flexibility and reliability are described through five indicators that trace system availability, adaptability, and endurance. Collaboration duration records how long the human–robot system stays operational within a given observation window. Collaborative task reconfiguration time measures how fast workers can break down a new task, generate fresh robot paths, and return to normal work when a plan or site layout is modified. Environmental adaptation ability shows the proportion of unexpected surroundings that the system can handle without pausing, indicating how well perception and control cope with real-world variability. Stability under extreme conditions looks at the loss of accuracy or throughput when the system faces harsh conditions. Finally, robustness reflects the share of output that stays within quality and safety limits when several disturbances occur at once, for example, a task change combined with sensor noise and network communication delay. Together these indicators offer a view of how the human–robot team can deliver work while adapting to construction sites.

5.5. Collaborative Creativity Indicators

Creativity is expected to emerge when robots take over routine or hazardous tasks and leave humans with time for exploration and problem solving. Six indicators are used to follow this process from the earliest psychological conditions to the final project outcomes.
Perceived collaboration creative climate and creative task willingness show whether the workplace encourages the sharing of ideas and whether workers are motivated to take part in higher-level problem solving after routine duties have been handed over to robots. Creative contribution measures the actual number of original and suitable solutions that arise under these conditions. Adopted innovation proposals denote workers’ suggestions that are approved for use, and implemented robot-generated alternatives denote the autonomous options produced by the robot. Creative value-added links the adopted ideas to concrete benefits, such as cost savings, shorter schedules, or higher quality. These six indicators form a chain from climate and willingness, through idea creation, to measurable project value, providing a clear basis for judging the creative return brought by HRC in construction projects.

6. Validation, Weighting, and Evaluation of the Performance Framework

The initial set of 33 indicators, derived from theory and the literature, required empirical validation to ensure their relevance and importance in the context of construction projects. Furthermore, to enhance the practical utility of the framework, it is necessary to move beyond an assumption of equal importance for all indicators. This section details the three-step process of validating the indicator set, establishing a quantitative weighting scheme, and constructing the evaluation model. First, an expert survey was used to screen the indicators for importance and reliability. Second, the Analytic Hierarchy Process (AHP) was employed to determine the relative weights of the retained indicators. Third, a comprehensive model based on the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) was established to evaluate the overall performance of an HRC team.

6.1. Validation of the Proposed Indicators

The validation of the indicators developed for assessing the performance of human–robot collaboration (HRC) teams in construction projects is a crucial step to ensure their reliability and applicability. This section explains how the 33 performance indicators for HRC teams were refined and then empirically validated.
To confirm the relevance and reliability of the proposed human–robot collaboration (HRC) performance indicators, the present study adopted questionnaire-based quantitative validation. The entire procedure was organized in four sequential steps, including expert selection, questionnaire design, data collection and analysis, and reliability and retention tests.
Firstly, fifteen specialists with experience in smart construction were invited to complete the survey. Their profile is shown in Table 3. Their institutional distribution is as follows: seven experts came from universities and research institutes, four from construction contractors, two from construction-robotics technology firms, and two from a government department. This composition ensures balanced input from academic, industrial, and regulatory perspectives. The gender distribution corresponds to the actual demographics of the construction field, especially in roles such as site management, which remain male dominated. Researchers were well represented (46.7%) because HRC is still at a developmental and research-oriented stage, and they have adequate theoretical knowledge for establishing the framework. Most participants had at least three years of professional practice in construction. This distribution reflects the current reality of professionals actively engaged in HRC research and practice, providing a range of experience levels that ensures both depth and currency in the survey feedback. HRC in construction is still an emerging area, and the majority of those working on it are relatively young professionals, such as early-career researchers and engineers, who tend to fall within the 3–5 year experience range.
Secondly, the questionnaire comprised 33 indicators covering five dimensions: Productivity (P), Safety (S), Collaboration Quality (Q), Functional Adaptability (F), and Creativity (C). Each item was evaluated on a five-point Likert scale (1 = “Not important at all”, 5 = “Extremely important”). In addition to the rating scale, a brief definition and an illustrative construction-site scenario were provided for every indicator to ensure common understanding among respondents.
Thirdly, electronic questionnaires were distributed online, and all fifteen experts returned valid responses, yielding a response rate of 100%. After data cleaning, three descriptive indices, namely the mean value ($\bar{x}$), standard deviation (SD), and rank order (R), were calculated for each indicator using SPSS 26.0 (see Table 4). These indices quantify, respectively, the perceived importance, the consensus level, and the relative priority of every indicator.
Finally, Cronbach’s alpha (α) was used to examine internal consistency. The computed value was 0.916, which exceeds 0.9, the benchmark commonly associated with excellent reliability. This result also indicates that the respondents held a highly uniform understanding of the indicators and that the dataset is statistically sound for subsequent analysis. Following mainstream construction-management studies, an importance cut-off of 3.0 (the Likert mid-point) was adopted:
$\bar{x} \geq 3.0 \Rightarrow$ indicator retained (deemed important)
$\bar{x} < 3.0 \Rightarrow$ indicator discarded
Applying this rule produced 25 retained and 8 discarded indicators. Examples of the former include task completion time (P1) and dynamic separation distance (S1), and examples of the latter include trust degree (S8) and human error rate (S9). The high alpha coefficient confirms that these retention decisions rest on a stable and reliable empirical foundation.
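For readers who wish to reproduce this screening step, a minimal Python sketch is given below. It is illustrative only: the rating-table layout, column labels, and randomly generated demo data are assumptions, and the study's actual computations were carried out in SPSS 26.0.

import numpy as np
import pandas as pd

def cronbach_alpha(ratings: pd.DataFrame) -> float:
    # Cronbach's alpha for a respondents-by-indicators rating matrix.
    k = ratings.shape[1]                          # number of indicators
    item_var = ratings.var(axis=0, ddof=1).sum()  # sum of per-indicator variances
    total_var = ratings.sum(axis=1).var(ddof=1)   # variance of each expert's total score
    return (k / (k - 1)) * (1 - item_var / total_var)

def screen_indicators(ratings: pd.DataFrame, cutoff: float = 3.0) -> pd.DataFrame:
    # Apply the mean-importance cut-off (Likert mid-point) described above.
    summary = pd.DataFrame({"mean": ratings.mean(axis=0),
                            "sd": ratings.std(axis=0, ddof=1)})
    summary["rank"] = summary["mean"].rank(ascending=False, method="min").astype(int)
    summary["retained"] = summary["mean"] >= cutoff
    return summary.sort_values("rank")

# Hypothetical usage: 15 experts rating three indicators on a 1-5 scale.
rng = np.random.default_rng(0)
demo = pd.DataFrame(rng.integers(1, 6, size=(15, 3)), columns=["P1", "S1", "S8"])
print(f"Cronbach's alpha = {cronbach_alpha(demo):.3f}")
print(screen_indicators(demo))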
The expert panel’s ratings show that the eight indicators listed in Table 5 fall below the 3.0 importance threshold and were therefore discarded. In brief, some items were judged redundant because their intent is captured by higher-scoring, easier-to-measure indicators (P4 versus P1/P2, and Q5 versus Q4). Others blend multiple constructs or lack an industry standard, making them difficult to define or operationalize with confidence (S5). Several indicators describe antecedent conditions, that is, factors that influence performance rather than express it, so they were excluded (S8 and Q8). Two items raise major data-collection or attribution challenges (S9 and Q7). C5 concerns a practice that remains rare at present. Together, their low means can be explained by a combination of conceptual redundancy, representing influencing factors rather than direct performance outcomes, measurement difficulty, and the current technological maturity of construction HRC sites.

6.2. Determination of Indicator Weights Using AHP

While the initial validation confirmed the importance of the 25 indicators, it did not address their relative impact on overall HRC team performance. Treating all indicators as equal could lead to misallocated resources and inaccurate performance assessments. To address this, AHP was employed to establish a scientific weighting scheme. AHP is a structured technique for organizing and analyzing complex decisions, based on decomposing a problem into a hierarchy and using pairwise comparison matrices to derive the relative importance of decision elements [153].
The same panel of 15 experts was engaged to conduct pairwise comparisons of the five dimensions and of the indicators within each dimension, using the 1–9 scale. The judgments were synthesized to calculate the local and global weights for each indicator. The consistency of the judgments was verified using the Consistency Ratio (CR), with all matrices achieving a CR below the 0.10 threshold, confirming the reliability of the expert inputs. The final calculated weights are presented in Table 6.
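The eigenvector-based weight derivation and consistency check used in AHP can be sketched as follows. This is a minimal illustration under stated assumptions: the dimension-level comparison matrix shown is hypothetical and does not reproduce the expert judgments behind Table 6.

import numpy as np

# Saaty's Random Index values, indexed by matrix order, for the Consistency Ratio.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_weights(A: np.ndarray) -> tuple[np.ndarray, float]:
    # Priority vector (principal eigenvector) and Consistency Ratio of a pairwise matrix.
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = int(np.argmax(eigvals.real))          # index of the principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                           # normalized weights
    ci = (eigvals[k].real - n) / (n - 1)      # Consistency Index
    cr = ci / RI[n] if RI[n] > 0 else 0.0     # Consistency Ratio (accepted when below 0.10)
    return w, cr

# Hypothetical pairwise comparisons of the five dimensions (order: S, P, Q, F, C).
A = np.array([
    [1,   2,   2,   3,   3],
    [1/2, 1,   2,   2,   3],
    [1/2, 1/2, 1,   2,   2],
    [1/3, 1/2, 1/2, 1,   1],
    [1/3, 1/3, 1/2, 1,   1],
], dtype=float)
weights, cr = ahp_weights(A)
print("weights:", np.round(weights, 4), "CR:", round(cr, 4))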
The AHP results reveal a clear hierarchy of priorities. At the dimension level, Safety (WS = 0.2708) emerged as the most critical factor, surpassing Productivity (WP = 0.2327). This finding is significant, as it challenges the common assumption that robots are introduced into construction primarily for productivity gains. The expert panel’s prioritization suggests that the primary concern and potential barrier to HRC adoption is not cost or speed, but rather effective management of new and complex safety risks that arise when humans and robots share a dynamic workspace. This establishes safety performance as the fundamental prerequisite upon which all other performance benefits, including productivity, must be built. This has profound implications for the design of collaborative robots, which must prioritize robust hazard detection and human-centric safety protocols over pure operational speed.
At the indicator level, the new weighting scheme provides a more detailed view of performance priorities. Task completion time (P1) remains the highest-ranked indicator (WP1 = 0.0763), reflecting the importance of schedule performance in construction. The remaining top-five positions are occupied by a mix of safety, productivity, and creativity indicators: dynamic separation distance (S1) (WS1 = 0.0667), collaborative robot’s autonomous hazard identification rate (S4) (WS4 = 0.0580), production capacity (P2) (WP2 = 0.0571), and creative value added (C6) (WC6 = 0.0542). The high ranking of S1 and S4 emphasizes that both physical safety (maintaining distance) and system intelligence (autonomous hazard detection) are crucial. The significant weight given to C6 indicates a growing recognition that the ultimate value of HRC lies not just in performing existing tasks faster, but in enabling innovative solutions and new value creation that were previously unattainable.

6.3. A TOPSIS-Based Evaluation Model for HRC Team Performance

TOPSIS is widely used for ranking a set of alternatives based on multiple criteria [154]. The core principle of TOPSIS is that the optimal alternative should have the shortest geometric distance from the Positive Ideal Solution (PIS) and the longest geometric distance from the Negative Ideal Solution (NIS). The PIS represents the best possible performance score for each criterion, while the NIS represents the worst. This approach is particularly well-suited for evaluating complex systems like HRC teams, as it provides a single, comprehensive score that reflects overall performance across all dimensions. The methodology is adapted from its successful application in evaluating the resilience of complex engineering systems [155,156].
The TOPSIS evaluation process involves the following six steps.
Step 1: Construct the initial decision matrix (X). Assume that m alternatives are evaluated against n indicators, forming an initial decision matrix X:
$X = [x_{ij}]_{m \times n}$
where $x_{ij}$ is the performance score of the i-th alternative on the j-th indicator.
Step 2: Normalize the decision matrix (R). To eliminate dimensional inconsistencies and allow for comparison, the matrix X is normalized using the vector normalization method:
$r_{ij} = \dfrac{x_{ij}}{\sqrt{\sum_{i=1}^{m} x_{ij}^{2}}}$
where $r_{ij}$ is the normalized score.
Step 3: Construct the weighted normalized decision matrix (V). The normalized matrix R is then multiplied by the indicator weights ($w_j$) derived from the AHP analysis in Section 6.2. This step integrates the relative importance of each indicator into the model.
$v_{ij} = w_j \times r_{ij}$
where $w_j$ is the weight of the j-th indicator, and $\sum_{j=1}^{n} w_j = 1$.
Step 4: Determine the Positive Ideal Solution (PIS, A+) and Negative Ideal Solution (NIS, A−). The PIS and NIS are identified from the weighted normalized matrix V.
$A^{+} = \{v_1^{+}, v_2^{+}, \ldots, v_n^{+}\} = \{\max_i v_{i1}, \max_i v_{i2}, \ldots, \max_i v_{in}\}$
$A^{-} = \{v_1^{-}, v_2^{-}, \ldots, v_n^{-}\} = \{\min_i v_{i1}, \min_i v_{i2}, \ldots, \min_i v_{in}\}$
where $v_j^{+}$ is the PIS value of the j-th indicator and $v_j^{-}$ is the NIS value of the j-th indicator.
Step 5: Calculate separation measures. The Euclidean distance of each alternative from the PIS ($S_i^{+}$) and the NIS ($S_i^{-}$) is calculated.
$S_i^{+} = \sqrt{\sum_{j=1}^{n} \left( v_{ij} - v_j^{+} \right)^{2}}$
$S_i^{-} = \sqrt{\sum_{j=1}^{n} \left( v_{ij} - v_j^{-} \right)^{2}}$
Step 6: Calculate the relative closeness to the ideal solution ($C_i$). The final performance score for each alternative is calculated as its relative closeness to the ideal solution.
$C_i = \dfrac{S_i^{-}}{S_i^{+} + S_i^{-}}$
The value of $C_i$ ranges from 0 to 1, and a higher value indicates that the HRC team is closer to the ideal solution, i.e., better overall performance.
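A compact implementation of Steps 2–6 is sketched below (Python with NumPy). The small decision matrix and weight vector are placeholders rather than the case-study data, and all indicators are treated as benefit-type, consistent with the formulas above.

```python
import numpy as np

def topsis_closeness(X: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Relative closeness C_i of each alternative (Steps 2-6 above)."""
    # Step 2: vector normalization of the decision matrix.
    R = X / np.sqrt((X ** 2).sum(axis=0))
    # Step 3: weighted normalized decision matrix.
    V = R * w
    # Step 4: positive and negative ideal solutions (benefit-type indicators).
    A_pos, A_neg = V.max(axis=0), V.min(axis=0)
    # Step 5: Euclidean separation measures from the PIS and NIS.
    S_pos = np.sqrt(((V - A_pos) ** 2).sum(axis=1))
    S_neg = np.sqrt(((V - A_neg) ** 2).sum(axis=1))
    # Step 6: relative closeness to the ideal solution.
    return S_neg / (S_pos + S_neg)

# Placeholder example: four reference alternatives (Grades I-IV) on three indicators.
X = np.array([[1, 1, 1],
              [2, 2, 2],
              [3, 3, 3],
              [4, 4, 4]], dtype=float)
w = np.array([0.5, 0.3, 0.2])                # placeholder weights that sum to 1
print(np.round(topsis_closeness(X, w), 3))   # 0.0 ... 1.0 from worst to best
```

When the rows of X are taken as the four grade-level reference vectors, the resulting closeness values correspond to the level thresholds reported in Table 8.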
To operationalize the TOPSIS model, raw data must be translated into a consistent numerical scale. This study establishes a four-level grading system for each of the 25 HRC performance indicators: Grade I (Poor), Grade II (Fair), Grade III (Good), and Grade IV (Excellent). For calculation, these grades are mapped to a 1–4 numerical scale. This rubric provides a clear and objective standard for data collection, making the evaluation process transparent and repeatable. Table 7 details the specific criteria for each grade level.
Based on the evaluation standards of HRC team performance indicators, an initial evaluation matrix was established. The TOPSIS evaluation method was then applied to calculate the relative closeness scores for four levels of performance indicators, which were used to define the HRC team performance evaluation criteria. The detailed calculation results are shown in Table 8.
According to the calculation results in Table 8, the HRC team performance evaluation criteria are as follows:
(1) Low Performance: $0 \leq C_i < 0.448$;
(2) Fair Performance: $0.448 \leq C_i < 0.723$;
(3) Good Performance: $0.723 \leq C_i < 1$;
(4) Excellent Performance: $C_i = 1$.
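Expressed as a helper function (a minimal sketch using the thresholds derived above), this grading rule can be applied directly to any computed closeness score:

```python
def classify_performance(c: float) -> str:
    """Map a relative closeness score C_i to the HRC team performance level."""
    if c >= 1.0:
        return "Excellent Performance"
    if c >= 0.723:
        return "Good Performance"
    if c >= 0.448:
        return "Fair Performance"
    return "Low Performance"

print(classify_performance(0.664))   # -> "Fair Performance" (the case-study score)
```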

6.4. Case Study

6.4.1. Case Background

To validate the practical applicability of the AHP-TOPSIS framework, a case study was conducted on a realistic construction scenario. The case involves a human–robot team tasked with interior wall finishing for a new high-rise residential building project in Nanjing, China.
The HRC team consists of one skilled worker and a plastering robot. The worker’s responsibilities are not eliminated but are shifted to higher-value activities. These include (1) setting up the work area and the robotic system, (2) preparing and loading material into the robot’s hopper, (3) performing real-time quality control by visually inspecting the robot’s application, and (4) manually finishing complex geometries such as corners, edges, and areas around electrical outlets that are difficult for the robot to access. This division of labor leverages the human’s experience, adaptability, and flexibility. The robot is a fully autonomous wall-finishing system equipped with a robotic arm, a material sprayer, and a sanding tool head. Its primary tasks are to apply plaster evenly across large wall surfaces and then grind them to a smooth finish, thereby performing the most physically demanding and repetitive parts of the construction task.
The workflow is designed as a collaborative cycle. The first step is the setup. The worker prepares a room, sets up the mobile robot platform, and fills the material hopper. The second step is autonomous operation. The robot is activated and performs a scan of the room, plans its path, and begins applying plaster to the main wall sections. The third step is worker activity. While the robot works, the worker prepares the next batch of plaster, monitors the robot’s progress, and begins preliminary work on detailed areas. The final step is finishing and transition. Once the robot completes the main surfaces of a room, the worker steps in to manually plaster and sand the corners and edges, ensuring a high-quality finish. During this time, the robot can be moved to the next room to begin its scanning and setup process, minimizing downtime.

6.4.2. HRC Team Performance Data Collection and Score Calculation

Data for the case study were collected through a combination of project documentation review, operator interviews, and direct observation. These data were then scored according to the grading standards established in Table 7 to generate a numerical score for the 25 indicators. This process forms the initial decision matrix for the TOPSIS analysis. The raw data and corresponding graded scores are presented in Table 9.
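As a minimal illustration of this scoring step, the snippet below maps hypothetical graded observations onto the 1–4 scale of Table 7 to form one row of the decision matrix; the indicator codes and grades shown are placeholders, not the actual values in Table 9.

```python
# Map the four-level grades of Table 7 to the 1-4 numerical scale used in TOPSIS.
GRADE_SCORE = {"I": 1, "II": 2, "III": 3, "IV": 4}   # Poor, Fair, Good, Excellent

# Hypothetical graded observations for a few indicators (not the actual Table 9 data).
observed_grades = {"P1": "III", "P3": "IV", "S1": "IV", "F2": "II", "F3": "II", "C6": "II"}

decision_row = [GRADE_SCORE[grade] for grade in observed_grades.values()]
print(decision_row)   # e.g. [3, 4, 4, 2, 2, 2] -> one row of the decision matrix
```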
With the initial decision matrix established, the six-step TOPSIS calculation was performed. The weights from Table 6 were applied to create the weighted normalized matrix, and the Euclidean distances and the relative closeness score (C) were then calculated. The result for the case study was C = 0.664. Since 0.448 ≤ 0.664 < 0.723, the overall HRC team performance is classified as Fair Performance.

6.4.3. Results Analysis and Suggestions

This result indicates that while the HRC team is effective and provides significant benefits over traditional human–human collaboration, there are specific areas with room for improvement to achieve a Good or Excellent result.
The HRC team performance is high in the dimensions of productivity and safety. High scores in P3 (human–robot ratio) and S1 (dynamic separation distance), combined with their significant AHP weights, contributed positively to the overall score. This shows that the system is well-designed from a core efficiency and physical safety perspective. The robot effectively handles repetitive work, and the human–robot ratio is optimized for the construction task. However, the primary areas that constrain team performance lie in the dimensions of flexibility and reliability and of collaborative creativity. The indicators F2 (collaborative task reconfiguration time) and F3 (environmental adaptation ability) both received a low score of 2. Due to the substantial weight of the flexibility and reliability dimension, these low scores had a negative impact on the final result. Similarly, the indicators belonging to the collaborative creativity dimension scored low. The focus on maximizing productivity appears to leave little incentive for workers to engage in creative problem-solving (C1, C2, C3, C4). Consequently, the team generated minimal creative value added (C6), overlooking a key potential benefit of HRC, namely that humans freed from repetitive tasks can focus on higher-level cognitive work.
Based on this result analysis, the project manager can translate the quantitative findings into improvement strategies. The low score on F2 shows that the task transition is a major drag on HRC team performance. The manager should determine whether the delay stems from insufficient worker training, cumbersome robot setup or teardown steps, or poor site logistics. Possible responses include focused rapid deployment training for the worker, evaluating whether to apply alternative robotic systems that enable more automated mobility, and restructuring the workflow. The low score on F3 indicates that the robot struggles when conditions deviate beyond minor variations, forcing the worker to compensate and thus reducing overall autonomy. To address this, the manager should collaborate with the technology provider to clarify current sensing and decision-making limits and explore software updates or configuration changes that could improve AI-driven problem solving. Indicators of collaborative creativity are also weak, suggesting that the HRC team is perceived mainly as a production resource rather than a source of continuous improvement. Establishing a formal feedback mechanism, such as a brief weekly meeting for the worker to share observations and improvement ideas, can bring actionable insights. Pairing this with small performance-based incentives for adopted ideas that measurably improve efficiency, safety, or quality can improve indicators of collaborative creativity.
Overall, this case demonstrates that the AHP–TOPSIS framework is more than an academic evaluation tool. It generates a quantitative performance rating that reveals the specific strengths and weaknesses of an HRC team and enables managers to make targeted decisions to optimize its performance.

7. Implications

7.1. Theoretical Implications

7.1.1. Bridging the HRC Performance Gap in Construction Projects

Most studies on HRC pay attention to the development of robotic technology and corresponding management methods, yet they seldom provide a systematic and comprehensive evaluation of the performance outcomes produced by HRC. Because of this imbalance, the understanding of HRC effectiveness remains limited. In the construction field, the problem is even more pronounced, since almost no research has proposed a clear indicator system to measure HRC performance. This study addresses the problem by shifting the focus from “what the robot can do” to “how well the human and the robot work together” and develops a complete set of performance indicators for HRC teams in construction projects. This framework allows researchers and practitioners to evaluate HRC from a performance-oriented perspective.
Therefore, the contribution of this paper is not only to fill an existing research gap but also to establish a new and integrated paradigm for HRC performance assessment in construction. By moving away from single or fragmented indicators, such as productivity or safety alone, and by proposing five dimensions (productivity, safety, job quality, flexibility and reliability, and collaborative creativity), this study builds a solid conceptual foundation for future theoretical and empirical work in this field.

7.1.2. Team-Level Performance Indicators in HRC Contexts

Most previous HRC studies focus on the performance of individual human workers, examining separate topics such as human workload, trust in robots, or acceptance of new technology. Because of this single-point view, HRC team performance remains unclear; in other words, we know little about how well the whole HRC unit really works. To close this gap, this research focuses on team performance and develops a set of indicators that measure HRC performance in construction. The final framework contains five dimensions and 25 indicators. These dimensions are not independent; they influence each other in complex ways. For instance, better safety (e.g., a shorter robot hazard response time) can reduce downtime and thus improve productivity. Likewise, a lower physical and cognitive workload (Q4) can free mental resources and increase creative task willingness (C2).
Because of such interactions, future theories should go beyond fragmented views and study these dynamic, system-level relationships. This paper provides the basic building blocks for this holistic understanding and shows the necessity of modeling HRC performance as an integrated whole rather than as isolated parts.

7.1.3. A Multi-Theoretical Framework for Comprehensive HRC Team Performance

HRC in construction projects is influenced by many different factors, such as physical ergonomics, mental load, and safety requirements. Because these factors are so diverse, no single theory can explain all performance dimensions; an integrated, multi-theoretical view is therefore necessary. This study builds the indicator system by combining the CLT, TAM, and TRT. The CLT describes the mental effort of human workers, the TAM explains how workers accept and collaborate with robots, and the TRT clarifies the cooperative roles of humans and robots within the team. By merging these three theories, which come from cognitive psychology, information systems, and organizational behavior, HRC team performance can be evaluated more completely and precisely, reflecting the real situation on a construction site.

7.1.4. A Methodological Contribution to Performance Assessment

This study also contributes a new method for HRC team performance assessment in construction. By integrating AHP, TOPSIS, and a case study, it provides a replicable template for developing comprehensive evaluation models. This multi-stage approach is more robust than single-method assessments and can be adapted to evaluate other emerging technologies in the construction sector.

7.2. Practical Implications

The HRC team performance assessment framework developed in this study provides a systematic and comprehensive approach for construction projects. This framework offers decision-making support for the effective deployment of HRC systems and provides a reference for the design of construction robots.

7.2.1. Effective Deployment and Management of HRC in Construction

For construction organizations, the effective deployment of HRC systems depends on clear and measurable indicators that prove value and guide strategic decisions. Without such a robust framework, firms will face significant challenges in justifying the substantial investments required for HRC technologies and in optimizing their operational strategies for maximum benefit.
This study offers exactly this benchmark. The 25 validated indicators, organized under five dimensions, give project managers a concrete toolkit. Managers can turn rough observations into data-driven decisions. They can spot bottlenecks early, calculate return on investment, and judge whether faster progress comes from true process improvement or simply from adding extra resources. In short, the framework tells managers whether robots improve output in line with their cost.

7.2.2. Improvement of Construction Project Productivity

In construction, one strong reason to use HRC is to improve productivity and work efficiency. HRC helps solve problems such as worker shortages, heavy manual work, and dangerous job conditions. This framework gives clear and simple indicators to measure these productivity gains. Important metrics are task completion time (P1) to check if work stays on schedule, production capacity (P2) to count how much output is produced, and collaboration efficiency (P5) to see how well people and robots turn their effort into finished work. By tracking these indicators, project managers can find bottlenecks, divide tasks better between humans and robots, and make sure that HRC speeds up work and cuts cost.

7.2.3. Improvement of On-Site Safety and Worker Well-Being in HRC

Construction work is dangerous, with moving machines, heavy loads, and unstable surfaces all around. When robots start to share the same narrow workspace with workers, the traditional practice of keeping machines behind fences is no longer practical. Safety thinking must change from simple physical isolation to active human–robot coexistence that protects both the body and the mind of workers.
The framework proposed in this study answers this need by combining real-time safety signals with human well-being measures. It monitors the dynamic separation distance (S1) between each robot and nearby workers, records the contact force for collision compliance (S2) if unintended contact occurs, and times the robot hazard response (S3) to show how quickly robots react to danger. At the same time, it tracks the physical and cognitive workload (Q4) to make sure workers do not suffer excessive strain, and it captures collaboration satisfaction (Q6) so that workers can express how comfortable they feel when teaming with robots. By observing these indicators, managers can spot risks early, adjust robot speed or path within seconds, and improve training or work plans before small problems grow into accidents.
This integrated view moves safety management from a passive “wait-and-repair” style to a proactive “sense-and-prevent” approach. It also places worker comfort and mental health on the same level as hard safety indicators, matching modern ideas in occupational health that value prevention, continuous monitoring, and respect for workers.

7.2.4. Fostering Flexibility and Innovation in HRC

The construction environment is inherently dynamic, characterized by constantly changing tasks, processes, and unpredictable scenarios. This necessitates a high degree of adaptability from both human and robotic systems. At the same time, HRC can take over routine or dangerous work, giving human workers time and energy to think creatively and solve new problems.
The framework captures this need through two dimensions. The Flexibility and Reliability dimension uses indicators such as collaborative task reconfiguration time (F2) and environmental adaptation ability (F3) to show how quickly an HRC team can adjust when site conditions or project plans change. The Collaborative Creativity dimension, with measures like creative contribution (C1) and creative task willingness (C2), illustrates how well HRC stimulates on-site ideas and quick problem-solving when robots handle repetitive jobs.
By tracking flexibility and creativity together, managers come to see HRC not just as a source of extra speed but as a driver of continuous learning and fresh thinking on site. When robots can be reprogrammed in minutes and workers are free to innovate, the construction industry can move from a rigid, step-by-step model to one that is more agile, responsive, and innovative, ready to meet the constant changes in modern construction projects.

8. Conclusions

This paper proposes a new, theory-oriented framework of performance indicators for HRC teams in the construction industry, thereby shifting the evaluation perspective from individual actors to the collaborative team. Based on Cognitive Load Theory, the Technology Acceptance Model, and Team Role Theory, the framework is deliberately designed to capture cognitive, technological, and collaborative aspects of team performance that are neglected by conventional assessment approaches. After questionnaire-based empirical validation, the final framework contains five performance dimensions with twenty-five indicators. These dimensions span productivity, safety, job quality, flexibility and reliability, and collaborative creativity, and together they depict HRC team performance as a comprehensive property rather than a simple sum of individual outputs. In this way, the research fills a gap in construction HRC studies, where previous investigations have tended to examine human and robot performance in isolation. In addition, the AHP method is used to establish a scientific weighting scheme and reveals a prioritization structure in which the Safety dimension outranks Productivity, challenging the common perception that robots are introduced primarily to accelerate work. To translate the indicator system into an operational assessment tool, the study couples the AHP weights with graded evaluation criteria and implements a TOPSIS-based evaluation model, deriving relative closeness thresholds that distinguish between Low, Fair, Good, and Excellent HRC team performance levels. Finally, an interior wall plastering case consisting of a plastering robot and one skilled worker is used to demonstrate the framework’s practical utility. With the application of the AHP-TOPSIS model, the team’s overall performance (C = 0.664) falls in the Fair range. The case analysis shows relatively strong performance in Productivity and Safety but reveals weaknesses in Flexibility and Reliability and in Collaborative Creativity. This underscores the managerial risk of treating HRC only as a production accelerator rather than as a vehicle for adaptability and innovation.
From the practical standpoint, the proposed framework supplies project managers and site engineers with a scientific and easy-to-operate instrument for enhancing human–robot teaming on construction sites. By applying the indicator set, practitioners can conduct systematic diagnosis of HRC deployments and find out weaknesses that demand improvement. This team-centric evaluation mechanism ensures that critical factors are explicitly considered, thus reducing the risk that neglecting such factors will undermine overall HRC team performance. At the same time, effective HRC allows human workers to concentrate on complex and creative problem-solving tasks, while robots take on physically demanding or highly repetitive operations. Consequently, the framework functions not only as a benchmark for measuring success but also as a practical guideline for integrating robots into construction teams.
Looking to the future, several research directions are recommended to deepen and extend the present work. First, the framework could be integrated with real-time data streams from IoT sensors and BIM to create a dynamic and continuous performance monitoring dashboard for project managers, drawing on sensor-driven safety distances, log-based uptime, and digital reporting of rework to reduce subjectivity and enable near-real-time feedback. Second, longitudinal studies are needed to track HRC team performance over the entire project lifecycle, using the framework to measure performance degradation and long-term sustainability. Third, as the validation data for this framework come from China, cross-regional applications are needed to examine how cultural norms, workforce practices, and regulatory requirements influence observed indicator scores; multi-country pilots could establish local baselines and threshold calibrations and test the robustness of perception-based indicators across differing safety cultures. Finally, future research could employ methods such as structural equation modeling to explore the causal relationships between the five performance dimensions, testing hypotheses such as whether improved safety directly leads to higher perceived quality and creative willingness.

Author Contributions

Conceptualization, G.Z., X.L. and Q.L.; methodology, G.Z., X.L., L.Z., W.W. and Q.L.; validation, G.Z., X.L. and Q.L.; investigation, G.Z., L.Z., W.L. and W.W.; resources, G.Z., X.L. and Q.L.; data curation, G.Z., L.Z. and W.L.; writing—original draft preparation, G.Z., X.L., L.Z., W.L. and Q.L.; writing—review and editing, G.Z., X.L. and Q.L.; supervision, X.L. and Q.L.; project administration, G.Z.; funding acquisition, G.Z. and L.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key Research and Development Program of China (Grant No. 2022YFC3802201), National Natural Science Foundation of China (Grant No. 72301131), and Postgraduate Research & Practice Innovation Program of Jiangsu Province, China (Grant No. KYCX22_0218).

Data Availability Statement

The authors confirm that the data supporting the findings of this study are available within the article.

Acknowledgments

We sincerely appreciate all the experts who participated in this research interview.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
ICT: Information and Communication Technology
HRC: Human–Robot Collaboration
STCR: Single-Task Construction Robot
HRI: Human–Robot Interaction
HRT: Human–Robot Team
IPO: Input–Process–Output
CLT: Cognitive Load Theory
TAM: Technology Acceptance Model
TRT: Team Role Theory
KPI: Key Performance Indicator
AHP: Analytic Hierarchy Process
TOPSIS: Technique for Order Preference by Similarity to Ideal Solution

References

  1. Wu, H.; Zhong, B.; Li, H.; Guo, J.; Wang, Y. On-Site Construction Quality Inspection Using Blockchain and Smart Contracts. J. Manag. Eng. 2021, 37, 04021065. [Google Scholar] [CrossRef]
  2. Edirisinghe, R. Digital skin of the construction site: Smart sensor technologies towards the future smart construction site. Eng. Constr. Archit. Manag. 2019, 26, 184–223. [Google Scholar] [CrossRef]
  3. Pan, M.; Linner, T.; Pan, W.; Cheng, H.M.; Bock, T. Influencing factors of the future utilisation of construction robots for buildings: A Hong Kong perspective. J. Build. Eng. 2020, 30, 101220. [Google Scholar] [CrossRef]
  4. Kamari, M.; Ham, Y. AI-based risk assessment for construction site disaster preparedness through deep learning-based digital twinning. Autom. Constr. 2022, 134, 104091. [Google Scholar] [CrossRef]
  5. Bock, T.; Linner, T. Construction Robots: Elementary Technologies and Single-Task Construction Robots; Cambridge University Press: Cambridge, UK, 2016. [Google Scholar]
  6. Jung, K.; Chu, B.; Hong, D. Robot-based construction automation: An application to steel beam assembly (Part II). Autom. Constr. 2013, 32, 62–79. [Google Scholar] [CrossRef]
  7. Yu, S.-N.; Ryu, B.-G.; Lim, S.-J.; Kim, C.-J.; Kang, M.-K.; Han, C.-S. Feasibility verification of brick-laying robot using manipulation trajectory and the laying pattern optimization. Autom. Constr. 2009, 18, 644–655. [Google Scholar] [CrossRef]
  8. Mir-Nasiri, N.; J, H.S.; Ali, M.H. Portable Autonomous Window Cleaning Robot. Procedia Comput. Sci. 2018, 133, 197–204. [Google Scholar] [CrossRef]
  9. Saidi, K.S.; Bock, T.; Georgoulas, C. Robotics in Construction. In Springer Handbooks; Springer International Publishing: Cham, Switzerland, 2016; pp. 1493–1520. [Google Scholar]
  10. Tanimoto, T.; Shinohara, K.; Yoshinada, H. Research on effective teleoperation of construction machinery fusing manual and automatic operation. ROBOMECH J. 2017, 4, 14. [Google Scholar] [CrossRef]
  11. Bock, T. Construction robotics. Auton. Robot. 2007, 22, 201–209. [Google Scholar] [CrossRef]
  12. Pan, Y.; Zhang, L. Roles of artificial intelligence in construction engineering and management: A critical review and future trends. Autom. Constr. 2021, 122, 103517. [Google Scholar] [CrossRef]
  13. Wu, M.; Lin, J.-R.; Zhang, X.-H. How human-robot collaboration impacts construction productivity: An agent-based multi-fidelity modeling approach. Adv. Eng. Inform. 2022, 52, 101589. [Google Scholar] [CrossRef]
  14. Liang, C.-J.; Wang, X.; Kamat Vineet, R.; Menassa Carol, C. Human–Robot Collaboration in Construction: Classification and Research Trends. J. Constr. Eng. Manag. 2021, 147, 03121006. [Google Scholar] [CrossRef]
  15. Hentout, A.; Aouache, M.; Maoudj, A.; Akli, I. Human–robot interaction in industrial collaborative robotics: A literature review of the decade 2008–2017. Adv. Robot. 2019, 33, 764–799. [Google Scholar] [CrossRef]
  16. Matheson, E.; Minto, R.; Zampieri, E.G.; Faccio, M.; Rosati, G. Human–robot collaboration in manufacturing applications: A review. Robotics 2019, 8, 100. [Google Scholar] [CrossRef]
  17. Malik, A.A.; Bilberg, A. Developing a reference model for human–robot interaction. Int. J. Interact. Des. Manuf. 2019, 13, 1541–1547. [Google Scholar] [CrossRef]
  18. Dörfler, K.; Sandy, T.; Giftthaler, M.; Gramazio, F.; Kohler, M.; Buchli, J. Mobile Robotic Brickwork. In Robotic Fabrication in Architecture, Art and Design 2016; Reinhardt, D., Saunders, R., Burry, J., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 204–217. [Google Scholar]
  19. Stumm, S.; Braumann, J.; von Hilchen, M.; Brell-Cokcan, S. On-Site Robotic Construction Assistance for Assembly Using A-Priori Knowledge and Human-Robot Collaboration. In Advances in Robot Design and Intelligent Control; Springer International Publishing: Cham, Switzerland, 2017; pp. 583–592. [Google Scholar]
  20. Li, R.Y.M. Robots for the Construction Industry. In An Economic Analysis on Automated Construction Safety: Internet of Things, Artificial Intelligence and 3D Printing; Springer: Singapore, 2018; pp. 23–46. [Google Scholar]
  21. Bogue, R. What are the prospects for robots in the construction industry? Ind. Robot: Int. J. 2018, 45, 1–6. [Google Scholar] [CrossRef]
  22. Zhu, Z.; Dutta, A.; Dai, F. Exoskeletons for manual material handling—A review and implication for construction applications. Autom. Constr. 2021, 122, 103493. [Google Scholar] [CrossRef]
  23. Kusuda, Y. A remotely controlled robot operates construction machines. Ind. Robot 2003, 30, 422–425. [Google Scholar] [CrossRef]
  24. Irizarry, J.; Costa, D.B. Exploratory Study of Potential Applications of Unmanned Aerial Systems for Construction Management Tasks. J. Manag. Eng. 2016, 32, 05016001. [Google Scholar] [CrossRef]
  25. Kim, S.; Irizarry, J. Human Performance in UAS Operations in Construction and Infrastructure Environments. J. Manag. Eng. 2019, 35, 04019026. [Google Scholar] [CrossRef]
  26. Ardiny, H.; Witwicki, S.; Mondada, F. Construction automation with autonomous mobile robots: A review. In Proceedings of the 2015 3rd RSI International Conference on Robotics and Mechatronics (ICROM), Tehran, Iran, 7–9 October 2015; pp. 418–424. [Google Scholar]
  27. Kolvenbach, H.; Wisth, D.; Buchanan, R.; Valsecchi, G.; Grandia, R.; Fallon, M.; Hutter, M. Towards autonomous inspection of concrete deterioration in sewers with legged robots. J. Field Robot. 2020, 37, 1314–1327. [Google Scholar] [CrossRef]
  28. Wang, B.; Zheng, P.; Yin, Y.; Shih, A.; Wang, L. Toward human-centric smart manufacturing: A human-cyber-physical systems (HCPS) perspective. J. Manuf. Syst. 2022, 63, 471–490. [Google Scholar] [CrossRef]
  29. Li, R.; Zou, Z. Enhancing construction robot learning for collaborative and long-horizon tasks using generative adversarial imitation learning. Adv. Eng. Inform. 2023, 58, 102140. [Google Scholar] [CrossRef]
  30. Jung, M.; Hinds, P. Robots in the Wild: A Time for More Robust Theories of Human-Robot Interaction. J. Hum.-Robot Interact. 2018, 7, 2. [Google Scholar] [CrossRef]
  31. Beer, J.M.; Fisk, A.D.; Rogers, W.A. Toward a framework for levels of robot autonomy in human-robot interaction. J. Hum.-Robot Interact. 2014, 3, 74–99. [Google Scholar] [CrossRef]
  32. Park, S.; Wang, X.; Menassa, C.C.; Kamat, V.R.; Chai, J.Y. Natural language instructions for intuitive human interaction with robotic assistants in field construction work. Autom. Constr. 2024, 161, 105345. [Google Scholar] [CrossRef]
  33. Lee, J.S.; Ham, Y.; Park, H.; Kim, J. Challenges, tasks, and opportunities in teleoperation of excavator toward human-in-the-loop construction automation. Autom. Constr. 2022, 135, 104119. [Google Scholar] [CrossRef]
  34. Zhou, T.; Zhu, Q.; Shi, Y.; Du, J. Construction Robot Teleoperation Safeguard Based on Real-Time Human Hand Motion Prediction. J. Constr. Eng. Manag. 2022, 148, 04022040. [Google Scholar] [CrossRef]
  35. You, S.; Kim, J.-H.; Lee, S.; Kamat, V.; Robert, L.P. Enhancing perceived safety in human–robot collaborative construction using immersive virtual environments. Autom. Constr. 2018, 96, 161–170. [Google Scholar] [CrossRef]
  36. Park, S.; Yu, H.; Menassa, C.C.; Kamat, V.R. A Comprehensive Evaluation of Factors Influencing Acceptance of Robotic Assistants in Field Construction Work. J. Manag. Eng. 2023, 39, 04023010. [Google Scholar] [CrossRef]
  37. Shayesteh, S.; Jebelli, H. Toward Human-in-the-Loop Construction Robotics: Understanding Workers’ Response through Trust Measurement during Human-Robot Collaboration. In Proceedings of the Construction Research Congress 2022, Arlington, VA, USA, 9–12 March 2022; pp. 631–639. [Google Scholar] [CrossRef]
  38. Ordaz-Rivas, E.; Torres-Treviño, L. Improving performance in swarm robots using multi-objective optimization. Math. Comput. Simul. 2024, 223, 433–457. [Google Scholar] [CrossRef]
  39. Yang, S.; Demichela, M.; Geng, J.; Wang, L.; Ling, Z.W. Risk-based performance assessment from fully manual to human-robot teaming in pressurized tank inspection operations. Saf. Sci. 2024, 176, 106543. [Google Scholar] [CrossRef]
  40. Damian, D.D.; Hernandez-Arieta, A.; Lungarella, M.; Pfeifer, R. An automated metrics set for mutual adaptation between human and robotic device. In Proceedings of the 2009 IEEE International Conference on Rehabilitation Robotics, Kyoto, Japan, 23–26 June 2009; pp. 139–146. [Google Scholar]
  41. Olsen, D.R.; Goodrich, M.A. Metrics for evaluating human-robot interactions. In PERMIS; Faculty Publications: Gaithersburg, MD, USA, 2003; p. 4. [Google Scholar]
  42. Murphy, R.R.; Schreckenghost, D. Survey of metrics for human-robot interaction. In Proceedings of the 2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Tokyo, Japan, 3–6 March 2013; pp. 197–198. [Google Scholar]
  43. Steinfeld, A.; Fong, T.; Kaber, D.; Lewis, M.; Scholtz, J.; Schultz, A.; Goodrich, M. Common metrics for human-robot interaction. In Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction, Lake City, UT, USA, 2–3 March 2006; pp. 33–40. [Google Scholar]
  44. Abou Saleh, J.; Karray, F. Towards generalized performance metrics for human-robot interaction. In Proceedings of the 2010 International Conference on Autonomous and Intelligent Systems, AIS 2010, Povoa de Varzim, Portugal, 21–23 June 2010; pp. 1–6. [Google Scholar]
  45. Glas, D.F.; Kanda, T.; Ishiguro, H.; Hagita, N. Teleoperation of multiple social robots. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 2011, 42, 530–544. [Google Scholar] [CrossRef]
  46. Simone, V.D.; Pasquale, V.D.; Giubileo, V.; Miranda, S. Human-Robot Collaboration: An analysis of worker’s performance. Procedia Comput. Sci. 2022, 200, 1540–1549. [Google Scholar] [CrossRef]
  47. Chen, J.Y.C.; Barnes, M.J.; Harper-Sciarini, M. Supervisory Control of Multiple Robots: Human-Performance Issues and User-Interface Design. IEEE Trans. Syst. Man Cybern. Part C Appl. Revi. 2011, 41, 435–454. [Google Scholar] [CrossRef]
  48. Viswesvaran, C.; Ones, D.S. Perspectives on models of job performance. Int. J. Sel. Assess. 2000, 8, 216–226. [Google Scholar] [CrossRef]
  49. Motowidlo, S.J. Job performance. Handb. Psychol. Ind. Organ. Psychol. 2003, 12, 39–53. [Google Scholar]
  50. Skehan, P. Processing Perspectives on Task Performance; John Benjamins Publishing Company: Amsterdam, The Netherlands, 2014; Volume 5. [Google Scholar]
  51. Greenidge, D.; Devonish, D.; Alleyne, P. The relationship between ability-based emotional intelligence and contextual performance and counterproductive work behaviors: A test of the mediating effects of job satisfaction. Hum. Perform. 2014, 27, 225–242. [Google Scholar] [CrossRef]
  52. Charbonnier-Voirin, A.; Roussel, P. Adaptive performance: A new scale to measure individual performance in organizations. Can. J. Adm. Sci. Rev. Can. Des Sci. De L’Adm. 2012, 29, 280–293. [Google Scholar] [CrossRef]
  53. Schreckenghost, D.; Milam, T.; Fong, T. Measuring performance in real time during remote human-robot operations with adjustable autonomy. IEEE Intell. Syst. 2010, 25, 36–45. [Google Scholar] [CrossRef]
  54. Bartlett, K.; Blanco, J.; Johnson, J.; Fitzgerald, B.; Mullin, M.; Ribeirinho, M. Rise of the Platform Era: The Next Chapter in Construction Technology. 2020. Available online: https://www.mckinsey.com/industries/private-equity-and-principal-investors/our-insights/rise-of-the-platform-era-the-next-chapter-in-construction-technology (accessed on 15 June 2025).
  55. Sammer, G.; Blecker, C.; Gebhardt, H.; Bischoff, M.; Stark, R.; Morgen, K.; Vaitl, D. Relationship between regional hemodynamic activity and simultaneously recorded EEG-theta associated with mental arithmetic-induced workload. Hum. Brain Mapp. 2007, 28, 793–803. [Google Scholar] [CrossRef]
  56. Lim, B.C.; Klein, K.J. Team mental models and team performance: A field study of the effects of team mental model similarity and accuracy. J. Organ. Behav. Int. J. Ind. Occup. Organ. Psychol. Behav. 2006, 27, 403–418. [Google Scholar] [CrossRef]
  57. Zanchettin, A.M.; Ceriani, N.M.; Rocco, P.; Ding, H.; Matthias, B. Safety in human-robot collaborative manufacturing environments: Metrics and control. IEEE Trans. Autom. Sci. Eng. 2015, 13, 882–893. [Google Scholar] [CrossRef]
  58. Adami, P.; Rodrigues, P.B.; Woods, P.J.; Becerik-Gerber, B.; Soibelman, L.; Copur-Gencturk, Y.; Lucas, G. Impact of VR-Based Training on Human–Robot Interaction for Remote Operating Construction Robots. J. Comput. Civ. Eng. 2022, 36, 04022006. [Google Scholar] [CrossRef]
  59. Morais, C.; Estrada-Lugo, H.D.; Tolo, S.; Jacques, T.; Moura, R.; Beer, M.; Patelli, E. Robust data-driven human reliability analysis using credal networks. Reliab. Eng. Syst. Saf. 2022, 218, 107990. [Google Scholar] [CrossRef]
  60. Rapolienė, L.; Razbadauskas, A.; Sąlyga, J.; Martinkėnas, A. Stress and fatigue management using balneotherapy in a short-time randomized controlled trial. Evid.-Based Complement. Altern. Med. 2016, 2016, 9631684. [Google Scholar] [CrossRef]
  61. Neerincx, M.A.; Diggelen, J.v.; Breda, L.v. Interaction design patterns for adaptive human-agent-robot teamwork in high-risk domains. In Proceedings of the International Conference on Engineering Psychology and Cognitive Ergonomics, Toronto, ON, Canada, 17–22 July 2016; pp. 211–220. [Google Scholar]
  62. Saad, W.; Glass, A.L.; Mandayam, N.B.; Poor, H.V. Toward a consumer-centric grid: A behavioral perspective. Proc. IEEE 2016, 104, 865–882. [Google Scholar] [CrossRef]
  63. Takeno, J. Self-Aware Robots: On the Path to Machine Consciousness; CRC Press: Boca Raton, FL, USA, 2022. [Google Scholar]
  64. Elara, M.R.; Calderón, C.A.A.; Zhou, C.; Wijesoma, W.S. Experimenting extended neglect tolerance model for human robot interactions in service missions. In Proceedings of the 2010 11th International Conference on Control Automation Robotics & Vision, Singapore, 7–10 December 2010; pp. 2024–2029. [Google Scholar]
  65. Willems, J.; Schmidthuber, L.; Vogel, D.; Ebinger, F.; Vanderelst, D. Ethics of robotized public services: The role of robot design and its actions. Gov. Inf. Q. 2022, 39, 101683. [Google Scholar] [CrossRef]
  66. Olsen, D.R., Jr.; Wood, S.B. Fan-out: Measuring human control of multiple robots. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Vienna, Austria, 24–29 April 2004; pp. 231–238. [Google Scholar]
  67. Schuldt, A.; Berndt, J.O.; Herzog, O. The interaction effort in autonomous logistics processes: Potential and limitations for cooperation. In Autonomous Cooperation and Control in Logistics; Springer: Berlin/Heidelberg, Germany, 2011; pp. 77–90. [Google Scholar]
  68. Savazzi, S.; Sigg, S.; Vicentini, F.; Kianoush, S.; Findling, R. On the use of stray wireless signals for sensing: A look beyond 5G for the next generation of industry. Computer 2019, 52, 25–36. [Google Scholar] [CrossRef]
  69. Edwards, C.; Edwards, A.; Stoll, B.; Lin, X.; Massey, N. Evaluations of an artificial intelligence instructor’s voice: Social Identity Theory in human-robot interactions. Comput. Hum. Behav. 2019, 90, 357–362. [Google Scholar] [CrossRef]
  70. Dikmen, M.; Burns, C. The effects of domain knowledge on trust in explainable AI and task performance: A case of peer-to-peer lending. Int. J. Hum.-Comput. Stud. 2022, 162, 102792. [Google Scholar] [CrossRef]
  71. Zaccaro, S.J.; Rittman, A.L.; Marks, M.A. Team leadership. Leadersh. Q. 2001, 12, 451–483. [Google Scholar] [CrossRef]
  72. Zhang, L.; Zhang, X. Multi-objective team formation optimization for new product development. Comput. Ind. Eng. 2013, 64, 804–811. [Google Scholar] [CrossRef]
  73. Wang, H.; Lewis, M.; Chien, S.-Y. Teams organization and performance analysis in autonomous human-robot teams. In Proceedings of the 10th Performance Metrics for Intelligent Systems Workshop, Baltimore, MD, USA, 28–30 September 2010; pp. 251–257. [Google Scholar]
  74. Korivand, S.; Galvani, G.; Ajoudani, A.; Gong, J.Q.; Jalili, N. Optimizing Human-Robot Teaming Performance through Q-Learning-Based Task Load Adjustment and Physiological Data Analysis. Sensors 2024, 24, 2817. [Google Scholar] [CrossRef] [PubMed]
  75. Nikolaidis, S.; Lasota, P.; Ramakrishnan, R.; Shah, J. Improved human-robot team performance through cross-training, an approach inspired by human team training practices. Int. J. Robot. Res. 2015, 34, 1711–1730. [Google Scholar] [CrossRef]
  76. You, S.; Robert, L.P. Team robot identification theory (TRIT): Robot attractiveness and team identification on performance and viability in human-robot teams. J. Supercomput. 2022, 78, 19684–19706. [Google Scholar] [CrossRef]
  77. Liu, P.; Li, Z. Task complexity: A review and conceptualization framework. Int. J. Ind. Ergon. 2012, 42, 553–568. [Google Scholar] [CrossRef]
  78. Du, G.; Zhang, P. Markerless human–robot interface for dual robot manipulators using Kinect sensor. Robot. Comput.-Integr. Manuf. 2014, 30, 150–159. [Google Scholar] [CrossRef]
  79. Scilimati, V.; Petitti, A.; Boccadoro, P.; Colella, R.; Di Paola, D.; Milella, A.; Grieco, L.A. Industrial Internet of things at work: A case study analysis in robotic-aided environmental monitoring. IET Wirel. Sens. Syst. 2017, 7, 155–162. [Google Scholar] [CrossRef]
  80. Doisy, G.; Meyer, J.; Edan, Y. The impact of human–robot interface design on the use of a learning robot system. IEEE Trans. Hum.-Mach. Syst. 2014, 44, 788–795. [Google Scholar] [CrossRef]
  81. Berg, S.; Neubauer, C.; Robison, C.; Kroninger, C.; Schaefer, K.E.; Krausman, A. Exploring Resilience and Cohesion in Human-Autonomy Teams: Models and Measurement. In Proceedings of the International Conference on Applied Human Factors and Ergonomics, New York, NY, USA, 25–29 July 2021; pp. 121–127. [Google Scholar]
  82. Lakhmani, S.G.; Neubauer, C.; Krausman, A.; Fitzhugh, S.M.; Berg, S.K.; Wright, J.L.; Rovira, E.; Blackman, J.J.; Schaefer, K.E. Cohesion in human–autonomy teams: An approach for future research. Theor. Issues Ergon. Sci. 2022, 23, 687–724. [Google Scholar] [CrossRef]
  83. Liu, J.; Love, P.E.D.; Davis, P.R.; Smith, J.; Regan, M. Conceptual Framework for the Performance Measurement of Public-Private Partnerships. J. Infrastruct. Syst. 2015, 21, 04014023. [Google Scholar] [CrossRef]
  84. Elyamany, A.; Basha, I.; Zayed, T. Performance Evaluating Model for Construction Companies: Egyptian Case Study. J. Constr. Eng. Manag. 2007, 133, 574–581. [Google Scholar] [CrossRef]
  85. Haponava, T.; Al-Jibouri, S. Proposed System for Measuring Project Performance Using Process-Based Key Performance Indicators. J. Manag. Eng. 2012, 28, 140–149. [Google Scholar] [CrossRef]
  86. Hindiyeh, R.I.; Ocloo, W.K.; Cross, J.A. Systematic Review of Research Trends in Engineering Team Performance. Eng. Manag. J. 2023, 35, 4–28. [Google Scholar] [CrossRef]
  87. He, X.; Yam, M.C.; Zhou, Z.; Zayed, T.; Ke, K. Inhomogeneity in mechanical properties of ductile iron pipes: A comprehensive analysis. Eng. Fail. Anal. 2024, 163, 108459. [Google Scholar] [CrossRef]
  88. He, X.; Ke, K.; Zhou, X. Closed-form design solutions for parallelogram hollow structural sections under bending scenarios. J. Constr. Steel Res. 2023, 207, 107934. [Google Scholar] [CrossRef]
  89. He, X.; Yam, M.C.; Ke, K.; Zhou, X.; Zhang, H.; Gu, Z. Behaviour insights on damage-control composite beam-to-beam connections with replaceable elements. Steel Compos. Struct. 2023, 46, 773–791. [Google Scholar]
  90. Fu, Y.; Chen, J.; Lu, W. Human-robot collaboration for modular construction manufacturing: Review of academic research. Autom. Constr. 2024, 158, 105196. [Google Scholar] [CrossRef]
  91. Jang, Y.; Jeong, I.; Chauhan, H.; Pakbaz, A. Workers’ Physiological/Psychological Responses during Human-Robot Collaboration in an Immersive Virtual Reality Environment. In Computing in Civil Engineering 2023, Proceedings of the ASCE International Conference on Computing in Civil Engineering, Corvallis, OR, USA, 25–28 June 2023; ASCE: Reston, VA, USA, 2024; pp. 461–469. [Google Scholar]
  92. Sam, M.; Franz, B.; Sey-Taylor, E.; McCarty, C. Evaluating the Perception of Human-Robot Collaboration among Construction Project Managers. In Construction Research Congress 2022; ASCE: Reston, VA, USA, 2022; pp. 550–559. [Google Scholar]
  93. Wang, L.; Xue, X.; Yang, R.J.; Luo, X.; Zhao, H. Built environment and management: Exploring grand challenges and management issues in built environment. Front. Eng. Manag. 2019, 6, 313–326. [Google Scholar] [CrossRef]
  94. Wang, T.; Abdallah, M.; Clevenger, C.; Monghasemi, S. Time–cost–quality trade-off analysis for planning construction projects. Eng. Constr. Archit. Manag. 2021, 28, 82–100. [Google Scholar] [CrossRef]
  95. Pan, Z.; Yu, Y. Learning Multi-Granularity Task Primitives from Construction Videos for Human-Robot Collaboration. In Computing in Civil Engineering 2023, Proceedings of the ASCE International Conference on Computing in Civil Engineering, Corvallis, OR, USA, 25–28 June 2023; ASCE: Reston, VA, USA, 2024; pp. 674–681. [Google Scholar]
  96. Sun, Y.; Jeelani, I.; Gheisari, M. Safe human-robot collaboration in construction: A conceptual perspective. J. Saf. Res. 2023, 86, 39–51. [Google Scholar] [CrossRef]
  97. Gervasi, R.; Mastrogiacomo, L.; Franceschini, F. A conceptual framework to evaluate human-robot collaboration. Int. J. Adv. Manuf. Technol. 2020, 108, 841–865. [Google Scholar] [CrossRef]
  98. Molitor, M.; Renkema, M. Human-Robot Collaboration in a Smart Industry Context: Does HRM Matter? In Smart Industry—Better Management; Bondarouk, T., Olivas-Luján, M.R., Eds.; Advanced Series in Management; Emerald Publishing Limited: Leeds, UK, 2022; Volume 28, pp. 105–123. [Google Scholar]
  99. Zhao, F.; Shi, S.; Zhang, C.; Zhang, H. Exploring the path of human-robot collaboration decision making on team performance driven by digital technology. In Proceedings of the 2023 3rd International Conference on Robotics, Automation and Intelligent Control (ICRAIC), Zhangjiajie, China, 24–26 November 2023; pp. 151–155. [Google Scholar]
  100. Sweller, J. Cognitive load theory. In Psychology of Learning and Motivation; Elsevier: Amsterdam, The Netherlands, 2011; Volume 55, pp. 37–76. [Google Scholar]
  101. Marangunić, N.; Granić, A. Technology acceptance model: A literature review from 1986 to 2013. Univers. Access Inf. Soc. 2015, 14, 81–95. [Google Scholar] [CrossRef]
  102. Zhu, L.; Sun, J.; Zhang, L.; Du, J.; Li, D.; Zhao, X. ANP-MEAT-based evaluation of the performance of rural infrastructure provision in Mainland China. Eng. Constr. Archit. Manag. 2025. [Google Scholar] [CrossRef]
  103. Zhang, M.; Xu, R.; Wu, H.; Pan, J.; Luo, X. Human–robot collaboration for on-site construction. Autom. Constr. 2023, 150, 104812. [Google Scholar] [CrossRef]
  104. Matsas, E.; Vosniakos, G.-C.; Batras, D. Prototyping proactive and adaptive techniques for human-robot collaboration in manufacturing using virtual reality. Robot. Comput.-Integr. Manuf. 2018, 50, 168–180. [Google Scholar] [CrossRef]
  105. Wang, X.; Veeramani, D.; Dai, F.; Zhu, Z. Context-aware hand gesture interaction for human–robot collaboration in construction. Comput.-Aided Civ. Infrastruct. Eng. 2024, 39, 3489–3504. [Google Scholar] [CrossRef]
  106. Nassar, Y.; Albeaino, G.; Jeelani, I.; Gheisari, M.; Issa Raja, R.A. Human-Robot Collaboration Levels in Construction: Focusing on Individuals’ Cognitive Workload. In Proceedings of the Construction Research Congress 2024, Des Moines, IA, USA, 20–23 March 2024; ASCE: Reston, VA, USA, 2024; pp. 639–648. [Google Scholar]
  107. Liu, Y.; Habibnezhad, M.; Jebelli, H. Brainwave-driven human-robot collaboration in construction. Autom. Constr. 2021, 124, 103556. [Google Scholar] [CrossRef]
  108. Liu, D.; Ham, Y. Investigating the Cognition-Control Pattern of Multi-Worker Human-Robot Collaboration in Construction. In Computing in Civil Engineering 2023, Proceedings of the ASCE International Conference on Computing in Civil Engineering, Corvallis, OR, USA, 25–28 June 2023; ASCE: Reston, VA, USA, 2024; pp. 571–578. [Google Scholar]
  109. Brosque, C.; Galbally, E.; Khatib, O.; Fischer, M. Human-Robot Collaboration in Construction: Opportunities and Challenges. In Proceedings of the 2020 International Congress on Human-Computer Interaction, Optimization and Robotic Applications (HORA), Ankara, Turkey, 26–28 June 2020; pp. 1–8. [Google Scholar]
  110. Baltrusch, S.J.; Krause, F.W.; de Vries, A.W.; van Dijk, W.; de Looze, M.P. What about the human in human robot collaboration? Ergonomics 2022, 65, 719–740. [Google Scholar] [CrossRef]
  111. Kokotinis, G.; George, M.; Zoi, A.; Makris, S. On the quantification of human-robot collaboration quality. Int. J. Comput. Integr. Manuf. 2023, 36, 1431–1448. [Google Scholar] [CrossRef]
  112. Isaac, O.; Abdullah, Z.; Ramayah, T.; Mutahar, A.M.; Alrajawy, I. Integrating user satisfaction and performance impact with technology acceptance model (TAM) to examine the internet usage within organizations in Yemen. Asian J. Inf. Technol. 2018, 17, 60–78. [Google Scholar]
  113. Burden, A.G.; Caldwell, G.A.; Guertler, M.R. Towards human–robot collaboration in construction: Current cobot trends and forecasts. Constr. Robot. 2022, 6, 209–220. [Google Scholar] [CrossRef]
  114. Lundeen, K.M.; Kamat, V.R.; Menassa, C.C.; McGee, W. Scene understanding for adaptive manipulation in robotized construction work. Autom. Constr. 2017, 82, 16–30. [Google Scholar] [CrossRef]
  115. Feng, C.; Xiao, Y.; Willette, A.; McGee, W.; Kamat, V.R. Vision guided autonomous robotic assembly and as-built scanning on unstructured construction sites. Autom. Constr. 2015, 59, 128–138. [Google Scholar] [CrossRef]
  116. Taheri, A.; Khatiri, S.; Seyyedzadeh, A.; Ghorbandaei Pour, A.; Siamy, A.; Meghdari, A.F. Investigating the Impact of Human-Robot Collaboration on Creativity and Team Efficiency: A Case Study on Brainstorming in Presence of Robots. In Proceedings of the Social Robotics, Singapore, 16–18 August 2024; pp. 94–103. [Google Scholar]
  117. Dhanda, M.; Rogers, B.A.; Hall, S.; Dekoninck, E.; Dhokia, V. Reviewing human-robot collaboration in manufacturing: Opportunities and challenges in the context of industry 5.0. Robot. Comput.-Integr. Manuf. 2025, 93, 102937. [Google Scholar] [CrossRef]
  118. Liang, X.; Rasheed, U.; Cai, J.; Wibranek, B.; Awolusi, I. Impacts of Collaborative Robots on Construction Work Performance and Worker Perception: Experimental Analysis of Human–Robot Collaborative Wood Assembly. J. Constr. Eng. Manag. 2024, 150, 04024087. [Google Scholar] [CrossRef]
  119. Liu, L.; Schoen, A.J.; Henrichs, C.; Li, J.; Mutlu, B.; Zhang, Y.; Radwin, R.G. Human Robot Collaboration for Enhancing Work Activities. Hum. Factors 2022, 66, 158–179. [Google Scholar] [CrossRef]
  120. Zimmerman, T.A. Metrics and Key Performance Indicators for Robotic Cybersecurity Performance Analysis; National Institute of Standards and Technology: Gaithersburg, MD, USA, 2017. [CrossRef]
  121. Vieira, M.; Samuel, M.; Gonçalves, B.S.; Tânia, P.-V.; Paula, B.-P.A.; Neto, P. A two-level optimisation-simulation method for production planning and scheduling: The industrial case of a human–robot collaborative assembly line. Int. J. Prod. Res. 2022, 60, 2942–2962. [Google Scholar] [CrossRef]
  122. Coronado, E.; Kiyokawa, T.; Ricardez, G.A.G.; Ramirez-Alpizar, I.G.; Venture, G.; Yamanobe, N. Evaluating quality in human-robot interaction: A systematic search and classification of performance and human-centered factors, measures and metrics towards an industry 5.0. J. Manuf. Syst. 2022, 63, 392–410. [Google Scholar] [CrossRef]
  123. Kumar, S.; Sahin, F. A framework for an adaptive human-robot collaboration approach through perception-based real-time adjustments of robot behavior in industry. In Proceedings of the 2017 12th System of Systems Engineering Conference (SoSE), Waikoloa, HI, USA, 18–21 June 2017; pp. 1–6. [Google Scholar]
  124. Caiazzo, C.; Nestic, S.; Savković, M. A Systematic Classification of Key Performance Indicators in Human-Robot Collaboration; Springer International Publishing: Cham, Switzerland, 2022; pp. 479–489. [Google Scholar]
  125. Saenz, J.; Vogel, C.; Penzlin, F.; Elkmann, N. Safeguarding Collaborative Mobile Manipulators—Evaluation of the VALERI Workspace Monitoring System. Procedia Manuf. 2017, 11, 47–54. [Google Scholar] [CrossRef]
  126. Ferraguti, F.; Bertuletti, M.; Landi, C.T.; Bonfè, M.; Fantuzzi, C.; Secchi, C. A Control Barrier Function Approach for Maximizing Performance While Fulfilling to ISO/TS 15066 Regulations. IEEE Robot. Autom. Lett. 2020, 5, 5921–5928. [Google Scholar] [CrossRef]
  127. Degeorges, T.; Sziebig, G. Human-Robot Collaboration: Safety by Design. In Proceedings of the 2021 IEEE 30th International Symposium on Industrial Electronics (ISIE), Kyoto, Japan, 20–23 June 2021; pp. 1–4. [Google Scholar]
  128. Berx, N.; Decré, W.; Morag, I.; Chemweno, P.; Pintelon, L. Identification and classification of risk factors for human-robot collaboration from a system-wide perspective. Comput. Ind. Eng. 2022, 163, 107827. [Google Scholar] [CrossRef]
  129. Berx, N.; Decré, W.; Pintelon, L. Examining the Role of Safety in the Low Adoption Rate of Collaborative Robots. Procedia CIRP 2022, 106, 51–57. [Google Scholar] [CrossRef]
  130. Alenjareghi, M.J.; Keivanpour, S.; Chinniah, Y.A.; Jocelyn, S.; Oulmane, A. Safe human-robot collaboration: A systematic review of risk assessment methods with AI integration and standardization considerations. Int. J. Adv. Manuf. Technol. 2024, 133, 4077–4110. [Google Scholar] [CrossRef]
  131. Müller, M.; Ruppert, T.; Jazdi, N.; Weyrich, M. Self-improving situation awareness for human–robot-collaboration using intelligent Digital Twin. J. Intell. Manuf. 2024, 35, 2045–2063. [Google Scholar] [CrossRef]
  132. Chauhan, H.; Jang, Y.; Jeong, I. Predicting human trust in human-robot collaborations using machine learning and psychophysiological responses. Adv. Eng. Inform. 2024, 62, 102720. [Google Scholar] [CrossRef]
  133. Karbouj, B.; Alshamaa, O.; Al Rashwany, K.; Krüger, J. Enhancing Human-Robot Collaborative Predictability through Rational Action Modeling of Robot Trajectories. Procedia CIRP 2024, 130, 516–523. [Google Scholar] [CrossRef]
  134. Demichela, M.; Ling, Z.; Geng, J. Evolving process maintenance through human-robot collaboration: An agent-based system performance analysis. Adv. Eng. Inform. 2025, 65, 103241. [Google Scholar] [CrossRef]
  135. Halder, S.; Afsari, K.; Chiou, E.; Patrick, R.; Hamed, K.A. Construction inspection & monitoring with quadruped robots in future human-robot teaming: A preliminary study. J. Build. Eng. 2023, 65, 105814. [Google Scholar] [CrossRef]
  136. Freedy, A.; DeVisser, E.; Weltman, G.; Coeyman, N. Measurement of trust in human-robot collaboration. In Proceedings of the 2007 International Symposium on Collaborative Technologies and Systems, Orlando, FL, USA, 21–25 May 2007; pp. 106–114. [Google Scholar]
  137. Hämmerle, A.; Kollingbaum, M.J.; Steiner, F.; Ebenhofer, G.; Widmoser, F.; Ikeda, M.; Bauer, H.; Pichler, A. An approach to task scheduling in an end-of-line quality assurance situation with human-robot cooperation. Procedia Comput. Sci. 2025, 253, 524–532. [Google Scholar] [CrossRef]
  138. Carissoli, C.; Luca, N.; Marta, B.; Alexander, S.F.; Delle Fave, A. Mental Workload and Human-Robot Interaction in Collaborative Tasks: A Scoping Review. Int. J. Hum.-Comput. Interact. 2024, 40, 6458–6477. [Google Scholar] [CrossRef]
  139. Lu, L.; Xie, Z.; Wang, H.; Li, L.; Xu, X. Mental stress and safety awareness during human-robot collaboration—Review. Appl. Ergon. 2022, 105, 103832. [Google Scholar] [CrossRef]
  140. Yanco, H.A.; Drury, J.L.; Scholtz, J. Beyond usability evaluation: Analysis of human-robot interaction at a major robotics competition. Hum.-Comput. Interact. 2004, 19, 117–149. [Google Scholar]
  141. Prati, E.; Villani, V.; Grandi, F.; Peruzzini, M.; Sabattini, L. Use of interaction design methodologies for human–robot collaboration in industrial scenarios. IEEE Trans. Autom. Sci. Eng. 2021, 19, 3126–3138. [Google Scholar] [CrossRef]
  142. Zhang, Y.; Ding, K.; Hui, J.; Lv, J.; Zhou, X.; Zheng, P. Human-object integrated assembly intention recognition for context-aware human-robot collaborative assembly. Adv. Eng. Inform. 2022, 54, 101792. [Google Scholar] [CrossRef]
  143. Sandrini, S.; Faroni, M.; Pedrocchi, N. Learning Action Duration and Synergy in Task Planning for Human-Robot Collaboration. In Proceedings of the 2022 IEEE 27th International Conference on Emerging Technologies and Factory Automation (ETFA), Stuttgart, Germany, 6–9 September 2022; pp. 1–6. [Google Scholar]
  144. Tchane Djogdom, G.V.; Meziane, R.; Otis, M.J.D. Robust dynamic robot scheduling for collaborating with humans in manufacturing operations. Robot. Comput.-Integr. Manuf. 2024, 88, 102734. [Google Scholar] [CrossRef]
  145. Wu, D.; Zheng, P.; Zhao, Q.; Zhang, S.; Qi, J.; Hu, J.; Zhu, G.-N.; Wang, L. Empowering natural human–robot collaboration through multimodal language models and spatial intelligence: Pathways and perspectives. Robot. Comput.-Integr. Manuf. 2026, 97, 103064. [Google Scholar] [CrossRef]
  146. Bibbo, D.; Corvini, G.; Schmid, M.; Ranaldi, S.; Conforto, S. The Impact of Human-Robot Collaboration Levels on Postural Stability During Working Tasks Performed While Standing: Experimental Study. JMIR Hum. Factors 2025, 12, e64892. [Google Scholar] [CrossRef]
  147. Barrett, M.S.; Creech, A.; Zhukov, K. Creative collaboration and collaborative creativity: A systematic literature review. Front. Psychol. 2021, 12, 713445. [Google Scholar] [CrossRef]
  148. Weingart, L.R.; Todorova, G.; Cronin, M.A. Task conflict, problem-solving, and yielding: Effects on cognition and performance in functionally diverse innovation teams. Negot. Confl. Manag. Res. 2010, 3, 312–337. [Google Scholar] [CrossRef]
  149. Meeners, M. Human-Robot Collaboration in Creative Innovation Processes: The Influence of Functional, Relational and Social-Emotional Elements on the Intention to Collaborate with a Creative Social Robot in the Work Environment. Master’s Thesis, University of Twente, Enschede, The Netherlands, 2022. [Google Scholar]
  150. Jacob, F.; Grosse, E.H.; Morana, S.; König, C.J. Picking with a robot colleague: A systematic literature review and evaluation of technology acceptance in human–robot collaborative warehouses. Comput. Ind. Eng. 2023, 180, 109262. [Google Scholar] [CrossRef]
  151. Lee, S.M.; Olson, D.L.; Trimi, S. Innovative collaboration for value creation. Organ. Dyn. 2012, 41, 7. [Google Scholar] [CrossRef]
  152. ISO/TS 15066:2016; Robots and Robotic Devices—Collaborative Robots. ISO: Geneva, Switzerland, 2016.
  153. Tavana, M.; Soltanifar, M.; Santos-Arteaga, F.J. Analytical hierarchy process: Revolution and evolution. Ann. Oper. Res. 2023, 326, 879–907. [Google Scholar] [CrossRef]
  154. Behzadian, M.; Otaghsara, S.K.; Yazdani, M.; Ignatius, J. A state-of-the-art survey of TOPSIS applications. Expert Syst. Appl. 2012, 39, 13051–13069. [Google Scholar] [CrossRef]
  155. Xun, X.; Yuan, Y. Research on the urban resilience evaluation with hybrid multiple attribute TOPSIS method: An example in China. Nat. Hazards 2020, 103, 557–577. [Google Scholar] [CrossRef]
  156. Liu, D.; Qi, X.; Li, M.; Zhu, W.; Zhang, L.; Faiz, M.A.; Khan, M.I.; Li, T.; Cui, S. A resilience evaluation method for a combined regional agricultural water and soil resource system based on Weighted Mahalanobis distance and a Gray-TOPSIS model. J. Clean. Prod. 2019, 229, 667–679. [Google Scholar] [CrossRef]
Figure 1. The organization of the methodology adopted in this study.
Figure 2. The conceptual framework of HRC team performance.
Table 1. Comparison of characteristics between humans and robots.
Aspect | Human Characteristics | Robot Characteristics
Perception | Emotional perception, subjectivity, and a holistic perspective | Objective and data-driven
Decision-making | Ethical considerations, justice, and fairness | Efficiency and logical calculations
Problem-solving | Creative and highly adaptable | Algorithm-driven but lacks flexibility
Learning and adaptation | Learns from experience and adapts | Requires explicit programming or updates to adapt
Errors and corrections | Prone to subjective errors but capable of self-correction | Consistent output yet lacks autonomous self-correction
Table 2. List of indicators for HRC team performance in construction projects.
Dimensions | Indicators | Explanation | Reference
Productivity | Task completion time (P1) | This is the total time from the start of a construction task to its final acceptance. The interval covers human actions, robot operations, and any necessary pauses for safety or inspection. | [42,120]
Productivity | Production capacity (P2) | It measures throughput as the volume of work the human–robot team delivers per unit of time, for example, cubic meters of concrete, square meters of brickwork, or the number of components completed. | [121]
Productivity | Human–robot ratio (P3) | This is the ratio of human workers to robots assigned to a task, used to assess whether the team configuration is reasonable. | [42]
Productivity | Human–robot time ratio (P4) | This indicator compares the total active working time spent by workers to the active operating time of robots on the same task or project segment. A lower ratio indicates that robots are carrying a larger share of the “heavy, dull, and dirty” work, freeing humans to focus on higher-value cognitive or creative activities. The H/R time ratio helps managers assess automation utilization, identify labor-saving opportunities, and balance workforce deployment with robot use. | [42]
Productivity | Collaboration efficiency (P5) | This quantifies how productively the human–robot team converts its combined resources into finished work. | [122]
Safety | Dynamic separation distance (S1) | This indicator measures the real-time distance that the robot keeps from the human operator during collaborative work. A stable distance within the prescribed safety envelope shows that the perception and motion-planning modules can track the worker accurately and adjust robot speed to avoid contact. | [123,124,125]
Safety | Collision compliance (S2) | The indicator records the peak force exerted on a human body part when accidental contact occurs. | [126]
Safety | Robot hazard response time (S3) | It denotes the total time elapsed from the moment a sensor detects a potential hazard to the moment the robot has fully completed its safety action. | [127]
Safety | Collaborative robot’s autonomous hazard identification rate (S4) | This expresses the proportion of safety-relevant events that the robot detects on its own, without predefined triggers. | [128]
Safety | Safe collaboration efficiency (S5) | It relates productive time to the number and duration of safety-related stops, evaluating both collaboration fluency and production efficiency. | [129]
Safety | Safety risk assessment update frequency (S6) | This indicator counts how often the risk assessment for a specific human–robot task is updated. Regular updates reflect an awareness of safety. | [130]
Safety | Situation awareness level (S7) | It measures the worker’s ability to understand the robot’s current state, infer its behavioral intent, and predict its forthcoming actions. | [131]
Safety | Trust degree (S8) | This is an estimate of how much the worker believes the collaborative robot will act in a safe, reliable, and predictable manner during a shared task. | [132,133]
Safety | Human error rate (S9) | This is the frequency of worker mistakes, such as issuing the wrong command or misinterpreting the robot’s motion, which may lead to task delays or safety interventions. | [134]
Task and collaboration quality | Approved co-executed tasks and effectiveness (Q1) | This indicator counts the construction tasks that are finished jointly by workers and robots and pass the first-time quality inspection. | [122,135]
Task and collaboration quality | Confidence in collaboration quality (Q2) | It records the extent to which workers believe that the collaboration process is reliable and accurate. | [136]
Task and collaboration quality | Rework incident (Q3) | This indicator counts the amount of work that must be redone because it fails to meet quality requirements. | [137]
Task and collaboration quality | Physical and cognitive workload (Q4) | This indicator reflects the biomechanical load borne by workers and the attention and information-processing demands placed on them during collaboration. | [119,138]
Task and collaboration quality | Perceived stress level (Q5) | It represents workers’ self-reported psychological stress during the collaborative task. | [139]
Task and collaboration quality | Collaboration satisfaction (Q6) | This indicator captures overall worker satisfaction with the collaboration. | [110]
Task and collaboration quality | Worker collaboration experience (Q7) | This indicator captures how intuitive and natural the workers perceive the overall interaction with the robot to be. | [140,141]
Task and collaboration quality | Intention recognition (Q8) | It measures the robot’s ability to correctly identify the operator’s command, gesture, or implicit motion on the first attempt. | [142]
Flexibility and reliability | Collaboration duration (F1) | This indicator records the cumulative running time of the human–robot collaboration team. A longer collaboration duration implies higher system reliability and fewer unexpected stoppages. | [143]
Flexibility and reliability | Collaborative task reconfiguration time (F2) | When a construction task or site condition changes, this metric measures the time required for the human–robot team to decompose the new task, re-plan paths, and resume normal operation. | [144]
Flexibility and reliability | Environmental adaptation ability (F3) | This expresses the capacity of the HRC team to recognize and cope with unplanned changes in its surroundings while sustaining safe and continuous operation. It reflects how effectively perception, reasoning, and control modules integrate with human strategies to accommodate contextual variability. | [145]
Flexibility and reliability | Stability under extreme conditions (F4) | It refers to the extent to which the HRC team can preserve its functional characteristics when exposed to severe physical or organizational stressors. | [146]
Flexibility and reliability | Robustness (F5) | Robustness denotes the overall ability of the HRC system to uphold acceptable levels of quality and safety in the presence of simultaneous disturbances or uncertainties. | [144]
Collaborative creativity | Creative contribution (C1) | This denotes the extent to which the human–robot team produces solutions or process improvements that are simultaneously novel and appropriate to the construction task. | [147]
Collaborative creativity | Creative task willingness (C2) | It captures the worker’s motivation to engage in higher-order, non-routine activities once routine or hazardous tasks have been delegated to the robot. It reflects the behavioral readiness of personnel to invest cognitive resources in exploration or problem-solving endeavors. | [110]
Collaborative creativity | Perceived collaboration creative climate (C3) | This indicator refers to the collective perception that the HRC work environment supports idea generation, risk-taking, and open knowledge exchange. It encompasses socio-psychological facets such as autonomy, idea support, and cross-disciplinary dialogue that are regarded as antecedents of team creativity. | [148]
Collaborative creativity | Adopted innovation proposal (C4) | This indicator refers to improvements or creative ideas put forward by on-site workers or robot-operations staff that have been formally approved by the project manager and scheduled for implementation. | [149]
Collaborative creativity | Implemented robot-generated alternatives (C5) | This captures the degree to which alternative plans or configurations produced autonomously by the robot are implemented. | [150]
Collaborative creativity | Creative value added (C6) | This refers to the economic, temporal, or qualitative benefits realized because of the creative outputs. | [151]
Table 3. Profile of experts.
Variable | Category | Number | Percentage
Gender | Male | 9 | 60.0%
Gender | Female | 6 | 40.0%
Affiliation type | University and research institute | 7 | 46.7%
Affiliation type | Construction contractor | 4 | 26.7%
Affiliation type | Construction robotics firm | 2 | 13.3%
Affiliation type | Government department | 2 | 13.3%
Years of experience in construction | >0 and ≤3 | 2 | 13.3%
Years of experience in construction | >3 and ≤5 | 9 | 60.0%
Years of experience in construction | >5 | 4 | 26.7%
Table 4. Descriptive statistics for the candidate HRC performance indicators.
Indicators | Mean (µ) | Standard Deviation (σ) | Rank
Task completion time (P1) | 4.533 | 0.640 | 1
Production capacity (P2) | 3.933 | 1.100 | 9
Human–robot ratio (P3) | 3.800 | 0.676 | 11
Human–robot time ratio (P4) | 2.600 | 0.632 | 30
Collaboration efficiency (P5) | 3.600 | 0.507 | 17
Dynamic separation distance (S1) | 4.467 | 0.516 | 2
Collision compliance (S2) | 3.733 | 1.033 | 14
Robot hazard response time (S3) | 3.600 | 0.737 | 17
Collaborative robot’s autonomous hazard identification rate (S4) | 3.600 | 0.986 | 17
Safe collaboration efficiency (S5) | 2.667 | 0.724 | 29
Safety risk assessment update frequency (S6) | 3.133 | 0.915 | 25
Situation awareness level (S7) | 3.667 | 0.724 | 15
Trust degree (S8) | 2.333 | 0.976 | 33
Human error rate (S9) | 2.400 | 1.056 | 32
Approved co-executed tasks and effectiveness (Q1) | 4.000 | 0.756 | 8
Confidence in collaboration quality (Q2) | 3.533 | 0.743 | 20
Rework incident (Q3) | 3.800 | 1.207 | 11
Physical and cognitive workload (Q4) | 3.800 | 0.862 | 11
Perceived stress level (Q5) | 2.933 | 0.704 | 27
Collaboration satisfaction (Q6) | 4.067 | 0.594 | 7
Worker collaboration experience (Q7) | 2.800 | 1.082 | 28
Intention recognition (Q8) | 2.933 | 0.799 | 26
Collaboration duration (F1) | 4.333 | 0.617 | 3
Collaborative task reconfiguration time (F2) | 3.667 | 0.488 | 15
Environmental adaptation ability (F3) | 3.467 | 0.834 | 21
Stability under extreme conditions (F4) | 4.133 | 0.64 | 6
Robustness (F5) | 4.200 | 0.775 | 5
Creative contribution (C1) | 4.267 | 0.884 | 4
Creative task willingness (C2) | 3.400 | 0.828 | 22
Perceived collaboration creative climate (C3) | 3.267 | 0.884 | 24
Adopted innovation proposal (C4) | 3.933 | 0.961 | 9
Implemented robot-generated alternatives (C5) | 2.533 | 0.834 | 31
Creative value added (C6) | 3.333 | 0.724 | 23
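As a complement to Table 4, the short Python sketch below illustrates how the screening statistics can be reproduced: it computes each indicator's mean, sample standard deviation, and competition rank, and flags indicators whose mean falls below a retention cut-off of 3.0, which is consistent with the split between the retained indicators in Table 4 and the exclusions listed in Table 5. The `ratings` dictionary, its placeholder values, and the helper name `screen_indicators` are illustrative assumptions, since the raw expert responses are not reproduced here.

```python
import statistics

# Hypothetical input: ratings[indicator_code] = the 15 expert importance
# ratings (1-5). The raw questionnaire data are not reproduced in the article,
# so the two rows below are placeholders used only to show the procedure.
ratings = {
    "P1": [5, 5, 4, 5, 4, 5, 4, 5, 4, 5, 4, 5, 4, 5, 4],  # placeholder values
    "P4": [3, 2, 3, 2, 3, 2, 3, 2, 3, 2, 3, 3, 2, 3, 3],  # placeholder values
}

THRESHOLD = 3.0  # indicators with a mean rating below this value are excluded

def screen_indicators(ratings):
    """Return per-indicator (mean, sd), competition ranks, and the
    retained/excluded indicator codes."""
    stats = {code: (statistics.mean(s), statistics.stdev(s))
             for code, s in ratings.items()}
    # Competition ranking: tied means share the same rank, as in Table 4.
    ranks = {c: 1 + sum(stats[o][0] > stats[c][0] for o in stats) for c in stats}
    retained = [c for c in stats if stats[c][0] >= THRESHOLD]
    excluded = [c for c in stats if stats[c][0] < THRESHOLD]
    return stats, ranks, retained, excluded

stats, ranks, retained, excluded = screen_indicators(ratings)
for code, (mu, sigma) in stats.items():
    print(f"{code}: mean = {mu:.3f}, sd = {sigma:.3f}, rank = {ranks[code]}")
print("retained:", retained, "excluded:", excluded)
```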
Table 5. Reasons for indicators excluded from the HRC performance framework.
Code | Indicator | Reason for Low Importance
P4 | Human–robot time ratio | Experts considered absolute productivity indicators, such as task completion time (P1) and production capacity (P2), more intuitive and easier to capture than a relative time ratio, which is sensitive to crew size, break schedules, and shift patterns. Consequently, P4 was viewed as redundant.
S5 | Safe collaboration efficiency | Safety effectiveness is already captured by higher-scoring, more concrete metrics such as dynamic separation distance (S1) and robot hazard response time (S3). Because S5 blends safety and productivity into a composite efficiency term that lacks a standard definition, experts found it hard to interpret and measure reliably.
S8 | Trust degree | Trust is acknowledged as an influencing factor that shapes how humans and robots collaborate, not an outcome that directly expresses collaboration performance. Because the objective of the indicator set is to measure performance, items that act mainly as antecedents were deprioritized.
S9 | Human error rate | It is an influencing factor, and it is hard to isolate human-only errors from system- or environment-induced events. Experts therefore questioned the feasibility and accuracy of this metric, preferring observable safety events already covered by other indicators.
Q5 | Perceived stress level | Physical and cognitive workload (Q4) captures the main sources of stress on site, making Q5 partially redundant.
Q7 | Worker collaboration experience | This variable reflects a worker’s background rather than real-time system performance. Experts felt that experience should be addressed through hiring and training policies, not as a live performance indicator.
Q8 | Intention recognition | Intention recognition primarily functions as an upstream enabler of good collaboration, not as a direct expression of collaboration performance. In other words, accurate recognition of human intent makes high performance possible, but it is not a performance outcome itself.
C5 | Implemented robot-generated alternatives | Adoption of robot-proposed alternatives is currently rare on most sites, making this indicator premature for routine performance evaluation.
Table 6. Weights for dimensions and indicators of HRC team performance.
Dimensions | Weight (Wi) | Indicators | Local Weight (Wj) | Global Weight (Wij)
Productivity (P) | 0.2327 | Task completion time (P1) | 0.3279 | 0.0763
Productivity (P) | 0.2327 | Production capacity (P2) | 0.2454 | 0.0571
Productivity (P) | 0.2327 | Human–robot ratio (P3) | 0.2063 | 0.0480
Productivity (P) | 0.2327 | Collaboration efficiency (P5) | 0.2205 | 0.0513
Safety (S) | 0.2708 | Dynamic separation distance (S1) | 0.2463 | 0.0667
Safety (S) | 0.2708 | Collision compliance (S2) | 0.1798 | 0.0487
Safety (S) | 0.2708 | Robot hazard response time (S3) | 0.1112 | 0.0301
Safety (S) | 0.2708 | Collaborative robot’s autonomous hazard identification rate (S4) | 0.2142 | 0.0580
Safety (S) | 0.2708 | Safety risk assessment update frequency (S6) | 0.0783 | 0.0212
Safety (S) | 0.2708 | Situation awareness level (S7) | 0.1706 | 0.0462
Task and collaboration quality (Q) | 0.1827 | Approved co-executed tasks and effectiveness (Q1) | 0.2146 | 0.0392
Task and collaboration quality (Q) | 0.1827 | Confidence in collaboration quality (Q2) | 0.2069 | 0.0378
Task and collaboration quality (Q) | 0.1827 | Rework incident (Q3) | 0.1248 | 0.0228
Task and collaboration quality (Q) | 0.1827 | Physical and cognitive workload (Q4) | 0.2244 | 0.0410
Task and collaboration quality (Q) | 0.1827 | Collaboration satisfaction (Q6) | 0.2293 | 0.0419
Flexibility and reliability (F) | 0.1633 | Collaboration duration (F1) | 0.2107 | 0.0344
Flexibility and reliability (F) | 0.1633 | Collaborative task reconfiguration time (F2) | 0.1953 | 0.0319
Flexibility and reliability (F) | 0.1633 | Environmental adaptation ability (F3) | 0.1598 | 0.0261
Flexibility and reliability (F) | 0.1633 | Stability under extreme conditions (F4) | 0.2443 | 0.0399
Flexibility and reliability (F) | 0.1633 | Robustness (F5) | 0.1898 | 0.0310
Collaborative creativity (C) | 0.1505 | Creative contribution (C1) | 0.2944 | 0.0443
Collaborative creativity (C) | 0.1505 | Creative task willingness (C2) | 0.0757 | 0.0114
Collaborative creativity (C) | 0.1505 | Perceived collaboration creative climate (C3) | 0.1369 | 0.0206
Collaborative creativity (C) | 0.1505 | Adopted innovation proposal (C4) | 0.1329 | 0.0200
Collaborative creativity (C) | 0.1505 | Creative value added (C6) | 0.3601 | 0.0542
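For readers who wish to retrace the weighting step behind Table 6, the sketch below shows a conventional AHP calculation: priorities are approximated with the row geometric-mean method and checked against Saaty's consistency ratio. The pairwise comparison matrix `A_dim` is a placeholder rather than the experts' aggregated judgments, and the function name `ahp_weights` is an assumption; only the final line uses values actually reported in Table 6, where an indicator's global weight equals its dimension weight multiplied by its local weight.

```python
import numpy as np

# Saaty's random consistency index (RI) for matrices of size 1-9.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_weights(A):
    """Approximate the AHP priority vector with the row geometric-mean method
    and report Saaty's consistency ratio (CR < 0.10 is usually acceptable)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    w = np.prod(A, axis=1) ** (1.0 / n)       # row geometric means
    w = w / w.sum()                           # normalize to sum to 1
    lam_max = float(np.mean((A @ w) / w))     # estimate of the principal eigenvalue
    ci = (lam_max - n) / (n - 1)              # consistency index
    cr = ci / RI[n] if RI[n] > 0 else 0.0     # consistency ratio
    return w, cr

# Illustrative pairwise comparison matrix for the five dimensions
# (S, P, Q, F, C); the entries are placeholders, not the experts' judgments.
A_dim = [
    [1,   2,   2,   2,   2],
    [1/2, 1,   2,   2,   2],
    [1/2, 1/2, 1,   1,   2],
    [1/2, 1/2, 1,   1,   1],
    [1/2, 1/2, 1/2, 1,   1],
]
w_dim, cr = ahp_weights(A_dim)
print("dimension weights:", np.round(w_dim, 4), "consistency ratio:", round(cr, 3))

# A global indicator weight in Table 6 is the dimension weight multiplied by
# the indicator's local weight, e.g. for P1: 0.2327 * 0.3279 = 0.0763.
print("global weight of P1:", round(0.2327 * 0.3279, 4))
```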
Table 7. HRC team performance indicator evaluation grade standards.
Dimensions | Indicators | Grade I (Poor, Score 1) | Grade II (Fair, Score 2) | Grade III (Good, Score 3) | Grade IV (Excellent, Score 4)
Productivity (P) | Task completion time (P1) | Efficiency lower than manual baseline | 0–25% efficiency improvement | 25–50% efficiency improvement | >50% efficiency improvement
Productivity (P) | Production capacity (P2) | Output lower than manual baseline | 0–25% output increase | 25–50% output increase | >50% output increase
Productivity (P) | Human–robot ratio (P3) | Unreasonable ratio, severe waste of manpower | Ratio mostly reasonable, occasional redundancy | Reasonable ratio, good allocation | Optimized ratio, highly efficient human–robot collaboration
Productivity (P) | Collaboration efficiency (P5) | Frequent delays due to poor collaboration | Occasional delays and some collaboration issues | Smooth collaboration with minor interruptions | Seamless collaboration
Safety (S) | Dynamic separation distance (S1) | Frequent violations of the minimum safe distance | Occasional, minor violations of the safe distance | Consistently maintains the safe distance | Proactively adapts distance to optimize safety
Safety (S) | Collision compliance (S2) | Contact force exceeds limits, with risk of injury | Contact force approaches but does not exceed limits | Contact force is well below limits | Almost imperceptible flexible contact
Safety (S) | Robot hazard response time (S3) | Delayed response and ineffective hazard avoidance | Long response time; barely avoids the hazard | Quick response; effectively avoids the hazard | Predictive response; proactively avoids the hazard
Safety (S) | Collaborative robot’s autonomous hazard identification rate (S4) | No autonomous detection | <60% recognition | 60–90% recognition | >90% recognition
Safety (S) | Safety risk assessment update frequency (S6) | Never updated | Irregular, lags behind reality | Updated after major process changes | Updated in real time based on site dynamics
Safety (S) | Situation awareness level (S7) | Workers completely misunderstand the robot’s status | Workers have a basic understanding of the robot’s state without prediction | Workers can predict the robot’s next actions | Workers have a deep, intuitive understanding of robot intent and can predict long-term robot behaviors
Task and collaboration quality (Q) | Approved co-executed tasks and effectiveness (Q1) | <80% first-pass yield rate | 80–90% first-pass yield rate | 90–98% first-pass yield rate | >98% first-pass yield rate
Task and collaboration quality (Q) | Confidence in collaboration quality (Q2) | Low confidence; workers frequently double-check the robot’s work | Moderate confidence; occasional verification needed | High confidence; workers trust the robot’s output | Complete confidence; workers rely on the robot without hesitation
Task and collaboration quality (Q) | Rework incident (Q3) | Frequent; major delays | Occasional occurrences; impact is controllable | Very few occurrences | Almost no occurrences
Task and collaboration quality (Q) | Physical and cognitive workload (Q4) | Extremely high load; workers are exhausted | High load; leads to fatigue | Moderate load; acceptable to workers | Load significantly reduced; workers are relaxed
Task and collaboration quality (Q) | Collaboration satisfaction (Q6) | Workers report frequent frustration and conflict | Workers report some difficulties, but collaboration is functional | Workers report a generally positive and smooth experience | Workers report high satisfaction and seamless synergy
Flexibility and reliability (F) | Collaboration duration (F1) | Frequent failures; <50% effective time | Occasional failures; 50–75% uptime | Stable; 75–95% uptime | Rare failures; >95% uptime
Flexibility and reliability (F) | Collaborative task reconfiguration time (F2) | Extended; roughly an hour or longer | Moderate; on the order of tens of minutes | Quick; around a quarter of an hour | Rapid; just a few minutes
Flexibility and reliability (F) | Environmental adaptation ability (F3) | Cannot adapt to any unexpected changes | Can only adapt to a few preset changes | Can adapt to most unstructured environmental changes | Can autonomously adapt to complex, dynamic environments
Flexibility and reliability (F) | Stability under extreme conditions (F4) | Significant performance degradation under stress | Noticeable but manageable performance degradation | Minor performance degradation | No significant degradation in performance
Flexibility and reliability (F) | Robustness (F5) | System fails when a single disturbance occurs | System can only resist a single disturbance | System can resist multiple disturbances, with slight performance degradation | System maintains stable performance despite multiple disturbances
Collaborative creativity (C) | Creative contribution (C1) | No novel solutions or improvements generated | Few, minor improvements suggested by the team | Several valuable improvements generated | Team generates breakthrough solutions or process innovations
Collaborative creativity (C) | Creative task willingness (C2) | Workers show no interest in problem-solving activities | Workers are hesitant to engage in creative tasks | Workers are willing to engage when prompted | Workers proactively seek opportunities for innovation
Collaborative creativity (C) | Perceived collaboration creative climate (C3) | Environment is perceived as discouraging new ideas | Environment is neutral; ideas are neither encouraged nor discouraged | Environment is supportive of new ideas | Environment actively fosters and rewards innovation and risk-taking
Collaborative creativity (C) | Adopted innovation proposal (C4) | 0 proposals adopted | 1–2 minor proposals adopted per task | Multiple valuable proposals adopted per task | A culture of continuous improvement is evident; many proposals adopted
Collaborative creativity (C) | Creative value added (C6) | No measurable value from creative outputs | Minor, localized benefits (e.g., small time saving on one task) | Measurable project-level benefits (e.g., cost reduction, quality improvement) | Significant, strategic benefits (e.g., new construction method, competitive advantage)
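Several indicators in Table 7 have quantitative grade boundaries, so grading can be scripted once site measurements are available. The sketch below illustrates this for the collaborative robot's autonomous hazard identification rate (S4); the function name `grade_s4` and the handling of boundary values (Table 7 does not state whether 60% and 90% are inclusive) are assumptions.

```python
def grade_s4(identification_rate, autonomous_detection=True):
    """Map the autonomous hazard identification rate (S4) to a Table 7 grade
    and its score. Boundary handling is assumed, since Table 7 does not say
    whether the 60% and 90% bounds are inclusive."""
    if not autonomous_detection:
        return "I", 1            # no autonomous detection
    if identification_rate < 0.60:
        return "II", 2           # <60% recognition
    if identification_rate <= 0.90:
        return "III", 3          # 60-90% recognition
    return "IV", 4               # >90% recognition

print(grade_s4(0.82))            # -> ('III', 3)
```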
Table 8. Classification of performance level.
Performance Level | Relative Closeness (Ci)
Low Performance | 0
Fair Performance | 0.448
Good Performance | 0.723
Excellent Performance | 1
Table 9. Case study data for HRC team performance.
Indicator Code | Grade | Score
P1 | IV | 3
P2 | IV | 3
P3 | III | 4
P5 | III | 3
S1 | III | 4
S2 | III | 3
S3 | IV | 3
S4 | II | 4
S6 | II | 3
S7 | II | 3
Q1 | IV | 3
Q2 | III | 3
Q3 | IV | 3
Q4 | III | 3
Q6 | III | 3
F1 | III | 3
F2 | I | 2
F3 | I | 2
F4 | III | 3
F5 | II | 2
C1 | II | 2
C2 | II | 2
C3 | II | 2
C4 | II | 2
C6 | I | 2
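To connect Tables 6, 8, and 9, the sketch below computes a TOPSIS-style relative closeness for the plastering case and maps it onto a performance level. The min-max normalization of scores, the use of an all-ideal (score 4) and all-anti-ideal (score 1) reference profile, the function name `relative_closeness`, and the treatment of the Table 8 values as lower bounds of each band are assumptions made for illustration; the exact TOPSIS formulation used in the study may differ. With these inputs, the computed closeness falls in the Fair band of Table 8.

```python
import math

# Global indicator weights from Table 6 and the Score column of Table 9.
weights = {
    "P1": 0.0763, "P2": 0.0571, "P3": 0.0480, "P5": 0.0513,
    "S1": 0.0667, "S2": 0.0487, "S3": 0.0301, "S4": 0.0580, "S6": 0.0212, "S7": 0.0462,
    "Q1": 0.0392, "Q2": 0.0378, "Q3": 0.0228, "Q4": 0.0410, "Q6": 0.0419,
    "F1": 0.0344, "F2": 0.0319, "F3": 0.0261, "F4": 0.0399, "F5": 0.0310,
    "C1": 0.0443, "C2": 0.0114, "C3": 0.0206, "C4": 0.0200, "C6": 0.0542,
}
scores = {
    "P1": 3, "P2": 3, "P3": 4, "P5": 3,
    "S1": 4, "S2": 3, "S3": 3, "S4": 4, "S6": 3, "S7": 3,
    "Q1": 3, "Q2": 3, "Q3": 3, "Q4": 3, "Q6": 3,
    "F1": 3, "F2": 2, "F3": 2, "F4": 3, "F5": 2,
    "C1": 2, "C2": 2, "C3": 2, "C4": 2, "C6": 2,
}

def relative_closeness(weights, scores, s_min=1, s_max=4):
    """TOPSIS-style closeness of one alternative to the ideal (all indicators
    at s_max) versus the anti-ideal (all at s_min), using weighted Euclidean
    distances on min-max-normalized scores. This normalization is an
    assumption; the study may use a different TOPSIS variant."""
    d_pos = d_neg = 0.0
    for code, w in weights.items():
        r = (scores[code] - s_min) / (s_max - s_min)   # normalize to [0, 1]
        d_pos += (w * (1.0 - r)) ** 2                  # distance to ideal
        d_neg += (w * r) ** 2                          # distance to anti-ideal
    d_pos, d_neg = math.sqrt(d_pos), math.sqrt(d_neg)
    return d_neg / (d_pos + d_neg)

# Anchor closeness values from Table 8; treating them as lower bounds of each
# band is an assumption about the grading rule.
bands = [(1.0, "Excellent"), (0.723, "Good"), (0.448, "Fair"), (0.0, "Low")]

ci = relative_closeness(weights, scores)
level = next(label for bound, label in bands if ci >= bound)
print(f"relative closeness C_i = {ci:.3f} -> {level} performance")
```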
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
