Review

A Critical Review for Trustworthy and Explainable Structural Health Monitoring and Risk Prognosis of Bridges with Human-In-The-Loop

1 Faculty of Architecture, Civil and Transportation Engineering, Beijing University of Technology, Beijing 100124, China
2 The Key Laboratory of Urban Security and Disaster Engineering of the Ministry of Education, Beijing University of Technology, Beijing 100124, China
3 School of Mechanics and Civil Engineering, China University of Mining & Technology, Beijing 100083, China
4 CUCDE Environmental Technology Co., Ltd., Beijing 100088, China
* Author to whom correspondence should be addressed.
Sustainability 2023, 15(8), 6389; https://doi.org/10.3390/su15086389
Submission received: 13 February 2023 / Revised: 25 March 2023 / Accepted: 28 March 2023 / Published: 7 April 2023

Abstract

Trustworthy and explainable structural health monitoring (SHM) of bridges is crucial for ensuring the safe maintenance and operation of deficient structures. Unfortunately, existing SHM methods pose various challenges that interweave cognitive, technical, and decision-making processes. Recent development of emerging sensing devices and technologies enables intelligent acquisition and processing of massive spatiotemporal data. However, such processes always involve human-in-the-loop (HITL), which introduces redundancies and errors that lead to unreliable SHM and service safety diagnosis of bridges. Comprehending human-cyber (HC) reliability issues during SHM processes is necessary for ensuring the reliable SHM of bridges. This study aims at synthesizing studies related to HC reliability for supporting the trustworthy and explainable SHM of bridges. The authors use a bridge inspection case to lead a synthesis of studies that examined techniques relevant to the identified HC reliability issues. This synthesis revealed challenges that impede the industry from monitoring, predicting, and controlling HC reliability in bridges. In conclusion, a research road map was provided for addressing the identified challenges.

1. Introduction

According to the report card for America’s infrastructure, the nation’s aging civil infrastructures (CIs) (e.g., bridges) are deteriorating at an alarming rate, with mounting CI system failures warning of a looming national crisis. For example, 42 percent of the 617,000 bridges in the United States were built more than 50 years ago, and 7.5 percent of all bridges are in poor condition. Such aging CIs pose unique challenges to the interwoven human-in-the-loop (HITL) technical processes in civil infrastructure operation and maintenance (CIS O&M). For example, bridge engineers must conduct routine inspections to examine bridge conditions and ensure safe bridge operations. In addition, various bridge users, stakeholders, and decision-makers can significantly influence the operational risks and performance of bridges. Rigorous inspection, maintenance planning, and the field judgments of bridge engineers determine the priority of CI maintenance options [1,2]. Engineers working on structural health monitoring (SHM) of bridges should consider coupled infrastructure failures and quantify the associated risks in order to prioritize CI maintenance and repair actions. Delayed reactions to vulnerable parts of CIs, incorrect judgments about the relative importance of maintenance options, miscommunication, and improper cooperation can cause cascading failures within connected infrastructure elements and systems.
Traditional SHM of bridges is conducted mainly through visual inspection by bridge engineers. Unfortunately, condition ratings for the same bridge structure provided by different engineers vary considerably due to differences in engineering background and field experience. The development and application of sensing and information technologies (e.g., GNSS, LiDAR, etc.) bring opportunities to achieve reliable real-time situation awareness and predictive, data-driven systems control that could support the trustworthy and explainable SHM of bridges. Still, the human cognitive and decision-making activities involved in processing massive spatiotemporal data and running model simulations create barriers to reliable SHM of bridges [3]. Figure 1 illustrates human-cyber interactive processes and reliability issues during the SHM of bridges. The human aspect involves the interactions between humans, data, and digital models for monitoring, predicting, and analyzing SHM and maintenance options based on the conditions of bridges and network-level maintenance performance. This aspect covers (1) human reliability, which quantifies the risks of the cognition and decision-making processes of workers during field inspection of bridges, and (2) data analysis reliability, which captures how human decisions about the selection and settings of data analysis methods influence the accuracy, precision, and reliability of the information derived from raw data. The cyber aspect involves the information technologies that collect, process, store, and transmit data and digital models for generating Digital Twins (DTs) of the physical infrastructure, supporting engineers in monitoring the health conditions of bridges and ensuring safe bridge operations. This aspect involves three reliability concerns: (1) data reliability, which quantifies data quality issues (e.g., uncertainties caused by low-resolution images and missing data points); (2) computation reliability, where computational processes applied to data and digital models introduce uncertainties into the derived information; and (3) data storage, exchange, and transmission reliability, where information losses occur due to improper data storage, format conversion, and transmission processes.
Reliability theory provides theoretical tools for comprehending the safety and efficiency of bridge inspection as a dynamic human-cyber system. Existing reliability studies scattered across multiple disciplines and domains have the potential to collectively address the vision shown in Figure 1. For example, owing to the advantages of computer vision techniques (non-contact, long-range, fast, low-cost, labor-saving, and minimally disruptive to the daily operation of the structure), researchers have proposed their use for local and global structural health monitoring. Existing studies of computer-vision-based SHM focus on implementing and integrating two-dimensional computer vision techniques to solve SHM challenges and on converting three-dimensional problems into two-dimensional problems using projective geometry methods [4]. The rapid development of wireless technology has driven significant progress in integrating SHM systems with wireless sensor networks, which reduce installation and maintenance effort compared with traditional wired systems [5]. Advances in sensing technologies and data acquisition platforms have ushered in an era of big data, in which large amounts of heterogeneous data are collected by a variety of sensors. The increasing accessibility and diversity of data resources provide new opportunities for SHM, although aggregating information from multiple sensors to make robust decisions remains challenging [6]. In addition, machine learning (ML) offers powerful computational and image processing capabilities for dealing with different aspects of reinforced concrete bridges. Once an ML model is trained, prediction becomes far faster than traditional methods for structural damage identification and strength prediction, achieving almost real-time performance [7]. Numerical simulations of the dynamic response of structures subjected to different types of excitation have also been performed to assess the variability of the spectrum-driven approach with respect to the type and location of the excitation source [8].
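As a concrete illustration of the vision-based inspection idea, the sketch below screens a surface image for crack-like features. It is a minimal sketch only: the file name is hypothetical, and the adaptive threshold and elongation heuristics are illustrative stand-ins for the far more sophisticated pipelines reviewed in [4].

```python
# Minimal sketch of 2D vision-based defect screening (hypothetical image file).
import cv2

img = cv2.imread("deck_surface.png", cv2.IMREAD_GRAYSCALE)
blurred = cv2.GaussianBlur(img, (5, 5), 0)  # suppress surface texture noise

# Adaptive thresholding copes with uneven field illumination better than
# a single global threshold; dark, thin features become foreground.
binary = cv2.adaptiveThreshold(blurred, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY_INV, 35, 10)

# Keep only elongated connected components: cracks are thin and long,
# whereas most texture blobs are compact.
n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
for i in range(1, n):  # label 0 is the background
    w = stats[i, cv2.CC_STAT_WIDTH]
    h = stats[i, cv2.CC_STAT_HEIGHT]
    area = stats[i, cv2.CC_STAT_AREA]
    elongation = max(w, h) / max(1, min(w, h))
    if area > 50 and elongation > 4:
        print(f"candidate crack component {i}: area={area}px, elongation={elongation:.1f}")
```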
SHM studies have examined methods for monitoring the physical infrastructure systems and assessing systems reliability [9,10]. Psychologists have explored cognitive reliability issues in various contexts [11,12]. Studies in the human systems engineering (HSE) domain have examined reliability issues related to manual data analysis [13,14,15], interpersonal communications [16,17], and the vulnerabilities of collaborative data-driven decision-making processes [18,19]. Some engineering studies examined the quality issues of data and models used in various engineering applications [20,21,22]. Additional examples that examine the reliability of cyberinfrastructure in engineered systems control are the reliability assessment of computing workflows [23,24], data storage and compression methods [25,26], and data exchange and transmission mechanisms [27,28].
Human-cyber (HC) reliability refers to the uncertainties that can arise when humans and computing systems jointly analyze large amounts of data supporting civil infrastructure operation and decision processes. Ensuring such reliability demands more accurate and effective human-cyber interactions to safeguard the safety and efficiency of CIS O&M [29]. However, the lack of a systematic review of the wide range of HC reliability issues impedes the SHM of bridges and prevents researchers and professionals from effectively using existing theoretical and practical tools to resolve HC reliability problems. The overall goal of this literature review is to synthesize literature scattered across multiple domains into a coherent framework that can guide SHM researchers in identifying similar HC reliability issues in relevant domains. Identifying similar problems will lead these researchers to useful theoretical solutions and practical tools developed in those domains. Section 2 presents a bridge inspection case demonstrating the HC reliability issues shown in Figure 1. Section 3 and Section 4 then synthesize existing studies related to the two aspects of HC reliability shown in Figure 1. Section 5 comments on existing studies spanning both aspects and discusses the knowledge gaps and challenges of transferring those studies to the SHM domain. Finally, Section 6 summarizes a research road map for addressing the identified challenges and knowledge gaps.

2. Motivating Case: Human-Cyber Reliability Issues in Structural Health Monitoring and Risk Prognosis of Bridges

Routine visual inspections are necessary for (1) discovering bridge defects, (2) providing bridge condition ratings, and (3) establishing guidance for necessary bridge maintenance and repairs. Trustworthy and explainable bridge inspection is crucial for extending the service life of bridges and ensuring operational safety throughout their service lives. Such inspections require engineers to examine the structure visually in the field, update the finite element model based on the identified bridge defects and spatiotemporal changes, and predict deterioration patterns based on the updated model. Unfortunately, various HC reliability issues and challenges create barriers to achieving reliable SHM of bridges.
Figure 2 shows various human-cyber reliability issues during the SHM of bridges with HITL. For example, even the same inspector can vary in how comprehensively they identify bridge defects for producing reliable condition ratings, owing to the inherent diversity and complexity of the cognitive interaction between bridge inspectors and field environments. Such variations often cause conflicting records or the omission of bridge defect information from historical bridge inspection records, which poses significant challenges for reliable bridge condition prediction. In addition, updating bridge digital models based on information from multiple sources (e.g., images, textual data, contact sensory data) requires (1) effective data processing workflows for handling data in many formats, (2) information fusion methods for fusing data from multiple sources, and (3) model updating algorithms for updating the digital model to reflect the true bridge condition. Unfortunately, reliability issues are inevitable when establishing data processing workflows, selecting algorithms, and setting parameters. All such reliability issues can introduce biases and errors that prevent bridge engineers from gaining a comprehensive understanding of the true bridge condition.

2.1. Human Reliability Issues

The human reliability aspect refers to reliability issues arising from the interactions between humans, data, and digital models for monitoring, analyzing, and predicting bridge conditions and making recommendations for reliable bridge maintenance and repairs.
Traditional visual inspection requires bridge engineers to look for bridge defects during field inspections to diagnose bridge health conditions. Performance issues persist when identifying critical bridge defects and inferring their underlying causes, even with well-designed qualitative inspection standards and procedures. For example, experienced engineers tend to perform more consistently and reliably than less experienced ones when looking for critical bridge defects and assigning condition ratings to bridge elements. The “subjective” nature of bridge condition assessment makes reliable assessment difficult to achieve due to (1) the uniqueness of different structure types, (2) dynamic site conditions, (3) the availability and quality of data, and (4) the knowledge level and years of inspection experience of different bridge engineers. Unfortunately, limited studies have examined the impacts of the human factors of bridge engineers on the reliability of SHM results. Even experienced engineers may fail to identify all critical defects during field inspections, which can bias bridge condition ratings. Another challenge is the lack of sufficient datasets documenting how the SHM of bridges is conducted by engineers with different backgrounds and experience. Such studies and datasets are vital for comprehending the processes of field inspection, data analysis, and condition assessment conducted by experienced bridge engineers.
This case aims to illustrate the HC reliability issues that can occur when bridge engineers conduct field inspections, data analysis, and condition assessments. To capture such HC reliability issues, previous studies (as shown in Figure 3) established a game environment for simulating the bridge inspection process with augmented finite element analysis (FEA). In this inspection game, an established finite element model (FEM) provides FEA simulation data under different loading conditions, and inspection reports contain observed bridge defect information (e.g., cracks, locations, etc.) to help inspectors examine the bridge condition. During the game, multiple participants, including both experienced and inexperienced bridge engineers, are recruited to examine the bridge condition through FEA and by searching the inspection reports. All cognitive behaviors during the inspection process are logged and analyzed. Such “virtual bridge inspection” log data help track the behavioral pattern differences between experienced and inexperienced bridge engineers. Log mining of this behavioral data can also improve the teachability of a reliable bridge inspection process, and the captured behavioral patterns can guide improvements to the inspection process itself. For example, Liu and Xiong examined the performance differences in the SHM of a bridge between an inexperienced bridge engineer (on the right) and an experienced bridge engineer (on the left); when inspecting a large-span continuous rigid-frame bridge, the experienced engineer searched for bridge defects in a much more organized pattern (e.g., more attention was paid to the bottom slabs, especially at the mid-span of every major span) [30,31].
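The kind of log mining described above can be illustrated with a minimal sketch. The CSV file and its column names are hypothetical; the two metrics (attention share per bridge element and a revisit-rate proxy for search orderliness) are simple examples of statistics one might compare between experienced and inexperienced inspectors, not the measures used in [30,31].

```python
# Minimal sketch of mining "virtual bridge inspection" logs. The CSV and its
# columns (inspector_id, experience_level, timestamp, element_viewed) are
# hypothetical.
import pandas as pd

logs = pd.read_csv("inspection_logs.csv")

# Share of attention each experience group devotes to each bridge element.
attention = (logs.groupby(["experience_level", "element_viewed"]).size()
                 .groupby(level=0).transform(lambda s: s / s.sum()))
print(attention.sort_values(ascending=False).head(10))

# Revisit rate as a crude orderliness proxy: a systematic search should
# rarely return to an element that was already examined.
def revisit_rate(group):
    seq = group.sort_values("timestamp")["element_viewed"].tolist()
    seen, revisits = set(), 0
    for element in seq:
        if element in seen:
            revisits += 1
        seen.add(element)
    return revisits / max(1, len(seq))

print(logs.groupby("inspector_id").apply(revisit_rate))
```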
HC reliability issues also arise in remote sensing techniques and influence the data quality for bridge inspection and management. Less experienced bridge engineers may place imaging sensors where critical parts of the bridge are invisible to the sensor, use incorrect imaging parameters that lead to lengthy data collection, or use sensors that lack the resolution required to capture critical bridge features. Engineers processing the imagery data may choose data processing algorithms that compromise the accuracy of the information derived from images. A systematic review of HC reliability issues related to using imaging technologies in bridge inspection and maintenance can guide engineers in identifying and monitoring such issues. Moreover, such a review can help bridge engineers identify relevant theoretical and technical methods for reducing the impacts of HC reliability issues on bridge management decisions based on remote sensory data.

2.2. Cyber Reliability Issues

Cyber reliability issues refer to the reliability of the processes that use various data and information sources to form information models, such as digital as-built models of bridge structures. In bridge inspection, numerical models (i.e., finite element models) of bridges are usually established from design drawings, and remote sensory images help engineers examine bridge deterioration under different loading conditions [32]. Deformations, defects, and other spatiotemporal data collected from bridge inspection reports and structural health monitoring systems can help estimate and update the parameters of finite element models to match as-built bridge conditions. Bridge engineers can then use the updated digital model to simulate the bridge under various scenarios and discover deterioration patterns and failure modes.
Algorithms play an important role in model updating. Various algorithms form data processing workflows that produce FEM models augmented by environmental conditions to support bridge maintenance planning. These algorithms can introduce errors into the derived information and influence the accuracy, timeliness, comprehensiveness, and level of detail of bridge information models. Data storage and exchange processes can introduce additional technical problems that affect the quality of bridge inspection models. Comprehending how the various parameters in data processing workflows influence the quality of information models is challenging because the search space of data processing workflows is exponentially large. The next paragraph illustrates these issues with a data processing workflow that supports updating an FEM based on field data.
The FEM updating process uses observed geometric, visual, and sensory information to update the parameters of a model until the FEM simulation results agree closely with the physical conditions of the bridge (Figure 4) [33,34]. These parameters in the digital models of bridges need updates to reflect physical conditions and predict deterioration trends. Comprehending how all these data processing algorithms and their parameters influence the quality of the FEM is critical for efficient diagnosis of bridge conditions based on digital models and field data. The challenge in identifying a reliable data processing workflow for producing a reliable FEM is that the combinatorial explosion of hundreds of data processing parameters makes the search space of workflows exponentially large [35,36].
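The following minimal sketch illustrates the core loop of FEM updating: adjust model parameters until simulated modal properties match measured ones. A toy two-degree-of-freedom spring-mass system stands in for a real bridge FEM, and the “measured” frequencies are invented for illustration.

```python
# Minimal sketch of FEM updating: calibrate stiffness parameters so that
# simulated natural frequencies match (hypothetical) identified ones.
import numpy as np
from scipy.linalg import eigh
from scipy.optimize import least_squares

M = np.diag([1000.0, 1200.0])        # lumped masses (kg), assumed known
f_measured = np.array([1.8, 4.6])    # identified natural frequencies (Hz)

def model_frequencies(k):
    k1, k2 = k                        # stiffness parameters to be updated
    K = np.array([[k1 + k2, -k2],
                  [-k2,      k2]])
    lam = eigh(K, M, eigvals_only=True)   # generalized eigenvalues = omega^2
    return np.sqrt(lam) / (2.0 * np.pi)   # rad/s -> Hz

def residuals(k):
    return model_frequencies(k) - f_measured

# Iteratively adjust parameters until simulation agrees with measurement.
result = least_squares(residuals, x0=[1e6, 1e6], bounds=(1e4, 1e9))
print("updated stiffness:", result.x)
print("updated frequencies (Hz):", model_frequencies(result.x))
```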

3. Human Reliability Analysis and Team Cognition for Trustworthy and Explainable Structural Health Monitoring and Risk Prognosis of Bridges

This section synthesizes studies that examine human reliability issues during the SHM of bridges. The authors found 322 related studies published between 2010 and 2022 in the Web of Science database (search criteria—Topic: “human reliability” and “operation”) (see Figure 5). Among these studies, Table 1 lists five highly cited papers that examine human reliability issues from various aspects.
Human reliability issues require study at two levels: the individual level and the team level. Both levels involve three types of reliability issues: (1) perception reliability, related to the data collection and sensing performance of human individuals and teams engaging with physical environments; (2) cognition reliability, related to the decision-making performance of human individuals and teams based on the available workspace information and people’s domain knowledge, expertise, and prior working experience; and (3) response reliability, related to the specific actions of, and cooperation between, human individuals while working on assigned activities. This section synthesizes studies that examine the interactions between humans, data, and digital models for monitoring, predicting, and analyzing SHM options based on the health conditions of bridges, covering individual-level human reliability (Section 3.1) and team-level human reliability (Section 3.2).

3.1. Human Reliability (Individual Level)

Human reliability research at the individual level examines the perception, cognition, and response reliabilities of human individuals. Table 2 lists human factors studies exploring how various cognitive factors, decision contexts, and individual capabilities influence SHM reliability in carrying out assigned activities.
Overall, existing human factors studies have extensively explored the reliability of individual human workers. The major challenge is the lack of reliable brain signal interpretation methods for recovering the meaningful mental processes of workers in reaction to their physical conditions and environments [66]. Most studies examined individuals’ observed decisions and decision outcomes under different physical and environmental conditions. As conducting experiments with human subjects in real working environments is extremely time-consuming, many studies use simulators [67,68,69] and Virtual Reality (VR) games [70,71] to collect human behavioral data in controlled environments, allowing individuals to repeat similar processes in controlled virtual or simulator settings. Few studies have attempted to reconstruct the contents of thinking processes from brain signals. Brain-Computer Interface (BCI) studies aim to recover meaningful thoughts from brain signals, but current BCIs can only recover simple control commands from individuals focusing on simple tasks in controlled environments [72].
Most human reliability analysis (HRA) studies originated in other industries and are not specifically designed for the construction industry. To tackle the above challenges, the authors identified suitable HRA methods by drawing on research from other fields. Some scholars proposed a dynamic HRA approach, the IDAC (Information, Decision, and Action in Crew context) model, developed to probabilistically predict the response of nuclear power plant (NPP) control room operators facing system anomalies for dynamic probabilistic risk assessment (DPRA). IDAC considers operator cognitive responses during the process of mitigating consequences and/or bringing the system to a safe state during an accident [73]. Figure 6 shows a keyword network related to HRA.
Other researchers argue that the main causes of human error differ across operating scenarios and therefore divide power system operating processes into three categories: time-centered human reliability is quantified using the proportional hazards model (PHM); process-centered human reliability is analyzed with a modified Cognitive Reliability and Error Analysis Method (CREAM); and human error probability (HEP) in emergency-centered scenarios is quantified with the human cognitive reliability method [74]. These scholars also found shortcomings in the current CREAM model, which fails to simultaneously consider the interdependencies among the common performance conditions; a hybrid HRA model combined with CREAM has therefore been proposed to overcome these drawbacks [75].
At the individual level, the HRA framework created for risk monitoring of nuclear power plant operations [76] covers all three aspects of human reliability assessment: (a) cognition reliability, (b) decision reliability, and (c) execution reliability. Certain HRA methods (e.g., Standardized Plant Analysis Risk Human Reliability Analysis, SPAR-H) [65,77] can help classify field incidents or accidents into diagnosis (perception, decision) failures and action (execution) failures. SPAR-H formulates a framework for how various factors influence individual workers’ performance. These “performance shaping factors” (PSFs) can be human physical conditions, features of the task or operational process design, or environmental conditions, and fall into three categories, as shown in Figure 7 [78]. Unfortunately, SPAR-H focuses mostly on the performance of individual workers, and limited studies have extended its human performance assessment framework to teamwork. Additionally, SPAR-H considers the cognition aspect of human performance but does not fully consider the changing contexts of task execution [79].
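To make the PSF mechanism concrete, the sketch below implements a SPAR-H style HEP calculation as the authors understand it: a nominal HEP scaled by PSF multipliers, with an adjustment factor when several PSFs are negative. The multiplier values are illustrative placeholders rather than values from the official NUREG tables.

```python
# Minimal sketch of a SPAR-H style HEP calculation. Multipliers below are
# illustrative placeholders, not the official NUREG table values.
NOMINAL_HEP = {"diagnosis": 1e-2, "action": 1e-3}

def spar_h_hep(task_type, psf_multipliers):
    nhep = NOMINAL_HEP[task_type]
    composite = 1.0
    for m in psf_multipliers:
        composite *= m
    # Adjustment factor (applied when three or more PSFs are negative,
    # i.e., multiplier > 1) keeps the probability below 1.0.
    if sum(1 for m in psf_multipliers if m > 1) >= 3:
        return nhep * composite / (nhep * (composite - 1) + 1)
    return min(1.0, nhep * composite)

# Hypothetical diagnosis task: time pressure, high complexity, poor ergonomics.
print(spar_h_hep("diagnosis", [10.0, 5.0, 10.0]))  # ~0.83
```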
Some studies examined one or more of the three aspects of individual-level human reliability in various engineering systems. Along the perception reliability direction, some researchers investigated the visual perception reliability of field workers to design better data visualization and wearable information systems for supporting field condition diagnoses [80,81]. The Cognitive Reliability and Error Analysis Method (CREAM) has served as a theoretical framework for integrating individuals’ perception behaviors into human error prediction [82]. Some augmented reality (AR) devices have the potential to integrate visualization methods designed with consideration of human visual perception performance [81,83]. Most of these studies examine advanced data visualization software or hardware designs, claiming that such systems should allow humans to perceive data patterns better [84,85]. The assumption is that better perception of visual information helps workers make appropriate decisions with improved awareness of the information underlying those data patterns. Unfortunately, few studies have conducted extensive, quantitative user tests to quantify the improvement of human perception and decision performance when visualized information is delivered through mobile devices or immersive VR-AR environments.
Perception reliability issues of human individuals have other dimensions, such as the temperature and taste senses, but the authors found that relatively few studies address these sensing mechanisms and their reliability issues [86]. Some studies designed automated audio processing methods to augment human perception so that workers can better hear task-relevant information or commands in complex, changing environments with noisy backgrounds [87,88]. Few studies address individuals’ sensing reliability for smell and taste [89,90]. Overall, most existing studies with engineering applications concern the visual and auditory senses; integrated characterization of sensorial performance (e.g., responses to different odors, flavors, and temperature changes) in civil and infrastructure engineering workers is still in its infancy.
On the topic of cognition reliability, previous methods offered limited discussion of how individuals analyze and decide based on perceived information and situation awareness. The challenge is how to measure individuals’ mental processes so as to reveal their cognitive and decision processes [91]. Recent developments in EEG hardware and brain interface equipment provide some potential for monitoring human thinking processes in controlled environments; brain interfaces and EEG instruments can help examine how various perceived information influences individuals’ analytical capabilities [92]. Some researchers examined how 2D and 3D visualization information influences firefighters’ navigation decisions and field operations during emergency responses [93,94]. In the context of the SHM of bridges, this technology could help researchers analyze how various external factors influence inspectors’ judgment and decision-making processes, thereby improving the reliability of inspection results.
Reliability issues of human individuals are the focus of HRA methods. Most HRA methods predict human errors based on task analysis without considering the dynamic nature of structural engineers’ contexts and field workflows; the SPAR-H method is one example of such static HRA methods [65]. Some HRA methods account for contexts and dependency by modeling the quantitative impacts of related events on human error probabilities (HEP) in various contexts. Unfortunately, dependency modeling produces an overall HEP for given tasks with known performance shaping factors (e.g., workload, training, experience, and task complexity); it does not systematically model the variation of PSFs during complex field processes. PSFs can change when individuals work in changing environments where events interrupt their cognitive processes. In many cases, a person’s capability and physical condition deteriorate over the course of SHM processes, causing significant PSF changes. None of the static HRA methods can reliably handle PSF variations to achieve reliable HEP predictions with full consideration of SHM and maintenance events and environmental changes in the field [95].
Some researchers have started to capture PSF evolution across various working processes and environments through computational simulations and simulator experiments. At the same time, fundamental challenges remain in integrating simulation and simulator data into traditional HRA methods. The main challenge is that conventional HRA methods do not use detailed step-by-step analysis; instead, they focus on general, overall task analysis. Detailed simulation models can therefore produce data incompatible with conventional HRA approaches [95,96,97]. In many cases, HEPs calculated from detailed process models overestimate actual human error rates. The integration of conventional HRA methods with detailed computer simulations based on step-by-step human error assessment requires further study [95].

3.2. Human Reliability (Team Level)

At the team level, reliability research aims to examine: (1) What attributes of communication and data exchange processes can help reliably quantify uncertainties in the shared mental model critical for team situation awareness? (2) What properties of changing workspaces, real-time workflow states, and co-workers determine the team’s performance and SHM and maintenance risks? These questions differ from the analysis and communication reliability concerns mentioned above; instead, they focus on quantifying how communication, site condition and decision optimization and prioritization analysis, and information quality issues influence the efficiency and effectiveness of teamwork. Table 3 provides an overview of three categories of studies that examine team-level reliability issues in three aspects: perception, cognition, and response.
Table 3 shows that team-level studies remain sparse compared with the many active individual-level studies of human cognitive behaviors. Team cognition reliability in dynamic environments is an emerging and challenging area [113]. The main reason is that team decision and collaboration processes require analyzing various team compositions with different members and combinations of teamwork parameters that capture team dynamics [114]. Such complexity limits studies investigating team decision and execution reliability [115].
Figure 8 shows a keyword network related to team cognition. At the team level, some researchers have developed frameworks for characterizing cognition, decision, and cooperation. Some studies examined the “shared mental model”, which defines the reliability of coordinating team collaborations through shared situational awareness based on shared cognition, focusing on all team members’ static knowledge structures. Such shared situation awareness integrates multiple team members’ knowledge and memories [116]. One challenge is how to achieve a theoretically rigorous resolution of conflicting facts obtained by different team members [117]. Some scholars proposed a method for quantitatively examining the neural synchronization between subjects in collaborative processes through electroencephalogram (EEG) hyperscanning, assuming that neural synchronization in EEGs changes with team performance and communication effectiveness [118].
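One common way to quantify such inter-brain synchronization is the phase-locking value (PLV); the sketch below computes it for two synthetic “EEG” channels. This is a generic illustration of the metric, not the specific hyperscanning protocol of [118], and it assumes the signals are already band-pass filtered to the band of interest.

```python
# Minimal sketch of inter-brain synchronization via the phase-locking value
# (PLV), assuming two equal-length, band-pass-filtered EEG channels.
import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    phase_x = np.angle(hilbert(x))       # instantaneous phase of channel x
    phase_y = np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

# Synthetic stand-ins for one channel per team member: a shared 10 Hz
# rhythm with a fixed phase lag plus independent noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 2560)
member_a = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
member_b = np.sin(2 * np.pi * 10 * t + 0.3) + 0.5 * rng.standard_normal(t.size)
print(f"PLV = {plv(member_a, member_b):.2f}")  # close to 1 when synchronized
```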
Cooke et al. conducted a series of studies to profile team performance [119]. They mostly examined the communication network dynamics of collaborating teams, such as UAV controllers, aircraft and fighter pilots, and fire teams of soldiers. These studies found that research on team cognition and teamwork performance should emphasize team members’ interactions [120]. Moreover, the changing and dynamic context of these interactions is critical for comprehending team cognition as a behavior of a group of collaborating people [121]. As a result, measuring interactive team cognition (ITC) should occur at the team level with full consideration of activities such as communication and decision-making [122]. These studies pointed out that relatively little work has been published on dynamic decision reliability at the team level with full consideration of team cognition processes and interactions between team members [120].
Some NASA-funded research projects examined long-flight teams of astronauts to understand how to establish technical environments that ensure astronaut teams’ reliable long-term operation [123,124,125]. One study investigated the dynamics of team coordination using an extended version of the nonlinear dynamical systems (NDS) method; comparing three team conditions, the authors found that the experimental group exhibited better team efficiency and concluded that future studies should explore synthetic teams and examine coordination dynamics within teams [126]. These studies have revealed many team parameters and factors, such as team members’ diverse training and background knowledge and the dynamic history of team coordination patterns, that need systematic characterization.

4. Cyber Reliability for Trustworthy and Explainable Structural Health Monitoring and Risk Prognosis of Bridges

This section synthesizes studies that examine cyber reliability issues during the SHM and risk prognosis of bridges. The authors found 121 related studies published between 2010 and 2022 in the Web of Science database (search criteria—Topic: “data quality” and “SHM”). Table 4 lists the five most cited studies that examine cyber reliability issues and risk prognosis during SHM, and Figure 9 shows the studies collected from the Web of Science database. This section synthesizes studies examining the technical aspects of computer hardware and software systems that can produce uncertainties and errors while processing and managing data and information: the reliability of data and information models (Section 4.1), the computational processes that transform data and information (Section 4.2), and the reliability of storing and exchanging data of various formats (Section 4.3).

4.1. Data and Model Reliability

Data and model reliability refers to the quality of data and information extracted from the data to form information models, such as digital as-built models of structures and facilities. Previous studies explored metrics for measuring the quality of various data and information sources. Figure 10 shows a keywords network related to data and model reliability.
Table 5 indicates that data reliability issues have attracted considerable attention in the civil engineering domain. Some researchers examined the quality of 3D imagery data in terms of accuracy, comprehensiveness, and level of detail [132,133]. Others defined data quality metrics for quantifying the information losses and uncertainties caused by missing data [134,135,136,137,138,139,140,141,142,143]. The data can be tabular data, multimedia data such as images and audio, natural language reports, and sensory time series. Time series data usually capture the motions and vibrations of structures, people, vehicles, equipment, or other objects of interest. Some datasets have metadata that specify the meanings and organization of the data contents and the data collection contexts; any errors in such metadata can cause errors in data analysis, interpretation, and use in practice [144].
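As a simple illustration of missing-data quality metrics, the sketch below computes the overall missing rate and the longest gap in a sensor time series; both the series and the choice of metrics are illustrative.

```python
# Minimal sketch of two missing-data quality metrics for a sensor series,
# assuming NaN marks missing samples (values are illustrative).
import numpy as np
import pandas as pd

series = pd.Series([0.12, 0.15, np.nan, np.nan, 0.11, 0.14, np.nan, 0.13])

missing_rate = series.isna().mean()

# Longest run of consecutive missing samples: long gaps are harder to
# impute than scattered single dropouts at the same overall missing rate.
runs = series.isna().astype(int).groupby(series.notna().cumsum()).sum()
longest_gap = int(runs.max())

print(f"missing rate = {missing_rate:.0%}, longest gap = {longest_gap} samples")
```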
These data-quality studies have not yet addressed some challenges related to quantifying data and model reliability in engineering application contexts. Compared with the systematic quality quantification of structured data and images, relatively few studies have examined metrics for measuring the quality of audio, video, and natural language reports [166]. Another gap is that limited studies have characterized the impacts of unreliable metadata on data interpretation in practice; some researchers pointed out that misleading metadata can result in propagating misunderstandings of the raw data and cause failures in data use [144].
Finally, compared with the many studies on data reliability, fewer studies have characterized the reliability of information models derived from data. Most studies discussed sampling and statistical quality assessment and quality control (QA/QC) methods for checking information models against reality [167,168]. Unfortunately, those studies have not yet addressed the following questions: (1) how to produce a context-related information quality index that quantifies the confidence level of using specific digital models against given engineering requirements [169]; and (2) how to assess the impact of qualitative information errors in information models, such as missing objects or wrong object properties, on decision risks [170].

4.2. Computational Reliability

Computation reliability studies examine how computational processes that transform data and information introduce errors or uncertainties into the information derived from raw data and digital models. Such studies fall into two categories: (1) numeric error studies and (2) workflow studies. Numeric error studies examine how the discrete nature of digital computers produces numerical errors in representing numbers and how those errors influence the reliability of data and model transformation results. Workflow studies examine the propagation of numeric errors within deterministic or stochastic processes that transform data and information models into useful information. Figure 11 illustrates the most frequently used keywords in computational reliability studies.
Numeric error studies operate at the algorithm level, examining numeric stability problems caused by round-off and truncation errors or by specific computation processes that magnify errors and residuals into biases in the results [171]. Traditional numerical analysis methods examine numeric errors and algorithm stability problems [172], and many researchers have reviewed numeric analysis studies and how algorithms produce varying results due to numeric errors [172,173,174,175]. The major challenge lies in developing fundamental theories for capturing random numerical errors with unknown factors underlying hardware and software designs [176,177,178]. For example, Kendall and Gal explored the feasibility of modeling aleatoric and epistemic uncertainties through Bayesian deep learning models for computer vision tasks [176]. Different types of algorithmic processes, such as optimization and matrix inversion, have their own areas and topics for comprehending individual algorithms’ reliability. Some algorithmic processes pose significant challenges for fully revealing their reliability and numeric errors. For example, Franco et al. noted that some floating-point algorithms in machine learning, computer vision, and computer graphics rely heavily on numerical libraries for calculation, and accurately estimating numerical errors from the underlying numerical libraries remains challenging [179]. In addition, algorithms for camera pose estimation based on three 2D/3D point correspondences tend to accumulate numerical errors that jeopardize accuracy [180].
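A minimal demonstration of algorithm-level numeric error: naive float32 summation accumulates round-off error that compensated (Kahan) summation largely avoids. The example is generic and not tied to any study cited above.

```python
# Naive float32 summation vs. Kahan compensated summation: the same data,
# two very different accumulated round-off errors.
import numpy as np

values = np.full(100_000, 0.1, dtype=np.float32)   # exact sum: 10000.0

naive = np.float32(0.0)
for v in values:
    naive += v                     # each addition rounds; errors accumulate

def kahan_sum(xs):
    total = np.float32(0.0)
    c = np.float32(0.0)            # running compensation for lost low-order bits
    for x in xs:
        y = x - c
        t = total + y
        c = (t - total) - y        # recover what the addition just rounded away
        total = t
    return total

print("naive:", naive, " kahan:", kahan_sum(values), " exact: 10000.0")
```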
Numeric analysis research examines numerical errors at the micro level. In contrast, workflow-level studies examine how the numeric errors generated by each algorithm within a workflow influence subsequent data processing algorithms. Relatively few studies have characterized data processing workflows in terms of error accumulation and transformation [181]. Previous studies focused on characterizing individual algorithms [182] or on modeling uncertainty propagation through a pipeline of algorithms connected by input-output relationships [176]. The challenge is to integrate performance models of individual algorithms into performance models of complex workflows; exploring the exponentially large solution space of all possible combinations of parameters across multiple algorithms in a workflow is difficult.
At the workflow level, the exponentially growing complexity of investigating combinations of algorithmic processes makes tracking and characterizing error propagation and accumulation challenging. Emerging theories of process resilience and stochastic process vulnerability have produced new mathematical models for characterizing workflows. Studies of workflow “anti-patterns” and systems dynamics also offer methods for revealing process patterns that can cause unwanted error accumulation and workflow failures triggered by small variations in data inputs or individual data processing parameters [183,184].
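Where analytical error propagation is intractable, Monte Carlo sampling is a common workaround: perturb the workflow inputs and observe the spread of the outputs. The two-step workflow below (smoothing followed by feature extraction) is a placeholder stand-in for a real inspection data pipeline.

```python
# Minimal Monte Carlo sketch of error propagation through a two-step
# workflow; both steps are placeholder stand-ins for real algorithms.
import numpy as np

rng = np.random.default_rng(42)

def step_denoise(signal):
    return np.convolve(signal, np.ones(5) / 5, mode="same")  # moving average

def step_peak_amplitude(signal):
    return signal.max()                                       # feature extraction

true_signal = np.sin(np.linspace(0, 2 * np.pi, 200))
outputs = []
for _ in range(1000):
    noisy = true_signal + rng.normal(0.0, 0.05, true_signal.size)  # input noise
    outputs.append(step_peak_amplitude(step_denoise(noisy)))

# The output spread is the workflow-level uncertainty induced by input noise.
print(f"peak amplitude: mean = {np.mean(outputs):.3f}, std = {np.std(outputs):.3f}")
```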
In brief, knowledge gaps about computational reliability exist at multiple levels of manipulating data and information. Future studies should focus on algorithmic processes with highly uncertain numeric error generation behaviors and on process patterns whose error escalation and failure behaviors are difficult to quantify. New process modeling and characterization theories, such as process pattern discovery [185], process resilience, and systems dynamics models [186,187], will bring opportunities to overcome the difficulties of characterizing the reliability of data processing workflows.

4.3. Data Storage, Exchange, and Transmission Reliability

Data storage, exchange, and transmission reliability research focuses on characterizing data and information losses that occur while storing data, converting data formats, and transmitting data files. Such characterization investigates how the design of data storage and exchange mechanisms and hardware factors influence data and information losses, and determines upper and lower bounds of such losses within specific hardware and software environments [168,188]. Table 6 below synthesizes example studies that examine reliability issues related to data storage, exchange, and transmission, and Figure 12 shows a keyword network related to data reliability.
Specifically, data storage reliability studies examine how to store large amounts of data of different types without information loss; some studies quantify the geometric information losses caused by 3D image compression using different algorithms [189]. Data exchange reliability studies examine semantic losses incurred when converting files between formats and how to reconstruct semantic relationships lost during conversion. Such studies focus on modeling information losses due to transformation between data formats [197] and on methods that identify similar objects across formats to enable translation between them [193]. Data transmission reliability studies investigate how to transmit and exchange data across computing devices or platforms without information loss [198,199].
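The sketch below illustrates one way to quantify storage-induced geometric information loss: coordinates are quantized to a fixed grid (a crude stand-in for lossy 3D compression) and the loss is reported as RMS point displacement. The point cloud and quantization step are synthetic.

```python
# Minimal sketch of quantifying geometric information loss from lossy
# storage: coordinates quantized to a 1 cm grid (a crude stand-in for 3D
# image compression), loss reported as RMS point displacement.
import numpy as np

rng = np.random.default_rng(7)
points = rng.uniform(0.0, 50.0, size=(10_000, 3))   # synthetic point cloud (m)

voxel = 0.01                                         # quantization step (m)
compressed = np.round(points / voxel) * voxel

rms = np.sqrt(np.mean(np.sum((points - compressed) ** 2, axis=1)))
print(f"RMS displacement after quantization: {rms * 1000:.2f} mm")
```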
Although current studies on data storage, exchange, and transmission reliability have made significant contributions in their fields, a few gaps remain. First, limited studies examine how errors in data storage, exchange, and transmission propagate along data analysis workflows and cause misleading results due to information losses introduced by interconnected data management and transformation systems [200,201]. Second, lossless semantic data exchange needs a systematic investigation of semantic equivalence and quantification of the semantic information losses caused by neglecting the changing contexts of entities. In practice, many entities have meanings that change with context, and the lack of context modeling can cause information losses and misinterpretation of data [201].

5. Human-Cyber Reliability for Trustworthy and Explainable Structural Health Monitoring and Risk Prognosis of Bridges

Human-cyber (HC) reliability considers the reliability of data analysis with HITL. The data analysis reliability dimension captures how various factors influence the quality and reliability of information derived from data to support decision-making. Data analysis usually involves three sequential stages that gradually derive information from raw data sources (e.g., images, sensory time series, and inspection reports): (1) data pre-processing, (2) data processing, and (3) data interpretation. The data pre-processing stage transforms raw datasets into structured or semi-structured data that can serve as inputs for the data processing stage. The data processing stage takes pre-processed datasets as inputs and extracts features or patterns corresponding to certain objects or events captured in the data. The data interpretation stage establishes correlations between the features and data patterns extracted by data processing algorithms and derives behavior and process information representing how objects and events evolve over time, so that engineers can obtain meaningful views of objects and events for diagnosing the engineered systems or workspaces.
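The three-stage structure can be summarized in code. Each stage body below is a hypothetical placeholder (NaN removal and standardization, threshold-based anomaly extraction, and a plain-language diagnosis), chosen only to show how reliability issues at one stage feed into the next.

```python
# Minimal sketch of the three-stage data analysis structure; each stage
# body is a hypothetical placeholder.
import numpy as np

def pre_process(raw):
    """Stage 1: raw -> structured (drop missing samples, standardize)."""
    clean = raw[~np.isnan(raw)]
    return (clean - clean.mean()) / clean.std()

def process(structured):
    """Stage 2: extract patterns (here, indices of anomalous samples)."""
    return np.flatnonzero(np.abs(structured) > 3.0)

def interpret(events):
    """Stage 3: turn extracted events into an engineer-readable diagnosis."""
    if events.size == 0:
        return "no anomalous response detected"
    return f"{events.size} anomalous sample(s); first at index {events[0]}"

raw = np.random.default_rng(1).normal(0.0, 1.0, 5000)
raw[1234] = 8.0                     # injected anomaly for illustration
print(interpret(process(pre_process(raw))))
```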
Table 7 defines these three stages of data analysis and shows how they exhibit different reliability issues. The table shows that some previous studies examined how various factors involved in data pre-processing, data processing, and data interpretation influence the reliability of processed results. The authors found relatively more studies on the reliability issues of data pre-processing and data processing, while few studies address the reliability of data interpretation. The reason is that data interpretation focuses on detecting and assessing relationships between the individual features, objects, and events extracted from the data; relationship detection relies on the detection of features, objects, and events, which are necessary foundations for extensive relationship modeling and assessment [202,203]. As a result, data scientists invest more effort in extracting features, objects, and events from data first, and only then focus on relationship analysis. On the other hand, computational modeling researchers have begun modeling various relationships from the knowledge representation point of view without considering the challenges of extracting those relationships from data [204].
Some recent studies have started examining approaches for automatically generating relationships between objects and events from diverse datasets in the domains of big data analytics [214], geospatial analysis [215], and data-driven simulations [216]. Many semantic relationships, such as those representing safety rules between specific types of objects in workspaces [193,214], reside as subtle semantic information in natural language documents or semantically rich digital models (e.g., Building Information Models). Extracting material properties and other semantic information about objects through integrated analysis of textual reports, images, and sensory data remains challenging [217,218,219,220].

6. Conclusions: A Research Road Map for Advancing Trustworthy and Explainable Structural Health Monitoring and Risk Prognosis of Bridges

This review of HC reliability offers a comprehensive characterization of the identified knowledge gaps related to HC reliability issues. This section summarizes all identified grand challenges and aims to help researchers recognize the research activities needed to advance the reliable structural health monitoring of bridges. A three-part coding system labels these challenges: a code for the aspect, a code for the reliability issue, and an integer that indexes the challenge within its category. For example, challenge H.DA.1 belongs to the “Human” aspect (code: H), concerns “Data Analysis Reliability” (code: DA), and is the first challenge in that category (code: 1).
These challenges fall into five main areas.
Data analysis reliability:
H.DA.1—Lack of methods for assessing the reliability of the rules and relationships generated automatically from multiple unstructured data sources.
H.DA.2—Lack of methods for reliable integrated analysis of images, audio, and unstructured documents.
Operational reliability:
H.O.1—Lack of methods for handling variations of performance shaping factors (PSFs) reliably to achieve accurate predictions of HEP with full consideration of detailed working processes and environmental changes.
H.O.2—Limited investigations of dynamic team decision reliability with full consideration of team cognition processes and the various interactions between team members.
Data and model reliability:
C.DM.1—Lack of methods for characterizing the impacts of errors in natural language reports and metadata on automatic document and data interpretation results and on the decisions based on such interpretations.
C.DM.2—Lack of methods for generating the context-aware information quality index needed to quantify the risks of using certain information models in given decision contexts with specific information requirements and changing environments.
Computational reliability:
C.CP.1—Lack of methods for characterizing data processing workflows composed of multiple data and information transformation algorithms with diverse numeric error generation and propagation behaviors.
Data storage, exchange, and transmission reliability:
C.SET.1—Lack of reliable application context and semantic mapping between different file formats to allow reliable data exchange across software used in different domains.
C.SET.2—Lack of methods for assessing both qualitative and quantitative error propagation along data processing workflows composed of algorithms that handle diverse datasets to produce decision information (e.g., the deterioration status of a bridge based on reports, images, and time series collected by contact sensors).
In response to the major challenges discussed above, the authors propose a research road map encompassing three types of scientific research effort (data collection, data processing, and theoretical modeling) aimed at addressing the challenges discussed in the previous sections while understanding dynamic HC reliability issues to ensure bridge safety.
Data collection: (1) Collect large amounts of data (e.g., bridge condition data, human behavior data, computer logs, human network interaction logs) in controlled laboratory environments or under field conditions. (2) Use the collected data to evaluate the performance of bridges and the behavioral patterns of human individuals in different decision contexts and environmental conditions. This proposal addresses issues H.O.2 and C.DM.2 above.
Data processing: (1) Establish data processing models for representing data processing workflows. (2) Develop and test data processing methods for the integrated analysis of multiple heterogeneous data sources. (3) Develop scientific workflow systems that connect data processing algorithms into workflows to support the characterization of those workflows. This proposal addresses issues H.O.2, H.DA.1, H.DA.2, C.DM.1, C.DM.2, C.CP.1, C.SET.1, and C.SET.2 above.
Theoretical modeling: (1) Create models based on collected data for predicting the deterioration trends of bridges in various environments under various maintenance plans. (2) Create models for predicting the dynamic reliability of given teams in changing decision contexts. (3) Create models for predicting the impact of both qualitative and numerical data errors on the reliability of the information derived from given data processing workflows. This proposal addresses issues H.O.1, H.O.2, H.DA.1, H.DA.2, C.DM.1, C.DM.2, C.CP.1, C.SET.1, and C.SET.2 above.
The theoretical modeling aspect of this road map illustrates the imperative, specific research activities that the authors believe would resolve the identified grand challenges and bridge the knowledge gaps needed to ensure the security of civil infrastructures.
This paper synthesizes the literature toward a research vision of “Reliable Structural Health Monitoring and Risk Prognosis of Bridges with Human-In-The-Loop”. The major conclusion is that relevant research studies, knowledge, methods, and data from multiple domains (e.g., human systems engineering, computer science, civil engineering, infrastructure systems, etc.) can contribute to this vision of reliable SHM of bridges. At the same time, certain fundamental limitations remain for future exploration. This paper presents a review of the relevant literature to promote multi-disciplinary collaboration and discussion of the limitations and challenges identified by the authors in this critical review effort.

Author Contributions

Conceptualization, Z.S.; Methodology, Z.S.; Software, T.C. and L.H.; Validation, Z.S.; Formal analysis, Z.S. and T.C.; Investigation, Z.S., L.H. and R.Z; Resources, Z.S., Y.B. and X.M.; Data curation, Z.S., Y.B. and X.M.; Writing—original draft, Z.S., T.C. and R.Z.; Writing—review & editing, Z.S., Y.B., X.M. and L.H.; Visualization, Z.S., T.C., L.H. and R.Z.; Supervision, Z.S., Y.B. and X.M.; Project administration, Z.S.; Funding acquisition, Z.S., Y.B. and X.M. All authors have read and agreed to the published version of the manuscript.

Funding

This material is based on work supported by the Beijing University of Technology. The support is gratefully acknowledged.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Durango-Cohen, P.L.; Madanat, S.M. Optimization of inspection and maintenance decisions for infrastructure facilities under performance model uncertainty: A quasi-Bayes approach. Transp. Res. Part A Policy Pract. 2008, 42, 1074–1085. [Google Scholar] [CrossRef] [Green Version]
  2. Van Riel, W.; Langeveld, J.; Herder, P.; Clemens, F. The influence of information quality on decision-making for networked infrastructure management. Struct. Infrastruct. Eng. 2017, 13, 696–708. [Google Scholar] [CrossRef] [Green Version]
  3. Gil, M.; Albert, M.; Fons, J.; Pelechano, V. Engineering human-in-the-loop interactions in cyber-physical systems. Inf. Softw. Technol. 2020, 126, 106349. [Google Scholar] [CrossRef]
  4. Dong, C.; Catbas, F.N. A review of computer vision–based structural health monitoring at local and global levels. Struct. Health Monit. 2021, 20, 692–743. [Google Scholar] [CrossRef]
  5. Abdulkarem, M.; Samsudin, K.; Rokhani, F.Z.; Rasid, M.F.A. Wireless sensor network for structural health monitoring: A contemporary review of technologies, challenges, and future direction. Struct. Health Monit. 2020, 19, 693–735. [Google Scholar] [CrossRef]
  6. Wu, R.; Jahanshahi, M.R. Data fusion approaches for structural health monitoring and system identification: Past, present, and future. Struct. Health Monit. 2020, 19, 552–586. [Google Scholar] [CrossRef]
  7. Fan, W.; Chen, Y.; Li, J.; Sun, Y.; Feng, J.; Hassanin, H.; Sareh, P. Machine learning applied to the design and inspection of reinforced concrete bridges: Resilient methods and emerging applications. Structures 2021, 33, 3954–3963. [Google Scholar] [CrossRef]
  8. Masciotta, M.; Ramos, L.F.; Lourenço, P.B.; Vasta, M.; De Roeck, G. A spectrum-driven damage identification technique: Application and validation through the numerical simulation of the Z24 Bridge. Mech. Syst. Signal Process. 2016, 70–71, 578–600. [Google Scholar] [CrossRef] [Green Version]
  9. Astroza, R.; Ebrahimian, H.; Conte, J.P. Performance comparison of Kalman-based filters for nonlinear structural finite element model updating. J. Sound Vib. 2019, 438, 520–542. [Google Scholar] [CrossRef]
  10. Moaveni, B.; Conte, J.P. System and Damage Identification of Civil Structures. Encyclopedia of Earthquake Engineering; University of California: Berkeley, CA, USA, 2014; pp. 1–9. [Google Scholar]
  11. Averell, L.; Heathcote, A. The form of the forgetting curve and the fate of memories. J. Math. Psychol. 2011, 55, 25–35. [Google Scholar] [CrossRef]
  12. Tribukait, A.; Eiken, O. On the time course of short-term forgetting: A human experimental model for the sense of balance. Cogn. Neurodynamics 2016, 10, 7–22. [Google Scholar] [CrossRef] [Green Version]
  13. Brown, A.W.; Kaiser, K.A.; Allison, D.B. Issues with data and analyses: Errors, underlying themes, and potential solutions. Proc. Natl. Acad. Sci. USA 2018, 115, 2563–2570. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  14. Kirwan, B.; Smith, A.; Rycraft, H. Human Error Data Collection and Data Generation. Int. J. Qual. Reliab. Manag. 1990, 7. [Google Scholar] [CrossRef]
  15. Kotek, L.; Mukhametzianova, L. Validation of Human Error Probabilities with Statistical Analysis of Misbehaviours. Procedia Eng. 2012, 42, 1955–1959. [Google Scholar] [CrossRef] [Green Version]
  16. Bolton, M.L. Model Checking Human–Human Communication Protocols Using Task Models and Miscommunication Generation. J. Aerosp. Inf. Syst. 2015, 12, 476–489. [Google Scholar] [CrossRef] [Green Version]
  17. Pan, D.; Bolton, M.L. Properties for formally assessing the performance level of human-human collaborative procedures with miscommunications and erroneous human behavior. Int. J. Ind. Ergon. 2015, 63, 75–88. [Google Scholar] [CrossRef]
  18. Gonzalez, C. The boundaries of instance-based learning theory for explaining decisions from experience. Prog. Brain Res. 2013, 202, 73–98. [Google Scholar] [PubMed]
  19. Zhu, X.S.; Wolfson, M.A.; Dalal, D.K.; Mathieu, J.E. Team Decision Making: The Dynamic Effects of Team Decision Style Composition and Performance via Decision Strategy. J. Manag. 2021, 47, 1281–1304. [Google Scholar] [CrossRef]
  20. Kosoris, N.; Chastine, J. A Study of the Correlations between Augmented Reality and its Ability to Influence User Behavior. IEEE 2015, 113–118. [Google Scholar]
  21. Love, P.E.D.; Edwards, D.J.; Han, S.; Goh, Y.M. Design error reduction: Toward the effective utilization of building information modeling. Res. Eng. Des. 2011, 22, 173–187. [Google Scholar] [CrossRef]
  22. Shin, J.-C.; Baek, Y.-I.; Park, W.-T. Analysis of Errors in Tunnel Quantity Estimation with 3D-BIM Compared with Routine Method Based 2D. J. Korean Geotech. Soc. 2011, 27, 63–71. [Google Scholar] [CrossRef]
  23. Oberkampf, W.L.; Roy, C.J. Verification and Validation in Scientific Computing; Cambridge University Press: Cambridge, UK, 2011. [Google Scholar]
  24. Randell, B.; Lee, P.; Treleaven, P.C. Reliability Issues in Computing System Design. ACM Comput. Surv. 1978, 10, 123–165. [Google Scholar] [CrossRef]
  25. Tsapatsoulis, N.; Djouvas, C. Opinion Mining from Social Media Short Texts: Does Collective Intelligence Beat Deep Learning? Front. Robot. AI 2019, 5, 138. [Google Scholar] [CrossRef] [Green Version]
  26. Das, M.; Cheng, J.C.P.; Kumar, S.S. BIMCloud: A Distributed Cloud-Based Social BIM Framework for Project Collaboration. In Computing in Civil and Building Engineering; ASCE: Orlando, FL, USA, 2014. [Google Scholar]
  27. Xu, Z.; Zhang, L.; Li, H.; Lin, Y.H.; Yin, S. Combining IFC and 3D Tiles to Create 3D Visualization for Building Information Modeling. Autom. Constr. 2020, 109, 1–16. [Google Scholar] [CrossRef]
  28. Chen, W.; Chen, K.; Cheng, J.C.; Wang, Q.; Gan, V.J. BIM-based framework for automatic scheduling of facility maintenance work orders. Autom. Constr. 2018, 91, 15–30. [Google Scholar] [CrossRef]
  29. Sun, Z.; Xing, J.; Tang, P.; Cooke, N.J.; Boring, R.L. Human reliability for safe and efficient civil infrastructure operation and maintenance–A review. Dev. Built Environ. 2020, 4, 100028. [Google Scholar] [CrossRef]
  30. Liu, P.; Xiong, R.; Tang, P. Mining Observation and Cognitive Behavior Process Patterns of Bridge Inspectors. In Computing in Civil Engineering 2021; ASCE: Reston, VA, USA, 2022; pp. 604–612. Available online: https://ascelibrary.org/doi/abs/10.1061/9780784483893.075 (accessed on 12 February 2023).
  31. Xiong, R.; Liu, P.; Tang, P. Human Reliability Analysis and Prediction for Visual Inspection in Bridge Maintenance. In Computing in Civil Engineering; ASCE: Reston, VA, USA, 2021; pp. 254–262. [Google Scholar]
  32. Zong, Z.; Xia, Z.; Liu, H.; Li, Y.; Huang, X. Collapse Failure of Prestressed Concrete Continuous Rigid-Frame Bridge under Strong Earthquake Excitation: Testing and Simulation. J. Bridg. Eng. 2016, 21, 04016047. [Google Scholar] [CrossRef]
  33. Yang, H.; Xu, X.; Neumann, I. Laser Scanning-Based Updating of a Finite-Element Model for Structural Health Monitoring. IEEE Sens. J. 2016, 16, 2100–2104. [Google Scholar] [CrossRef]
  34. Sun, Z.; Shi, Y.; Xiong, W.; Tang, P. Vision-Based Correlated Change Analysis for Supporting Finite Element Model Updating on Curved Continuous Rigid Frame Bridges. In Proceedings of the Construction Research Congress 2020: Infrastructure Systems and Sustainability, American Society of Civil Engineers (ASCE), Tempe, AZ, USA, 8–10 March 2020; pp. 380–389. [Google Scholar]
  35. Posenato, D.; Lanata, F.; Inaudi, D.; Smith, I.F. Model-free data interpretation for continuous monitoring of complex structures. Adv. Eng. Inform. 2008, 22, 135–144. [Google Scholar] [CrossRef]
  36. Raphael, B.; Smith, I.F.C. Global Search through Sampling Using a PDF; Springer: Berlin/Heidelberg, Germany, 2003; pp. 71–82. [Google Scholar]
  37. Panteli, M.; Mancarella, P. Modeling and Evaluating the Resilience of Critical Electrical Power Infrastructure to Extreme Weather Events. IEEE Syst. J. 2017, 11, 1733–1742. [Google Scholar] [CrossRef]
  38. Gupta, S.; Kumar, P.; Raju, G.Y. A fuzzy causal relational mapping and rough set-based model for context-specific human error rate estimation. Int. J. Occup. Saf. Ergon. 2019, 27, 63–78. [Google Scholar] [CrossRef]
  39. Akyuz, E.; Celik, M.; Akgun, I.; Cicek, K. Prediction of human error probabilities in a critical marine engineering operation on-board chemical tanker ship: The case of ship bunkering. Saf. Sci. 2018, 110, 102–109. [Google Scholar] [CrossRef]
  40. Akyuz, E.; Celik, M. A modified human reliability analysis for cargo operation in single point mooring (SPM) off-shore units. Appl. Ocean. Res. 2016, 58, 11–20. [Google Scholar] [CrossRef]
  41. Akyuz, E.; Celik, M. A methodological extension to human reliability analysis for cargo tank cleaning operation on board chemical tanker ships. Saf. Sci. 2015, 75, 146–155. [Google Scholar] [CrossRef]
  42. Liversedge, S.P.; Findlay, J.M. Saccadic eye movements and cognition. Trends Cogn. Sci. 2000, 4, 6–14. [Google Scholar] [CrossRef]
  43. McAlpine, D. Creating a sense of auditory space. J. Physiol. 2005, 566, 21–28. [Google Scholar] [CrossRef]
  44. Wang, Y.; Sun, Y.; Joseph, P.V. Contrasting Patterns of Gene Duplication, Relocation, and Selection Among Human Taste Genes. Evol. Bioinform. 2021, 17, 1–6. [Google Scholar] [CrossRef] [PubMed]
  45. Gostelow, P.; Parsons, S.A.; Stuetz, R.M. Sewage Treatment Works Odour Measurement. Water Sci. Technol. 2000, 41, 6. [Google Scholar] [CrossRef]
  46. Dijkerman, H.C.; de Haan, E.H.F. Somatosensory processes subserving perception and action. Behav. Brain Sci. 2007, 30, 224–230. [Google Scholar] [CrossRef]
  47. Marr, D. Vision: A Computational Investigation into the Human Representation and Processing of Visual Information; MIT Press: Cambridge, MA, USA, 1982. [Google Scholar]
  48. Moussaïd, M.; Helbing, D.; Theraulaz, G. How simple rules determine pedestrian behavior and crowd disasters. Proc. Natl. Acad. Sci. USA 2011, 108, 6884–6888. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  49. Thomas, M.L.; Green, M.F.; Hellemann, G.; Sugar, C.A.; Tarasenko, M.; Calkins, M.E.; Greenwood, T.A.; Gur, R.E.; Gur, R.C.; Lazzeroni, L.C.; et al. Modeling Deficits from Early Auditory Information Processing to Psychosocial Functioning in Schizophrenia. JAMA Psychiatry 2017, 74, 37–46. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  50. Hoyer, W.D.; Stokburger-Sauer, N.E. The role of aesthetic taste in consumer behavior. J. Acad. Mark. Sci. 2012, 40, 167–180. [Google Scholar] [CrossRef]
  51. Shabgou, M.; Daryani, S.M. Towards the sensory marketing: Stimulating the five senses (sight, hearing, smell, touch and taste) and its impact on consumer behavior. Indian J. Fundam. Appl. Life Sci. 2014, 4, 573–581. [Google Scholar]
  52. Borghi, A.M.; Cimatti, F. Embodied cognition and beyond: Acting and sensing the body. Neuropsychologia 2010, 48, 763–773. [Google Scholar] [CrossRef] [PubMed]
  53. Kang, Y.; Williams, L.E.; Clark, M.S.; Gray, J.R.; Bargh, J.A. Physical temperature effects on trust behavior: The role of insula. Soc. Cogn. Affect. Neurosci. 2011, 6, 507–515. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  54. Wieser, M.J.; Pauli, P.; Grosseibl, M.; Molzow, I.; Mühlberger, A. Virtual social interactions in social anxiety-The impact of sex, gaze, and interpersonal distance. Cyberpsychology Behav. Soc. Netw. 2010, 13, 547–554. [Google Scholar] [CrossRef]
  55. Zitouni, M.S.; Sluzek, A.; Bhaskar, H. Visual analysis of socio-cognitive crowd behaviors for surveillance: A survey and categorization of trends and methods. Eng. Appl. Artif. Intell. 2019, 82, 294–312. [Google Scholar] [CrossRef]
  56. Miller, W.R.; Rose, G.S. Toward a Theory of Motivational Interviewing. Am. Psychol. 2009, 64, 527. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  57. Lee, B.C.; Duffy, V.G. The Effects of Task Interruption on Human Performance: A Study of the Systematic Classification of Human Behavior and Interruption Frequency. Hum. Factors Ergon. Manuf. 2015, 25, 137–152. [Google Scholar] [CrossRef]
  58. Naujoks, F.; Wiedemann, K.; Schömig, N. The importance of interruption management for usefulness and acceptance of automated driving. In Proceedings of the Automotive UI 2017-9th International ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications, Oldenburg, Germany, 24–27 September 2017. [Google Scholar]
  59. Núñez, R.; Cooperrider, K. The tangle of space and time in human cognition. Trends Cogn. Sci. 2013, 17, 220–229. [Google Scholar] [CrossRef] [PubMed]
  60. Silber, B.; Schmitt, J. Effects of tryptophan loading on human cognition, mood, and sleep. Neurosci. Biobehav. Rev. 2010, 34, 387–407. [Google Scholar] [CrossRef]
  61. Salman, I.; Turhan, B.; Vegas, S. A controlled experiment on time pressure and confirmation bias in functional software testing. Empir. Softw. Eng. 2019, 24, 1727–1761. [Google Scholar] [CrossRef] [Green Version]
  62. Zakay, D. The Impact of Time Perception Processes on Decision Making under Time Stress. Time Press. Stress Hum. Judgm. Decis. Mak. 1993, 59–72. [Google Scholar] [CrossRef]
  63. Blakely, M.J.; Kemp, S.; Helton, W.S. Volitional Running and Tone Counting: The Impact of Cognitive Load on Running over Natural Terrain. IIE Trans. Occup. Ergon. Hum. Factors 2016, 4, 104–114. [Google Scholar] [CrossRef]
  64. Blakely, M.J.; Wilson, K.; Russell, P.N.; Helton, W.S. The impact of cognitive load on volitional running. In Proceedings of the Human Factors and Ergonomics Society; SAGE Publications: Los Angeles, CA, USA, 2016. [Google Scholar]
  65. Laumann, K.; Rasmussen, M. Suggested Improvements to the Definitions of Standardized Plant Analysis of Risk-Human Reliability Analysis (SPAR-H) Performance Shaping Factors, their Levels and Multipliers and the Nominal Tasks. Reliab. Eng. Syst. Saf. 2016, 145, 287–300. [Google Scholar] [CrossRef]
  66. Schirner, G.; Erdogmus, D.; Chowdhury, K.; Padir, T. The future of human-in-the-loop cyber-physical systems. Computer 2013, 46, 36–45. [Google Scholar] [CrossRef]
  67. Chen, J.; Liu, Y.; Cooke, N.; Tang, P. Real-time Facial Expression and Head Pose Analysis for Monitoring the Workloads of Air Traffic Controllers. In Proceedings of the AIAA Aviation 2019 Forum, Dallas, TX, USA, 17–21 June 2019; p. 3412. [Google Scholar]
  68. Demir, M.; McNeese, N.J.; Cooke, N.J. Team situation awareness within the context of human-autonomy teaming. Cogn. Syst. Res. 2017, 46, 3–12. [Google Scholar] [CrossRef]
  69. Sun, Z.; Tang, P. Automatic Communication Error Detection Using Speech Recognition and Linguistic Analysis for Proactive Control of Loss of Separation. Transp. Res. Rec. J. Transp. Res. Board 2020, 2675, 1–12. [Google Scholar] [CrossRef]
  70. Chalhoub, J.; Alsafouri, S.; Ayer, S.K. Leveraging site survey points for mixed reality BIM visualization. In Proceedings of the Construction Research Congress 2018: Construction Information Technology-Selected Papers from the Construction Research Congress, New Orleans, LA, USA, 2–4 April 2018; pp. 326–335. [Google Scholar]
  71. Shi, Y.; Du, J. Simulation of Spatial Memory for Human Navigation Based on Visual Attention in Floorplan Review. In Proceedings of the Winter Simulation Conference, National Harbor, MD, USA, 8–11 December 2019; pp. 3031–3040. [Google Scholar]
  72. Wolpaw, J.; Birbaumer, N.; Heetderks, W.; McFarland, D.; Peckham, P.; Schalk, G.; Donchin, E.; Quatrano, L.; Robinson, C.; Vaughan, T. Brain-computer interface technology: A review of the first international meeting. IEEE Trans. Rehabil. Eng. 2000, 8, 164–173. [Google Scholar] [CrossRef] [PubMed]
  73. Chang, Y.; Mosleh, A. Cognitive modeling and dynamic probabilistic simulation of operating crew response to complex system accidents: Part 1: Overview of the IDAC Model. Reliab. Eng. Syst. Saf. 2007, 92, 997–1013. [Google Scholar] [CrossRef]
  74. Bao, Y.; Guo, C.; Zhang, J.; Wu, J.; Pang, S.; Zhang, Z. Impact analysis of human factors on power system operation reliability. J. Mod. Power Syst. Clean Energy 2018, 6, 27–39. [Google Scholar] [CrossRef] [Green Version]
  75. Chen, X.; Liu, X.; Qin, Y. An extended CREAM model based on analytic network process under the type-2 fuzzy environment for human reliability analysis in the high-speed train operation. Qual. Reliab. Eng. Int. 2021, 37, 284–308. [Google Scholar] [CrossRef]
  76. Boring, R. Top-Down and Bottom-up Definitions of Human Failure Events in Human Reliability Analysis. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2014, 58, 563–567. [Google Scholar] [CrossRef]
  77. Gertman, D.I.; Blackman, H.S.; Marble, J.L.; Smith, C.; Boring, R.L.; O’Reilly, P. The SPAR H human reliability analysis method. In Proceedings of the American Nuclear Society 4th International Topical Meeting on Nuclear Plant Instrumentation, Control and Human Machine Interface Technology, Charlotte, NC, USA, 1 January 2004. [Google Scholar]
  78. Blackman, H.S.; Gertman, D.I.; Boring, R.L. Human error quantification using performance shaping factors in the SPAR-H method. In Proceedings of the Human Factors and Ergonomics Society, New York City, NY, USA, 22–26 September 2008. [Google Scholar]
  79. Boring, R.L.; Blackman, H.S. The origins of the SPAR-H method’s performance shaping factor multipliers. In Proceedings of the IEEE Conference on Human Factors and Power Plants, Monterey, CA, USA, 26–31 August 2007. [Google Scholar]
  80. Demirkesen, S.; Arditi, D. Construction safety personnel′s perceptions of safety training practices. Int. J. Proj. Manag. 2015, 33, 1160–1169. [Google Scholar] [CrossRef]
  81. Wang, T.-K.; Huang, J.; Liao, P.-C.; Piao, Y. Does Augmented Reality Effectively Foster Visual Learning Process in Construction? An Eye-Tracking Study in Steel Installation. Adv. Civ. Eng. 2018, 2018, 2472167. [Google Scholar] [CrossRef]
  82. Hollnagel, E. Cognitive Reliability and Error Analysis Method (CREAM); Elsevier: Amsterdam, The Netherlands, 1998. [Google Scholar]
  83. Alsafouri, S.; Ayer, S.K. Mobile Augmented Reality to Influence Design and Constructability Review Sessions. J. Arch. Eng. 2019, 25, 04019016. [Google Scholar] [CrossRef]
  84. Alvarenga, M.; e Melo, P.F. A review of the cognitive basis for human reliability analysis. Prog. Nucl. Energy 2019, 117, 103050. [Google Scholar] [CrossRef]
  85. French, S.; Bedford, T.; Pollard, S.J.; Soane, E. Human reliability analysis: A critique and review for managers. Saf. Sci. 2011, 49, 753–763. [Google Scholar] [CrossRef] [Green Version]
  86. Smart, P.R.; Shadbolt, N.R. Modelling the dynamics of team sensemaking: A constraint satisfaction approach. Knowl. Syst. Coalit. Oper. 2012, 1–10. [Google Scholar]
  87. Bost, X.; Senay, G.; El-Bèze, M.; De Mori, R. Multiple topic identification in human/human conversations. Comput. Speech Lang. 2015, 34, 18–42. [Google Scholar] [CrossRef] [Green Version]
  88. Erdogan, H.; Sarikaya, R.; Chen, S.F.; Gao, Y.; Picheny, M. Using semantic analysis to improve speech recognition performance. Comput. Speech Lang. 2005, 19, 321–343. [Google Scholar] [CrossRef]
  89. Gostelow, P.; Parsons, S.; Stuetz, R. Odour measurements for sewage treatment works. Water Res. 2001, 35, 579–597. [Google Scholar] [CrossRef] [PubMed]
  90. Lynch, E.J.; Petrov, A.P. The Sense of Taste; Nova Biomedical: Waltham, MA, USA, 2012. [Google Scholar]
  91. Loft, S.; Sanderson, P.; Neal, A.; Mooij, M. Modeling and Predicting Mental Workload in en Route Air Traffic Control: Critical Review and Broader Implications. Hum. Factors J. Hum. Factors Ergon. Soc. 2007, 49, 376–399. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  92. Borghini, G.; Aricò, P.; DI Flumeri, G.; Cartocci, G.; Colosimo, A.; Bonelli, S.; Golfetti, A.; Imbert, J.P.; Granger, G.; Benhacene, R.; et al. EEG-Based Cognitive Control Behaviour Assessment: An Ecological study with Professional Air Traffic Controllers. Sci. Rep. 2017, 7, 547. [Google Scholar] [CrossRef] [Green Version]
  93. Kapalo, K.A.; Bockelman, P.; LaViola, J.J. ‘Sizing up’ Emerging technology for firefighting: Augmented reality for incident assessment. In Proceedings of the Human Factors and Ergonomics Society, Philadelphia, PA, USA, 1–5 October 2018. [Google Scholar]
  94. Yang, L.; Liang, Y.; Wu, D.; Gault, J. Train and equip firefighters with cognitive virtual and augmented reality. In Proceedings of the 4th IEEE International Conference on Collaboration and Internet Computing, CIC 2018, Philadelphia, PA, USA, 18–20 October 2018. [Google Scholar]
  95. Boring, R.L. Dynamic human reliability analysis: Benefits and challenges of simulating human performance. In Proceedings of the European Safety and Reliability Conference 2007, ESREL 2007-Risk, Reliability and Societal Safety, Stavanger, Norway, 25–27 June 2007. [Google Scholar]
  96. Lyons, M.; Adams, S.; Woloshynowych, M.; Vincent, C. Human reliability analysis in healthcare: A review of techniques. Int. J. Risk Saf. Med. 2004, 16, 223–237. [Google Scholar]
  97. Pyy, P. Human Reliability Analysis Methods for Probabilistic Safety Assessment; VTT Publications: Espoo, Finland, 2000. [Google Scholar]
  98. Goldin-Meadow, S. The role of gesture in communication and thinking. Trends Cogn. Sci. 1999, 3, 419–429. [Google Scholar] [CrossRef]
  99. Motty, A.; Yogitha, A.; Nandakumar, R. Flag semaphore detection using tensorflow and opencv. Int. J. Recent Technol. Eng. 2019, 7, 2277–3878. [Google Scholar]
  100. Pigou, L.; Dieleman, S.; Kindermans, P.J.; Schrauwen, B. Sign language recognition using convolutional neural networks. In Proceedings of the Computer Vision-ECCV 2014 Workshops, Zurich, Switzerland, 6–7 and 12 September 2014. [Google Scholar]
  101. Prather, J.F.; Peters, S.; Nowicki, S.; Mooney, R. Precise auditory-vocal mirroring in neurons for learned vocal communication. Nature 2008, 451, 305–310. [Google Scholar] [CrossRef] [PubMed]
  102. Kendon, A.; Birdwhistell, R.L. Kinesics and Context: Essays on Body Motion Communication. Am. J. Psychol. 1972, 85, 441. [Google Scholar] [CrossRef]
  103. Keysers, C.; Kaas, J.H.; Gazzola, V. Somatosensation in social perception. Nat. Rev. Neurosci. 2010, 11, 417–428. [Google Scholar] [CrossRef]
  104. Bourbousson, J.; Feigean, M.; Seiler, R. Team cognition in sport: How current insights into how teamwork is achieved in naturalistic settings can lead to simulation studies. Front. Psychol. 2019, 10, 2082. [Google Scholar] [CrossRef] [Green Version]
  105. Cooke, N.J.; Gorman, J.C.; Myers, C.; Duran, J. Theoretical underpinning of interactive team cognition. In Theories of Team Cognition: Cross-Disciplinary Perspectives; Routledge: Abingdon, UK, 2013. [Google Scholar]
  106. Salas, E.; Rosen, M.A.; Held, J.D.; Weissmuller, J.J. Performance measurement in simulation-based training: A review and best practices. Simul. Gaming 2009, 40, 328–376. [Google Scholar] [CrossRef]
  107. Williams, A.M.; Ericsson, K.A.; Ward, P.; Eccles, D.W. Research on Expertise in Sport: Implications for the Military. Mil. Psychol. 2008, 20, S123–S145. [Google Scholar] [CrossRef]
  108. Gutwin, C.; Greenberg, S. The importance of awareness for team cognition in distributed collaboration. In Team Cognition: Understanding the Factors That Drive Process and Performance; APA: Washington, DC, USA, 2005. [Google Scholar]
  109. Kaplan, S.; LaPort, K.; Waller, M.J. The role of positive affectivity in team effectiveness during crises. J. Organ. Behav. 2013, 34, 473–491. [Google Scholar] [CrossRef]
  110. Talat, A.; Riaz, Z. An integrated model of team resilience: Exploring the roles of team sensemaking, team bricolage and task interdependence. Pers. Rev. 2020, 49, 2007–2033. [Google Scholar] [CrossRef]
  111. Wang, M.-H.; Yang, T.-Y. Explaining Team Creativity through Team Cognition Theory and Problem Solving based on Input-Mediator-Output Approach. J. Electron. Commer. 2015, 17, 91–138. [Google Scholar]
  112. Cooke, N.J.; Gorman, J.C. Interaction-Based Measures of Cognitive Systems. J. Cogn. Eng. Decis. Mak. 2009, 3, 27–46. [Google Scholar]
  113. Cooke, N.J.; Gorman, J.C.; Winner, J.L. Team Cognition. In Handbook of Applied Cognition, 2nd ed.; APA: Washington, DC, USA, 2008. [Google Scholar]
  114. Lai, H.-Y.; Chen, C.-H.; Khoo, L.-P.; Zheng, P. Unstable approach in aviation: Mental model disconnects between pilots and air traffic controllers and interaction conflicts. Reliab. Eng. Syst. Saf. 2019, 185, 383–391. [Google Scholar] [CrossRef]
  115. Liston, K.; Fischer, M.; Winograd, T. Focused sharing of information for multi-disciplinary decision making by project teams. Electron. J. Inf. Technol. Constr. 2001, 6, 69–82. [Google Scholar]
  116. Gorman, J.C.; Cooke, N.J.; Winner, J.L. Measuring team situation awareness in decentralized command and control environments. Ergonomics 2006, 49, 1312–1325. [Google Scholar] [CrossRef] [PubMed]
  117. Bell, S.T.; Brown, S.G.; Mitchell, T. What We Know about Team Dynamics for Long-Distance Space Missions: A Systematic Review of Analog Research. Front. Psychol. 2019, 10, 811. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  118. Cha, K.-M.; Lee, H.-C. A novel qEEG measure of teamwork for human error analysis: An EEG hyperscanning study. Nucl. Eng. Technol. 2018, 51, 683–691. [Google Scholar] [CrossRef]
  119. Gorman, J.C.; Hessler, E.E.; Amazeen, P.G.; Cooke, N.J.; Shope, S.M. Dynamical analysis in real time: Detecting perturbations to team communication. Ergonomics 2012, 55, 825–839. [Google Scholar] [CrossRef]
  120. Gorman, J.C.; Amazeen, P.G.; Cooke, N.J. Team coordination dynamics. Nonlinear Dyn. Psychol. Life Sci. 2010, 14, 265–289. [Google Scholar] [CrossRef]
  121. Cooke, N.J. Team Cognition as Interaction. Curr. Dir. Psychol. Sci. 2015, 24, 415–419. [Google Scholar]
  122. Cooke, N.J.; Gorman, J.C.; Myers, C.W.; Duran, J.L. Interactive Team Cognition. Cogn. Sci. 2013, 37, 255–285. [Google Scholar] [CrossRef] [PubMed]
  123. Keebler, J.R.; Dietz, A.S.; Baker, A. Effects of communication lag in long duration space flight missions: Potential mitigation strategies. In Proceedings of the Human Factors and Ergonomics Society; SAGE Publications: Los Angeles, CA, USA, 2015. [Google Scholar]
  124. Landon, L.B.; Slack, K.J.; Barrett, J.D. Teamwork and collaboration in long-duration space missions: Going to extremes. Am. Psychol. 2018, 73, 563–575. [Google Scholar] [CrossRef] [Green Version]
  125. Noe, R.A.; McConnell Dachner, A.; Saxton, B.; Keeton, K.E. Team Training for Long-duration Missions in Isolated and Confined Environments: A Literature Review, an Operational Assessment, and Recommendations for Practice and Research. Res. Net 2011, 44. [Google Scholar]
  126. Demir, M.; Likens, A.D.; Cooke, N.J.; Amazeen, P.G.; McNeese, N.J. Team Coordination and Effectiveness in Human-Autonomy Teaming. IEEE Trans. Hum. Mach. Syst. 2019, 49, 150–159. [Google Scholar] [CrossRef]
  127. Tang, Z.; Chen, Z.; Bao, Y.; Li, H. Convolutional Neural Network-based Data Anomaly Detection Method using Multiple Information for Structural Health Monitoring. Struct. Control. Health Monit. 2019, 26. [Google Scholar] [CrossRef] [Green Version]
  128. Sun, L.; Shang, Z.; Xia, Y.; Bhowmick, S.; Nagarajaiah, S. Review of Bridge Structural Health Monitoring Aided by Big Data and Artificial Intelligence: From Condition Assessment to Damage Detection. J. Struct. Eng. 2020, 146, 04020073. [Google Scholar] [CrossRef]
  129. Ni, F.; Zhang, J.; Noori, M.N. Deep learning for data anomaly detection and data compression of a long-span suspension bridge. Comput. Civ. Infrastruct. Eng. 2020, 35, 685–700. [Google Scholar] [CrossRef]
  130. Smarsly, K.; Law, K.H. Decentralized fault detection and isolation in wireless structural health monitoring systems using analytical redundancy. Adv. Eng. Softw. 2014, 73, 1–10. [Google Scholar] [CrossRef] [Green Version]
  131. Okasha, N.M.; Frangopol, D.; Saydam, D.; Salvino, L.W. Reliability analysis and damage detection in high-speed naval craft based on structural health monitoring data. Struct. Health Monit. 2011, 10, 361–379. [Google Scholar] [CrossRef]
  132. Barnhart, T.B.; Crosby, B.T. Comparing Two Methods of Surface Change Detection on an Evolving Thermokarst Using High-Temporal-Frequency Terrestrial Laser Scanning, Selawik River, Alaska. Remote Sens. 2013, 5, 2813–2837. [Google Scholar] [CrossRef] [Green Version]
  133. Dai, F.; Rashidi, A.; Brilakis, I.; Vela, P. Comparison of Image-Based and Time-of-Flight-Based Technologies for Three-Dimensional Reconstruction of Infrastructure. J. Constr. Eng. Manag. 2013, 139, 69–79. [Google Scholar] [CrossRef]
  134. Cao, H.; Tian, Y.; Lei, J.; Tan, X.; Gao, D.; Kopsaftopoulos, F.; Chang, F.K. Deformation data recovery based on compressed sensing in bridge structural health monitoring. Struct. Health Monit. 2017, 1, 888–895. [Google Scholar]
  135. Chen, Z.; Bao, Y.; Li, H.; Spencer, B.F. LQD-RKHS-based distribution-to-distribution regression methodology for restoring the probability distributions of missing SHM data. Mech. Syst. Signal Process. 2019, 121, 655–674. [Google Scholar] [CrossRef] [Green Version]
  136. Choudhury, A.; Kosorok, M.R. Missing data imputation for classification problems. arXiv 2020, arXiv:2002.10709. [Google Scholar]
  137. Deng, X.; Hu, Y.; Deng, Y. Bridge condition assessment using D numbers. Sci. World J. 2014, 2014, 358057. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  138. Law, K.H.; Jeong, S.; Ferguson, M. A data-driven approach for sensor data reconstruction for bridge monitoring. In Proceedings of the 2017 World Congress on Advances in Structural Engineering and Mechanics, Seoul, Republic of Korea, 28 August–1 September 2017. [Google Scholar]
  139. Ma, J.W.; Czerniawski, T.; Leite, F. Semantic segmentation of point clouds of building interiors with deep learning: Augmenting training datasets with synthetic BIM-based point clouds. Autom. Constr. 2020, 113, 103144. [Google Scholar] [CrossRef]
  140. Saydam, D.; Frangopol, D.M.; Dong, Y. Assessment of Risk Using Bridge Element Condition Ratings. J. Infrastruct. Syst. 2013, 19, 252–265. [Google Scholar] [CrossRef]
  141. Tang, H.; Schrimpf, M.; Lotter, W.; Moerman, C.; Paredes, A.; Caro, J.O.; Hardesty, W.; Cox, D.; Kreiman, G. Recurrent computations for visual pattern completion. Proc. Natl. Acad. Sci. USA 2018, 115, 8835–8840. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  142. Ye, S.; Lai, X.; Bartoli, I.; Aktan, A.E. Technology for condition and performance evaluation of highway bridges. J. Civ. Struct. Health Monit. 2020, 10, 573–594. [Google Scholar] [CrossRef]
  143. Zhang, C.; Tang, P. Visual complexity analysis of sparse imageries for automatic laser scan planning in dynamic environments. In Proceedings of the Congress on Computing in Civil Engineering, Proceedings, Austin, TX, USA, 21–23 June 2015; pp. 271–279. [Google Scholar]
  144. Yee, W.G.; Frieder, O. On search in peer-to-peer file sharing systems. In Proceedings of the ACM Symposium on Applied Computing, Santa Fe, NM, USA, 13–17 March 2005. [Google Scholar]
  145. Fan, H.; Su, H.; Guibas, L. A point set generation network for 3D object reconstruction from a single image. In Proceedings of the 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
  146. Pontes, J.K.; Kong, C.; Sridharan, S.; Lucey, S.; Eriksson, A.; Fookes, C. Image2Mesh: A Learning Framework for Single Image 3D Reconstruction. In Proceedings of the Computer Vision–ACCV 2018: 14th Asian Conference on Computer Vision, Perth, Australia, 2–6 December 2019. [Google Scholar]
  147. Brand, M. Morphable 3D models from video. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Kauai, HI, USA, 8–14 December 2001. [Google Scholar]
  148. Din, Z.U.; Tang, P. Automatic Logical Inconsistency Detection in the National Bridge Inventory. Procedia Eng. 2016, 145, 729–737. [Google Scholar] [CrossRef] [Green Version]
  149. Moore, M.; Phares, B.; Graybeal, B.; Rolander, D.; Washer, G. Reliability of Visual Inspection for Highway Bridges. FHWA-RD-01-021. 2001. Available online: https://www.fhwa.dot.gov/publications/research/nde/pdfs/01021a.pdf (accessed on 12 February 2023).
  150. Emer, M.C.F.P.; Vergilio, S.R.; Jino, M. Testing relational database schemas with alternative instance analysis. In Proceedings of the 20th International Conference on Software Engineering and Knowledge Engineering, SEKE 2008, San Francisco, CA, USA, 1–3 July 2008. [Google Scholar]
  151. Chaudhuri, S.; Dayal, U. An Overview of Data Warehousing and OLAP Technology. ACM Sigmod Rec. 1997, 26, 65–74. [Google Scholar] [CrossRef] [Green Version]
  152. Hunter, A.; Konieczny, S. Measuring inconsistency through minimal inconsistent sets. In Proceedings of the International Workshop on Temporal Representation and Reasoning, Montréal, QC, Canada, 16–18 June 2008. [Google Scholar]
  153. Farfoura, M.E.; Horng, S.-J.; Lai, J.-L.; Run, R.-S.; Chen, R.-J.; Khan, M.K. A blind reversible method for watermarking relational databases based on a time-stamping protocol. Expert Syst. Appl. 2012, 39, 3185–3196. [Google Scholar] [CrossRef]
  154. Storey, V.C. Understanding semantic relationships. VLDB J. 1993, 2, 455–488. [Google Scholar] [CrossRef]
  155. Chen, Z.; Li, H.; Bao, Y. Analyzing and modeling inter-sensor relationships for strain monitoring data and missing data imputation: A copula and functional data-analytic approach. Struct. Health Monit. 2019, 18, 1168–1188. [Google Scholar] [CrossRef]
  156. Delmarco, S.; Tom, V.; Webb, H.; Lefebvre, D. A Verification Metric for Multi-Sensor Image Registration; SPIE: Bellingham, WA, USA, 2007; Volume 6567, p. 656718. [Google Scholar]
  157. Snineh, S.M.; Bouattane, O.; Youssfi, M.; Daaif, A. Towards a multi-agents model for errors detection and correction in big data flows. In Proceedings of the 2019 3rd International Conference on Intelligent Computing in Data Sciences, ICDS 2019, Marrakech, Morocco, 28–30 October 2019. [Google Scholar]
  158. Vosselman, G.; Kessels, P.; Gorte, B. The utilisation of airborne laser scanning for mapping. Int. J. Appl. Earth Obs. Geoinf. 2005, 6, 177–186. [Google Scholar] [CrossRef]
  159. Yan, L.; Liu, H.; Tan, J.; Li, Z.; Xie, H.; Chen, C. Scan Line Based Road Marking Extraction from Mobile LiDAR Point Clouds. Sensors 2016, 16, 903. [Google Scholar] [CrossRef] [PubMed]
  160. Inatsuka, H.; Uchino, M.; Okuda, M. Level of detail control for texture on 3D maps. In Proceedings of the International Conference on Parallel and Distributed Systems-ICPADS, Fukuoka, Japan, 20–22 July 2005. [Google Scholar]
  161. Guerneve, T.; Petillot, Y. Underwater 3D reconstruction using BlueView imaging sonar. In MTS/IEEE OCEANS 2015-Genova: Discovering Sustainable Ocean Energy for a New World; IEEE: Piscataway, NJ, USA, 2015. [Google Scholar]
  162. Azhar, S. Building Information Modeling (BIM): Trends, Benefits, Risks, and Challenges for the AEC Industry. Leadersh. Manag. Eng. 2011, 11, 241–252. [Google Scholar] [CrossRef]
  163. Cheng, J.C.P.; Deng, Y. An Integrated BIM-GIS Framework for Utility Information Management and Analyses. In Proceedings of the Computing in Civil Engineering 2015, Austin, TX, USA, 21–23 June 2015; pp. 667–674. [Google Scholar] [CrossRef]
  164. Kalasapudi, V.S.; Tang, P.; Zhang, C.; Diosdado, J.; Ganapathy, R. Adaptive 3D Imaging and Tolerance Analysis of Prefabricated Components for Accelerated Construction. Procedia Eng. 2015, 118, 1060–1067. [Google Scholar] [CrossRef] [Green Version]
  165. Boton, C.; Kubicki, S.; Halin, G. The Challenge of Level of Development in 4D/BIM Simulation Across AEC Project Lifecycle. A Case Study. Procedia Eng. 2015, 123, 59–67. [Google Scholar] [CrossRef] [Green Version]
  166. Taleb, I.; Serhani, M.A.; Dssouli, R. Big Data Quality Assessment Model for Unstructured Data. In Proceedings of the 2018 13th International Conference on Innovations in Information Technology (IIT), Al Ain, United Arab Emirates, 18–19 November 2018; pp. 69–74. [Google Scholar] [CrossRef]
  167. Gorla, N.; Somers, T.M.; Wong, B. Organizational impact of system quality, information quality, and service quality. J. Strateg. Inf. Syst. 2010, 19, 207–228. [Google Scholar] [CrossRef]
  168. Hunt, L.; White, J.; Hoogenboom, G. Agronomic data: Advances in documentation and protocols for exchange and use. Agric. Syst. 2001, 70, 477–492. [Google Scholar] [CrossRef]
  169. Chen, K.; Lu, W.; Xue, F.; Tang, P.; Li, L.H. Automatic building information model reconstruction in high-density urban areas: Augmenting multi-source data with architectural knowledge. Autom. Constr. 2018, 93, 22–34. [Google Scholar] [CrossRef]
  170. Yilmaz, A.; Li, X.; Shah, M. Contour-based object tracking with occlusion handling in video acquired using mobile cameras. IEEE Trans. Pattern Anal. Mach. Intell. 2004, 26, 1531–1536. [Google Scholar] [CrossRef] [PubMed]
  171. Wang, P.; Li, J.; Li, Q. Computational uncertainty and the application of a high-performance multiple precision scheme to obtaining the correct reference solution of Lorenz equations. Numer. Algorithms 2012, 59, 147–159. [Google Scholar] [CrossRef]
  172. Anschel, O.; Baram, N.; Shimkin, N. Averaged-DQN: Variance reduction and stabilization for Deep Reinforcement Learning. In Proceedings of the 34th International Conference on Machine Learning, ICML, Sydney, Australia, 6–11 August 2017. [Google Scholar]
  173. Sukhoy, V.; Stoytchev, A. Numerical error analysis of the ICZT algorithm for chirp contours on the unit circle. Sci. Rep. 2020, 10, 4852. [Google Scholar] [CrossRef] [Green Version]
  174. Tucker, W. Validated Numerics: A Short Introduction to Rigorous Computations; Princeton University Press: Princeton, NJ, USA, 2011. [Google Scholar]
  175. Kendall, A.; Gal, Y. What uncertainties do we need in Bayesian deep learning for computer vision? In Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
  176. Mesnil, G.; Dauphin, Y.; Glorot, X.; Rifai, S.; Bengio, Y.; Goodfellow, I.J.; Lavoie, E.; Muller, X.; Desjardins, G.; Warde-Farley, D.; et al. Unsupervised and Transfer Learning Challenge: A Deep Learning approach. In Proceedings of the Unsupervised and Transfer Learning Challenge and Workshop, Bellevue, WA, USA, 2 July 2012. [Google Scholar]
  177. Seeger, M. Gaussian Processes for Machine Learning. Int. J. Neural Syst. 2004, 14, 69–106. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  178. Di Franco, A.; Guo, H.; Rubio-Gonzalez, C. A comprehensive study of real-world numerical bug characteristics. In Proceedings of the 32nd IEEE/ACM International Conference on Automated Software Engineering, Urbana, IL, USA, 30 October–3 November 2017. [Google Scholar]
  179. Zhou, L.; Ye, J.; Kaess, M. A Stable Algebraic Camera Pose Estimation for Minimal Configurations of 2D/3D Point and Line Correspondences. In Lecture Notes in Computer Science (including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Berlin/Heidelberg, Germany, 2019; pp. 273–288. [Google Scholar]
  180. Tang, P.; Akinci, B. Automatic execution of workflows on laser-scanned data for extracting bridge surveying goals. Adv. Eng. Inform. 2012, 26, 889–903. [Google Scholar] [CrossRef]
  181. Chen, J.; Zhang, C.; Tang, P. Geometry-based optimized point cloud compression methodology for construction and infrastructure management. In Proceedings of the Congress on Computing in Civil Engineering, Seattle, WA, USA, 25–27 June 2017. [Google Scholar]
  182. Trčka, N.; Van Der Aalst, W.M.P.; Sidorova, N. Data-flow anti-patterns: Discovering data-flow errors in workflows. In Lecture Notes in Computer Science (including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Berlin/Heidelberg, Germany, 2009. [Google Scholar]
  183. Wu, I.-C.; Borrmann, A.; Beißert, U.; König, M.; Rank, E. Bridge construction schedule generation with pattern-based construction methods and constraint-based simulation. Adv. Eng. Inform. 2010, 24, 379–388. [Google Scholar] [CrossRef]
  184. Wilson, A.G.; Adams, R.P. Gaussian process kernels for pattern discovery and extrapolation. In Proceedings of the 30th International Conference on Machine Learning, ICML 2013, Atlanta, GA, USA, 16–21 June 2013. [Google Scholar]
  185. AbouRizk, S.M.; Hajjar, D. A framework for applying simulation in construction. Can. J. Civ. Eng. 1998, 25, 604–617. [Google Scholar] [CrossRef]
  186. Lee, S.H.; Peña-Mora, F.; Park, M. Dynamic planning and control methodology for strategic and operational construction project management. Autom. Constr. 2006, 15, 84–97. [Google Scholar] [CrossRef]
  187. Vurukonda, N.; Rao, B.T. A Study on Data Storage Security Issues in Cloud Computing. Procedia Comput. Sci. 2016, 92, 128–135. [Google Scholar] [CrossRef] [Green Version]
  188. Chen, H.-M.; Chang, K.-C.; Lin, T.-H. A cloud-based system framework for performing online viewing, storage, and analysis on big data of massive BIMs. Autom. Constr. 2016, 71, 34–48. [Google Scholar] [CrossRef]
  189. Clark, K.; Vendt, B.; Smith, K.; Freymann, J.; Kirby, J.; Koppel, P.; Moore, S.; Phillips, S.; Maffitt, D.; Pringle, M.; et al. The Cancer Imaging Archive (TCIA): Maintaining and Operating a Public Information Repository. J. Digit. Imaging 2013, 26, 1045–1057. [Google Scholar] [CrossRef] [Green Version]
  190. Alherbawi, N.; Shukur, Z.; Sulaiman, R. Systematic Literature Review on Data Carving in Digital Forensic. Procedia Technol. 2013, 11, 86–92. [Google Scholar] [CrossRef] [Green Version]
  191. Fernández-Alemán, J.L.; Señor, I.C.; Lozoya, P.O.; Toval, A. Security and privacy in electronic health records: A systematic literature review. J. Biomed. Inform. 2013, 46, 541–562. [Google Scholar] [CrossRef]
  192. Zhang, J.; El-Gohary, N.M. Automated Information Transformation for Automated Regulatory Compliance Checking in Construction. J. Comput. Civ. Eng. 2015, 29, B4015001. [Google Scholar] [CrossRef] [Green Version]
  193. Garrett, J.; Akinci, B.; Wang, H. Towards Domain-Oriented Semi-Automated Model Matching for Supporting Data Exchange. In Proceedings of the International Conference on Computing in Civil and Building Engineering, ICCCBE, Weimar, Germany, 2–4 June 2004. [Google Scholar]
  194. Wang, H.; Akinci, B.; Garrett, J.H.; Nyberg, E.; Reed, K.A. Semi-automated model matching using version difference. Adv. Eng. Inform. 2009, 23, 1–11. [Google Scholar] [CrossRef]
  195. Jiao, J.; Nie, S.X.; Yang, Y.; Gu, S.S.; Wu, S.H.; Zhang, Q.Y. Distributed systematic raptor coding scheme in deep space communications. Yuhang Xuebao/J. Astronaut. 2016, 37, 1232–1238. [Google Scholar]
  196. Afsari, K.; Eastman, C.M.; Castro-Lacouture, D. JavaScript Object Notation (JSON) data serialization for IFC schema in web-based BIM data exchange. Autom. Constr. 2017, 77, 24–51. [Google Scholar] [CrossRef]
  197. Mahmood, M.A.; Seah, W.K.; Welch, I. Reliability in wireless sensor networks: A survey and challenges ahead. Comput. Netw. 2015, 79, 166–187. [Google Scholar] [CrossRef]
  198. Shen, J.; Tan, H.; Wang, J.; Wang, J.; Lee, S. A novel routing protocol providing good transmission reliability in underwater sensor networks. J. Internet Technol. 2015, 16, 171–178. [Google Scholar]
  199. Bello-Orgaz, G.; Jung, J.J.; Camacho, D. Social big data: Recent achievements and new challenges. Inf. Fusion 2016, 28, 45–59. [Google Scholar] [CrossRef] [PubMed]
  200. Patil, L.; Dutta, D.; Sriram, R. Ontology-based exchange of product data semantics. IEEE Trans. Autom. Sci. Eng. 2005, 2, 213–225. [Google Scholar] [CrossRef]
  201. Shrestha, K.; Shrestha, P.P.; Bajracharya, D.; Yfantis, E.A. Hard-Hat Detection for Construction Safety Visualization. J. Constr. Eng. 2015, 2015, 721380. [Google Scholar] [CrossRef]
  202. Petricek, T.; Svoboda, T. Point cloud registration from local feature correspondences—Evaluation on challenging datasets. PLoS ONE 2017, 12, e0187943. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  203. Gard, N.A.; Chen, J.; Tang, P.; Yilmaz, A. Deep Learning and Anthropometric Plane Based Workflow Monitoring by Detecting and Tracking Workers. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, XLII-1, 149–154. [Google Scholar] [CrossRef] [Green Version]
  204. Puttonen, E.; Lehtomäki, M.; Kaartinen, H.; Zhu, L.; Kukko, A.; Jaakkola, A. Improved Sampling for Terrestrial and Mobile Laser Scanner Point Cloud Data. Remote Sens. 2013, 5, 1754–1773. [Google Scholar] [CrossRef] [Green Version]
  205. Lu, M.; Zhao, J.; Guo, Y.; Ma, Y. Accelerated Coherent Point Drift for Automatic Three-Dimensional Point Cloud Registration. IEEE Geosci. Remote Sens. Lett. 2016, 13, 162–166. [Google Scholar] [CrossRef]
  206. Schneider, K.M. Using information extraction to build a directory of conference announcements. In Lecture Notes in Computer Science (including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Berlin/Heidelberg, Germany, 2004. [Google Scholar]
  207. Wang, Z.; Chung, R. Recovering human pose in 3D by visual manifolds. In Proceedings of the International Conference on Pattern Recognition, Tsukuba, Japan, 11–15 November 2012. [Google Scholar]
  208. Possegger, H.; Mauthner, T.; Roth, P.M.; Bischof, H. Occlusion geodesics for online multi-object tracking. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014. [Google Scholar]
  209. Chellappa, R.; Sankaranarayanan, A.C.; Veeraraghavan, A.; Turaga, P. Statistical Methods and Models for Video-Based Tracking, Modeling, and Recognition. Found. Trends Signal Process. 2009, 3, 1–151. [Google Scholar] [CrossRef]
  210. Bergstra, J.; Bengio, Y. Random search for hyper-parameter optimization. J. Mach. Learn. Res. 2012, 13, 281–305. [Google Scholar]
  211. Yang, D.; Liu, Y.; Li, S.; Li, X.; Ma, L. Gear fault diagnosis based on support vector machine optimized by artificial bee colony algorithm. Mech. Mach. Theory 2015, 90, 219–229. [Google Scholar] [CrossRef]
  212. Lorenz, R.; Hampshire, A.; Leech, R. Neuroadaptive Bayesian Optimization and Hypothesis Testing. Trends Cogn. Sci. 2017, 21, 155–167. [Google Scholar] [CrossRef] [PubMed]
  213. Rosman, B.; Ramamoorthy, S. Learning spatial relationships between objects. Int. J. Robot. Res. 2011, 30, 1328–1342. [Google Scholar] [CrossRef]
  214. Yuan, L.; Yu, Z.; Luo, W. Towards the next-generation GIS: A geometric algebra approach. Ann. GIS 2019, 25, 195–206. [Google Scholar] [CrossRef] [Green Version]
  215. Kim, B.S.; Kang, B.G.; Choi, S.H.; Kim, T.G. Data modeling versus simulation modeling in the big data era: Case study of a greenhouse control system. Simulation 2017, 93, 579–594. [Google Scholar] [CrossRef]
  216. Lee, H.; Lee, J.-K.; Park, S.; Kim, I. Translating building legislation into a computer-executable format for evaluating building permit requirements. Autom. Constr. 2016, 71, 49–61. [Google Scholar] [CrossRef]
  217. Tanguy, L.; Tulechki, N.; Urieli, A.; Hermann, E.; Raynal, C. Natural language processing for aviation safety reports: From classification to interactive analysis. Comput. Ind. 2016, 78, 80–95. [Google Scholar] [CrossRef] [Green Version]
  218. Tixier, A.J.-P.; Hallowell, M.R.; Rajagopalan, B.; Bowman, D. Automated content analysis for construction safety: A natural language processing system to extract precursors and outcomes from unstructured injury reports. Autom. Constr. 2016, 62, 45–56. [Google Scholar] [CrossRef] [Green Version]
  219. Liu, K.; El-Gohary, N. Ontology-based semi-supervised conditional random fields for automated information extraction from bridge inspection reports. Autom. Constr. 2017, 81, 313–327. [Google Scholar] [CrossRef]
  220. Zou, Y.; Kiviniemi, A.; Jones, S.W. Retrieving similar cases for construction project risk management using Natural Language Processing techniques. Autom. Constr. 2017, 80, 66–76. [Google Scholar] [CrossRef]
Figure 1. Human-cyber (HC) reliability issues in SHM of bridges.
Figure 2. Human-cyber reliability in the SHM of bridges.
Figure 3. Behavioral patterns of searching for bridge defects [30].
Figure 4. FEM updating using information extracted from inspection reports [34].
Figure 5. Studies collected from the Web of Science database.
Figure 6. Keywords network related to HRA.
Figure 7. SPAR-H PSFs during information processing processes [78].
Figure 8. Keywords network related to team cognition.
Figure 9. Studies collected from the Web of Science database.
Figure 10. Keywords network related to data and model reliability.
Figure 11. Keywords network related to computational reliability.
Figure 12. Keywords network related to data reliability.
Table 1. Top five cited papers related to human reliability.
- Modeling and Evaluating the Resilience of Critical Electrical Power Infrastructure to Extreme Weather Events [37]: This study established a framework for comprehending the impact of human responses on power system resilience during severe weather events.
- A fuzzy causal relational mapping and rough set-based model for context-specific human error rate estimation [38]: This study established a fuzzy rule-based causal relational mapping approach for deriving human error rates under different contexts.
- Prediction of human error probabilities in a critical marine engineering operation on-board chemical tanker ship: The case of ship bunkering [39]: This study presents a Shipboard Operation Human Reliability Analysis (SOHRA) method for predicting human errors during bunkering operations.
- A modified human reliability analysis for cargo operation in single point mooring (SPM) off-shore units [40]: This study established a framework for a human error assessment and reduction technique (HEART) with human uncertainties in decision-making.
- A methodological extension to human reliability analysis for cargo tank cleaning operation on board chemical tanker ships [41]: This study developed a method for augmenting human reliability analysis in examining human reliability impacts on cargo tank cleaning operations.
Table 2. Human reliability (individual level) studies.
- Perception Reliability—Reliability of the sensed spatiotemporal information about the self and environmental objects: Visual perception [42]; auditory sense [43]; taste sense [44]; sense of smell [45]; tactile and somatosensory [46].
- Cognition Reliability—Impact of the self-sensed physical conditions of human bodies and environmental conditions on the decisions of human individuals and teams: Visual information [47,48]; auditory information [49]; taste [50]; smell [51]; body motions [52]; temperature [53]; space size (confined space) [54]; motion speeds [55]; frequencies of changes [56]; interruptions/distractions [57,58].
- Response Reliability—Impact of the individual’s capability and team’s situational awareness on the risks and efficiency of collaborative operations of a team: Reaction time [59,60]; time limits [61,62]; physical demand [63,64]; the impact of environmental conditions (performance shaping factors, PSFs) on the operational performance of individual workers [65].
Table 3. Human reliability (team level) studies.
- Perception Reliability—Reliability of the sensed spatiotemporal information about the self and environmental objects: Visual communication, including gestures [98], flags [99], and signs [100]; auditory communication [101]; motion communication [102]; somatosensory combined with visual and auditory communication [103].
- Cognition Reliability—Impact of the self-sensed physical conditions of human bodies and environmental conditions on the decisions of human individuals and teams: Motions and positions [104]; voice [105]; impact of environmental conditions gained through team communication and collaboration on team decisions; relative motions [106,107]; relative differences between workspaces [108]; speeds of changes in remote workspaces [108].
- Response Reliability—Impact of the individual’s capability and team’s situational awareness on the risks and efficiency of team operations: Team reaction time [109]; task interdependence [110,111]; the impact of environmental conditions on team performance [112].
Table 4. Top five cited papers related to cyber reliability.
- Convolutional neural network-based data anomaly detection method using multiple information for structural health monitoring [127]: This study established an anomaly detection method based on convolutional neural networks that mimic human vision and decision making.
- Review of Bridge Structural Health Monitoring Aided by Big Data and Artificial Intelligence: From Condition Assessment to Damage Detection [128]: This study established a method that uses big data (BD) and artificial intelligence (AI) techniques to solve the data interpretation problem.
- Deep learning for data anomaly detection and data compression of a long-span suspension bridge [129]: This study established a method for data compression and reconstruction based on deep learning.
- Decentralized fault detection and isolation in wireless structural health monitoring systems using analytical redundancy [130]: This study established a decentralized approach for automatic sensor fault detection and isolation in wireless SHM systems.
- Reliability analysis and damage detection in high-speed naval craft based on structural health monitoring data [131]: This study established a method for reliability analysis and damage detection of high-speed naval craft (HSNC) using SHM data.
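The anomaly detection theme that runs through these studies can be illustrated with a far simpler baseline than the cited deep learning methods: a rolling z-score screen over a sensor time series. The sketch below is a minimal, hypothetical example (synthetic signal, arbitrary window and threshold), not a reimplementation of any cited method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic strain-gauge-like signal with a few injected spikes (faulty readings).
signal = rng.normal(loc=0.0, scale=1.0, size=1000)
signal[[100, 400, 750]] += 12.0

# Rolling z-score screen: flag samples far from the recent local mean.
window = 50
flags = np.zeros(signal.size, dtype=bool)
for i in range(window, signal.size):
    local = signal[i - window:i]
    z = (signal[i] - local.mean()) / (local.std() + 1e-9)
    flags[i] = abs(z) > 6.0  # arbitrary threshold for this sketch

print(np.flatnonzero(flags))  # indices of suspected anomalies
```

Such simple screens are useful as sanity checks on the learned detectors above, since they fail in different, more interpretable ways.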
Table 5. Synthesis of example studies exploring various reliability issues of data and information models.

| Category | Type | Example Studies of Reliability Issues |
| --- | --- | --- |
| Data | Visual and Geometric Data | Accuracy and level of detail of 3D imagery data reconstructed from photos [145]; spatial resolution of images [146]; temporal resolution of videos [147] |
| Data | Reports | Errors in field notes [148]; omitted structural defects in inspection reports [149] |
| Data | Tabular Data | Missing and anomalous values of locations and structural condition ratings in the NBI database [148] |
| Data | Relational Database | Incorrect external keys for representing and linking the related columns in two tables [150]; redundant information items having inconsistent values in different parts of the database [151,152]; missing relationships between two tables where a link should exist between common columns [153,154] |
| Data | Sensory Data | Errors or missing values in time series sensory data that measure structural vibrations [155] (see the screening sketch after this table) |
| Data | Metadata | Errors in metadata specifying the formats and organization of datasets, such as the meaning of columns of numbers in a data file [144]; errors in metadata specifying the time and environment of data collection [156]; errors in metadata specifying the methods of processing and transforming the data, such as a transformation matrix for registering point clouds to a global coordinate system [157] |
| Model | 2D/3D Maps | Location errors of points [158]; length and direction errors of lines representing paths on 2D or 3D maps [159]; level of detail of maps [160]; missing values in the properties of objects on 2D or 3D maps [161] |
| Model | Semantic-Rich Digital Models | Missing and additional objects [162]; dimensional and shape deviations from actual dimensions [163,164]; wrong type information of objects [165] |
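Many of the data-level issues in Table 5, particularly errors and missing values in sensory and tabular data [148,155], can be caught by automated screening before any model is fitted. Below is a minimal sketch assuming pandas and NumPy; the MAD-based outlier rule and the threshold z_max = 5 are illustrative assumptions, not values prescribed by the cited studies.

```python
# Illustrative screening of one sensor channel for missing samples and
# amplitude outliers. The MAD-based rule and the z_max threshold are
# demonstration choices, not values prescribed by [148,155].
import numpy as np
import pandas as pd

def screen_sensor_series(series: pd.Series, z_max: float = 5.0) -> pd.DataFrame:
    """Flag missing values and robust-z-score outliers in a time series."""
    report = pd.DataFrame(index=series.index)
    report["missing"] = series.isna()
    med = series.median()
    mad = (series - med).abs().median()          # median absolute deviation
    robust_z = 0.6745 * (series - med) / (mad if mad > 0 else 1.0)
    report["outlier"] = robust_z.abs() > z_max
    return report

# Synthetic 100 Hz acceleration record with a simulated gap and spike.
t = pd.date_range("2023-01-01", periods=1000, freq="10ms")
acc = pd.Series(np.random.normal(0, 0.01, size=1000), index=t)
acc.iloc[200:210] = np.nan   # simulated data loss
acc.iloc[500] = 1.0          # simulated spike
flags = screen_sensor_series(acc)
print(flags["missing"].sum(), flags["outlier"].sum())
```

The robust z-score is used here instead of the ordinary z-score because the median and MAD are far less distorted by the very anomalies being searched for.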
Table 6. A summary of example studies investigating data storage, exchange, and transmission reliabilities.

| Category | Reliability Issues | Example Studies |
| --- | --- | --- |
| Data Storage | Data and information losses due to compression of data for saving storage space | Point cloud compression research on reducing point cloud data sizes while keeping the geometric changes captured in the point clouds [189] |
| Data Storage | Losses of data and information due to data saving errors and hardware defects | Corrupted files or missing parts of files due to problematic saving processes for data of large sizes or unique data structures, such as Gigabytes of imagery datasets [190] |
| Data Storage | Losses of data and information due to decaying hardware devices that store the data files | Corrupted files or missing parts of files due to storage unit failures under unfavorable environmental conditions or decaying storage media [191,192] |
| Data Exchange | Losses of data and information while converting files between different formats | Mapping the same objects stored in different formats based on object properties, while accepting losses of semantics uniquely stored in only one of the formats [193] |
| Data Exchange | Losses of data and information while updating the data schema | Mapping entity definitions across schema versions to automatically migrate building information model files to a new version of the schema [194,195] |
| Data Transmission | Losses of data and information due to problems in communication protocols | Losses of data packets due to problems in data and file transmission protocols, especially when transferring large images and data files [196] (see the integrity-check sketch after this table) |
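A common safeguard against the storage and transmission losses summarized in Table 6 is end-to-end integrity checking: a cryptographic digest of each file is computed before storage or transfer and re-verified afterwards, so corrupted or truncated files are detected before they enter analysis. The sketch below uses only Python's standard library; the file paths are hypothetical.

```python
# Minimal sketch of end-to-end file integrity checking with SHA-256 digests
# (Python standard library only); the file paths below are hypothetical.
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash the file in 1 MB chunks so Gigabyte-scale datasets fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_transfer(source: Path, copy: Path) -> bool:
    """True only if the stored/transmitted copy is bit-identical to the source."""
    return sha256_of(source) == sha256_of(copy)

# Usage (hypothetical paths): re-request or restore the file on mismatch.
# ok = verify_transfer(Path("scan_site_A.las"), Path("/archive/scan_site_A.las"))
```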
Table 7. Three stages of data analysis that derive information from raw data sources, and their reliability issues.

| Stage | Inputs/Outputs | Reliability Issues |
| --- | --- | --- |
| Data Pre-Processing (prepare the raw data in formats suitable for reliable feature extraction and pattern recognition) | Inputs: raw data (images, Excel tables, field notes, and inspection reports). Outputs: cleaned and sub-sampled data, linked or combined data (e.g., cleaned and registered laser scanning point clouds). | Losses of object or event details due to improper data cleaning actions and sampling rates [189,205]; errors in linking or combining datasets due to improper selection of corresponding objects or properties for data linking and integration [203,206]. |
| Data Processing (extract features and data patterns corresponding to the objects and changes captured in the spatiotemporal patterns of features) | Inputs: cleaned, sampled, and linked/combined datasets. Outputs: objects, object properties, changes of objects (e.g., location and material property changes of concrete elements), and changing rates (e.g., moving speed). | Missing features or feature extraction errors due to improper selection of the parameters of feature extraction algorithms [207,208]; errors in object/change detection due to mismatches between feature and pattern definitions and the features/patterns extracted from the pre-processed data [209]. |
| Data Interpretation (analyze relationships between objects and events to interpret the correlated objects and events into meaningful change information about the facilities and workspaces) | Inputs: cleaned, sampled, and linked/combined datasets. Outputs: various relationships (e.g., statistical and spatiotemporal relationships) between objects and changes of objects (e.g., location and material property changes of concrete elements). | Missing or erroneous detection of existing relationships between objects and changes due to improper selection of statistical methods [210]; parameter settings that are not suitable for the processed data and decision contexts [211,212]; improper use of statistical metrics in identifying statistically significant relationships [213] (see the pipeline sketch after this table). |
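To make the three stages and their parameter sensitivities concrete, the sketch below wires a toy version of the pipeline end to end: gap filling and detrending (pre-processing), windowed RMS features (processing), and a correlation significance test (interpretation). The window size, the RMS feature, and the 0.05 significance level are illustrative assumptions; each corresponds to a choice whose improper selection Table 7 flags as a reliability risk [205,207,210,213].

```python
# Toy end-to-end version of the three stages in Table 7. The window size,
# RMS feature, and 0.05 significance level are illustrative assumptions.
import numpy as np
from scipy import stats

def preprocess(raw: np.ndarray) -> np.ndarray:
    """Stage 1: fill gaps by linear interpolation and remove the mean offset."""
    x = np.array(raw, dtype=float)  # copy so the input is left untouched
    idx = np.arange(x.size)
    mask = np.isnan(x)
    x[mask] = np.interp(idx[mask], idx[~mask], x[~mask])
    return x - x.mean()

def extract_features(x: np.ndarray, win: int = 256) -> np.ndarray:
    """Stage 2: RMS amplitude per window as a simple condition feature."""
    n = x.size // win
    return np.sqrt((x[: n * win].reshape(n, win) ** 2).mean(axis=1))

def interpret(feat_a: np.ndarray, feat_b: np.ndarray, alpha: float = 0.05) -> dict:
    """Stage 3: test whether two channels' feature series are correlated."""
    r, p = stats.pearsonr(feat_a, feat_b)
    return {"correlation": r, "p_value": p, "significant": p < alpha}

# Two synthetic channels sharing a common loading trend plus independent noise.
rng = np.random.default_rng(0)
trend = np.repeat(rng.normal(0.0, 1.0, 40), 256)
a = preprocess(trend + rng.normal(0.0, 0.5, trend.size))
b = preprocess(trend + rng.normal(0.0, 0.5, trend.size))
print(interpret(extract_features(a), extract_features(b)))
```

Changing any one of these choices, such as a window size too long to resolve a transient event, or a correlation metric blind to nonlinear dependence, alters the interpreted relationships without any change in the underlying structure, which is precisely the reliability concern the table highlights.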