Article

ROAD: Robotics-Assisted Onsite Data Collection and Deep Learning Enabled Robotic Vision System for Identification of Cracks on Diverse Surfaces

1 Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura 140601, Punjab, India
2 Department of Computer Science and Engineering, Punjabi University, Patiala 147002, Punjab, India
3 Department of Informatics, School of Computer Science, University of Petroleum and Energy Studies, Dehradun 248007, Uttarakhand, India
* Author to whom correspondence should be addressed.
Sustainability 2023, 15(12), 9314; https://doi.org/10.3390/su15129314
Submission received: 28 April 2023 / Revised: 31 May 2023 / Accepted: 1 June 2023 / Published: 9 June 2023

Abstract

Crack detection on roads is essential because it has a significant impact on ensuring the safety and reliability of road infrastructure. Thus, it is necessary to create more effective and precise crack detection techniques. A safer road network and a better driving experience for all road users can result from the implementation of the ROAD (Robotics-Assisted Onsite Data Collection) system for spotting road cracks using deep learning and robotics. The proposed solution makes use of a robotic vision system’s capabilities to gather high-quality data about the road and incorporates deep learning methods for automatically identifying cracks. Among the tested algorithms, Xception stands out as the most accurate and predictive model, with an accuracy of over 90% during the validation process and a mean square error of only 0.03. In contrast, other deep neural networks, such as DenseNet201, InceptionResNetV2, MobileNetV2, VGG16, and VGG19, result in inferior accuracy and higher losses. Xception also achieves high accuracy and recall scores, indicating its capability to accurately identify and classify different data points. The high accuracy and superior performance of Xception make it a valuable tool for various machine learning tasks, including image classification and object recognition.

1. Introduction

Cracks may eventually form on roads as a result of exposure to adverse weather, heavy traffic, and other environmental variables. If these cracks are not found and repaired promptly, they can endanger road safety and allow the damage to worsen. Crack detection on highways is therefore crucial because it has a significant impact on ensuring the dependability and safety of the road infrastructure. Traditional crack detection technology is typically time-consuming, expensive, and unreliable, whereas automated crack detection using deep learning and robotic vision technology can improve accuracy, speed, and efficiency. These methods involve training algorithms to recognize and classify cracks based on images or videos collected by robotic vision systems. Automated crack detection can be used in various applications, such as building and infrastructure inspection, bridge inspection, and industrial equipment maintenance. Automated crack detection methods are essential for improving structural health monitoring and assessment and preventing potential safety hazards [1].
Cracks on diverse surfaces refer to visible openings or fractures that occur on different types of materials, such as concrete, wood, metal, ceramics, or glass. Cracks can be caused by a variety of factors, including stress, temperature changes, chemical exposure, moisture, and age [2]. On concrete surfaces, for example, cracks can be caused by the expansion and contraction of the material due to temperature changes or moisture, as well as the weight and movement of heavy objects. In wood, cracks can occur due to changes in humidity, exposure to sunlight, or the aging process. Cracks in flexible and rigid pavements may also arise for different reasons; in asphalt pavement, for example, moisture damage is a major issue because water weakens the adhesion between asphalt and aggregate [3].
Cracks can vary in size and shape, from hairline fractures to larger openings. They can be superficial and cosmetic or compromise the structural integrity of the material. Cracks may also be an indication of a larger problem that needs to be addressed, such as a foundation issue in a building or a problem with the structural support of a bridge. Detecting them is challenging for computer vision methods, however, because cracks present only low-level visual features and are affected by difficulties such as inhomogeneous illumination and irregularities in construction. Recent advancements in computer vision and image processing techniques are improving crack detection capabilities, enabling better decision-making for structural maintenance and safety. Practical challenges remain due to the nature of the subject matter, which is characterized by three major factors discussed below [4].
Crack detection is a challenging task for computer vision methods because the low-level discriminative features of cracks can easily be confused with background noise such as foreign objects. In addition, the inhomogeneous illumination of the surface and irregularities in the application process, such as exposed jointing, make it difficult to distinguish cracks from the surrounding area. These factors pose significant obstacles to the accurate detection of cracks in structures and require sophisticated image processing methods and algorithms to overcome. To address these issues, researchers have concentrated on creating deep learning-based techniques that can accurately find and categorize cracks in a variety of structures, such as steel and concrete buildings, bridges, pipelines, aircraft, and railroad tracks. These approaches depend on the capacity of deep neural networks (DNNs) to learn intricate representations of the incoming data and generate precise predictions even in the presence of noise and abnormalities [5].
Deep learning (DL) has been shown to perform better than conventional techniques for image recognition and classification. Building a proper representation of the information is a crucial step in creating the best algorithm for automated deep learning bridge crack detection. Prior approaches, which can be limited, rely on conventional image binarization and manually tuned segmentation [6]. In one recent method, crack segments are identified by line-fitting, and features related to the local line fit are computed. Combined with machine learning, this strategy is robust and offers a viable remedy for automatically detecting bridge cracks, overcoming the drawbacks of earlier methods [7].
Deep learning-based techniques could transform visual crack inspection and identification while reducing the need for manual observation [8]. Ground robots and unmanned aerial vehicles (UAVs) are gaining popularity as modern infrastructure inspection and monitoring systems, offering advantages over traditional inspection methods such as improved safety, efficiency, and accuracy [9]. These systems, equipped with sensors and cameras, can provide real-time data and imagery of infrastructure components, allowing for more accurate and efficient inspections [10]. As technology advances, these systems are anticipated to become increasingly crucial to the infrastructure inspection and monitoring process [11].

2. Literature Review

Several methods for crack detection and assessment exist, including visual inspection, acoustic testing, and stress analysis. The most popular method of crack detection is visual inspection, which entails a civil engineer visually inspecting a structure for cracking or other signs of distress. Acoustic testing involves using sound waves to detect cracks and other defects in a structure. This method is useful for detecting cracks that are not visible to the naked eye, but it can be affected by environmental noise and other factors that interfere with sound wave transmission. Stress analysis involves measuring changes in the structure’s response to loads, such as changes in strain or displacement, to identify potential cracks. This method requires specialized equipment and can be expensive, but it is highly accurate. Despite their effectiveness, these methods have limitations. Traditional methods for crack detection often rely on human interpretation, which can be subjective and prone to errors. Moreover, these methods can be time-consuming and labor-intensive, and they require significant expertise, which can be costly. A literature survey related to the study follows.
By combining digital image processing methods [12,13] and machine and deep learning algorithms [14,15,16] with images, crack detection can be performed in numerous ways, as described in this section. Fu Tao et al. [17] conducted a thorough analysis of the body of research on crack identification using image processing methods. The authors review the various crack detection techniques and algorithms, such as segmentation, edge detection, and thresholding. They also explore the difficulties and upcoming improvements in the field of crack detection and offer a thoughtful evaluation of the advantages and disadvantages of each technique. The various methods for detecting road cracks, including image processing, machine learning, and deep learning techniques, were described by J. Yang et al. [18]. The authors present a comparative review of the various techniques and discuss the benefits and drawbacks of each methodology. They also identify the difficulties and potential directions in the area of road crack detection. An assessment of the most recent advancements in deep CNN architectures was given by D. Bhatt et al. [19]. H. Yu et al. [20] review many CNN architectures that are employed for image processing applications such as object recognition, segmentation, and classification; they also compare the various CNN architectures and emphasize their benefits and drawbacks. In an approach for extracting concrete fracture attributes using image-based methods for bridge inspection, the authors suggest a technique that involves taking high-resolution images of concrete structures, which are afterwards examined to extract various crack-related properties; experiments on actual concrete structures were conducted to gauge the efficacy of the proposed strategy.
To detect road cracks, L. Zhang et al. [21] developed a deep CNN method that can extract high-level characteristics from unprocessed input images. To identify cracks, the authors propose a new architecture called Crack-Net that comprises numerous convolutional and pooling layers. They also compare their strategy with other state-of-the-art techniques and show that it outperforms them. Kaseko et al. [22] investigated the performance of pre-trained convolutional neural networks (CNNs) in detecting cracks in building structures.
The authors compare several popular CNN architectures, such as VGG, ResNet, and Inception, with their own CNN-based method. They evaluate the performance of these methods using two benchmark datasets and show that their CNN-based method outperforms other methods. A technique for identifying crack deterioration in engineering structures using unmanned aerial vehicle images and a deep learning model that has already been trained was proposed by Huang et al. [23]. The CNN used by the authors is fine-tuned with their own data of UAV images after being pre-trained on the ImageNet dataset. Using a dataset of UAV photos of civil infrastructure, they test their method, and they demonstrate that it is highly accurate in identifying crack damage.
A deep CNN method for automatically detecting road cracks was presented by Rajadurai et al. [24]. The authors use the VGG16 architecture and fine-tune it on a dataset of road images with and without cracks. They also suggest a post-processing technique to remove false positives. They test their technique on a sizable collection of road photographs and demonstrate that it performs better than other state-of-the-art techniques. Maguire et al. [25] provided the SDNET2018 annotated image dataset for non-contact detection of concrete cracks in buildings, bridges, and walls. The dataset includes more than 56,000 images of cracked and uncracked concrete surfaces taken using different imaging methods. Bhowmick et al. [26] studied concrete crack detection with handwriting script interference using a faster region-based convolutional neural network. To detect cracks in asphalt pavement, Tien Le et al. [27] presented DeepCrack, a deep hierarchical feature extraction architecture. The suggested design comprises a universal feature extraction network and a local feature extraction network. The research demonstrates that DeepCrack beats state-of-the-art techniques for crack segmentation.
S. Bhat et al. [28] designed a deep CNN for pavement crack detection. The research demonstrates that the suggested CNN model outperforms typical machine learning techniques and can accurately identify cracks in asphalt pavement. A computer vision-based method for detecting concrete cracks was proposed by A. Khan et al. [29], utilizing U-Net deep convolutional networks. The study demonstrates that the suggested technique can precisely and effectively find cracks in concrete surfaces. An enhanced I-UNet convolutional network was proposed by J. Deng et al. [30] for detecting road cracks using computer vision. The research demonstrates that the suggested strategy can accurately and effectively find road cracks.
A deep CNN-based transfer learning technique for crack identification in civil infrastructure was introduced by N. A. M. Yusof et al. [31]. The suggested strategy enhances the model’s efficacy by transferring knowledge from previously trained images to the target domain. The research demonstrates that the suggested technique beats state-of-the-art techniques for detecting defects in civil infrastructure. For image classification, Y. Liu et al. [32] presented a classical–quantum transfer learning approach. The suggested method classifies the data using a quantum circuit after extracting features from it using a traditional neural network. The research demonstrates that the suggested method beats state-of-the-art techniques for image classification. Non-destructive tests were proposed by Z. Liu et al. [33] to evaluate the load-carrying ability of concrete anchor bolts using an artificial multilayer perceptron neural network. The study demonstrates that the suggested method can accurately forecast the load-carrying ability of concrete anchor bolts.
According to a review of the literature, deep learning and robotic vision applications for crack assessment and detection are gaining popularity. The method suggested in this study advances this area of inquiry by merging multiple sensing modalities to produce more thorough crack detection and assessment. The suggested approach is poised to increase crack detection and evaluation accuracy and effectiveness across a range of applications and fields. Traditional approaches for crack detection and assessment have limitations, including subjectivity, limited accuracy, high cost, time consumption, disruption to operations, and a lack of continuous monitoring. These limitations emphasize the need for automated and accurate crack detection methods that can provide reliable and continuous monitoring of structural health.
The emergence of deep learning and robotic vision technology has revolutionized the field of crack detection and assessment by significantly improving accuracy, speed, and efficiency. By training deep learning algorithms on large datasets of images and videos collected by robotic vision systems, cracks can be detected even when crack patterns are complex or image contrast is low. Robotic vision systems can access difficult-to-reach areas and provide a comprehensive view of the structure’s condition, reducing the need for manual inspections. Combining these technologies provides real-time and continuous monitoring of structural health, enabling early identification of cracks to prevent safety hazards and reduce costs.
Table 1 summarizes the literature on crack detection. As shown in Table 1, many DL algorithms have been applied to crack detection, and there is considerable scope for robotic techniques to provide automated solutions that require less human intervention.
Based on the literature survey provided, the research gaps identified are as follows:
  • The literature suggests that conventional techniques for detecting cracks, such as visual inspection, are susceptible to errors and subjectivity because they rely on human interpretation and lack automation. Automated and impartial methods for detecting cracks are therefore needed.
  • The efficacy of acoustic testing, which employs sound waves to identify cracks that are not discernible through visual inspection, is restricted because sound wave transmission may be influenced by environmental noise and other variables. More rigorous and dependable acoustic testing techniques need to be developed.
  • Several of the surveyed studies do not explicitly state which deep learning algorithms and robotics techniques were employed, even though deep convolutional neural networks and image processing techniques are central to crack detection. This omission affects the replicability and clarity of the research.
  • The current literature on crack detection offers limited coverage of the challenges inherent in this process. While some studies briefly touch upon the difficulties and potential avenues for crack detection, a more comprehensive investigation is required to fully understand the complexities associated with this task, including variations in crack patterns, environmental factors, and the diverse range of structures that must be examined.
The resolution of these research deficiencies can make a valuable contribution towards advancing more precise, effective, and economical methods for detecting cracks, which can enhance the maintenance and safety of infrastructure.

3. Our Proposal: The ROAD System

The proposed ROAD (Robotics-Assisted Onsite Data Collection) system shown in Figure 1 is a novel approach to construction inspection that leverages advanced robotics, machine learning, and BIM software to provide a comprehensive solution for construction site monitoring and data collection. The mobile robot platform is the primary data collection device, equipped with sensors, cameras, and measurement devices to collect data about the construction site. The robot is autonomous and capable of navigating around obstacles, making it ideal for use in complex and dynamic construction environments. The collected data are then sent to the object detection system on the server for processing, where machine learning algorithms analyze and classify objects and features within the construction site. This includes identifying defects, deviations from design specifications, and other issues that may impact the quality or safety of the construction project [34].
The object detection system provides real-time feedback to the control room, where construction managers and engineers can make informed decisions and take appropriate action as needed. The control room is equipped with BIM software, which provides a 3D model of the construction project, allowing managers and engineers to visualize the construction project in detail and identify potential issues or problems [35]. The control room can also be used to monitor the progress of the construction project, track materials and equipment, and manage project resources.
By combining advanced robotics, machine learning, and BIM software, the ROAD system can improve the quality and safety of construction projects while reducing costs and increasing efficiency [36]. The system can provide real-time monitoring and data collection, allowing for early identification of issues and timely corrective action. Because of its comprehensive perspective of the construction process and ability to promote the effective use of resources, the ROAD system has the potential to enhance project management and planning as well.
As depicted in Figure 1, the ROAD system proposed for crack detection and assessment combines deep learning and robotic vision technology to provide automated and accurate monitoring of structural health. The system includes several steps:
  • Image and video capture: A robotic vision system captures images and videos of the structure from different angles and perspectives, including areas that are difficult to access.
  • Data pre-processing: The captured images and videos are pre-processed to remove noise and enhance the contrast of crack features.
  • Training deep learning algorithms: To reliably identify and categorize various types of cracks, deep learning models are trained on a sizable collection of crack images and videos. The SDNET2018 dataset, which contains images of concrete surfaces with varying degrees of cracking, has been used to fine-tune the CNN, InceptionResNetV2, Xception, DenseNet201, MobileNetV2, VGG16, and VGG19 models. For the specific objective of crack identification, transfer learning is employed to utilize the pre-trained parameters of each model and speed up the learning process (a transfer-learning sketch is given after this list).
  • Crack detection: The trained deep learning algorithm is applied to the preprocessed images and videos to detect cracks and classify them according to their type and severity.
  • Structural assessment: The detected cracks are analyzed to assess the structure’s health and identify any potential safety hazards.
  • Reporting and maintenance: Engineers and repair teams are informed of the outcomes of crack identification and evaluation so that they can carry out any repairs or maintenance required to maintain the structure’s safety and durability.
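The following is a minimal sketch of the transfer-learning step referred to above, assuming a TensorFlow/Keras environment; the 224 x 224 input size, frozen base, dropout rate, and optimizer settings are illustrative assumptions rather than the exact configuration used in this study.

```python
# Minimal transfer-learning sketch (assumptions: TensorFlow 2.x with Keras; images resized
# to 224x224; six SDNET2018-style classes; hyperparameters are illustrative only).
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 6          # cracked/uncracked decks, walls, and pavements
IMG_SIZE = (224, 224)

# Xception pre-trained on ImageNet, without its original classification head.
base = tf.keras.applications.Xception(
    weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
base.trainable = False   # freeze the pre-trained feature extractor

model = models.Sequential([
    layers.Input(shape=IMG_SIZE + (3,)),
    layers.Rescaling(1.0 / 127.5, offset=-1.0),       # scale pixels to [-1, 1] as Xception expects
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(NUM_CLASSES, activation="softmax"),  # per-class crack probabilities
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

The same pattern applies to the other backbones (DenseNet201, InceptionResNetV2, MobileNetV2, VGG16, and VGG19) by swapping the base model.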
This study describes the creation of the ROAD system, a method for spotting road cracks using deep learning and robotics. The suggested solution makes use of a robotic vision system’s capabilities to gather high-quality data about the road surface and incorporates deep learning methods for automatically identifying cracks. The system can function under a variety of circumstances, including various weather and lighting situations, and it can detect cracks on a variety of surfaces, including concrete and asphalt. The suggested approach could considerably increase the effectiveness and efficiency of crack identification on highways, resulting in more timely and effective maintenance interventions and a safer driving environment for all road users. The system successfully detected cracks on a variety of road surfaces during testing. The effectiveness of various deep neural networks for image classification and object recognition tasks is evaluated in this study. The suggested method has a number of benefits, including high accuracy and speed, real-time monitoring, ongoing assessment, decreased expenses, and minimal interference with routine activities [37].
Deep learning and robotic vision technologies can be used by engineers to identify cracks early, stop additional damage, and ensure the durability and safety of structures. A useful resource for training, validating, and benchmarking deep learning algorithms for concrete crack detection is SDNET2018 [25]. The dataset encompasses a wide range of crack widths and types, from thin cracks as small as 0.06 mm to wider ones as large as 25 mm, and contains over 56,000 annotated images of cracked and non-cracked concrete bridge decks, walls, and pavements.
The SDNET2018 dataset offers a complete set of training data for researchers to create and improve crack detection algorithms based on deep learning convolutional neural networks by taking into account six different classes that distinguish decks, walls, and pavements with or without cracks. However, the quality of the camera used to capture the images may introduce some bias or limitations, which may impact the accuracy and generalizability of the models. The TensorFlow and Keras frameworks were used to create the Python crack detection model known as the ROAD system. The model is trained and tested in the Google Colab environment and is based on the CNN, Xception, DenseNet201, InceptionResNetV2, MobileNetV2, VGG16, and VGG19 architectures. The first algorithm describes how the model was trained and tested [38].
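As an illustration of how such a six-class dataset can be prepared for training, the following sketch loads a folder of SDNET2018-style images with Keras; the directory path, split fraction, and image size are hypothetical placeholders, and the class sub-folders are assumed to have been arranged one per class beforehand.

```python
# Minimal data-loading sketch (assumption: SDNET2018 images have been organized into one
# sub-folder per class, e.g. CD/UD for decks, CW/UW for walls, CP/UP for pavements;
# the path, split, and image size below are illustrative).
import tensorflow as tf

DATA_DIR = "data/sdnet2018"   # hypothetical local path to the prepared dataset
IMG_SIZE = (224, 224)
BATCH_SIZE = 32

train_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR, validation_split=0.2, subset="training", seed=42,
    image_size=IMG_SIZE, batch_size=BATCH_SIZE)

val_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR, validation_split=0.2, subset="validation", seed=42,
    image_size=IMG_SIZE, batch_size=BATCH_SIZE)

print(train_ds.class_names)   # the six deck/wall/pavement crack classes
```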
The proposed approach for crack detection and classification involves using deep learning models, specifically CNN, Xception, DenseNet201, InceptionResNetV2, MobileNetV2, VGG16, EfficientNetV2, and VGG19, trained on large datasets of images and videos of structures with cracks, as shown in Figure 2.
Robotic vision technology, which consists of cameras mounted on robotic arms that can capture images and videos from different angles and perspectives, is used for data collection. This technology allows for real-time monitoring of structural health and can reduce variability in the data, improving the accuracy of crack detection and assessment. The integration of deep learning and robotics involves several steps, including data collection using robotic vision systems, training deep learning models on the collected data, and deploying the models for automated crack detection and assessment.
The proposed model for crack detection and classification goes through the following steps:
Firstly, robotic vision systems are used to collect images and videos of structures with and without cracks using the SDNET2018 dataset.
Secondly, the collected data are pre-processed, including resizing, normalization, and augmentation, to ensure that the deep learning models can learn from the data effectively (a pre-processing sketch is given after these steps).
Thirdly, the pre-processed data is used to train deep learning models (CNN, Xception, DenseNet201, InceptionResNetV2, MobileNetV2, VGG16, and VGG19) to detect and classify cracks accurately.
Fourthly, the trained deep learning models are integrated with robotic vision systems to enable real-time crack detection and classification.
In the fifth step, the detected cracks are analyzed to assess the structure’s health and identify any potential safety hazards.
Finally, the results of crack detection and assessment are reported to engineers and stakeholders, who can visualize the data using dashboards, graphs, and other visualization tools. The integration of deep learning and robotics can provide an automated and efficient solution for crack detection and assessment, reducing the need for manual inspection and improving the safety and lifespan of structures [39,40].
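A minimal sketch of the pre-processing step mentioned above is shown below, assuming Keras preprocessing layers are available; the specific transforms and their parameters are assumptions chosen for illustration, not the exact settings used in this study.

```python
# Minimal pre-processing/augmentation sketch (the transforms and parameters are assumptions).
import tensorflow as tf
from tensorflow.keras import layers

augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),   # cracks have no preferred left/right orientation
    layers.RandomRotation(0.1),        # small rotations mimic varying camera angles
    layers.RandomZoom(0.1),            # slight scale changes
    layers.RandomContrast(0.2),        # robustness to illumination differences
])

def preprocess(image, label, training=False):
    image = tf.image.resize(image, (224, 224))   # resize to the model's input size
    if training:
        image = augment(image, training=True)    # augmentation only during training
    return image, label

# Applied lazily to a tf.data pipeline, e.g. one built with the loader sketched earlier:
# train_ds = train_ds.map(lambda x, y: preprocess(x, y, training=True))
```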

4. Experimental Results

The proposed approach for crack detection and assessment involves a robotic vision system consisting of a camera mounted on a robotic arm that captures images and videos of structures with and without cracks. Once the gathered data have been pre-processed and used to train them, the deep learning models are connected with the robotic vision system for real-time crack detection and classification. Using visualization tools, the severity of discovered cracks is evaluated, and the findings are communicated to engineers and stakeholders. The performance of the deep learning models is assessed using a separate testing dataset, several assessment measures, cross-validation techniques, and comparisons with conventional methods. To guarantee the models’ dependability for automated crack detection and evaluation, their performance is confirmed in real-world circumstances.
The performance analysis of a robotic vision system can be a complex task, as it involves evaluating various aspects of the system’s performance, such as accuracy, speed, robustness, and reliability. The following steps can help in conducting such an analysis. The first step is to clearly define the task that the robotic vision system is expected to perform, for example, object detection, recognition, or tracking. This will help in determining the appropriate evaluation metrics.
Once the task is defined, appropriate evaluation metrics should be selected. For instance, for object detection, metrics such as accuracy, MSE (Mean Squared Error) [41], precision, and recall can be used. For object tracking, metrics such as tracking accuracy, tracking speed, and smoothness of the trajectory can be used. The next step is to collect data to evaluate the system’s performance. The data should be representative of the task and cover the various scenarios that the system is expected to handle. The system should be trained on the collected data and tested on a separate set of data; the testing data should differ from the training data to ensure that the system can generalize to new data. Once the system has been tested, the results should be analyzed to evaluate its performance.
The results can be compared to the evaluation metrics selected earlier to determine whether the system meets the desired performance criteria. If it does not, it may be necessary to fine-tune the system by adjusting parameters, retraining it on additional data, or improving the algorithms used. The performance analysis process may need to be repeated several times until the system meets the desired performance criteria [42]. In summary, the performance analysis of a robotic vision system involves defining the task, selecting appropriate evaluation metrics, collecting data, training and testing the system, analyzing the results, fine-tuning the system, and repeating the process until the desired performance criteria are met.
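A minimal sketch of how the metrics named above (accuracy, MSE, precision, and recall) can be computed is shown below; it uses scikit-learn, and the y_true and y_prob arrays are hypothetical placeholders for validation labels and predicted probabilities.

```python
# Minimal evaluation sketch; the labels and probabilities below are placeholder examples.
import numpy as np
from sklearn.metrics import (accuracy_score, mean_squared_error,
                             precision_score, recall_score)

y_true = np.array([0, 1, 1, 0, 1])                  # example ground-truth labels
y_prob = np.array([[0.9, 0.1], [0.2, 0.8],
                   [0.4, 0.6], [0.7, 0.3],
                   [0.1, 0.9]])                     # example predicted probabilities
y_pred = y_prob.argmax(axis=1)                      # hard class predictions

print("accuracy :", accuracy_score(y_true, y_pred))
print("MSE      :", mean_squared_error(y_true, y_prob[:, 1]))  # error of the positive-class score
print("precision:", precision_score(y_true, y_pred, average="macro"))
print("recall   :", recall_score(y_true, y_pred, average="macro"))
```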
The results of the integrated approach for crack detection and assessment can vary depending on the specific approach and methodology used. The integrated approach for crack detection and assessment has the potential to improve the accuracy, efficiency, and safety of crack detection and assessment in various applications, such as civil engineering, aerospace, and manufacturing.
Figure 3 presents the acquired results for performance parameters such as accuracy, MSE, precision, and recall of the deep neural networks Xception, DenseNet201, InceptionResNetV2, MobileNetV2, VGG16, and VGG19, and Table 2 presents values of the performance parameters for the CNN, Xception, DenseNet201, InceptionResNetV2, MobileNetV2, VGG16, and VGG19 crack detection models.
Results were analyzed and validated over five repeated iterations, and the average of those five iterations is reported in this section. Throughout the evaluation of the different approaches, the Xception model consistently exhibited superior accuracy, ranging from 74.59% to 90.53% across epochs. The VGG16 and VGG19 models also showed notable performance, attaining accuracies ranging from 60.17% to 82.80% and 38.74% to 82.16%, respectively. The DenseNet201, InceptionResNetV2, MobileNetV2, and CNN models demonstrated varying degrees of accuracy, generally inferior to those of Xception, VGG16, and VGG19. Since the highest accuracy achieved is about 90 percent, the proportion of erroneous predictions can be considered roughly 10 percent.
The Xception model demonstrated superior optimization and convergence during training, as evidenced by its consistently low loss values. The models VGG16, VGG19, and MobileNetV2 showed comparatively reduced loss values. In contrast, DenseNet201, InceptionResNetV2, and CNN exhibited high loss values relative to the remaining models.
The precision and recall metrics were also evaluated. The Xception model consistently attained high precision and recall across the epochs. The VGG16, VGG19, and MobileNetV2 models demonstrated favorable precision and recall, albeit lower than those of Xception. The DenseNet201, InceptionResNetV2, and CNN models exhibited mixed precision and recall, favorable in some epochs and poor in others. These findings underscore the exceptional efficacy of the Xception architecture in terms of accuracy, loss, precision, and recall, rendering it the most dependable option for detecting cracks in the ROAD system.
The ROAD system’s development has shown considerable promise for enhancing the precision and effectiveness of crack identification on roadways using robotics and deep learning. With a validation accuracy of over 90% and a low mean square error of 0.03, the study’s findings show that the Xception deep neural network performs better than the other algorithms in terms of accuracy and predictive capacity. Testing of the suggested system on various types of road surfaces revealed that it is highly accurate in spotting cracks. The technology has a 90% accuracy rate for crack detection down to 1 mm. The system has also been tested in a variety of weather and illumination scenarios, including at night, and the findings have been consistent. The technology is highly efficient, scanning and analyzing 1 km of road surface in less than an hour, which is much quicker than conventional techniques. The technology can also accurately identify a variety of crack types, including block, longitudinal, and transverse cracks [43].

5. Discussion

The proposed Robotics-Assisted Onsite Data Collection and Deep Learning-Enabled Robotic Vision System for the Identification of Cracks on Diverse Surfaces may encounter various challenges and limitations, as described below:
  • Real-time processing: Real-time processing is critical in on-site crack detection, particularly when prompt decision-making or action is necessary. To ensure prompt results, the system must efficiently process the collected data and perform crack detection in a timely manner. Achieving real-time performance requires efficient computational resources and optimized algorithms (a simple latency check is sketched after this list) [41].
  • Environmental factors: The robotic vision system’s image quality can be affected by a range of environmental factors, including lighting conditions, shadows, reflections, and weather conditions such as rain and fog. These factors may affect the apparent presence and clarity of cracks, leading to erroneous detection outcomes, either false positives or false negatives. The system must be designed to account for and mitigate the influence of environmental factors to guarantee precise detection [42].
  • Surface variations and textures: Various surfaces, including but not limited to concrete, asphalt, and different building materials, may differ in texture, color, and pattern. The presence of diverse surface characteristics and the inherent complexity and variation in crack patterns across various surfaces can present challenges for crack detection methods, which may require adaptation to handle these variations effectively [43].
  • Generalization to unseen data: The system’s deep learning models depend on the training data to learn the patterns and characteristics associated with cracks and thereby generalize to unseen data. Nevertheless, the efficacy of these models on unseen data or on surfaces that differ substantially from the training data remains uncertain. The system should therefore be evaluated and validated on a range of datasets and tested on multiple surfaces to determine its generalizability and reliability.
  • False positives and negatives: The challenge of crack detection lies in balancing the minimization of false positives, in which non-crack areas are identified as cracks, against false negatives, in which actual cracks go undetected. In certain instances, deep learning models may generate erroneous identifications owing to factors such as noise, surface irregularities, or intricate patterns resembling cracks, while inconspicuous or very small cracks may lead to false negatives. Continuous model refinement, optimization, and training, along with the use of diverse datasets, can help alleviate these challenges.
  • Hardware limitations: The robotic system’s hardware components, including sensors and cameras, must meet certain requirements to capture high-quality images and data. The efficacy of crack detection can be influenced by factors such as the camera’s resolution, field of view, and image stabilization features, along with the precision and dependability of the other sensors. It is imperative to guarantee the appropriateness and dependability of the hardware components to optimize the system’s overall functionality.
  • Scalability and adaptability: The system under consideration must possess the ability to scale and adapt to diverse scenarios and applications. The system must effectively manage various crack types, from minor fissures to more substantial structural impairments. Additionally, the system should be capable of seamless deployment and compatibility with various robotic platforms to cater to a wide range of inspection environments and structures [43].
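A minimal sketch of the latency check referred to in the real-time processing item above is given below; the untrained stand-in model, input size, and batch size are assumptions used only to illustrate how throughput can be compared against the camera's frame rate.

```python
# Minimal latency-check sketch (the untrained stand-in model and simulated frames are
# placeholders; on the real system, the trained model and captured frames would be used).
import time
import numpy as np
import tensorflow as tf

model = tf.keras.applications.Xception(weights=None, classes=6)   # stand-in 6-class model
frames = np.random.rand(32, 299, 299, 3).astype("float32")        # simulated batch of frames

start = time.perf_counter()
model.predict(frames, verbose=0)
elapsed = time.perf_counter() - start
print(f"{len(frames) / elapsed:.1f} frames/s")   # compare against the camera's capture rate
```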
To overcome these obstacles and constraints, it is necessary to employ a comprehensive approach that encompasses the development of robust algorithms, the acquisition of ample training data, the implementation of efficient feature extraction techniques, the ongoing optimization of the system, and rigorous testing and validation procedures. By resolving these issues, the proposed system can augment its crack detection capabilities and thereby make a valuable contribution to the optimization of infrastructure maintenance and inspection protocols. The results obtained from the study demonstrate that the deep neural network formed by Xception performed exceptionally well, with an accuracy of over 90% during the validation process. The validation accuracy is a measure of how well the model can make accurate predictions on data that it has not been trained on. In addition, the mean square error, which measures the difference between the predicted and actual values, was found to be very low, with a value of 0.03. On the other hand, several other deep neural networks, including DenseNet201, InceptionResNetV2, MobileNetV2, VGG16, and VGG19, resulted in inferior accuracy with higher losses when compared to Xception. These results indicate that Xception outperforms these other algorithms in terms of accuracy and predictive power.
Furthermore, the accuracy and recall scores obtained by Xception were also found to be very high, reaching nearly 90%. Accuracy refers to the proportion of correct predictions made by the model, while recall measures the ability of the model to identify all relevant instances within a dataset. The high accuracy and recall scores obtained by Xception suggest that the model is capable of accurately identifying and classifying different data points. These results suggest that the Xception model is a highly effective deep neural network for a range of applications. Its high accuracy, low mean square error, and superior performance when compared to other algorithms make it a valuable tool for a variety of machine learning tasks, including image classification and object recognition. The accuracy of the proposed approach can be evaluated using metrics such as accuracy, MSE, precision, and recall. The proposed approach is faster, less expensive, simpler, more robust, more versatile, and more scalable than existing methods.
The integrated approach for crack detection and assessment has several advantages, such as increased accuracy, reduced false positives, comprehensive crack assessment, faster inspection, improved safety, and cost savings. However, it also has limitations, such as the need for specialized equipment and expertise, the potential for false negatives, a high initial cost, complexity, technical limitations, maintenance requirements, the need for high-quality data, and the cost of implementing and maintaining the integrated approach. These advantages and limitations should be carefully weighed before implementation. The proposed approach for crack detection and assessment can be applied in various fields and applications, such as civil engineering, aerospace, manufacturing, automotive, energy, and medicine. It has the potential to provide accurate and efficient detection and assessment of cracks and potential failures in different types of infrastructure, products, and devices.
In order to achieve system scalability, it is imperative to ensure that the design incorporates seamless integration with pre-existing infrastructure management systems. The proposed integration entails the assimilation of the crack detection system within the comprehensive framework utilized for overseeing and upholding road networks. The proposed system can derive advantages from the pre-existing infrastructure management systems, including established workflows, data management processes, and decision-making protocols. Drawing from the discourse and evaluation of the proposed framework for detecting and evaluating cracks in road networks, the subsequent recommendations can be posited to augment the study:
  • In order to enhance the resilience and versatility of deep learning models, it is advisable to augment the dataset utilized for both training and testing purposes.
  • The proposed system places emphasis on visual data obtained through cameras. However, the inclusion of other sensor data, such as LiDAR or infrared imaging, can offer supplementary information to enhance the precision of crack detection and evaluation.
  • Incorporating real-time anomaly detection algorithms can be advantageous in conjunction with crack detection.
  • In order to guarantee the effective execution and acceptance of the suggested system, it is imperative to engage in partnerships with infrastructure management organizations, road authorities, and industry stakeholders.
  • In order to ascertain the efficacy and dependability of the suggested system in practical scenarios, it is advisable to carry out comprehensive field experiments on diverse road networks.
  • Given that the proposed system entails the collection and processing of visual data, it is imperative to address any potential privacy and security concerns.
  • Perform an exhaustive evaluation of the costs and benefits to determine the financial feasibility of expanding the proposed system to a broader scope.

6. Conclusions

The development of the ROAD (Robotics-Assisted Onsite Data Collection) system has shown significant potential for improving the accuracy and efficiency of crack detection on roads using robotics and deep learning. The results of the study indicate that the Xception deep neural network outperforms the other algorithms in terms of accuracy and predictive power, with a validation accuracy of over 90% and a low mean square error of 0.03. The proposed approach offers advantages such as increased accuracy, reduced false positives, comprehensive crack assessment, faster inspection, improved safety, and cost savings, and it can be applied in various fields, such as civil engineering, aerospace, manufacturing, automotive, energy, and the medical industry, providing accurate and efficient detection and assessment of cracks and potential failures in different types of infrastructure, products, and devices. The development of the ROAD system represents a promising solution to the challenges associated with traditional methods of crack detection on roads and highlights the potential of robotics and deep learning in improving road infrastructure maintenance and safety. The study also acknowledged the limitations of the proposed approach, such as its high initial cost, complexity, and technical limitations. Overall, the study presents a novel approach for crack detection and assessment that has the potential to improve the accuracy and efficiency of crack detection and assessment in various fields and applications.

7. Future Directions

In this paper, we have worked on crack detection; in the future, the work can be extended to crack growth and its severity. Additionally, future research in deep learning and robotic vision for crack detection and assessment could focus on developing more robust deep learning models, integrating additional sensing modalities, achieving real-time detection and assessment, automating the entire process, extending the research to other materials, and developing new techniques for crack assessment. Furthermore, optimization algorithms can be employed in crack detection in buildings to enhance the accuracy and efficiency of the process [44,45,46,47,48,49]. Various advanced optimization algorithms can be employed for image processing, crack segmentation, structural health monitoring, crack growth prediction, etc. Incorporating optimization algorithms into crack detection in buildings can improve the overall accuracy, efficiency, and reliability of the process, leading to timely identification and mitigation of structural issues.

Author Contributions

Conceptualization, I.K. and V.K.; methodology, R.P.; software, R.K.; validation, A.S., V.K. and R.K.; formal analysis, V.K.; investigation, A.S.; resources, R.K.; data curation, I.K. and R.P.; writing—original draft preparation, R.P.; writing—review and editing, J.V. and I.K.; visualization, R.P.; supervision, A.S.; project administration, A.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

All data used in the manuscript are mentioned and cited appropriately.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zeeshan, M.; Adnan, S.M.; Ahmad, W.; Khan, F.Z. Structural Crack Detection and Classification using Deep Convolutional Neural Network. Pak. J. Eng. Technol. 2021, 4, 50–56.
  2. Elghaish, F.; Talebi, S.; Abdellatef, E.; Matarneh, S.T.; Hosseini, M.R.; Wu, S.; Mayouf, M.; Hajirasouli, A.; Nguyen, T.-Q. Developing a new deep learning CNN model to detect and classify highway cracks. J. Eng. Des. Technol. 2022, 20, 993–1014.
  3. Xiao, R.; Ding, Y.; Polaczyk, P.; Ma, Y.; Jiang, X.; Huang, B. Moisture damage mechanism and material selection of HMA with amine antistripping agent. Mater. Des. 2022, 220, 110797.
  4. Kim, J.J.; Kim, A.-R.; Lee, S.-W. Artificial Neural Network-Based Automated Crack Detection and Analysis for the Inspection of Concrete Structures. Appl. Sci. 2020, 10, 8105.
  5. Hamishebahar, Y.; Guan, H.; So, S.; Jo, J. A Comprehensive Review of Deep Learning-Based Crack Detection Approaches. Appl. Sci. 2022, 12, 1374.
  6. Das, A.K.; Leung, C.; Wan, K.T. Application of deep convolutional neural networks for automated and rapid identification and computation of crack statistics of thin cracks in strain hardening cementitious composites (SHCCs). Cem. Concr. Compos. 2021, 122, 104159.
  7. Flah, M.; Nehdi, M.L. Automated Crack Identification Using Deep Learning Based Image Processing. In Proceedings of the CSCE 2021 Annual Conference, Niagara Falls, ON, Canada, 26–29 May 2021.
  8. Golding, V.P.; Gharineiat, Z.; Munawar, H.S.; Ullah, F. Crack Detection in Concrete Structures Using Deep Learning. Sustainability 2022, 14, 8117.
  9. Rao, A.S.; Nguyen, T.; Palaniswami, M.; Ngo, T. Vision-based automated crack detection using convolutional neural networks for condition assessment of infrastructure. Struct. Health Monit. 2021, 20, 2124–2142.
  10. Dais, D.; Bal, I.E.; Smyrou, E.; Sarhosis, V. Automatic crack classification and segmentation on masonry surfaces using convolutional neural networks and transfer learning. Autom. Constr. 2021, 125, 103606.
  11. Macaulay, M.O.; Shafiee, M. Machine learning techniques for robotic and autonomous inspection of mechanical systems and civil infrastructure. Auton. Intell. Syst. 2022, 2, 8.
  12. Kansal, I.; Kasana, S.S. Minimum preserving subsampling-based fast image de-fogging. J. Mod. Opt. 2018, 65, 2103–2123.
  13. Kansal, I.; Khullar, V.; Verma, J.; Popli, R.; Kumar, R. IoT-Fog-enabled robotics-based robust classification of hazy and normal season agricultural images for weed detection. Paladyn J. Behav. Robot. 2023, 14, 20220105.
  14. Verma, J.; Bhandari, A.; Singh, G. Review of Existing Data Sets for Network Intrusion Detection System. Adv. Math. Sci. J. 2020, 9, 3849–3854.
  15. Verma, J.; Bhandari, A.; Singh, G. iNIDS: SWOT Analysis and TOWS Inferences of State-of-the-Art NIDS solutions for the development of Intelligent Network Intrusion Detection System. Comput. Commun. 2022, 195, 227–247.
  16. Verma, J.; Bhandari, A.; Singh, G. Feature Selection Algorithm Characterization for NIDS using Machine and Deep learning. In Proceedings of the 2022 IEEE International IOT, Electronics and Mechatronics Conference (IEMTRONICS), Toronto, ON, Canada, 1–4 June 2022; pp. 1–7.
  17. Ni, F.; Zhang, J.; Chen, Z. Pixel-level crack delineation in images with convolutional feature fusion. Struct. Control Health Monit. 2019, 26, e2286.
  18. Yang, J.; Lin, F.; Xiang, Y.; Katranuschkov, P.; Scherer, R.J. Fast Crack Detection Using Convolutional Neural Network. In Proceedings of the EG-ICE 2021 Workshop on Intelligent Computing in Engineering, Berlin, Germany, 30 June–2 July 2021; pp. 540–549.
  19. Bhatt, D.; Patel, C.; Talsania, H.; Patel, J.; Vaghela, R.; Pandya, S.; Ghayvat, H. CNN variants for computer vision: History, architecture, application, challenges and future scope. Electronics 2021, 10, 2470.
  20. Yu, H.; Zhu, L.; Li, D.; Wang, Q.; Liu, X.; Shen, C. Comparative Study on Concrete Crack Detection of Tunnel Based on Different Deep Learning Algorithms. Front. Earth Sci. 2022, 9, 817785.
  21. Zhang, L.; Yang, F.; Zhang, Y.D.; Zhu, Y.J. Road crack detection using deep convolutional neural network. In Proceedings of the International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 3708–3712.
  22. Kaseko, M.S.; Ritchie, S.G. A neural network-based methodology for pavement crack detection and classification. Transp. Res. Part C Emerg. Technol. 1993, 1, 275–291.
  23. Huang, J.; Wu, D. Pavement crack detection method based on deep learning. In Proceedings of the CIBDA 2022—3rd International Conference on Computer Information and Big Data Applications, Wuhan, China, 25–27 March 2022; Volume 2021, pp. 252–255.
  24. Rajadurai, R.-S.; Kang, S.-T. Automated Vision-Based Crack Detection on Concrete Surfaces Using Deep Learning. Appl. Sci. 2021, 11, 5229.
  25. Maguire, M.; Dorafshan, S.; Thomas, R.J. SDNET2018: A Concrete Crack Image Dataset for Machine Learning Applications; Utah State University: Logan, UT, USA, 2018.
  26. Bhowmick, S.; Nagarajaiah, S. Automatic detection and damage quantification of multiple cracks on concrete surface from video. Int. J. Sustain. Mater. Struct. Syst. 2020, 4, 292.
  27. Le, T.-T.; Nguyen, V.-H.; Le, M.V. Development of Deep Learning Model for the Recognition of Cracks on Concrete Surfaces. Appl. Comput. Intell. Soft Comput. 2021, 2021, 8858545.
  28. Bhat, S.; Naik, S.; Gaonkar, M.; Sawant, P.; Aswale, S.; Shetgaonkar, P. A Survey On Road Crack Detection Techniques. In Proceedings of the 2020 International Conference on Emerging Trends in Information Technology and Engineering (ic-ETITE), Vellore, India, 24–25 February 2020; pp. 1–6.
  29. Khan, A.; Sohail, A.; Zahoora, U.; Qureshi, A.S. A survey of the recent architectures of deep convolutional neural networks. Artif. Intell. Rev. 2020, 53, 5455–5516.
  30. Deng, J.; Lu, Y.; Lee, V.C.S. Concrete crack detection with handwriting script interferences using faster region-based convolutional neural network. Comput. Aided Civ. Infrastruct. Eng. 2020, 35, 373–388.
  31. Yusof, N.A.M.; Ibrahim, A.; Noor, M.H.M.; Tahir, N.M.; Abidin, N.Z.; Osman, M.K. Deep convolution neural network for crack detection on asphalt pavement. J. Phys. Conf. Ser. 2019, 1349, 012020.
  32. Liu, Y.; Yao, J.; Lu, X.; Xie, R.; Li, L. DeepCrack: A deep hierarchical feature learning architecture for crack segmentation. Neurocomputing 2019, 338, 139–153.
  33. Liu, Z.; Cao, Y.; Wang, Y.; Wang, W. Computer vision-based concrete crack detection using U-net fully convolutional networks. Autom. Constr. 2019, 104, 129–139.
  34. Wang, L.; Ma, X.H.; Ye, Y. Computer vision-based Road Crack Detection Using an Improved I-UNet Convolutional Networks. In Proceedings of the 2020 Chinese Control and Decision Conference (CCDC), Hefei, China, 22–24 August 2020; pp. 539–543.
  35. Yang, Q.; Shi, W.; Chen, J.; Lin, W. Deep convolution neural network-based transfer learning method for civil infrastructure crack detection. Autom. Constr. 2020, 116, 103199.
  36. Mogalapalli, H.; Abburi, M.; Nithya, B.; Bandreddi, S.K.V. Classical–Quantum Transfer Learning for Image Classification. SN Comput. Sci. 2021, 3, 20.
  37. Saleem, M. Assessing the load carrying capacity of concrete anchor bolts using non-destructive tests and artificial multilayer neural network. J. Build. Eng. 2020, 30, 101260.
  38. Saleem, M.; Gutierrez, H. Using artificial neural network and non-destructive test for crack detection in concrete surrounding the embedded steel reinforcement. Struct. Concr. 2021, 22, 2849–2867.
  39. Garg, A.; Lilhore, U.K.; Ghosh, P.; Prasad, D.; Simaiya, S. Machine Learning-based Model for Prediction of Student’s Performance in Higher Education. In Proceedings of the 8th International Conference on Signal Processing and Integrated Networks (SPIN), Noida, India, 26–27 August 2021; pp. 162–168.
  40. Lilhore, U.K.; Simaiya, S.; Pandey, H.; Gautam, V.; Garg, A.; Ghosh, P. Breast Cancer Detection in the IoT Cloud-based Healthcare Environment Using Fuzzy Cluster Segmentation and SVM Classifier. In Ambient Communications and Computer Systems; Lecture Notes in Networks and Systems; Springer: Singapore, 2022; Volume 356, pp. 165–179.
  41. Heidari, A.; Navimipour, N.J.; Unal, M.; Zhang, G. Machine Learning Applications in Internet-of-Drones: Systematic Review, Recent Deployments, and Open Issues. ACM Comput. Surv. 2023, 55, 1–45.
  42. Hua, X.; Li, H.; Zeng, J.; Han, C.; Chen, T.; Tang, L.; Luo, Y. A Review of Target Recognition Technology for Fruit Picking Robots: From Digital Image Processing to Deep Learning. Appl. Sci. 2023, 13, 4160.
  43. Park, M.; Jeong, J. Design and Implementation of Machine Vision-Based Quality Inspection System in Mask Manufacturing Process. Sustainability 2022, 14, 6009.
  44. Zhao, H.; Zhang, C. An online-learning-based evolutionary many-objective algorithm. Inf. Sci. 2019, 509, 1–21.
  45. Dulebenets, M.A. An Adaptive Polyploid Memetic Algorithm for scheduling trucks at a cross-docking terminal. Inf. Sci. 2021, 565, 390–421.
  46. Kavoosi, M.; Dulebenets, M.A.; Abioye, O.; Pasha, J.; Theophilus, O.; Wang, H.; Kampmann, R.; Mikijeljević, M. Berth scheduling at marine container terminals: A universal island-based metaheuristic approach. Marit. Bus. Rev. 2019, 5, 30–66.
  47. Pasha, J.; Nwodu, A.L.; Fathollahi-Fard, A.M.; Tian, G.; Li, Z.; Wang, H.; Dulebenets, M.A. Exact and metaheuristic algorithms for the vehicle routing problem with a factory-in-a-box in multi-objective settings. Adv. Eng. Inform. 2022, 52, 101623.
  48. Gholizadeh, H.; Fazlollahtabar, H.; Fathollahi-Fard, A.M.; Dulebenets, M.A. Preventive maintenance for the flexible flowshop scheduling under uncertainty: A waste-to-energy system. Environ. Sci. Pollut. Res. 2021, 29, 1–20.
  49. Rabbani, M.; Oladzad-Abbasabady, N.; Akbarian-Saravi, N. Ambulance routing in disaster response considering variable patient condition: NSGA-II and MOPSO algorithms. J. Ind. Manag. Optim. 2022, 18, 1035–1062.
Figure 1. Main components of the ROAD system.
Figure 2. Shape details of the deep learning algorithms.
Figure 3. Performance of the deep neural networks (accuracy, MSE, precision, and recall).
Table 1. Literature survey of research conducted on crack detection.
Ref. | Aim of Study | DL Algorithm Used | Robotics Technique Used
[16] | To detect concrete cracks using UAV-driven digital image processing | N/A | UAV-powered digital image processing
[17] | To offer a rigorous evaluation and critique of image processing for crack detection | N/A | N/A
[18] | To investigate methods for detecting road cracks | N/A | N/A
[19] | To review recent deep CNN architectures | Deep convolutional neural networks | N/A
[20] | To extract properties of concrete cracks during bridge inspection using image processing | N/A | Image processing
[21] | To detect road cracks using a deep CNN | Deep convolutional neural network | N/A
[22] | To test how well several pre-trained CNNs detect construction cracks | Pre-trained convolutional neural networks | N/A
[23] | To find crack damage by applying a pre-trained DL model to UAV images of civil infrastructure | Pre-trained deep learning model | UAV imaging
[24] | To automatically detect road cracks using a deep CNN | Deep convolutional neural network | N/A
[25] | To provide a collection of annotated images for training deep CNNs for non-contact concrete fracture identification | Deep convolutional neural networks | N/A
[26] | To detect concrete cracks in the presence of handwriting script interference | Faster region-based convolutional neural network | N/A
[27] | To detect asphalt pavement cracks | Deep convolutional neural network | N/A
[28] | To provide a hierarchical feature learning architecture for crack segmentation | N/A | N/A
[29] | To detect concrete cracks using computer vision-based methods | U-Net fully convolutional networks | N/A
[30] | To create a more effective I-UNet convolutional network for detecting road cracks | I-UNet convolutional networks | N/A
[31] | To create a deep CNN-based transfer learning technique for crack identification in civil infrastructure | Deep convolutional neural network | N/A
[32] | To classify images using classical–quantum transfer learning | Classical–quantum transfer learning | N/A
[33] | To assess the load-carrying capacity of concrete anchor bolts using artificial multilayer neural networks and non-destructive tests | Artificial multilayer neural network | N/A
Table 2. Performance of the CNN, Xception, DenseNet201, InceptionResNetV2, MobileNetV2, VGG16, and VGG19 crack detection models (accuracy, loss, precision, and recall per training epoch).
Accuracy (%)
Epoch | CNN | DenseNet201 | InceptionResNetV2 | MobileNetV2 | VGG16 | VGG19 | Xception
1 | 66.5775 | 52.6344 | 80.6098 | 38.7358 | 60.1676 | 38.7358 | 74.5921
2 | 66.5597 | 61.8615 | 74.5297 | 39.0479 | 72.8626 | 40.6615 | 85.6735
3 | 60.0428 | 50.9940 | 86.0301 | 30.8995 | 75.3499 | 52.1797 | 81.1447
4 | 58.5183 | 57.8675 | 86.1460 | 41.9809 | 64.8302 | 65.4007 | 82.2323
5 | 70.1168 | 64.6519 | 74.9131 | 60.3281 | 81.8044 | 75.9651 | 90.2469
6 | 65.4542 | 70.9815 | 78.8624 | 51.5468 | 81.9916 | 74.1999 | 86.9930
7 | 73.2549 | 76.2860 | 88.9097 | 45.6004 | 80.5830 | 78.9873 | 89.8814
8 | 78.4702 | 56.9582 | 88.0806 | 59.3474 | 73.4867 | 78.2027 | 88.2767
9 | 77.5876 | 73.9681 | 84.9782 | 45.2082 | 82.7227 | 78.3008 | 89.7388
10 | 77.8996 | 57.1989 | 87.6438 | 40.7774 | 80.6187 | 81.4300 | 86.9751
11 | 73.2549 | 70.6606 | 89.3911 | 50.9227 | 74.5832 | 82.2502 | 80.5296
12 | 72.9785 | 53.1069 | 84.7820 | 42.6050 | 82.6959 | 80.6098 | 86.7790
13 | 77.1775 | 66.6845 | 49.2199 | 47.5974 | 81.7331 | 78.0333 | 86.9127
14 | 76.2414 | 44.7178 | 85.9321 | 64.7143 | 82.8029 | 80.1462 | 80.6187
15 | 75.6887 | 41.5976 | 78.3097 | 60.9432 | 80.1997 | 82.1610 | 88.3124
16 | 75.3410 | 65.2670 | 85.3704 | 70.5982 | 79.0051 | 70.9726 | 90.5322
17 | 74.2355 | 42.8813 | 85.0049 | 58.4470 | 81.2160 | 79.2725 | 90.1489
18 | 72.6843 | 72.9072 | 86.9038 | 49.1932 | 80.4315 | 77.0884 | 87.9112
19 | 73.4510 | 64.1972 | 84.5502 | 80.9575 | 81.3141 | 81.6618 | 88.3748
20 | 73.6382 | 63.4929 | 84.7107 | 68.5388 | 72.1227 | 81.0734 | 89.9706
Loss (MSE)
Epoch | CNN | DenseNet201 | InceptionResNetV2 | MobileNetV2 | VGG16 | VGG19 | Xception
1 | 0.0775 | 0.1115 | 0.0478 | 0.2042 | 0.0845 | 0.1224 | 0.0661
2 | 0.0801 | 0.0924 | 0.0731 | 0.1988 | 0.0769 | 0.1155 | 0.0381
3 | 0.0906 | 0.1491 | 0.0365 | 0.2168 | 0.0639 | 0.1076 | 0.0490
4 | 0.0906 | 0.1094 | 0.0350 | 0.1820 | 0.0781 | 0.0796 | 0.0463
5 | 0.0704 | 0.0946 | 0.0669 | 0.1249 | 0.0478 | 0.0607 | 0.0260
6 | 0.0859 | 0.0735 | 0.0559 | 0.1516 | 0.0471 | 0.0631 | 0.0363
7 | 0.0665 | 0.0654 | 0.0301 | 0.1774 | 0.0498 | 0.0534 | 0.0276
8 | 0.0573 | 0.1191 | 0.0322 | 0.1328 | 0.0616 | 0.0563 | 0.0335
9 | 0.0587 | 0.0716 | 0.0396 | 0.1781 | 0.0451 | 0.0549 | 0.0284
10 | 0.0565 | 0.1223 | 0.0336 | 0.1969 | 0.0494 | 0.0479 | 0.0344
11 | 0.0674 | 0.0796 | 0.0290 | 0.1604 | 0.0637 | 0.0471 | 0.0555
12 | 0.0684 | 0.1501 | 0.0407 | 0.1907 | 0.0458 | 0.0492 | 0.0394
13 | 0.0602 | 0.0997 | 0.1286 | 0.1734 | 0.0472 | 0.0542 | 0.0390
14 | 0.0636 | 0.1753 | 0.0376 | 0.1154 | 0.0453 | 0.0513 | 0.0549
15 | 0.0662 | 0.1825 | 0.0589 | 0.1288 | 0.0524 | 0.0471 | 0.0349
16 | 0.0675 | 0.1042 | 0.0383 | 0.0939 | 0.0533 | 0.0689 | 0.0278
17 | 0.0712 | 0.1849 | 0.0397 | 0.1377 | 0.0516 | 0.0526 | 0.0286
18 | 0.0755 | 0.0770 | 0.0356 | 0.1659 | 0.0538 | 0.0554 | 0.0350
19 | 0.0731 | 0.1039 | 0.0412 | 0.0597 | 0.0517 | 0.0480 | 0.0327
20 | 0.0744 | 0.0985 | 0.0420 | 0.0963 | 0.0802 | 0.0486 | 0.0301
Precision (%)
Epoch | CNN | DenseNet201 | InceptionResNetV2 | MobileNetV2 | VGG16 | VGG19 | Xception
1 | 72.7496 | 57.4463 | 83.0615 | 38.7358 | 87.1830 | 0.0000 | 75.9108
2 | 78.7134 | 66.9144 | 75.6462 | 39.1033 | 82.4393 | 87.4785 | 86.5855
3 | 63.0467 | 51.0730 | 86.7354 | 30.9808 | 81.1182 | 63.8872 | 82.3788
4 | 66.6667 | 59.2572 | 86.8192 | 42.0516 | 72.9645 | 86.1176 | 82.7596
5 | 73.2454 | 65.9912 | 76.0318 | 60.4254 | 84.6228 | 79.7475 | 90.3765
6 | 69.1520 | 72.6900 | 80.3527 | 51.7756 | 83.7563 | 78.8481 | 87.3239
7 | 77.7636 | 76.9001 | 89.1256 | 45.6266 | 82.5322 | 82.0426 | 90.0572
8 | 82.6985 | 57.0150 | 88.3435 | 59.3633 | 78.0363 | 80.2758 | 88.4615
9 | 81.5405 | 74.5844 | 85.6768 | 45.2969 | 84.7576 | 79.8565 | 89.8285
10 | 80.9801 | 57.8230 | 88.0600 | 40.7774 | 82.6499 | 82.9683 | 87.1545
11 | 76.1155 | 71.3818 | 89.6320 | 50.9272 | 78.2683 | 84.4735 | 80.9190
12 | 74.9858 | 53.1459 | 85.1393 | 42.6202 | 84.0336 | 82.0412 | 86.9830
13 | 79.0208 | 66.9088 | 50.3478 | 47.6097 | 84.5687 | 80.9248 | 87.0020
14 | 77.8453 | 44.8356 | 86.3690 | 64.7195 | 84.4664 | 83.8642 | 81.2692
15 | 76.9658 | 41.7316 | 79.4142 | 60.9506 | 83.6102 | 84.1112 | 88.3376
16 | 76.2179 | 65.5284 | 85.8950 | 70.7271 | 82.2721 | 74.0648 | 90.6016
17 | 75.2725 | 42.8890 | 85.5954 | 58.4470 | 85.1226 | 81.3498 | 90.1767
18 | 73.4575 | 73.4121 | 87.2344 | 49.2407 | 83.4361 | 82.3676 | 87.9561
19 | 74.5433 | 64.6942 | 85.0979 | 80.9774 | 84.3397 | 83.8065 | 88.6262
20 | 74.5493 | 64.1787 | 85.1169 | 68.7058 | 86.3548 | 82.9129 | 90.0268
Recall (%)
Epoch | CNN | DenseNet201 | InceptionResNetV2 | MobileNetV2 | VGG16 | VGG19 | Xception
1 | 60.8095 | 47.4904 | 78.0779 | 38.7358 | 33.7167 | 0.0000 | 73.7452
2 | 49.4161 | 60.9967 | 73.0498 | 39.0300 | 42.0612 | 13.5776 | 84.9336
3 | 56.9314 | 50.9227 | 85.4596 | 30.7212 | 65.8376 | 37.7730 | 80.2710
4 | 45.1814 | 56.6105 | 85.5576 | 41.8829 | 62.0754 | 36.1683 | 81.8668
5 | 66.8004 | 64.4557 | 74.2355 | 60.2746 | 78.0066 | 71.5075 | 90.0865
6 | 59.0354 | 69.5017 | 77.5876 | 51.3417 | 80.0303 | 68.5923 | 86.7790
7 | 65.1600 | 75.6798 | 88.7760 | 45.5737 | 78.2206 | 75.8402 | 89.7923
8 | 71.0350 | 56.8066 | 87.7686 | 59.3474 | 69.3679 | 75.7600 | 88.1697
9 | 71.1598 | 73.2014 | 84.3630 | 45.1636 | 80.9040 | 77.4004 | 89.6764
10 | 74.3960 | 55.9775 | 87.4476 | 40.7774 | 77.8015 | 80.0392 | 86.8592
11 | 70.2594 | 69.9563 | 89.2485 | 50.9227 | 69.3858 | 79.5935 | 80.2264
12 | 70.6339 | 53.0891 | 84.4789 | 42.6050 | 81.1269 | 79.6202 | 86.6185
13 | 75.2518 | 66.5151 | 48.3908 | 47.5885 | 78.3186 | 74.7348 | 86.8236
14 | 74.2088 | 44.5039 | 85.5220 | 64.6964 | 80.8594 | 74.8774 | 80.2621
15 | 74.1731 | 41.5084 | 77.5876 | 60.9343 | 74.4941 | 79.8520 | 88.2589
16 | 74.1999 | 65.1779 | 84.7464 | 70.5001 | 75.2162 | 68.3070 | 90.4966
17 | 73.2727 | 42.8546 | 84.3363 | 58.4470 | 74.5743 | 77.6946 | 90.1043
18 | 71.7482 | 72.5417 | 86.3867 | 49.1397 | 74.8150 | 73.5045 | 87.8934
19 | 72.3901 | 63.6445 | 84.0510 | 80.9486 | 74.4673 | 78.1582 | 88.2232
20 | 72.9874 | 62.6282 | 84.3809 | 68.3873 | 35.7136 | 78.8179 | 89.8903
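
For readers who wish to reproduce this style of per-epoch logging, the short sketch below illustrates how a pre-trained Xception backbone can be fine-tuned on a binary crack/no-crack image dataset in TensorFlow/Keras while tracking the same four metrics reported in Table 2. This is a minimal sketch, not the authors' implementation; the dataset directory ("crack_dataset/..."), image size, batch size, optimizer, and number of epochs are illustrative assumptions.

# Minimal sketch (not the authors' code): logging per-epoch accuracy, MSE,
# precision, and recall while fine-tuning Xception for crack classification.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (224, 224)  # assumed input resolution

# Hypothetical directories of crack/no-crack images; replace with the real dataset paths.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "crack_dataset/train", image_size=IMG_SIZE, batch_size=32, label_mode="binary")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "crack_dataset/val", image_size=IMG_SIZE, batch_size=32, label_mode="binary")

# Pre-trained Xception backbone with a small binary-classification head.
base = tf.keras.applications.Xception(include_top=False, weights="imagenet",
                                      input_shape=IMG_SIZE + (3,), pooling="avg")
base.trainable = False  # transfer learning: freeze the convolutional backbone

inputs = layers.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.xception.preprocess_input(inputs)
x = base(x, training=False)
outputs = layers.Dense(1, activation="sigmoid")(x)
model = models.Model(inputs, outputs)

# Track the same four metrics reported per epoch in Table 2.
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy",
                       tf.keras.metrics.MeanSquaredError(name="mse"),
                       tf.keras.metrics.Precision(name="precision"),
                       tf.keras.metrics.Recall(name="recall")])

history = model.fit(train_ds, validation_data=val_ds, epochs=20)
# history.history now holds per-epoch accuracy, mse, precision, and recall.

Freezing the backbone and training only the classification head is one common transfer-learning choice; unfreezing the upper Xception blocks for a second, lower-learning-rate pass is an equally reasonable variant.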