Article

Road Damage Detection Using the Hunger Games Search with Elman Neural Network on High-Resolution Remote Sensing Images

by Mesfer Al Duhayyim, Areej A. Malibari, Abdullah Alharbi, Kallekh Afef, Ayman Yafoz, Raed Alsini, Omar Alghushairy and Heba Mohsen
1 Department of Computer Science, College of Sciences and Humanities-Aflaj, Prince Sattam bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
2 Department of Industrial and Systems Engineering, College of Engineering, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
3 Department of Computer Science, Community College, King Saud University, P.O. Box 28095, Riyadh 11437, Saudi Arabia
4 Department of Mathematics, College of Science & Arts at Mahayil, King Khalid University, Mohail Asser, Abha 62521, Saudi Arabia
5 Department of Information Systems, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
6 Department of Information Systems and Technology, College of Computer Science and Engineering, University of Jeddah, Jeddah 21589, Saudi Arabia
7 Department of Computer Science, Faculty of Computers and Information Technology, Future University in Egypt, New Cairo 11835, Egypt
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(24), 6222; https://doi.org/10.3390/rs14246222
Submission received: 17 September 2022 / Revised: 29 November 2022 / Accepted: 6 December 2022 / Published: 8 December 2022
(This article belongs to the Special Issue Remote Sensing for Intelligent Transportation Systems in Smart Cities)

Abstract

Roads are significant traffic lifelines that can be damaged by collapsed tree branches, landslide rubble, and building debris. Road damage detection and evaluation using high-resolution remote sensing images (RSIs) are therefore essential for keeping routes in optimal condition and for executing rescue operations. Detecting damaged road areas in high-resolution aerial images can enable faster and more effective disaster management and decision making. Several techniques for the prediction and detection of road damage caused by earthquakes are available. Recently, computer vision (CV) techniques have emerged as an effective solution for automated road damage inspection. This article presents a new Road Damage Detection method using the Hunger Games Search with Elman Neural Network (RDD–HGSENN) on high-resolution RSIs. The presented RDD–HGSENN technique mainly aims to identify road damage in RSIs. In the presented RDD–HGSENN technique, the RetinaNet model is applied for road damage detection. In addition, the RDD–HGSENN technique performs road damage classification using the ENN model, whose parameters are tuned automatically with the HGS algorithm. To examine the enhanced outcomes of the presented RDD–HGSENN technique, a comprehensive set of simulations was conducted. The experimental outcomes demonstrated the improved performance of the RDD–HGSENN technique over recent approaches across several measures.

1. Introduction

Natural disasters such as floods, earthquakes, and wildfires cause massive destruction to infrastructure, block roads, and flatten buildings, leading to heavy social and economic losses. Roads are considered lifelines [1]. Once a disaster has occurred, road assessment and road damage detection are the foundation for emergency response activities and rescue operations. For identifying, detecting, and assessing road damage, numerous types of remote sensing data, namely, satellite or aerial images, SAR, and Lidar, are used broadly [2]. In particular, high-resolution aerial imagery is acquired in a controlled way, in terms of both timing and flight planning, and at high spectral, radiometric, and geometric resolution, which supports an emergency response [3]. This makes it highly suitable for rapid and reliable post-disaster damage evaluation, because the rapid acquisition and accessibility of the images enable the detection of damaged road regions. Identifying damaged roads in high-resolution aerial imagery can speed up and improve decision making during a disaster. Several methods for the prediction and detection of road damage caused by earthquakes have been proposed [4]. Such techniques fall into three types. The visual interpretation technique can be used for assessing and detecting road damage from various GIS data and remote sensing imagery; however, it relies upon numerous auxiliary tools (e.g., ArcGIS) [5]. Visual interpretation remains the most precise technique commonly used in practice for road damage detection.
Though these techniques are promising, most of them use limited data [6]. In several cases, this may be due to the lack of large, diverse datasets and suitable machine learning techniques. Sub-optimal feature extraction yields solutions that do not perform well in highly complicated scenarios (i.e., images with varying grades of illumination, varied camera perspectives, and so on) [7,8,9]. However, new efforts in the field appear able to increase the performance of these techniques through the adoption of DL-based CV acquisition mechanisms. These AI-based mechanisms would cost less than other technological choices and, when deployed properly, could represent a cost-effective solution for governments or agencies with limited budgets [10,11,12]. Additionally, such AI-based techniques would be improved if they were integrated with mobile and cloud computing elements; the former can implement lightweight mobile acquisition mechanisms, whereas the latter can store the data captured and processed by edge sensors for big data analysis [13]. For example, by digitizing and geo-localizing the captured data, road damage can be tracked over time by feeding the data into an asset management system for data analytics (i.e., planning, allocating budgetary resources, and so on) [14,15]. Combined with the latest digital transformation trends, this approach is effective and readily positioned as a business method and a means to enhance management decisions.
This article presents a new Road Damage Detection method using the Hunger Games Search with Elman Neural Network (RDD–HGSENN) on high-resolution remote sensing images. The presented RDD–HGSENN technique mainly aims to determine road damage using remote sensing images. In the presented RDD–HGSENN technique, the RetinaNet model is applied for road damage detection. In addition, the RDD–HGSENN technique performs road damage classification using the ENN model. To tune the ENN parameters automatically, the HGS algorithm is exploited in this work. To examine the enhanced outcomes of the presented RDD–HGSENN technique, a comprehensive set of simulations was conducted.
The rest of the paper is organized as follows. Section 2 offers a brief overview of the existing models. Section 3 discusses the proposed RDD–HGSENN technique, and Section 4 provides an experimental validation. Finally, Section 5 concludes the paper, discussing its key findings.

2. Related Works

In [16], the authors formulated a novel sensor technology that identifies road damage using a DL-based image processing method. The modelled technology involves semi-supervised learning and a super-resolution technique based on generative adversarial networks (GANs); the super-resolution component enhances the road image quality to obtain a clear view of the damaged region. In [17], an innovative technique based on the Tracking, Learning, and Detection (TLD) framework was offered to detect damaged roads in post-disaster high-resolution RSIs. Firstly, a spoke wheel operator was leveraged to describe the primary template of a road. After that, the TLD framework was utilized to detect the suspected damaged road areas. At last, the damaged road regions were derived by pruning the roads incorrectly detected as damaged. Yuan et al. [18] presented FedRD, a new privacy-preserving edge–cloud and federated-learning-based framework for intelligent hazardous road damage detection and warning. In FedRD, an innovative hazardous road damage detection method was formulated by exploiting the benefits of ordered feature fusion. A novel individualized differential privacy technique with pixelization was modelled to protect user privacy before sharing data.
Fan and Liu [19] introduced a new road damage detection technique based on unsupervised disparity map segmentation. The authors derived its numerical solution to the energy minimization problem using non-linear optimization approaches. The transformed disparity map can be segmented using Otsu's thresholding technique, and the damaged road areas are thereby identified. This technique requires no parameters when identifying road damage. Fan et al. [20] presented a real-time road damage inspection mechanism, embedded in drones, that reconstructs the 3D road geometry using stereo vision along with visual simultaneous localization and mapping and disparity map segmentation to localize and detect road damage. Furthermore, the 3D road map can be updated, allowing road maintenance supervisors to easily assess road conditions.
Kortmann et al. [21] examined the automatic detection of various kinds of road damage in images from front-facing cameras in a vehicle. This DL method uses a pre-trained Faster Region-based CNN (Faster R-CNN). In the initial step, the authors categorize the images, followed by the application of networks trained for different areas. In [22], the authors modelled an innovative technique for the semi-automatic detection and evaluation of destroyed roads in urban regions using pre-event vector maps and pre- and post-earthquake QuickBird images. Many texture and spectral features were considered, and a GA was utilized to find the optimal features. Then, an SVM classifier was applied to the optimal features to detect damage.

3. The Proposed Model

In this article, a new RDD–HGSENN method is proposed for road damage detection using high-resolution RSIs. The presented RDD–HGSENN technique mainly focuses on the detection and classification of road damage in remote sensing images. In the RDD–HGSENN technique, the RetinaNet model is applied for road damage detection. Moreover, the RDD–HGSENN technique performs road damage classification using the HGS with the ENN model. Figure 1 illustrates the working process of the RDD–HGSENN system.

3.1. Road Damage Detection: The RetinaNet Model

In the initial stage, the RetinaNet model is applied for road damage detection. RetinaNet, initially developed by Facebook AI Research, is a single-shot object detector with state-of-the-art performance [23]. The network structure comprises three individual components: a ResNet-50 backbone, a Feature Pyramid Network, and "heads" or sub-networks for regression and classification. Each component is described individually here. ResNet, initially developed by He et al., comprises a class of neural networks that differ from the conventional CNN by having skip connections between layers. The additional skip connections permit deep neural networks to be trained with higher performance compared to their predecessors. ResNet-50 is a Residual Network with 50 layers pretrained on ImageNet. The Feature Pyramid Network (FPN), developed by Facebook AI Research, is a neural network structure that seeks to manage scale variance in objects within an image. It was inspired by the conventional Image Pyramid of classical Computer Vision, which manages scale variance by sampling an image at different resolutions and running the desired model on every re-sampled image. The FPN achieves the same effect by leveraging the tendency of the deeper layers of ResNet to have low resolution but rich semantic information. Thus, to accomplish precise object detection at different scales, several feature layers in the ResNet framework are selected, and rich multiscale feature layers are generated by integrating shallower feature layers element-wise with the deeper feature layers via nearest-neighbor up-sampling.
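To illustrate the top-down pathway just described, the following is a minimal PyTorch sketch of a single FPN merge step; the module name `FPNLevel` and the channel sizes are illustrative assumptions rather than details taken from the paper.

```python
import torch.nn as nn
import torch.nn.functional as F

class FPNLevel(nn.Module):
    """One top-down FPN merge step (sketch): a 1x1 lateral convolution on a
    shallower backbone feature map, summed with the nearest-neighbor
    up-sampled deeper pyramid level, then smoothed with a 3x3 convolution."""

    def __init__(self, c_lateral, c_out=256):
        super().__init__()
        self.lateral = nn.Conv2d(c_lateral, c_out, kernel_size=1)
        self.smooth = nn.Conv2d(c_out, c_out, kernel_size=3, padding=1)

    def forward(self, c_shallow, p_deep):
        # Up-sample the deeper (lower-resolution, semantically richer) level
        # to the shallower level's spatial size, then merge element-wise.
        up = F.interpolate(p_deep, size=c_shallow.shape[-2:], mode="nearest")
        return self.smooth(self.lateral(c_shallow) + up)
```

For example, with a ResNet-50 backbone, the pyramid level P4 would be produced as `FPNLevel(c_lateral=1024)(c4, p5)`, merging backbone stage C4 with the deeper level P5.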
The feature maps generated by the FPN are passed to fully convolutional subnetworks that estimate the location and class of the different objects in the image. The RetinaNet detector uses anchors as initial points for the estimation of bounding boxes; hence, the subnetworks produce output for all the bounding boxes. The class prediction subnetwork has an output dimension of W × H × K × A, where K represents the class count (K = 2, background and vehicle), and A indicates the anchor count. The position estimation subnetwork has an output dimension of W × H × 4A, whereby the four variables for each anchor are offsets repositioning the anchor over the recognized object. In the presented method, further subnetworks estimating metadata about the orientation of objects, the satellite's orientation, and the ground sample distance were added as additional multitask learning processes. A major contribution toward object recognition is the Focal Loss function, which addresses the class imbalance problem in object detection, where the commonest class is the uninformative background class. To overcome the imbalance between foreground and background classes, the Focal Loss function down-weights the loss of examples the model already classifies with high confidence.
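The following is a minimal sketch of the binary focal loss in PyTorch; the defaults alpha = 0.25 and gamma = 2.0 are those of the original RetinaNet paper and are assumed here, not values reported in this article.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Binary focal loss (Lin et al.) over anchor classification scores.

    logits:  raw class scores, any shape
    targets: binary labels in {0, 1} as floats, same shape as logits
    """
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)          # prob. of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    # (1 - p_t)^gamma shrinks the loss of confidently classified (mostly
    # background) anchors, focusing training on hard examples.
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()
```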

3.2. Road Damage Classification: Optimal ENN Model

Once road damage is identified, the next stage involves its classification using the ENN model. The ENN comprises input, hidden, context, and output layers [24]. The main configuration of the ENN module is analogous to that of the FFNN as regards the connections, whereas the context layer is the same as in the MLP. The context layer obtains its input from the hidden layer and stores that layer's earlier values. $W^{hi}$, $W^{hc}$, and $W^{ho}$ represent the external input, context, and output weight matrices, respectively. Figure 2 depicts the infrastructure of the ENN. The dimension of the input and output layers is denoted by $n$ and that of the context layer by $m$, with $x^1(t) = [x_1^1(t), x_2^1(t), \ldots, x_n^1(t)]^T$ and $y(t) = [y_1(t), y_2(t), \ldots, y_n(t)]^T$.
The input units of the ENN are given by:

$$u_i(l) = e_i(l), \quad i = 1, 2, \ldots, n \quad (1)$$
Here, $l$ denotes the round index; the input to the $k$-th hidden node at round $l$ is:

$$v_k(l) = \sum_{j=1}^{N} \omega_{kj}^{1}(l)\, x_j^{c}(l) + \sum_{i=1}^{n} \omega_{ki}^{2}(l)\, u_i(l), \quad k = 1, 2, \ldots, N \quad (2)$$

Here, $x_j^{c}(l)$ denotes the signal propagated from the $j$-th context node, and $\omega_{kj}^{1}(l)$ denotes the weight of the connection from the $j$-th context node to the $k$-th hidden node. Eventually, the outcome of the hidden layer is fed into the context layer, as shown below:
$$W_k(l) = f_0(\bar{v}_k(l)) \quad (3)$$

where

$$\bar{v}_k(l) = \frac{v_k(l)}{\max_k \{v_k(l)\}} \quad (4)$$

is the normalized value of the hidden state. The context layer is then updated as:

$$C_k(l) = \beta\, C_k(l-1) + W_k(l-1), \quad k = 1, 2, \ldots, N \quad (5)$$
In Equation (5), $\beta \in [0, 1]$ specifies the gain of the self-connected feedback. Eventually, the output unit of the network is:

$$y_o(l) = \sum_{k=1}^{N} \omega_{ok}^{3}(l)\, W_k(l), \quad o = 1, 2, \ldots, n \quad (6)$$

In the above expression, $\omega_{ok}^{3}$ describes the connection weight from the $k$-th hidden node to the $o$-th output node.
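To make Equations (1)–(6) concrete, here is a minimal NumPy sketch of one ENN forward pass; the function name, the tanh activation, and the small epsilon added to the normalization of Equation (4) are illustrative assumptions, not details from the paper.

```python
import numpy as np

def elman_forward(inputs, W_in, W_ctx, W_out, beta=0.5, f=np.tanh):
    """Forward pass of an Elman network over a sequence (Eqs. (1)-(6)).

    inputs: external input sequence, shape (T, n)
    W_in:   input-to-hidden weights,   shape (N, n)  -- W^{hi}
    W_ctx:  context-to-hidden weights, shape (N, N)  -- W^{hc}
    W_out:  hidden-to-output weights,  shape (n, N)  -- W^{ho}
    beta:   self-feedback gain of the context units, in [0, 1]
    """
    N = W_in.shape[0]
    c = np.zeros(N)                       # context layer: previous hidden state
    outputs = []
    for u in inputs:                      # Eq. (1): u_i(l) = e_i(l)
        v = W_ctx @ c + W_in @ u          # Eq. (2): hidden pre-activation
        v_bar = v / (np.max(v) + 1e-12)   # Eq. (4): normalization
        h = f(v_bar)                      # Eq. (3): hidden activation
        c = beta * c + h                  # Eq. (5): context update
        outputs.append(W_out @ h)         # Eq. (6): output layer
    return np.array(outputs)
```

In the RDD–HGSENN pipeline, the trainable weights of such a network are the quantities tuned by the HGS algorithm described next.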
In the final stage, the HGS algorithm is applied to tune the parameters of the ENN model. Hunger produces internal requirements that induce animals to forage [25]. There is no doubt that different stimuli affect the life of animals and that animals lacking calories will seek food. To satisfy their needs, animals forage continuously in their environments, alternating between competitive, exploratory, and defensive activities. Animals can dynamically alter their search patterns based on their hunger level. When hunger is high, individual animals may seek food together in a restricted region, moving in the same direction and concentrating in areas rich in resources. This efficiently stimulates animals to organize socially. This behavior becomes well-established once animals with dissimilar hunger levels are dispersed over similar regions. The mathematical model of the described hunger-driven behavior can be defined as follows. The key stages of the HGS are described below, with a consolidated code sketch given after Equation (16).
1. Population initialization: to determine the first locations for the optimal search, the population is initialized. The HGS approach uses $N$ real-valued vectors of dimension $D$, and each member of the population is denoted by $X_i = [X_{i1}, X_{i2}, \ldots, X_{iD}]^T$, $i = 1, 2, \ldots, N$. In the original HGS model, each population member is generated according to the following formula:
$$X_{N \times D} = rand(N, D) \times (ub - lb) + lb \quad (7)$$
In Equation (7), $N \times D$ defines the dimensions of the population matrix, where $N$ signifies the population count and $D$ represents the dimension of the search space; $ub$ and $lb$ represent the upper and the lower bounds of the search region.
2. Approach food: this phase is described as follows:
$$X(t+1) = \begin{cases} X(t)\,(1 + randn(1)), & r_1 < l \\ W_1 X_b + R\, W_2\, |X_b - X(t)|, & r_1 > l,\ r_2 > E \\ W_1 X_b - R\, W_2\, |X_b - X(t)|, & r_1 > l,\ r_2 < E \end{cases} \quad (8)$$
In Equation (8), $X(t)$ represents the current individual, and $X_b$ signifies the position of the individual with the best fitness value; $randn(1)$ follows a standard normal distribution with mean 0 and variance 1; $r_1$ and $r_2$ are random numbers in the range $[0, 1]$, and $R$ takes values in the range $[-a, a]$. $W_1$ and $W_2$ are the hunger weights. The variable $l$ is intended to facilitate the application to varied populations, upgrading the current search agent and thereby enhancing the algorithm's performance. The parameter $E$ efficiently controls the search direction in the search region, thereby increasing the diversity of the population; it is determined as follows:
$$E = \mathrm{sech}\left(|F_i - BF|\right) \quad (9)$$

In Equation (9), $i$ is a positive integer in $[1, N]$, $F_i$ signifies the fitness of the $i$-th population member, and $BF$ signifies the best fitness attained so far; $\mathrm{sech}$ is the hyperbolic secant, $\mathrm{sech}(x) = 2/(e^{x} + e^{-x})$. The variable $R$ dynamically controls the search range of the search agents:
$$R = 2a \times rand - a \quad (10)$$

$$a = 2 \times \left(1 - \frac{t}{T}\right) \quad (11)$$

Here, $rand$ is a random number in $[0, 1]$, $t$ denotes the current iteration count, and $T$ denotes the maximal iteration count.
3. Hunger role: here, the hunger features of the search agents are simulated mathematically. In Equation (8), $W_1$ and $W_2$ characterize the extent of the population's starvation, which dynamically controls the position updates of the search agents.
$W_1$ is determined with Equation (12):

$$W_1(i) = \begin{cases} hungry(i) \times \dfrac{N}{SHungry} \times r_4, & r_3 < l \\ 1, & r_3 > l \end{cases} \quad (12)$$
The equation for $W_2$ in Equation (8) is shown below:

$$W_2(i) = \left(1 - \exp\left(-|hungry(i) - SHungry|\right)\right) \times r_5 \times 2 \quad (13)$$
In Equation (13), $hungry(i)$ specifies the hunger level of each individual in the population, $N$ denotes the number of individuals in the population, and $SHungry$ indicates the sum of the hunger levels of all the population members; $r_3$, $r_4$, and $r_5$ are random values within $[0, 1]$. The hunger level is updated as follows:

$$hungry(i) = \begin{cases} 0, & AllFitness(i) = BF \\ hungry(i) + H, & AllFitness(i) \neq BF \end{cases} \quad (14)$$
To better model the hunger level of every population member, the hunger level is tied to the fitness value, where $AllFitness(i)$ indicates the fitness value of the $i$-th population member at the current iteration, and $BF$ denotes the best fitness value of the population up to the current iteration. When the fitness value of the current search agent equals $BF$, that agent is sated and feels no hunger. In contrast, if the agent is in a starvation state, the starvation activation variable $H$ is determined as:
$$TH = \frac{F_i - BF}{WF - BF} \times r_6 \times 2 \times (ub - lb) \quad (15)$$

$$H = \begin{cases} LH \times (1 + r), & TH < LH \\ TH, & TH \geq LH \end{cases} \quad (16)$$
Here, $F_i$ indicates the fitness value of each individual in the population, while $BF$ and $WF$ denote the best and worst fitness values attained up to the current iteration. The upper and lower bounds of the search region are denoted as $ub$ and $lb$; $r$ and $r_6$ are random values within the range $[0, 1]$. The hunger sensation $H$ is bounded below by the limit $LH$, which generally takes the value of 100.
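The following is a minimal NumPy sketch of one HGS iteration implementing Equations (8)–(16) for a minimization problem; the function name `hgs_step`, the epsilon guards, and the boundary clipping are illustrative choices not specified in the paper.

```python
import numpy as np

def hgs_step(X, fitness, hungry, t, T, lb, ub, l=0.08, LH=100.0):
    """One Hunger Games Search update (sketch). X: (N, D) population;
    fitness, hungry: (N,) arrays; lb, ub: scalar bounds; l: switch prob."""
    N, D = X.shape
    BF, WF = fitness.min(), fitness.max()
    Xb = X[fitness.argmin()]                       # best individual so far
    for i in range(N):                             # hunger update, Eqs. (14)-(16)
        if fitness[i] == BF:
            hungry[i] = 0.0
        else:
            TH = (fitness[i] - BF) / (WF - BF + 1e-12) \
                 * np.random.rand() * 2 * (ub - lb)            # Eq. (15)
            hungry[i] += LH * (1 + np.random.rand()) if TH < LH else TH  # Eq. (16)
    SH = hungry.sum() + 1e-12
    a = 2 * (1 - t / T)                            # Eq. (11)
    for i in range(N):
        E = 2.0 / (np.exp(fitness[i] - BF) + np.exp(BF - fitness[i]))  # Eq. (9)
        R = 2 * a * np.random.rand() - a           # Eq. (10)
        W1 = hungry[i] * N / SH * np.random.rand() \
             if np.random.rand() < l else 1.0      # Eq. (12)
        W2 = (1 - np.exp(-abs(hungry[i] - SH))) * np.random.rand() * 2  # Eq. (13)
        r1, r2 = np.random.rand(), np.random.rand()
        if r1 < l:                                 # Eq. (8), exploration branch
            X[i] = X[i] * (1 + np.random.randn())
        elif r2 > E:
            X[i] = W1 * Xb + R * W2 * np.abs(Xb - X[i])
        else:
            X[i] = W1 * Xb - R * W2 * np.abs(Xb - X[i])
    np.clip(X, lb, ub, out=X)                      # keep agents inside bounds
    return X, hungry
```

Starting from a population initialized per Equation (7), e.g. `X = np.random.rand(N, D) * (ub - lb) + lb`, the step would be repeated for T iterations with the ENN classification error as the fitness function.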

4. Experimental Validation

The proposed model was simulated using the Python 3.6.5 tool on a PC with an i5-8600K CPU, a GeForce GTX 1050 Ti 4 GB GPU, 16 GB RAM, a 250 GB SSD, and a 1 TB HDD. The parameter settings were: learning rate, 0.01; dropout, 0.5; batch size, 5; epoch count, 50; and activation, ReLU. In this study, the road damage classification results of the RDD–HGSENN method were assessed using a dataset of 4000 images, as reported in Table 1. Figure 3 shows some sample images.
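As a reproducibility aid, the reported hyperparameters and the two train/test splits evaluated below can be sketched as follows; the placeholder arrays, the stratified split, and the fixed seed are assumptions for illustration rather than details from the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hyperparameters as reported in the paper.
config = {"learning_rate": 0.01, "dropout": 0.5, "batch_size": 5,
          "epochs": 50, "activation": "relu"}

# Placeholder data standing in for the 4000-image dataset (4 classes x 1000).
labels = np.repeat(np.arange(4), 1000)
images = np.zeros((4000, 224, 224, 3), dtype=np.float32)  # dummy image tensors

# The paper evaluates two TR:TS splits: 80:20 and 70:30.
for test_size in (0.20, 0.30):
    X_tr, X_ts, y_tr, y_ts = train_test_split(
        images, labels, test_size=test_size,
        stratify=labels, random_state=42)
```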
The road damage classification results of the RDD–HGSENN model were examined using the confusion matrices shown in Figure 4. The confusion matrices show that the RDD–HGSENN model properly recognized all types of road damage.
Table 2 provides the road damage detection output of the RDD–HGSENN method for 80% of the TR database and 20% of the TS database. Figure 5 showcases the road damage classification results of the RDD–HGSENN method for 80% of the TR database. The results show that the RDD–HGSENN model identified all distinct kinds of road damage. For instance, the RDD–HGSENN model detected linear cracks with an $accu_y$ of 96.59% and peeling with an $accu_y$ of 97.97%. Finally, the RDD–HGSENN approach detected potholes with an $accu_y$ of 98.47%.
Figure 6 displays the road damage classification results of the RDD–HGSENN method with 20% of the TS database. The results show that the RDD–HGSENN algorithm identified all distinct kinds of road damage. For example, the RDD–HGSENN technique detected linear cracks with an $accu_y$ of 96.63% and peeling with an $accu_y$ of 98.38%. Finally, the RDD–HGSENN approach identified potholes with an $accu_y$ of 98.62%.
Table 3 presents the road damage detection output of the RDD–HGSENN system with 70% of the TR database and 30% of the TS database. Figure 7 portrays the road damage classification outcomes of the RDD–HGSENN approach with 70% of the TR database. The results show that the RDD–HGSENN method identified all distinct kinds of road damage. For example, the RDD–HGSENN technique detected linear cracks with an $accu_y$ of 96.54% and peeling with an $accu_y$ of 97.04%. Finally, the RDD–HGSENN technique detected potholes with an $accu_y$ of 97.36%.
Figure 8 exhibits the road damage classification results of the RDD–HGSENN technique with 30% of the TS database. The results show that the RDD–HGSENN technique identified all distinct kinds of road damage. For example, the RDD–HGSENN approach detected linear cracks with an $accu_y$ of 97.17% and peeling with an $accu_y$ of 97.92%. Finally, the RDD–HGSENN approach detected potholes with an $accu_y$ of 98.58%.
The training accuracy (TRA) and validation accuracy (VLA) reached by the RDD–HGSENN approach using the test database are shown in Figure 9. The simulation result indicates that the RDD–HGSENN methodology reached high values of TRA and VLA. In addition, the VLA was higher than the TRA.
The training loss (TRL) and validation loss (VLL) attained by the RDD–HGSENN system using the test database are shown in Figure 10. The simulation results indicated that the RDD–HGSENN approach exhibited low values of TRL and VLL. In particular, the VLL was lower than the TRL.
A clear precision–recall examination of the RDD–HGSENN method using the test database is depicted in Figure 11. The figure shows that the RDD–HGSENN system achieved improved precision–recall values for every class label.
A brief ROC examination of the RDD–HGSENN system using the test database is portrayed in Figure 12. The results indicated that the RDD–HGSENN was able to distinguish different classes in the test database.
Lastly, an extensive comparative study of the RDD–HGSENN model against recent DL models, reported in Table 4, was conducted [26]. Figure 13 reports a detailed $accu_y$ assessment of the RDD–HGSENN method and the existing methods. The figure shows that the RDD–HGSENN method gained a higher $accu_y$ of 98.13% compared with the MobileNet, AlexNet, GoogleNet, and RetinaNet models, whose $accu_y$ was 90.03%, 92.84%, 91.47%, and 90.70%, respectively.
Finally, comparative results of the RDD–HGSENN model are provided in terms of $prec_n$, $reca_l$, and $F_{score}$, as shown in Figure 14. The results demonstrated that the MobileNet and RetinaNet models showed lower classification outcomes. The AlexNet and GoogleNet methods reached a moderate performance. However, the RDD–HGSENN model showed enhanced results, with maximum $prec_n$, $reca_l$, and $F_{score}$ of 96.25%, 96.34%, and 96.29%, respectively. Therefore, the RDD–HGSENN model can be employed for accurate road damage classification.
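For reference, the per-class measures reported above can be computed from model predictions as in the following generic scikit-learn sketch; this is not the authors' evaluation code.

```python
from sklearn.metrics import (accuracy_score, confusion_matrix,
                             precision_recall_fscore_support)

def report(y_true, y_pred):
    """Overall accuracy plus per-class precision, recall, and F-score."""
    prec, rec, f1, _ = precision_recall_fscore_support(y_true, y_pred)
    return {"accuracy": accuracy_score(y_true, y_pred),
            "precision": prec, "recall": rec, "f_score": f1,
            "confusion_matrix": confusion_matrix(y_true, y_pred)}
```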

5. Conclusions

In this article, a new RDD–HGSENN method was developed for road damage detection using high-resolution RSIs. The presented RDD–HGSENN technique mainly focuses on the detection and classification of road damage in remote sensing images. In the presented RDD–HGSENN technique, the RetinaNet model is applied for road damage detection. Moreover, the RDD–HGSENN technique performs road damage classification using the ENN model. Finally, the HGS algorithm was applied to tune the ENN parameters. To examine the enhanced outcomes of the presented RDD–HGSENN technique, a comprehensive set of simulations was conducted. The experimental outcomes demonstrated the improved performance of the RDD–HGSENN technique over recent approaches, with a maximum accuracy of 98.13%. In the future, we plan to extend the RDD–HGSENN technique with advanced DL classifiers. In addition, the computational complexity of the proposed model can be examined in future work.

Author Contributions

Methodology, M.A.D. and A.Y.; Software, O.A.; Validation, A.A., K.A. and O.A.; Investigation, M.A.D. and O.A.; Resources, A.Y. and H.M.; Writing—review & editing, A.A., K.A., R.A. and H.M.; Supervision, A.A.M.; Funding acquisition, A.A.M. All authors have read and agreed to the published version of the manuscript.

Funding

The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through the Small Groups Project under grant number (372/43). This research was also supported by the Princess Nourah bint Abdulrahman University Researchers Supporting Project (PNURSP2022R151), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia, and by the Research Supporting Project (RSP2022R444), King Saud University, Riyadh, Saudi Arabia.

Institutional Review Board Statement

This article does not contain any studies with human participants performed by any of the authors.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing is not applicable to this article, as no datasets were generated during the current study.

Conflicts of Interest

The authors declare that they have no conflicts of interest. The manuscript was written through the contributions of all authors. All authors have approved the final version of the manuscript.

References

1. Cao, M.T.; Tran, Q.V.; Nguyen, N.M.; Chang, K.T. Survey on performance of deep learning models for detecting road damages using multiple dashcam image resources. Adv. Eng. Inform. 2020, 46, 101182.
2. Hruza, P.; Mikita, T.; Tyagur, N.; Krejza, Z.; Cibulka, M.; Prochazkova, A.; Patocka, Z. Detecting forest road wearing course damage using different methods of remote sensing. Remote Sens. 2018, 10, 492.
3. Li, J.; Zhao, X.; Li, H. Method for detecting road pavement damage based on deep learning. In Health Monitoring of Structural and Biological Systems XIII; Fromme, P., Su, Z., Eds.; SPIE: Bellingham, WA, USA, 2019; Volume 10972, pp. 517–526.
4. Azimi, M.; Eslamlou, A.D.; Pekcan, G. Data-driven structural health monitoring and damage detection through deep learning: State-of-the-art review. Sensors 2020, 20, 2778.
5. Arya, D.; Maeda, H.; Ghosh, S.K.; Toshniwal, D.; Mraz, A.; Kashiyama, T.; Sekimoto, Y. Deep learning-based road damage detection and classification for multiple countries. Autom. Constr. 2021, 132, 103935.
6. Yin, J.; Qu, J.; Huang, W.; Chen, Q. Road damage detection and classification based on multi-level feature pyramids. KSII Trans. Internet Inf. Syst. (TIIS) 2021, 15, 786–799.
7. Heidari, M.J.; Najafi, A.; Borges, J.G. Forest Roads Damage Detection Based on Objected Detection Deep Learning Algorithms. 2022.
8. Van Der Horst, B.B.; Lindenbergh, R.C.; Puister, S.W.J. Mobile laser scan data for road surface damage detection. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, 42, 1141–1148.
9. Shim, S.; Kim, J.; Lee, S.W.; Cho, G.C. Road surface damage detection based on hierarchical architecture using lightweight auto-encoder network. Autom. Constr. 2021, 130, 103833.
10. Angulo, A.; Vega-Fernández, J.A.; Aguilar-Lobo, L.M.; Natraj, S.; Ochoa-Ruiz, G. Road damage detection acquisition system based on deep neural networks for physical asset management. In Proceedings of the 18th Mexican International Conference on Artificial Intelligence, MICAI 2019, Xalapa, Mexico, 27 October–2 November 2019; Springer: New York, NY, USA, 2019; pp. 3–14.
11. Jia, J.; Sun, H.; Jiang, C.; Karila, K.; Karjalainen, M.; Ahokas, E.; Khoramshahi, E.; Hu, P.; Chen, C.; Xue, T.; et al. Review on active and passive remote sensing techniques for road extraction. Remote Sens. 2021, 13, 4235.
12. Chen, Z.; Deng, L.; Luo, Y.; Li, D.; Junior, J.M.; Gonçalves, W.N.; Nurunnabi, A.A.M.; Li, J.; Wang, C.; Li, D. Road extraction in remote sensing data: A survey. Int. J. Appl. Earth Obs. Geoinf. 2022, 112, 102833.
13. Shao, Z.; Zhou, Z.; Huang, X.; Zhang, Y. MRENet: Simultaneous extraction of road surface and road centerline in complex urban scenes from very high-resolution images. Remote Sens. 2021, 13, 239.
14. Zhang, L.; Lan, M.; Zhang, J.; Tao, D. Stagewise unsupervised domain adaptation with adversarial self-training for road segmentation of remote-sensing images. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–13.
15. Chen, Z.; Wang, C.; Li, J.; Fan, W.; Du, J.; Zhong, B. Adaboost-like End-to-End multiple lightweight U-nets for road extraction from optical remote sensing images. Int. J. Appl. Earth Obs. Geoinf. 2021, 100, 102341.
16. Shim, S.; Kim, J.; Lee, S.W.; Cho, G.C. Road damage detection using super-resolution and semi-supervised learning with generative adversarial network. Autom. Constr. 2022, 135, 104139.
17. Zhao, K.; Liu, J.; Wang, Q.; Wu, X.; Tu, J. Road Damage Detection from Post-Disaster High-Resolution Remote Sensing Images Based on TLD Framework. IEEE Access 2022, 10, 43552–43561.
18. Yuan, Y.; Yuan, Y.; Baker, T.; Kolbe, L.M.; Hogrefe, D. FedRD: Privacy-preserving adaptive Federated learning framework for intelligent hazardous Road Damage detection and warning. Future Gener. Comput. Syst. 2021, 125, 385–398.
19. Fan, R.; Liu, M. Road damage detection based on unsupervised disparity map segmentation. IEEE Trans. Intell. Transp. Syst. 2019, 21, 4906–4911.
20. Fan, R.; Cheng, J.; Yu, Y.; Deng, J.; Giakos, G.; Pitas, I. Long-awaited next-generation road damage detection and localization system is finally here. In Proceedings of the 2021 29th European Signal Processing Conference (EUSIPCO), Dublin, Ireland, 23–27 August 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 641–645.
21. Kortmann, F.; Talits, K.; Fassmeyer, P.; Warnecke, A.; Meier, N.; Heger, J.; Drews, P.; Funk, B. Detecting various road damage types in global countries utilizing Faster R-CNN. In Proceedings of the 2020 IEEE International Conference on Big Data (Big Data), Atlanta, GA, USA, 10–13 December 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 5563–5571.
22. Izadi, M.; Mohammadzadeh, A.; Haghighattalab, A. A new neuro-fuzzy approach for post-earthquake road damage assessment using GA and SVM classification from QuickBird satellite images. J. Indian Soc. Remote Sens. 2017, 45, 965–977.
23. Hill, C. Automatic Detection of Vehicles in Satellite Images for Economic Monitoring. Doctoral Dissertation, University of South Florida, Tampa, FL, USA, 2021.
24. Sitharthan, R.; Krishnamoorthy, S.; Sanjeevikumar, P.; Holm-Nielsen, J.B.; Singh, R.R.; Rajesh, M. Torque ripple minimization of PMSM using an adaptive Elman neural network-controlled feedback linearization-based direct torque control strategy. Int. Trans. Electr. Energy Syst. 2021, 31, 12685.
25. Zhou, X.; Gui, W.; Heidari, A.A.; Cai, Z.; Elmannai, H.; Hamdi, M.; Liang, G.; Chen, H. Advanced Orthogonal Learning and Gaussian Barebone Hunger Games for Engineering Design. J. Comput. Des. Eng. 2022, 9, 1699–1736.
26. Ochoa-Ruiz, G.; Angulo-Murillo, A.A.; Ochoa-Zezzatti, A.; Aguilar-Lobo, L.M.; Vega-Fernández, J.A.; Natraj, S. An asphalt damage dataset and detection system based on RetinaNet for road conditions assessment. Appl. Sci. 2020, 10, 3974.
Figure 1. Working process of the RDD–HGSENN system.
Figure 2. Framework of the ENN.
Figure 3. Sample images.
Figure 4. Confusion matrices of the RDD–HGSENN system. (a,b) TR/TS databases, 80:20; (c,d) TR/TS databases, 70:30.
Figure 5. Road damage detection outcome of the RDD–HGSENN system with 80% of the TR database.
Figure 6. Road damage detection outcome of the RDD–HGSENN system with 20% of the TS database.
Figure 7. Road damage detection outcome of the RDD–HGSENN system with 70% of the TR database.
Figure 8. Road damage detection outcome of the RDD–HGSENN system with 30% of the TS database.
Figure 9. TRA and VLA analysis of the RDD–HGSENN system.
Figure 10. TRL and VLL analysis of the RDD–HGSENN system.
Figure 11. Precision–recall analysis of the RDD–HGSENN system.
Figure 12. ROC analysis of the RDD–HGSENN system.
Figure 13. $Accu_y$ analysis of the RDD–HGSENN system compared with recent DL approaches.
Figure 14. Comparative analysis of the RDD–HGSENN system with recent DL approaches.
Table 1. Dataset details.

Class | No. of Samples
Linear Cracks | 1000
Peeling | 1000
Alligator Cracks | 1000
Potholes | 1000
Total Number of Samples | 4000
Table 2. Road damage detection outcome of the RDD–HGSENN system for 80:20 of the TR/TS databases.

Class | Accuracy | Precision | Recall | F-Score | AUC Score
Training Phase (80%)
Linear Cracks | 96.59 | 94.53 | 91.55 | 93.02 | 94.90
Peeling | 97.97 | 96.50 | 95.14 | 95.81 | 97.01
Alligator Cracks | 97.72 | 94.13 | 97.04 | 95.56 | 97.49
Potholes | 98.47 | 96.37 | 97.67 | 97.01 | 98.21
Average | 97.69 | 95.38 | 95.35 | 95.35 | 96.90
Testing Phase (20%)
Linear Cracks | 96.63 | 93.69 | 93.24 | 93.46 | 95.52
Peeling | 98.38 | 98.12 | 95.87 | 96.98 | 97.59
Alligator Cracks | 98.88 | 96.41 | 98.95 | 97.66 | 98.90
Potholes | 98.62 | 96.77 | 97.30 | 97.04 | 98.16
Average | 98.13 | 96.25 | 96.34 | 96.29 | 97.54
Table 3. Road damage detection outcome of the RDD–HGSENN system for 70:30 of the TR/TS databases.

Class | Accuracy | Precision | Recall | F-Score | AUC Score
Training Phase (70%)
Linear Cracks | 96.54 | 89.61 | 97.60 | 93.43 | 96.89
Peeling | 97.04 | 95.91 | 92.26 | 94.05 | 95.46
Alligator Cracks | 96.64 | 96.81 | 89.73 | 93.14 | 94.36
Potholes | 97.36 | 93.45 | 95.68 | 94.55 | 96.78
Average | 96.89 | 93.94 | 93.82 | 93.79 | 95.87
Testing Phase (30%)
Linear Cracks | 97.17 | 91.64 | 97.27 | 94.37 | 97.20
Peeling | 97.92 | 96.48 | 94.81 | 95.64 | 96.86
Alligator Cracks | 97.50 | 98.14 | 91.35 | 94.62 | 95.40
Potholes | 98.58 | 96.43 | 98.48 | 97.44 | 98.55
Average | 97.79 | 95.67 | 95.48 | 95.52 | 97.00
Table 4. Comparative analysis of the RDD–HGSENN system with the currently used DL approaches.

Methods | Accuracy | Precision | Recall | F-Score
RDD–HGSENN | 98.13 | 96.25 | 96.34 | 96.29
MobileNet | 90.03 | 90.88 | 88.97 | 89.15
AlexNet | 92.84 | 93.52 | 93.83 | 92.95
GoogleNet | 91.47 | 92.23 | 91.33 | 91.39
RetinaNet | 90.70 | 89.45 | 89.80 | 90.17
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
