Workflow for Off-Site Bridge Inspection Using Automatic Damage Detection-Case Study of the Pahtajokk Bridge
Abstract
1. Introduction
1.1. Autonomous Bridge Inspection Approaches
1.2. Sensors and Vehicles
1.3. Damage Recognition
1.4. Hierarchical 3D Model Generation
1.5. Research Significance
2. Field Deployment and Methodology
2.1. Case Study
2.2. Workflow
2.3. Data Acquisition
2.4. Data Preparation
3. Experiments and Results
3.1. ConvNets Training, Validation, and Testing
3.1.1. Bridge Component Detection
3.1.2. Areas of Potential Damage Detection
3.1.3. Pixel-Wise Damage Detection
3.2. Analyzing the Effect of Brightness and Blurring on Computer Vision Detection and Point Cloud Generation
3.3. Intelligent Hierarchical DSfM
3.4. Evaluation Framework
3.5. Damage Quantification
4. Discussion
5. Conclusions
- Comparisons of semantic segmentation for pixel-wise bridge component and damage detection show that U-Net performs better for joint gap segmentation (small objects), while SegNet is more efficient with large-scale objects and is therefore better at bridge component detection;
- Image normalization and augmentation expand the diversity of the generated dataset through random rescaling, horizontal flips, changes to brightness, contrast, and color, and random cropping. The network thus learned to extract features under varying conditions, although it remains important to capture images with minimal blurring. The maximum camera movement allowed to achieve the best performance in pixel-wise bridge component and joint gap segmentation is given by Equation (2);
- Applying semantic segmentation within the SfM workflow, to mask the background and other unnecessary parts of raw UAV images, showed potential improvements in both computation time and accuracy of point cloud generation;
- As part of verification and error estimation, the point cloud generated with the proposed method was compared to one generated with regular SfM for a region of damage. The proposed method produced a higher point cloud density and lower deviation than regular SfM, demonstrating the importance of imaging distance for accurate detection of damaged areas.
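The evaluation metrics reported for the trained networks (global accuracy, mean accuracy, mean IoU, weighted IoU) follow standard definitions and can be computed from a per-class confusion matrix. A minimal sketch in NumPy (the boundary F1 (BF) score is omitted for brevity; the function and variable names are illustrative, not from the paper):

```python
import numpy as np

def segmentation_metrics(conf):
    """Compute common semantic-segmentation metrics from a
    (classes x classes) confusion matrix: rows = true, cols = predicted."""
    conf = np.asarray(conf, float)
    tp = np.diag(conf)                    # correctly classified pixels per class
    true_total = conf.sum(axis=1)         # pixels belonging to each true class
    pred_total = conf.sum(axis=0)         # pixels predicted as each class
    iou = tp / (true_total + pred_total - tp)
    return {
        "GlobalAccuracy": tp.sum() / conf.sum(),
        "MeanAccuracy": (tp / true_total).mean(),   # mean per-class recall
        "MeanIoU": iou.mean(),
        "WeightedIoU": ((true_total / conf.sum()) * iou).sum(),
    }

# Toy 2-class example (e.g., background vs. joint gap)
m = segmentation_metrics([[90, 10],
                          [5, 95]])
print(round(m["GlobalAccuracy"], 3))  # → 0.925
```

With a heavily imbalanced class such as joint gaps, global accuracy and weighted IoU stay high even when the minority class is missed, which is why mean accuracy and mean IoU are the more telling columns in the tables below.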
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Li, H.-N.; Yi, T.-H.; Ren, L.; Li, D.-S.; Huo, L.-S. Reviews on innovations and applications in structural health monitoring for infrastructures. Struct. Monit. Maint. 2014, 1, 1–45. [Google Scholar] [CrossRef]
- Graybeal, B.A.; Phares, B.M.; Rolander, D.D.; Moore, M.; Washer, G. Visual Inspection of Highway Bridges. J. Nondestruct. Eval. 2002, 21, 67–83. [Google Scholar] [CrossRef]
- Phares, B.M.; Washer, G.A.; Rolander, D.D.; Graybeal, B.; Moore, M. Routine Highway Bridge Inspection Condition Documentation Accuracy and Reliability. J. Bridg. Eng. 2004, 9, 403–413. [Google Scholar] [CrossRef]
- Popescu, C.; Täljsten, B.; Blanksvärd, T.; Elfgren, L. 3D reconstruction of existing concrete bridges using optical methods. Struct. Infrastruct. Eng. 2018, 15, 912–924. [Google Scholar] [CrossRef] [Green Version]
- Jáuregui, D.V.; White, K.R.; Woodward, C.B.; Leitch, K.R. Static Measurement of Beam Deformations via Close-Range Photogrammetry. Transp. Res. Rec. J. Transp. Res. Board 2002, 1814, 3–8. [Google Scholar] [CrossRef]
- Lichti, D.; Gordon, S.; Stewart, M.; Franke, J.; Tsakiri, M. Comparison of Digital Photogrammetry and Laser Scanning. Available online: https://www.researchgate.net/publication/245716767_Comparison_of_Digital_Photogrammetry_and_Laser_Scanning (accessed on 29 June 2021).
- Park, H.S.; Lee, H.M.; Adeli, H.; Lee, I. A New Approach for Health Monitoring of Structures: Terrestrial Laser Scanning. Comput. Civ. Infrastruct. Eng. 2006, 22, 19–30. [Google Scholar] [CrossRef]
- Attanayake, U.; Tang, P.; Servi, A.; Aktan, H. Non-Contact Bridge Deflection Measurement: Application of Laser Technology. 2011. Available online: http://dl.lib.uom.lk/bitstream/handle/123/9425/SEC-11-63.pdf?sequence=1&isAllowed=y (accessed on 29 June 2021).
- Higgins, C.; Turan, O.T. Imaging Tools for Evaluation of Gusset Plate Connections in Steel Truss Bridges. J. Bridg. Eng. 2013, 18, 380–387. [Google Scholar] [CrossRef]
- Riveiro, B.; Jauregui, D.; Arias, P.; Armesto, J.; Jiang, R. An innovative method for remote measurement of minimum vertical underclearance in routine bridge inspection. Autom. Constr. 2012, 25, 34–40. [Google Scholar] [CrossRef]
- Sousa, H.; Cavadas, F.; Henriques, A.; Figueiras, J.; Bento, J. Bridge deflection evaluation using strain and rotation measurements. Smart Struct. Syst. 2013, 11, 365–386. [Google Scholar] [CrossRef]
- Riveiro, B.; González-Jorge, H.; Varela, M.; Jauregui, D. Validation of terrestrial laser scanning and photogrammetry techniques for the measurement of vertical underclearance and beam geometry in structural inspection of bridges. Measurement 2013, 46, 784–794. [Google Scholar] [CrossRef]
- He, X.; Yang, X.; Zhao, L. Application of Inclinometer in Arch Bridge Dynamic Deflection Measurement. TELKOMNIKA Indones. J. Electr. Eng. 2014, 12, 3331–3337. [Google Scholar] [CrossRef]
- Taşçi, L. Deformation Monitoring in Steel Arch Bridges through Close-Range Photogrammetry and the Finite Element Method. Exp. Tech. 2015, 39, 3–10. [Google Scholar] [CrossRef]
- Anigacz, W.; Beben, D.; Kwiatkowski, J. Displacements Monitoring of Suspension Bridge Using Geodetic Techniques. In Proceedings of the EECE 2020; Springer Science and Business Media LLC: Berlin/Heidelberg, Germany, 2018; pp. 331–342. [Google Scholar]
- Lõhmus, H.; Ellmann, A.; Märdla, S.; Idnurm, S. Terrestrial laser scanning for the monitoring of bridge load tests–two case studies. Surv. Rev. 2017, 50, 270–284. [Google Scholar] [CrossRef]
- Lee, H.; Han, D. Deformation Measurement of a Railroad Bridge Using a Photogrammetric Board without Control Point Survey. J. Sens. 2018, 2018, 1–10. [Google Scholar] [CrossRef]
- Duque, L.; Seo, J.; Wacker, J. Synthesis of Unmanned Aerial Vehicle Applications for Infrastructures. J. Perform. Constr. Facil. 2018, 32, 04018046. [Google Scholar] [CrossRef]
- Dorafshan, S.; Thomas, R.J.; Maguire, M. Comparison of deep convolutional neural networks and edge detectors for image-based crack detection in concrete. Constr. Build. Mater. 2018, 186, 1031–1045. [Google Scholar] [CrossRef]
- Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 580–587. [Google Scholar]
- Girshick, R. Fast R-CNN. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 1440–1448. [Google Scholar]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. arXiv 2015, arXiv:1506.01497. [Google Scholar] [CrossRef] [Green Version]
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar]
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.-Y.; Berg, A.C. SSD: Single shot multibox detector. In European Conference on Computer Vision; Springer: Cham, Switzerland, 2016. [Google Scholar]
- He, K.; Gkioxari, G.; Dollar, P.; Girshick, R. Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2961–2969. [Google Scholar]
- Cha, Y.-J.; Choi, W.; Suh, G.; Mahmoudkhani, S.; Buyukozturk, O. Autonomous structural visual inspection using region-based deep learning for detecting multiple damage types. Comput. Aided Civil Infrastruct. Eng. 2018, 33, 731–747. [Google Scholar] [CrossRef]
- Wang, N.; Zhao, Q.; Li, S.; Zhao, X.; Zhao, P. Damage Classification for Masonry Historic Structures Using Convolutional Neural Networks Based on Still Images. Comput. Civ. Infrastruct. Eng. 2018, 33, 1073–1089. [Google Scholar] [CrossRef]
- Wu, W.; Qurishee, M.A.; Owino, J.; Fomunung, I.; Onyango, M.; Atolagbe, B. Coupling deep learning and UAV for infrastructure condition assessment automation. In 2018 IEEE International Smart Cities Conference (ISC2); IEEE: Piscataway, NJ, USA, 2018; pp. 1–7. [Google Scholar]
- Choi, W.; Cha, Y.-J. DDNet: Real-time crack segmentation. IEEE Trans. Ind. Electron. 2019, 67, 8016–8025. [Google Scholar] [CrossRef]
- Liang, X. Image-based post-disaster inspection of reinforced concrete bridge systems using deep learning with Bayesian optimization. Comput. Civ. Infrastruct. Eng. 2018, 34, 415–430. [Google Scholar] [CrossRef]
- Liu, Z.; Cao, Y.; Wang, Y.; Wang, W. Computer vision-based concrete crack detection using U-net fully convolutional networks. Autom. Constr. 2019, 104, 129–139. [Google Scholar] [CrossRef]
- Dais, D.; Bal, I.E.; Smyrou, E.; Sarhosis, V. Automatic crack classification and segmentation on masonry surfaces using convolutional neural networks and transfer learning. Autom. Constr. 2021, 125, 103606. [Google Scholar] [CrossRef]
- Zhang, C.; Chang, C.-C.; Jamshidi, M. Simultaneous pixel-level concrete defect detection and grouping using a fully convolutional model. Struct. Health Monit. 2021. [Google Scholar] [CrossRef]
- Yang, X.; Li, H.; Yu, Y.; Luo, X.; Huang, T.; Yang, X. Automatic Pixel-Level Crack Detection and Measurement Using Fully Convolutional Network. Comput. Civ. Infrastruct. Eng. 2018, 33, 1090–1109. [Google Scholar] [CrossRef]
- Ni, F.; Zhang, J.; Chen, Z. Zernike-moment measurement of thin-crack width in images enabled by dual-scale deep learning. Comput. Aided Civil Infrastruct. Eng. 2019, 34, 367–384. [Google Scholar] [CrossRef]
- Kim, B.; Cho, S. Image-based concrete crack assessment using mask and region-based convolutional neural network. Struct. Control. Health Monit. 2019, 26, e2381. [Google Scholar] [CrossRef]
- Li, S.; Zhao, X.; Zhou, G. Automatic pixel-level multiple damage detection of concrete structure using fully convolutional network. Comput. Aided Civil Infrastruct. Eng. 2019, 34, 616–634. [Google Scholar] [CrossRef]
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. arXiv 2015, arXiv:1505.04597. [Google Scholar]
- Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495. [Google Scholar] [CrossRef] [PubMed]
- Liu, Y.-F.; Cho, S.; Spencer, B.F., Jr.; Fan, J.-S. Concrete crack assessment using digital image processing and 3D scene reconstruction. J. Comput. Civil Eng. 2016, 30, 04014124. [Google Scholar] [CrossRef]
- Liu, Y.; Nie, X.; Fan, J.; Liu, X. Image-based crack assessment of bridge piers using unmanned aerial vehicles and three-dimensional scene reconstruction. Comput. Civ. Infrastruct. Eng. 2019, 35, 511–529. [Google Scholar] [CrossRef]
- Khaloo, A.; Lattanzi, D.; Cunningham, K.; Dell’Andrea, R.; Riley, M. Unmanned aerial vehicle inspection of the Placer River Trail Bridge through image-based 3D modelling. Struct. Infrastruct. Eng. 2018, 14, 124–136. [Google Scholar] [CrossRef]
- Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception Architecture for Computer Vision. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2016, Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar]
- Khaloo, A.; Lattanzi, D. Hierarchical dense structure-from-motion reconstructions for infrastructure condition assessment. J. Comput. Civil Eng. 2017, 31, 04016047. [Google Scholar] [CrossRef]
- Chen, S.; Laefer, D.F.; Mangina, E.; Zolanvari, S.M.I.; Byrne, J. UAV bridge inspection through evaluated 3D reconstructions. J. Bridge Eng. 2019, 24, 05019001. [Google Scholar] [CrossRef] [Green Version]
- Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going Deeper with Convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar]
- Tiwari, S.; Shukla, V.P.; Biradar, S.; Singh, A. Texture Features based Blur Classification in Barcode Images. Int. J. Inf. Eng. Electron. Bus. 2013, 5, 34–41. [Google Scholar] [CrossRef] [Green Version]
- Dollar, P.; Appel, R.; Belongie, S.; Perona, P. Fast Feature Pyramids for Object Detection. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 36, 1532–1545. [Google Scholar] [CrossRef] [Green Version]
- Derpanis, K.G. The Harris Corner Detector; York University: Toronto, ON, Canada, 2004; Volume 2. [Google Scholar]
- Bay, H.; Tuytelaars, T.; Van Gool, L. Surf: Speeded up robust features. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2006; pp. 404–417. [Google Scholar]
- Alcantarilla, P.F.; Bartoli, A.; Davison, A.J. KAZE Features. In Advances in Computational Intelligence; Springer Science and Business Media LLC: Berlin, Germany, 2012; Volume 7577, pp. 214–227. [Google Scholar]
- Sargent, I.; Harding, J.; Freeman, M. Data quality in 3D: Gauging quality measures from users requirements. Int. Arch. Photog. Remote Sens. Spatial Inf. Sci. 2007, 36, 8. [Google Scholar]
- Koutsoudis, A.; Vidmar, B.; Ioannakis, G.; Arnaoutoglou, F.; Pavlidis, G.; Chamzas, C. Multi-image 3D reconstruction data evaluation. J. Cult. Heritage 2014, 15, 73–79. [Google Scholar] [CrossRef]
- Cheng, S.-W.; Lau, M.-K. Denoising a point cloud for surface reconstruction. arXiv 2017, arXiv:1704.04038. [Google Scholar]
- Girardeau-Montaut, D.; Roux, M.; Marc, R.; Thibault, G. Change detection on points cloud data acquired with a ground laser scanner. Int. Arch. Photog. Remote Sens. Spatial Inf. Sci. 2005, 36, W19. [Google Scholar]
- Schafer, R.W. What Is a Savitzky-Golay Filter? IEEE Signal Process. Mag. 2011, 28, 111–117. [Google Scholar] [CrossRef]
Horizontal Sensor Size | Camera View Angle | Focal Length | Horizontal Pixel Number | Shutter Speed (s) |
---|---|---|---|---|
6.16 mm | 66.24 degrees | 3.61 mm | 4000 | 1/13 |
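The camera parameters above determine the ground sampling distance (GSD) and, with the shutter speed, bound how fast the camera may move before motion blur exceeds one pixel; Equation (2) in the paper formalizes this limit. A hedged sketch of that relationship (the 3 m object distance is illustrative, taken from the close-range setup in the results; the exact form of Equation (2) may differ):

```python
# Blur-limited camera speed: the camera should move at most one ground
# sampling distance (GSD) during the exposure, otherwise features smear
# across pixel boundaries. The 3 m distance is an assumed example value.
sensor_width_mm = 6.16      # horizontal sensor size (from the table)
focal_mm = 3.61             # focal length
pixels = 4000               # horizontal pixel number
shutter_s = 1 / 13          # shutter speed
distance_mm = 3000          # camera-to-surface distance (assumption)

pixel_pitch_mm = sensor_width_mm / pixels
gsd_mm = pixel_pitch_mm * distance_mm / focal_mm   # ground size of one pixel
max_speed_mm_s = gsd_mm / shutter_s                # ≤ 1 px of blur per exposure

print(round(gsd_mm, 2), round(max_speed_mm_s, 1))  # → 1.28 16.6
```

Under these assumptions a roughly 17 mm/s ceiling keeps blur below one pixel, which is consistent with the degradation seen at the higher UAV speeds tested later in the results.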
Architecture | Global Accuracy | Mean Accuracy | Mean IoU | Weighted IoU | Mean BF Score |
---|---|---|---|---|---|
SegNet | 0.97649 | 0.97299 | 0.94965 | 0.95411 | 0.87261 |
U-Net | 0.79775 | 0.84144 | 0.66091 | 0.66985 | 0.4248 |
Architecture | Global Accuracy | Mean Accuracy | Mean IoU | Weighted IoU | Mean BF Score |
---|---|---|---|---|---|
SegNet | 0.9996 | 0.50001 | 0.4998 | 0.9992 | 0.99383 |
U-Net | 0.93785 | 0.79507 | 0.47102 | 0.93746 | 0.50397 |
Brightness | Detected Tie Points (Images 1, 2) | Matched Tie Points (Images 1, 2) | Detected Tie Points % (Images 1, 2) | Matched Tie Points % (Images 1, 2) | Error (pix) | Matched Tie Points % (Images 2, 3) | Detected Tie Points % (Images 2, 3) | Matched Tie Points (Images 2, 3) | Detected Tie Points (Images 2, 3) | Index |
---|---|---|---|---|---|---|---|---|---|---|
+90% | 1801 | 1772 | 44.4% | 43.7% | 1.612 | 40.7% | 41.4% | 1642 | 1670 | 0.11 |
+80% | 3044 | 2829 | 75.1% | 69.8% | 2.131 | 64.7% | 69.1% | 2606 | 2784 | 0.23 |
+60% | 3734 | 3678 | 92.2% | 90.8% | 3.267 | 84.7% | 85.1% | 3414 | 3427 | 0.24 |
+40% | 4011 | 3838 | 99.0% | 94.7% | 4.569 | 87.0% | 92.8% | 3505 | 3740 | 0.19 |
+20% | 4027 | 3860 | 99.4% | 95.3% | 0.637 | 94.1% | 98.1% | 3791 | 3952 | 1.47 |
Normal | 4050 | 3634 | 100% | 89.7% | 0.550 | 91.7% | 100% | 3695 | 4027 | 1.65 |
−20% | 4028 | 3821 | 99.4% | 94.3% | 1.180 | 90.1% | 97.2% | 3630 | 3915 | 0.77 |
−40% | 4008 | 3958 | 98.9% | 97.7% | 2.238 | 94.1% | 94.3% | 3790 | 3800 | 0.41 |
−60% | 3802 | 3678 | 93.8% | 90.8% | 1.928 | 81.8% | 85.0% | 3295 | 3424 | 0.40 |
−80% | 3048 | 2811 | 75.2% | 69.4% | 1.746 | 65.5% | 70.7% | 2638 | 2847 | 0.28 |
−90% | 1842 | 1706 | 45.4% | 42.1% | 0.845 | 38.6% | 41.4% | 1558 | 1669 | 0.21 |
Speed (mm/s) | | | | | | | | | | |
0 | 4050 | 3634 | 100% | 89.7% | 0.550 | 91.7% | 100% | 3695 | 4027 | 1.65 |
52 | 4049 | 3882 | 99.9% | 95.8% | 0.653 | 99.2% | 98.9% | 3997 | 3983 | 1.48 |
208 | 2651 | 2476 | 65.4% | 61.1% | 0.752 | 53.9% | 57.6% | 2174 | 2320 | 0.47 |
364 | 1445 | 1328 | 35.6% | 32.7% | 0.797 | 27.6% | 30.5% | 1113 | 1229 | 0.13 |
520 | 976 | 874 | 24.1% | 21.5% | 0.932 | 18.4% | 20.4% | 734 | 823 | 0.05 |
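The percentage columns above appear to express tie-point counts relative to the unmodified, stationary "Normal" pair of each image set (4050 detected points for images 1 and 2); small discrepancies with the table come from rounding. A sketch of that normalization (variable names are illustrative):

```python
# Tie-point counts for the "Normal" pair and a degraded (+90% brightness)
# pair of images 1 and 2, taken from the table above.
normal_detected = 4050
detected, matched = 1801, 1772   # +90% brightness row

detected_pct = 100 * detected / normal_detected   # ≈ 44.4%
matched_pct = 100 * matched / normal_detected     # ≈ 43.7%
print(f"{detected_pct:.1f}% {matched_pct:.1f}%")
```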
Masking | Detected Tie Points (Images 1, 2) | Matched Tie Points (Images 1, 2) | Detected Tie Points % (Images 1, 2) | Matched Tie Points % (Images 1, 2) | Error (pix) | Matched Tie Points % (Images 2, 3) | Detected Tie Points % (Images 2, 3) | Matched Tie Points (Images 2, 3) | Detected Tie Points (Images 2, 3) | Index |
---|---|---|---|---|---|---|---|---|---|---|
Unmasked | 4050 | 3634 | 100% | 89.7% | 0.550 | 91.7% | 100% | 3695 | 4027 | 1.65 |
Masked | 4072 | 3517 | 101% | 86.3% | 0.426 | 73.6% | 101% | 2982 | 4047 | 1.90 |
Method | Area of Potential Damage | Close-Up View | Distribution of the Point Cloud Deviation from Reference Model |
---|---|---|---|
SfM | (image) | (image) | (image) |
Intelligent Hierarchical DSfM | (image) | (image) | (image) |
Reconstruction Method | Intended Area | Number of Points | Local Point Density | Standard Deviation (mm) |
---|---|---|---|---|
TLS | 1.42 | 34,506 | 24,300 | Reference |
CRP, 3 m (Hierarchical DSfM) | | 46,611 | 32,824 | 2.4 |
CRP, 3–10 m (regular) | | 15,386 | 20,018 | 6.2 |
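The deviation statistics above come from comparing each reconstructed cloud against the TLS reference model. The core of such a cloud-to-cloud comparison can be sketched as a nearest-neighbor distance per point (brute-force NumPy for a small toy cloud; production tools such as CloudCompare apply the same idea with spatial indexing):

```python
import numpy as np

def cloud_to_reference_deviation(cloud, reference):
    """For each point in `cloud`, return the Euclidean distance to its
    nearest neighbor in `reference` (brute force; fine for small clouds)."""
    # (N, M, 3) pairwise differences via broadcasting, then (N, M) distances
    diffs = cloud[:, None, :] - reference[None, :, :]
    dists = np.linalg.norm(diffs, axis=2)
    return dists.min(axis=1)  # nearest-neighbor distance per cloud point

# Toy example: a 5x5 reference grid (1 mm spacing) vs. the same
# grid shifted 2 mm along z — every point deviates by exactly 2 mm.
reference = np.array([[x, y, 0.0] for x in range(5) for y in range(5)], float)
cloud = reference + np.array([0.0, 0.0, 2.0])
dev = cloud_to_reference_deviation(cloud, reference)
print(dev.mean(), dev.std())  # → 2.0 0.0
```

The mean and standard deviation of `dev` correspond to the kind of deviation statistics tabulated above; a denser reconstruction gives both more samples and a smoother distance distribution.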
Method | AB (cm) | BC (cm) | B (degrees) | 1 (cm) | 2 (cm) | 3 (cm) |
---|---|---|---|---|---|---|
TLS | 66.9 | 32.4 | 93.33 | 2.13 | 2.04 | 3.21 |
Proposed method | 66.91 | 32.01 | 93.53 | 2.32 | 2.16 | 3.09 |
Deviation | 0.01 | −0.39 | 0.2 | 0.2 | 0.12 | −0.12 |
Error % | 0 | −1.2 | 0.2 | 8.9 | 5.8 | −3.7 |
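The Deviation and Error % rows follow directly from the two measurement rows (error is the deviation as a percentage of the TLS reference value). A small sketch of that bookkeeping, using the AB, BC, and angle B entries from the table:

```python
# Geometry measured on the TLS reference vs. the proposed method
# (AB, BC in cm; angle B in degrees), values taken from the table above.
tls = {"AB": 66.9, "BC": 32.4, "B": 93.33}
proposed = {"AB": 66.91, "BC": 32.01, "B": 93.53}

# Deviation = proposed − TLS; Error % = deviation relative to TLS reference
deviation = {k: round(proposed[k] - tls[k], 2) for k in tls}
error_pct = {k: round(100 * (proposed[k] - tls[k]) / tls[k], 1) for k in tls}
print(deviation)  # {'AB': 0.01, 'BC': -0.39, 'B': 0.2}
print(error_pct)  # {'AB': 0.0, 'BC': -1.2, 'B': 0.2}
```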
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Mirzazade, A.; Popescu, C.; Blanksvärd, T.; Täljsten, B. Workflow for Off-Site Bridge Inspection Using Automatic Damage Detection-Case Study of the Pahtajokk Bridge. Remote Sens. 2021, 13, 2665. https://doi.org/10.3390/rs13142665