Estimation of Degradation Degree in Road Infrastructure Based on Multi-Modal ABN Using Contrastive Learning
Abstract
1. Introduction
- By introducing contrastive learning, we estimate the degradation degree of distress images using not only labeled but also unlabeled training data, thereby reducing the annotation burden on engineers.
- We acquire general representations specific to damaged road infrastructure and improve the estimation performance for each type of distress.
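As background for the first bullet, contrastive learning pretrains on unlabeled images by pulling two augmented views of the same image together in embedding space while pushing other images apart. A minimal NumPy sketch of a SimCLR-style NT-Xent loss follows; this is our illustration under that assumption, and all names and the toy data are ours, not the paper's implementation:

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style NT-Xent loss over a batch of paired embeddings.

    z1, z2: (N, D) arrays holding embeddings of two augmented views of
    the same N images. Returns the mean contrastive loss over all 2N views.
    """
    N = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)              # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize rows
    sim = z @ z.T / temperature                       # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    # The positive for view i is the other view of the same image,
    # located at index (i + N) mod 2N.
    pos = np.roll(np.arange(2 * N), N)
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * N), pos].mean()

# Nearly identical views score high similarity with their positive pair,
# so the loss is lower than for unrelated random embeddings.
rng = np.random.default_rng(0)
a = rng.normal(size=(8, 16))
loss_aligned = nt_xent_loss(a, a + 0.01 * rng.normal(size=a.shape))
loss_random = nt_xent_loss(a, rng.normal(size=(8, 16)))
```

In practice the embeddings would come from the encoder being pretrained (here just random vectors), and the labeled fine-tuning stage of Section 3.2 would reuse that encoder.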
2. Related Works
2.1. Distress Image Classification Based on Supervised Learning
2.2. Contrastive Learning for Distress Image Classification
3. Estimation of Degradation Degree in Road Infrastructure Using Contrastive Learning
3.1. Representation Acquisition through Contrastive Learning
3.2. Degradation Degree Estimation Using Prior Representation
4. Experimental Results and Discussion
4.1. Experimental Settings
4.2. Experimental Results
4.2.1. Quantitative Evaluation
4.2.2. Contribution of Each Module in CorABN including Contrastive Learning
4.2.3. Effectiveness of Contrastive Learning Approach Using Another Model
4.2.4. Robustness Corresponding to Varying Parameter of Contrastive Learning Model
4.2.5. Qualitative Evaluation
4.3. Discussions
4.4. Limitation and Future Work
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Main Roads Western Australia. Road Maintenance: Issues and Directions; Main Roads Western Australia: Perth, Australia, 1996.
- American Association of State Highway and Transportation Officials. Bridging the Gap: Restoring and Rebuilding the Nation’s Bridges; American Association of State Highway and Transportation Officials: Washington, DC, USA, 2008.
- Ministry of Land, Infrastructure, Transport and Tourism. White Paper on Land, Infrastructure, Transport and Tourism in Japan, 2017; Technical Report; Ministry of Land, Infrastructure, Transport and Tourism: Japan, 2018. Available online: https://www.mlit.go.jp/common/001269888.pdf (accessed on 1 November 2022).
- Agnisarman, S.; Lopes, S.; Madathil, K.C.; Piratla, K.; Gramopadhye, A. A survey of automation-enabled human-in-the-loop systems for infrastructure visual inspection. Autom. Constr. 2019, 97, 52–76.
- Gao, Y.; Mosalam, K.M. Deep transfer learning for image-based structural damage recognition. Comput.-Aided Civ. Infrastruct. Eng. 2018, 33, 748–768.
- Gopalakrishnan, K.; Gholami, H.; Vidyadharan, A.; Choudhary, A.; Agrawal, A. Crack damage detection in unmanned aerial vehicle images of civil infrastructure using pre-trained deep learning model. Int. J. Traffic Transp. Eng. 2018, 8, 1–14.
- Xia, W. An approach for extracting road pavement disease from HD camera videos by deep convolutional networks. In Proceedings of the International Conference on Audio, Language and Image Processing, Shanghai, China, 16–17 July 2018; pp. 418–422.
- Ogawa, N.; Maeda, K.; Ogawa, T.; Haseyama, M. Correlation-aware attention branch network using multi-modal data for deterioration level estimation of infrastructures. In Proceedings of the IEEE International Conference on Image Processing, Anchorage, AK, USA, 19–22 September 2021; pp. 1014–1018.
- Maeda, K.; Ogawa, N.; Ogawa, T.; Haseyama, M. Reliable Estimation of Deterioration Levels via Late Fusion Using Multi-View Distress Images for Practical Inspection. J. Imaging 2021, 7, 273.
- Ogawa, N.; Maeda, K.; Ogawa, T.; Haseyama, M. Deterioration Level Estimation Based on Convolutional Neural Network Using Confidence-Aware Attention Mechanism for Infrastructure Inspection. Sensors 2022, 22, 382.
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25, 1097–1105.
- Fukui, H.; Hirakawa, T.; Yamashita, T.; Fujiyoshi, H. Attention branch network: Learning of attention mechanism for visual explanation. In Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 10705–10714.
- Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255.
- Mikolov, T.; Sutskever, I.; Chen, K.; Corrado, G.S.; Dean, J. Distributed representations of words and phrases and their compositionality. Adv. Neural Inf. Process. Syst. 2013, 26, 1–9.
- He, K.; Fan, H.; Wu, Y.; Xie, S.; Girshick, R. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 9729–9738.
- Caron, M.; Misra, I.; Mairal, J.; Goyal, P.; Bojanowski, P.; Joulin, A. Unsupervised learning of visual features by contrasting cluster assignments. Adv. Neural Inf. Process. Syst. 2020, 33, 9912–9924.
- Chen, T.; Kornblith, S.; Norouzi, M.; Hinton, G. A simple framework for contrastive learning of visual representations. In Proceedings of the International Conference on Machine Learning, Online, 13–18 July 2020; pp. 1597–1607.
- Ayush, K.; Uzkent, B.; Meng, C.; Tanmay, K.; Burke, M.; Lobell, D.; Ermon, S. Geography-aware self-supervised learning. In Proceedings of the IEEE International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 10181–10190.
- Stojnic, V.; Risojevic, V. Self-supervised learning of remote sensing scene representations using contrastive multiview coding. In Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 1182–1191.
- Sauder, J.; Sievers, B. Self-supervised deep learning on point clouds by reconstructing space. Adv. Neural Inf. Process. Syst. 2019, 32.
- Zhang, Z.; Girdhar, R.; Joulin, A.; Misra, I. Self-supervised pretraining of 3D features on any point-cloud. In Proceedings of the IEEE International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 10252–10263.
- Chen, L.; Bentley, P.; Mori, K.; Misawa, K.; Fujiwara, M.; Rueckert, D. Self-supervised learning for medical image analysis using image context restoration. Med. Image Anal. 2019, 58, 101539.
- Azizi, S.; Mustafa, B.; Ryan, F.; Beaver, Z.; Freyberg, J.; Deaton, J.; Loh, A.; Karthikesalingam, A.; Kornblith, S.; Chen, T.; et al. Big self-supervised models advance medical image classification. In Proceedings of the IEEE International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 3478–3488.
- Cha, Y.J.; Choi, W.; Büyüköztürk, O. Deep learning-based crack damage detection using convolutional neural networks. Comput.-Aided Civ. Infrastruct. Eng. 2017, 32, 361–378.
- Zhang, A.; Wang, K.C.; Li, B.; Yang, E.; Dai, X.; Peng, Y.; Fei, Y.; Liu, Y.; Li, J.Q.; Chen, C. Automated pixel-level pavement crack detection on 3D asphalt surfaces using a deep-learning network. Comput.-Aided Civ. Infrastruct. Eng. 2017, 32, 805–819.
- Cha, Y.J.; Choi, W.; Suh, G.; Mahmoudkhani, S.; Büyüköztürk, O. Autonomous structural visual inspection using region-based deep learning for detecting multiple damage types. Comput.-Aided Civ. Infrastruct. Eng. 2018, 33, 731–747.
- Maeda, H.; Sekimoto, Y.; Seto, T.; Kashiyama, T.; Omata, H. Road damage detection and classification using deep neural networks with smartphone images. Comput.-Aided Civ. Infrastruct. Eng. 2018, 33, 1127–1141.
- Attard, L.; Debono, C.J.; Valentino, G.; Di Castro, M.; Masi, A.; Scibile, L. Automatic crack detection using Mask R-CNN. In Proceedings of the 11th International Symposium on Image and Signal Processing and Analysis (ISPA), Dubrovnik, Croatia, 23–25 September 2019; pp. 152–157.
- Li, P.; Xia, H.; Zhou, B.; Yan, F.; Guo, R. A Method to Improve the Accuracy of Pavement Crack Identification by Combining a Semantic Segmentation and Edge Detection Model. Appl. Sci. 2022, 12, 4714.
- Maeda, K.; Takahashi, S.; Ogawa, T.; Haseyama, M. Convolutional sparse coding-based deep random vector functional link network for distress classification of road structures. Comput.-Aided Civ. Infrastruct. Eng. 2019, 34, 654–676.
- Lin, T.Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft COCO: Common objects in context. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; pp. 740–755.
- Misra, I.; Maaten, L.v.d. Self-supervised learning of pretext-invariant representations. In Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 6707–6717.
- Devlin, J.; Chang, M.W.; Lee, K.; Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv 2018, arXiv:1810.04805.
- Radford, A.; Narasimhan, K.; Salimans, T.; Sutskever, I. Improving Language Understanding by Generative Pre-Training. 2018. Available online: https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf (accessed on 26 December 2022).
- Baevski, A.; Zhou, Y.; Mohamed, A.; Auli, M. wav2vec 2.0: A framework for self-supervised learning of speech representations. Adv. Neural Inf. Process. Syst. 2020, 33, 12449–12460.
- Hsu, W.N.; Bolte, B.; Tsai, Y.H.H.; Lakhotia, K.; Salakhutdinov, R.; Mohamed, A. HuBERT: Self-supervised speech representation learning by masked prediction of hidden units. IEEE/ACM Trans. Audio Speech Lang. Process. 2021, 29, 3451–3460.
- Rombach, K.; Michau, G.; Ratnasabapathy, K.; Ancu, L.S.; Bürzle, W.; Koller, S.; Fink, O. Contrastive Feature Learning for Fault Detection and Diagnostics in Railway Applications. In Proceedings of the European Safety and Reliability Conference, Dublin, Ireland, 28 August–1 September 2022; pp. 1875–1881.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778.
- Andrew, G.; Arora, R.; Bilmes, J.; Livescu, K. Deep canonical correlation analysis. In Proceedings of the International Conference on Machine Learning, Atlanta, GA, USA, 16–21 June 2013; pp. 1247–1255.
- Yuji, S. Maintenance Management System for Concrete Structures in Expressways—A Case Study of NEXCO East Japan Kanto Branch (In Japanese). Concr. J. 2010, 48, 17–20.
- Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. In Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7132–7141.
- Szegedy, C.; Ioffe, S.; Vanhoucke, V.; Alemi, A.A. Inception-v4, Inception-ResNet and the impact of residual connections on learning. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017.
- Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708.
- Tan, M.; Le, Q. EfficientNet: Rethinking model scaling for convolutional neural networks. In Proceedings of the International Conference on Machine Learning, Long Beach, CA, USA, 10–15 June 2019; pp. 6105–6114.
- Grill, J.B.; Strub, F.; Altché, F.; Tallec, C.; Richemond, P.; Buchatskaya, E.; Doersch, C.; Avila Pires, B.; Guo, Z.; Gheshlaghi Azar, M.; et al. Bootstrap your own latent: A new approach to self-supervised learning. Adv. Neural Inf. Process. Syst. 2020, 33, 21271–21284.
- Khosla, P.; Teterwak, P.; Wang, C.; Sarna, A.; Tian, Y.; Isola, P.; Maschinot, A.; Liu, C.; Krishnan, D. Supervised contrastive learning. Adv. Neural Inf. Process. Syst. 2020, 33, 18661–18673.
- Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. In Proceedings of the International Conference on Learning Representations, Online, 3–7 May 2021.
- Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin Transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 10012–10022.
| Degradation Degree | Training | Validation | Test | Total |
|---|---|---|---|---|
| A | 1915 | 228 | 227 | 2370 |
| B | 2085 | 237 | 265 | 2587 |
| C | 1897 | 233 | 251 | 2381 |
| D | 2102 | 265 | 277 | 2644 |

| Degradation Degree | Training | Validation | Test | Total |
|---|---|---|---|---|
| A | 1039 | 143 | 141 | 1323 |
| B | 1125 | 150 | 152 | 1427 |
| C | 1096 | 143 | 129 | 1368 |
| D | 842 | 105 | 119 | 1066 |

| Degradation Degree | Training | Validation | Test | Total |
|---|---|---|---|---|
| A | 210 | 19 | 35 | 264 |
| B | 1639 | 234 | 185 | 2058 |
| C | 1236 | 164 | 146 | 1546 |
| D | 1002 | 115 | 133 | 1250 |

| Degradation Degree | Training | Validation | Test | Total |
|---|---|---|---|---|
| A | 1503 | 190 | 203 | 1896 |
| B | 1333 | 171 | 165 | 1669 |
| C | 1086 | 132 | 152 | 1370 |

| Degradation Degree | Training | Validation | Test | Total |
|---|---|---|---|---|
| A | 1326 | 168 | 179 | 1673 |
| B | 1307 | 163 | 166 | 1636 |
| C | 1056 | 127 | 133 | 1316 |
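Split tables like these are easy to sanity-check mechanically. The sketch below (plain Python, with the counts copied from the first split table above) verifies that each row's training, validation, and test counts sum to the stated total:

```python
# Row-wise consistency check for a degradation-degree split table:
# training + validation + test must equal the stated total.
splits = {  # degree: (training, validation, test, total)
    "A": (1915, 228, 227, 2370),
    "B": (2085, 237, 265, 2587),
    "C": (1897, 233, 251, 2381),
    "D": (2102, 265, 277, 2644),
}
for degree, (train, val, test, total) in splits.items():
    assert train + val + test == total, f"inconsistent row for degree {degree}"

# Overall dataset size for this distress type.
dataset_size = sum(total for *_, total in splits.values())
```

The same check passes for the remaining split tables; only the counts change.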
| Method | Degree: A | Degree: B | Degree: C | Degree: D | Average |
|---|---|---|---|---|---|
| PM | 0.781 | 0.634 | 0.617 | 0.719 | 0.688 |
| CorABN (ImageNet) [8] | 0.776 | 0.627 | 0.599 | 0.704 | 0.676 |
| ABN [12] | 0.741 | 0.561 | 0.511 | 0.661 | 0.619 |
| ResNet50 [38] | 0.746 | 0.556 | 0.487 | 0.651 | 0.610 |
| SENet154 [41] | 0.730 | 0.572 | 0.527 | 0.650 | 0.620 |
| DenseNet121 [43] | 0.695 | 0.490 | 0.436 | 0.608 | 0.557 |
| InceptionV4 [42] | 0.733 | 0.527 | 0.452 | 0.633 | 0.586 |
| EfficientNetB5 [44] | 0.712 | 0.552 | 0.500 | 0.653 | 0.604 |
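The "Average" column in these comparison tables is consistent with an unweighted (macro) average of the per-degree scores. Assuming the scores are per-class F-measures derived from a confusion matrix (the matrix below is purely illustrative, not the paper's data), the computation can be sketched as:

```python
def per_class_f1(cm):
    """Per-class F1 from a confusion matrix cm, where cm[i][j] is the
    count of samples with true class i predicted as class j."""
    n = len(cm)
    scores = []
    for k in range(n):
        tp = cm[k][k]
        fp = sum(cm[i][k] for i in range(n)) - tp  # column sum minus TP
        fn = sum(cm[k]) - tp                       # row sum minus TP
        scores.append(2 * tp / (2 * tp + fp + fn) if tp else 0.0)
    return scores

# Illustrative 4-class (degrees A-D) confusion matrix, not real data.
cm = [[50, 5, 3, 2],
      [6, 40, 8, 4],
      [2, 9, 45, 7],
      [1, 3, 6, 52]]
f1 = per_class_f1(cm)
macro_f1 = sum(f1) / len(f1)  # the "Average" column is this macro mean
```

As a check against the tables themselves, averaging the PM row of the crack table (0.781, 0.634, 0.617, 0.719) this way gives 0.688, matching its "Average" entry.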
| Method | Degree: A | Degree: B | Degree: C | Degree: D | Average |
|---|---|---|---|---|---|
| PM | 0.699 | 0.532 | 0.473 | 0.715 | 0.605 |
| CorABN (ImageNet) [8] | 0.707 | 0.516 | 0.497 | 0.684 | 0.601 |
| ABN [12] | 0.602 | 0.480 | 0.327 | 0.592 | 0.500 |
| ResNet50 [38] | 0.586 | 0.445 | 0.293 | 0.576 | 0.475 |
| SENet154 [41] | 0.610 | 0.489 | 0.404 | 0.607 | 0.528 |
| DenseNet121 [43] | 0.413 | 0.381 | 0.284 | 0.494 | 0.393 |
| InceptionV4 [42] | 0.517 | 0.403 | 0.306 | 0.544 | 0.443 |
| EfficientNetB5 [44] | 0.605 | 0.489 | 0.367 | 0.618 | 0.520 |

| Method | Degree: A | Degree: B | Degree: C | Degree: D | Average |
|---|---|---|---|---|---|
| PM | 0.449 | 0.677 | 0.468 | 0.579 | 0.543 |
| CorABN (ImageNet) [8] | 0.311 | 0.647 | 0.449 | 0.543 | 0.487 |
| ABN [12] | 0.133 | 0.600 | 0.436 | 0.438 | 0.402 |
| ResNet50 [38] | 0.020 | 0.611 | 0.456 | 0.437 | 0.381 |
| SENet154 [41] | 0.169 | 0.615 | 0.467 | 0.475 | 0.431 |
| DenseNet121 [43] | 0.000 | 0.592 | 0.336 | 0.287 | 0.304 |
| InceptionV4 [42] | 0.000 | 0.581 | 0.294 | 0.199 | 0.269 |
| EfficientNetB5 [44] | 0.268 | 0.606 | 0.439 | 0.491 | 0.451 |

| Method | Degree: A | Degree: B | Degree: C | Average |
|---|---|---|---|---|
| PM | 0.632 | 0.399 | 0.492 | 0.507 |
| CorABN (ImageNet) [8] | 0.630 | 0.370 | 0.402 | 0.467 |
| ABN [12] | 0.573 | 0.338 | 0.180 | 0.364 |
| ResNet50 [38] | 0.558 | 0.359 | 0.173 | 0.364 |
| SENet154 [41] | 0.599 | 0.329 | 0.392 | 0.440 |
| DenseNet121 [43] | 0.542 | 0.261 | 0.023 | 0.275 |
| InceptionV4 [42] | 0.553 | 0.374 | 0.082 | 0.337 |
| EfficientNetB5 [44] | 0.574 | 0.340 | 0.425 | 0.446 |

| Method | Degree: A | Degree: B | Degree: C | Average |
|---|---|---|---|---|
| PM | 0.644 | 0.491 | 0.556 | 0.563 |
| CorABN (ImageNet) [8] | 0.616 | 0.461 | 0.510 | 0.529 |
| ABN [12] | 0.567 | 0.370 | 0.389 | 0.442 |
| ResNet50 [38] | 0.559 | 0.358 | 0.340 | 0.419 |
| SENet154 [41] | 0.608 | 0.434 | 0.416 | 0.486 |
| DenseNet121 [43] | 0.503 | 0.377 | 0.271 | 0.383 |
| InceptionV4 [42] | 0.554 | 0.342 | 0.414 | 0.437 |
| EfficientNetB5 [44] | 0.568 | 0.427 | 0.372 | 0.456 |
| Method | Degree: A | Degree: B | Degree: C | Degree: D | Average |
|---|---|---|---|---|---|
| PM (all modules) | 0.449 | 0.677 | 0.468 | 0.579 | 0.543 |
| PM (FME) | 0.377 | 0.666 | 0.472 | 0.557 | 0.518 |
| PM (AMG) | 0.265 | 0.661 | 0.434 | 0.547 | 0.477 |
| PM (CF) | 0.267 | 0.655 | 0.457 | 0.541 | 0.480 |
| PM (FME + AMG) | 0.396 | 0.676 | 0.466 | 0.571 | 0.527 |
| PM (AMG + CF) | 0.294 | 0.656 | 0.440 | 0.545 | 0.484 |
| PM (FME + CF) | 0.409 | 0.674 | 0.459 | 0.565 | 0.527 |
| CorABN (ImageNet) [8] | 0.311 | 0.647 | 0.449 | 0.543 | 0.487 |
| Method | Degree: A | Degree: B | Degree: C | Degree: D | Average |
|---|---|---|---|---|---|
| Crack | | | | | |
| PM (SimCLR) | 0.781 | 0.634 | 0.617 | 0.719 | 0.688 |
| PM (BYOL) | 0.782 | 0.642 | 0.603 | 0.704 | 0.683 |
| CorABN (ImageNet) [8] | 0.776 | 0.627 | 0.599 | 0.704 | 0.676 |
| Efflorescence | | | | | |
| PM (SimCLR) | 0.699 | 0.532 | 0.473 | 0.715 | 0.605 |
| PM (BYOL) | 0.714 | 0.543 | 0.504 | 0.713 | 0.618 |
| CorABN (ImageNet) [8] | 0.707 | 0.516 | 0.497 | 0.684 | 0.601 |
| Rebar corrosion | | | | | |
| PM (SimCLR) | 0.449 | 0.677 | 0.468 | 0.579 | 0.543 |
| PM (BYOL) | 0.324 | 0.663 | 0.473 | 0.554 | 0.504 |
| CorABN (ImageNet) [8] | 0.311 | 0.647 | 0.449 | 0.543 | 0.487 |
| Concrete scaling | | | | | |
| PM (SimCLR) | 0.632 | 0.399 | 0.492 | – | 0.507 |
| PM (BYOL) | 0.618 | 0.428 | 0.336 | – | 0.461 |
| CorABN (ImageNet) [8] | 0.630 | 0.370 | 0.402 | – | 0.467 |
| Concrete spalling | | | | | |
| PM (SimCLR) | 0.644 | 0.491 | 0.556 | – | 0.563 |
| PM (BYOL) | 0.619 | 0.484 | 0.515 | – | 0.539 |
| CorABN (ImageNet) [8] | 0.616 | 0.461 | 0.510 | – | 0.529 |
| Method | Degree: A | Degree: B | Degree: C | Degree: D | Average |
|---|---|---|---|---|---|
| Crack | | | | | |
| PM () | 0.781 | 0.634 | 0.617 | 0.719 | 0.688 |
| PM () | 0.781 | 0.644 | 0.614 | 0.721 | 0.690 |
| PM () | 0.779 | 0.637 | 0.609 | 0.713 | 0.684 |
| Efflorescence | | | | | |
| PM () | 0.699 | 0.532 | 0.473 | 0.715 | 0.605 |
| PM () | 0.703 | 0.535 | 0.469 | 0.683 | 0.597 |
| PM () | 0.709 | 0.531 | 0.487 | 0.685 | 0.603 |
| Rebar corrosion | | | | | |
| PM () | 0.449 | 0.677 | 0.468 | 0.579 | 0.543 |
| PM () | 0.473 | 0.686 | 0.465 | 0.566 | 0.548 |
| PM () | 0.515 | 0.682 | 0.498 | 0.586 | 0.570 |
| Concrete scaling | | | | | |
| PM () | 0.632 | 0.399 | 0.492 | – | 0.507 |
| PM () | 0.642 | 0.418 | 0.486 | – | 0.515 |
| PM () | 0.650 | 0.389 | 0.505 | – | 0.514 |
| Concrete spalling | | | | | |
| PM () | 0.644 | 0.491 | 0.556 | – | 0.563 |
| PM () | 0.619 | 0.495 | 0.551 | – | 0.555 |
| PM () | 0.643 | 0.515 | 0.557 | – | 0.571 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Higashi, T.; Ogawa, N.; Maeda, K.; Ogawa, T.; Haseyama, M. Estimation of Degradation Degree in Road Infrastructure Based on Multi-Modal ABN Using Contrastive Learning. Sensors 2023, 23, 1657. https://doi.org/10.3390/s23031657