CTCD-Net: A Cross-Layer Transmission Network for Tiny Road Crack Detection
Abstract
1. Introduction
- (1) Tiny cracks generally carry weaker feature information and are more susceptible to noise interference, so many existing methods extract them with poor accuracy and completeness.
- (2) Most existing crack detection methods tend to generate coarse, overly thick crack boundaries, which are undesirable results.
- (1) CTCD-Net, a network for tiny road crack detection, is proposed. It uses SegNet with five side outputs as the backbone architecture and takes advantage of the complementary information from all layers.
- (2) An attention-based cross-layer information transmission (ACIT) module is proposed. At each layer, the module integrates attention mechanisms to transmit information from higher layers to the next layer, making full use of complementary feature information and greatly improving the extraction of tiny cracks.
- (3) A boundary refinement (BR) block based on a residual structure is proposed. The block is embedded before the final output and each of the five side outputs, which effectively addresses the problem of coarse, overly thick boundaries.
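The two contributions above can be illustrated with a rough PyTorch sketch. This is a hypothetical reconstruction under stated assumptions, not the authors' implementation: the internal layer configuration, channel counts, and the SE-style form of the attention are guesses; only the overall ideas (attention-gated fusion of a higher layer into the current one, and a residual refinement block placed before each output) come from the text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    """SE-style channel attention; the paper's exact attention design may differ."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))      # global average pool -> (N, C) weights
        return x * w[:, :, None, None]       # reweight channels

class ACIT(nn.Module):
    """Attention-based cross-layer information transmission (sketch):
    fuse features transmitted from the higher (coarser) layer into the current layer."""
    def __init__(self, channels):
        super().__init__()
        self.attn = ChannelAttention(channels)
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, current, higher):
        # upsample higher-layer features to the current spatial resolution
        higher = F.interpolate(higher, size=current.shape[2:],
                               mode="bilinear", align_corners=False)
        fused = self.fuse(torch.cat([current, higher], dim=1))
        return self.attn(fused)

class BoundaryRefinement(nn.Module):
    """Residual refinement block (sketch), placed before each side output
    and the final output to sharpen coarse crack boundaries."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1))

    def forward(self, x):
        return x + self.body(x)              # residual structure: input + refinement
```

In this sketch the residual shortcut lets the block learn only a boundary correction on top of the incoming features, which is the usual motivation for residual refinement.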
2. Related Work
2.1. Crack Detection Based on Deep Learning
2.2. Attention Mechanism
2.3. Boundary Refinement
3. Methodology
3.1. Model Overview
3.2. Attention-Based Cross-Layer Information Transmission Module
3.3. Boundary Refinement Block
3.4. Comparison with Other Architectures
3.5. Loss Function
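Section 3.5 defines the actual loss; as a hedged sketch, assuming each of the five side outputs and the final output is supervised with binary cross-entropy (a common deep-supervision setup), the total loss might be assembled as follows. The equal weighting is illustrative, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def deep_supervision_loss(side_logits, final_logits, target, side_weight=1.0):
    """side_logits: list of 5 tensors (N,1,h,w) at various scales;
    final_logits, target: (N,1,H,W), target values in {0,1}."""
    loss = F.binary_cross_entropy_with_logits(final_logits, target)
    for s in side_logits:
        # upsample each side output to label resolution before comparison
        s = F.interpolate(s, size=target.shape[2:],
                          mode="bilinear", align_corners=False)
        loss = loss + side_weight * F.binary_cross_entropy_with_logits(s, target)
    return loss
```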
4. Experiments and Results
4.1. Experimental Settings
4.2. Datasets
- (1) DeepCrack537: Liu et al. [29] built a dataset named DeepCrack537, comprising 537 images with annotated labels; all images and labels are 544 × 384 pixels. DeepCrack537 is split randomly (300 images for training and 237 for testing) to train and evaluate all models.
- (2) CFD: CFD [14] contains 118 images with manually annotated segmentation labels, all 480 × 320 pixels. Most of the cracks in these images are tiny and affected by noise. CFD is split randomly (72 images for training and 46 for testing) to train and evaluate models.
- (3) AED: AED consists of three datasets: AIGLE_RN (38 images), ESAR (15 images), and Dynamique (16 images) [53,54]. These images are unevenly illuminated and affected by noise such as stains, and the cracks in them are mostly tiny. After cropping and resizing, we obtain 253 images of 256 × 256 pixels with annotated ground truth, split randomly (153 for training and 100 for testing). AED is also used for the generalization study.
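The random splits above can be made reproducible with a fixed seed; a minimal sketch (the file names and seed are hypothetical, not the authors' protocol):

```python
import random

def split_dataset(filenames, n_train, seed=0):
    """Deterministically shuffle and split a list of image file names."""
    files = sorted(filenames)            # sort first so the split is reproducible
    random.Random(seed).shuffle(files)
    return files[:n_train], files[n_train:]

# e.g., the 300/237 DeepCrack537 split with placeholder file names
train, test = split_dataset([f"img_{i:03d}.png" for i in range(537)], 300)
```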
4.3. Comparison Methods
- (1) HED: HED [34] is an improved network based on FCN and VGG16 for edge detection, which fuses all side-output features by concatenation.
- (2) RCF: RCF [35] is an edge detection network improved upon HED, in which the features at each stage are fused by addition before generating side outputs.
- (3) U-Net: U-Net [25] is a frequently used baseline network, which concatenates the features at the same layer from the encoder and decoder.
- (4) SegNet: SegNet [24] is a symmetric encoder–decoder structure and another popular baseline network.
- (5) DeepCrack18: DeepCrack18 [31] is a network built on SegNet that fuses multi-scale features by concatenation.
- (6) DeepCrack19: DeepCrack19 [29] employs the encoder structure of SegNet and adds a side network at each layer; all side outputs are fused by concatenation to generate the final prediction. Since guided filtering (GF) and conditional random fields (CRF) are only model-independent post-processing steps, we use DeepCrack-BN as described in [29] for comparison with our method.
- (7) FPHBN: FPHBN [36] improves crack detection performance with a feature pyramid and hierarchical boosting network.
- (8) FFEDN: FFEDN [42] introduces two effective modules into an encoder–decoder model for tiny crack detection.
4.4. Metrics
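The five metrics reported in the tables below (P, R, F, IoU, mIoU) can be computed from a binary prediction and its ground-truth mask. This sketch assumes the common conventions that F is the F1-score and mIoU averages the crack and background IoU; the paper's own definitions in this section govern.

```python
import numpy as np

def crack_metrics(pred, gt, eps=1e-12):
    """pred, gt: boolean arrays of identical shape (crack pixels = True)."""
    tp = np.logical_and(pred, gt).sum()      # crack pixels correctly detected
    fp = np.logical_and(pred, ~gt).sum()     # background predicted as crack
    fn = np.logical_and(~pred, gt).sum()     # missed crack pixels
    tn = np.logical_and(~pred, ~gt).sum()    # background correctly rejected
    p = tp / (tp + fp + eps)
    r = tp / (tp + fn + eps)
    f1 = 2 * p * r / (p + r + eps)
    iou = tp / (tp + fp + fn + eps)          # IoU of the crack class
    iou_bg = tn / (tn + fp + fn + eps)       # IoU of the background class
    return {"P": p, "R": r, "F": f1, "IoU": iou, "mIoU": (iou + iou_bg) / 2}
```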
4.5. Experimental Results
4.5.1. Comparison Experiments
- (1) Results on DeepCrack537. From Figure 8a, we can see that the PR curve of CTCD-Net lies nearest to the top-right corner, meaning CTCD-Net achieves the highest F and the best detection performance. Table 1 shows the quantitative evaluation results on DeepCrack537. CTCD-Net achieves the best results in all five metrics, with a P, R, F, IoU, and mIoU of 86.59%, 83.10%, 84.81%, 73.62%, and 86.14%, respectively, improving on SegNet by 2.04%, 2.05%, 2.04%, and 3.02% in P, R, F, and IoU, respectively. These improvements confirm the effectiveness of the two proposed modules. Figure 9a–d show crack detection results selected from DeepCrack537; CTCD-Net clearly has the strongest extraction ability for tiny cracks and is the least disturbed by background noise.
- (2) Results on CFD. Similarly, Figure 8b shows that CTCD-Net again achieves the best performance. From Table 2, CTCD-Net surpasses all comparison methods in all five metrics, with a P, R, F, IoU, and mIoU of 66.74%, 73.57%, 69.99%, 53.83%, and 76.38%, respectively. Compared with FFEDN (the second-best method), it improves P, R, F, and IoU by 1.09%, 0.80%, 0.96%, and 1.13%, respectively; compared with SegNet and DeepCrack18, it improves F by 5.79% and 2.57%, respectively. These results again confirm the effectiveness and superiority of the two proposed modules. Figure 10a–d show crack detection results selected from CFD. CTCD-Net extracts tiny cracks more accurately and completely than all comparison methods, and its crack boundaries are clearer and more accurate, whereas the cracks extracted by the comparison methods, especially HED, RCF, and DeepCrack19, are much coarser and thicker than the labels.
- (3) Results on AED. Figure 8c shows the PR curves on AED, where CTCD-Net again achieves the best performance. Table 3 shows the quantitative results: CTCD-Net achieves the best P, R, F, IoU, and mIoU of 65.28%, 71.42%, 68.21%, 51.76%, and 75.35%, respectively. Compared with FFEDN (the second-best method), it improves P, R, F, and IoU by 0.44%, 0.62%, 0.52%, and 0.60%, respectively; compared with SegNet and DeepCrack18, it improves F by 3.61% and 2.97%, respectively. The images in AED are unevenly illuminated and affected by noise, and the cracks in them are mostly tiny. From Figure 10e–g, we can see that CTCD-Net extracts tiny cracks far better than the other comparison methods, and its detected cracks are clearer, with more refined and accurate boundaries.
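PR curves like those in Figure 8 are typically obtained by sweeping a binarization threshold over the predicted crack-probability maps and computing precision and recall at each threshold. An illustrative sketch (not the authors' evaluation code; the threshold grid is an assumption):

```python
import numpy as np

def pr_curve(prob, gt, thresholds=None, eps=1e-12):
    """prob: float array in [0, 1]; gt: boolean array of the same shape.
    Returns per-threshold precision/recall arrays and the best F1."""
    if thresholds is None:
        thresholds = np.linspace(0.01, 0.99, 99)
    ps, rs = [], []
    for t in thresholds:
        pred = prob >= t                      # binarize at this threshold
        tp = np.logical_and(pred, gt).sum()
        fp = np.logical_and(pred, ~gt).sum()
        fn = np.logical_and(~pred, gt).sum()
        ps.append(tp / (tp + fp + eps))
        rs.append(tp / (tp + fn + eps))
    ps, rs = np.array(ps), np.array(rs)
    f = 2 * ps * rs / (ps + rs + eps)         # F1 at each threshold
    return ps, rs, f.max()
```

Plotting recall against precision over the sweep gives the curve; a curve closer to the top-right corner corresponds to a higher best F.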
4.5.2. Generalization Experiments
4.5.3. Ablation Experiments
5. Discussion
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
1. Ma, L.; Li, J. SD-GCN: Saliency-based dilated graph convolution network for pavement crack extraction from 3D point clouds. Int. J. Appl. Earth Obs. Geoinf. 2022, 111, 102836.
2. Mohan, A.; Poobal, S. Crack detection using image processing: A critical review and analysis. Alex. Eng. J. 2018, 57, 787–798.
3. Gupta, P.; Dixit, M. Image-based crack detection approaches: A comprehensive survey. Multimed. Tools Appl. 2022, 81, 40181–40229.
4. Li, Q.; Zou, Q.; Zhang, D.; Mao, Q. FoSA: F* seed-growing approach for crack-line detection from pavement images. Image Vis. Comput. 2011, 29, 861–872.
5. Dorafshan, S.; Thomas, R.J.; Maguire, M. Comparison of deep convolutional neural networks and edge detectors for image-based crack detection in concrete. Constr. Build. Mater. 2018, 186, 1031–1045.
6. Zhao, H.; Qin, G.; Wang, X. Improvement of Canny algorithm based on pavement edge detection. In Proceedings of the 2010 3rd International Congress on Image and Signal Processing, Yantai, China, 16–18 October 2010; pp. 964–967.
7. Kamaliardakani, M.; Sun, L.; Ardakani, M.K. Sealed-crack detection algorithm using heuristic thresholding approach. J. Comput. Civ. Eng. 2016, 30, 04014110.
8. Liu, F.; Xu, G.; Yang, Y.; Niu, X.; Pan, Y. Novel approach to pavement cracking automatic detection based on segment extending. In Proceedings of the 2008 International Symposium on Knowledge Acquisition and Modeling, Wuhan, China, 21–22 December 2008; pp. 610–614.
9. Wang, J.; Liu, F.; Yang, W.; Xu, G.; Tao, Z. Pavement crack detection using attention U-Net with multiple sources. In Proceedings of the Chinese Conference on Pattern Recognition and Computer Vision (PRCV), Nanjing, China, 16–18 October 2020; pp. 664–672.
10. Zhou, J.; Huang, P.S.; Chiang, F.-P. Wavelet-based pavement distress detection and evaluation. Opt. Eng. 2006, 45, 027007.
11. Hu, Y.; Zhao, C.-X. A novel LBP based methods for pavement crack detection. J. Pattern Recognit. Res. 2010, 5, 140–147.
12. Kapela, R.; Śniatała, P.; Turkot, A.; Rybarczyk, A.; Pożarycki, A.; Rydzewski, P.; Wyczałek, M.; Błoch, A. Asphalt surfaced pavement cracks detection based on histograms of oriented gradients. In Proceedings of the 2015 22nd International Conference Mixed Design of Integrated Circuits & Systems (MIXDES), Torun, Poland, 25–27 June 2015; pp. 579–584.
13. Li, G.; Zhao, X.; Du, K.; Ru, F.; Zhang, Y. Recognition and evaluation of bridge cracks with modified active contour model and greedy search-based support vector machine. Autom. Constr. 2017, 78, 51–61.
14. Shi, Y.; Cui, L.; Qi, Z.; Meng, F.; Chen, Z. Automatic road crack detection using random structured forests. IEEE Trans. Intell. Transp. Syst. 2016, 17, 3434–3445.
15. Chen, Y.; Weng, Q.; Tang, L.; Wang, L.; Xing, H.; Liu, Q. Developing an intelligent cloud attention network to support global urban green spaces mapping. ISPRS J. Photogramm. Remote Sens. 2023, 198, 197–209.
16. Chen, Z.; Wang, C.; Li, J.; Fan, W.; Du, J.; Zhong, B. Adaboost-like End-to-End multiple lightweight U-nets for road extraction from optical remote sensing images. Int. J. Appl. Earth Obs. Geoinf. 2021, 100, 102341.
17. Dung, C.V.; Sekiya, H.; Hirano, S.; Okatani, T.; Miki, C. A vision-based method for crack detection in gusset plate welded joints of steel bridges using deep convolutional neural networks. Autom. Constr. 2019, 102, 217–229.
18. Eisenbach, M.; Stricker, R.; Seichter, D.; Amende, K.; Debes, K.; Sesselmann, M.; Ebersbach, D.; Stoeckert, U.; Gross, H.-M. How to get pavement distress detection ready for deep learning? A systematic approach. In Proceedings of the 2017 International Joint Conference on Neural Networks (IJCNN), Anchorage, AK, USA, 14–19 May 2017; pp. 2039–2047.
19. Zhang, L.; Yang, F.; Zhang, Y.D.; Zhu, Y.J. Road crack detection using deep convolutional neural network. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 3708–3712.
20. Deng, J.; Lu, Y.; Lee, V.C.S. Concrete crack detection with handwriting script interferences using faster region-based convolutional neural network. Comput. Aided Civ. Infrastruct. Eng. 2020, 35, 373–388.
21. Du, Y.; Pan, N.; Xu, Z.; Deng, F.; Shen, Y.; Kang, H. Pavement distress detection and classification based on YOLO network. Int. J. Pavement Eng. 2021, 22, 1659–1672.
22. Zhang, H.; Song, Y.; Chen, Y.; Zhong, H.; Liu, L.; Wang, Y.; Akilan, T.; Wu, Q.J. MRSDI-CNN: Multi-model rail surface defect inspection system based on convolutional neural networks. IEEE Trans. Intell. Transp. Syst. 2022, 23, 11162–11177.
23. Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 3431–3440.
24. Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495.
25. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; pp. 234–241.
26. Huyan, J.; Li, W.; Tighe, S.; Xu, Z.; Zhai, J. CrackU-Net: A novel deep convolutional neural network for pixelwise pavement crack detection. Struct. Control Health Monit. 2020, 27, e2551.
27. Ren, Y.; Huang, J.; Hong, Z.; Lu, W.; Yin, J.; Zou, L.; Shen, X. Image-based concrete crack detection in tunnels using deep fully convolutional networks. Constr. Build. Mater. 2020, 234, 117367.
28. Yang, X.; Li, H.; Yu, Y.; Luo, X.; Huang, T.; Yang, X. Automatic pixel-level crack detection and measurement using fully convolutional network. Comput. Aided Civ. Infrastruct. Eng. 2018, 33, 1090–1109.
29. Liu, Y.; Yao, J.; Lu, X.; Xie, R.; Li, L. DeepCrack: A deep hierarchical feature learning architecture for crack segmentation. Neurocomputing 2019, 338, 139–153.
30. Sun, X.; Xie, Y.; Jiang, L.; Cao, Y.; Liu, B. DMA-Net: DeepLab with multi-scale attention for pavement crack segmentation. IEEE Trans. Intell. Transp. Syst. 2022, 23, 18392–18403.
31. Zou, Q.; Zhang, Z.; Li, Q.; Qi, X.; Wang, Q.; Wang, S. DeepCrack: Learning hierarchical convolutional features for crack detection. IEEE Trans. Image Process. 2018, 28, 1498–1512.
32. Liu, J.; Zhao, Z.; Lv, C.; Ding, Y.; Chang, H.; Xie, Q. An image enhancement algorithm to improve road tunnel crack transfer detection. Constr. Build. Mater. 2022, 348, 128583.
33. Ma, D.; Fang, H.; Wang, N.; Zhang, C.; Dong, J.; Hu, H. Automatic detection and counting system for pavement cracks based on PCGAN and YOLO-MF. IEEE Trans. Intell. Transp. Syst. 2022, 23, 22166–22178.
34. Xie, S.; Tu, Z. Holistically-nested edge detection. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 11–18 December 2015; pp. 1395–1403.
35. Liu, Y.; Cheng, M.-M.; Hu, X.; Wang, K.; Bai, X. Richer convolutional features for edge detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 3000–3009.
36. Yang, F.; Zhang, L.; Yu, S.; Prokhorov, D.; Mei, X.; Ling, H. Feature pyramid and hierarchical boosting network for pavement crack detection. IEEE Trans. Intell. Transp. Syst. 2020, 21, 1525–1535.
37. Fan, Z.; Li, C.; Chen, Y.; Wei, J.; Loprencipe, G.; Chen, X.; Di Mascio, P. Automatic crack detection on road pavements using encoder-decoder architecture. Materials 2020, 13, 2960.
38. Qu, Z.; Chen, W.; Wang, S.-Y.; Yi, T.-M.; Liu, L. A crack detection algorithm for concrete pavement based on attention mechanism and multi-features fusion. IEEE Trans. Intell. Transp. Syst. 2021, 23, 11710–11719.
39. Xu, Z.; Guan, H.; Kang, J.; Lei, X.; Ma, L.; Yu, Y.; Chen, Y.; Li, J. Pavement crack detection from CCD images with a locally enhanced transformer network. Int. J. Appl. Earth Obs. Geoinf. 2022, 110, 102825.
40. Qu, Z.; Wang, C.-Y.; Wang, S.-Y.; Ju, F.-R. A method of hierarchical feature fusion and connected attention architecture for pavement crack detection. IEEE Trans. Intell. Transp. Syst. 2022, 23, 16038–16047.
41. Xiang, X.; Zhang, Y.; El Saddik, A. Pavement crack detection network based on pyramid structure and attention mechanism. IET Image Process. 2020, 14, 1580–1586.
42. Liu, C.; Zhu, C.; Xia, X.; Zhao, J.; Long, H. FFEDN: Feature fusion encoder decoder network for crack detection. IEEE Trans. Intell. Transp. Syst. 2022, 23, 15546–15557.
43. Guo, J.-M.; Markoni, H.; Lee, J.-D. BARNet: Boundary aware refinement network for crack detection. IEEE Trans. Intell. Transp. Syst. 2021, 23, 7343–7358.
44. Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 7132–7141.
45. Woo, S.; Park, J.; Lee, J.-Y.; Kweon, I.S. CBAM: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19.
46. Park, J.; Woo, S.; Lee, J.-Y.; Kweon, I.S. BAM: Bottleneck attention module. arXiv 2018, arXiv:1807.06514.
47. Wang, X.; Girshick, R.; Gupta, A.; He, K. Non-local neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 7794–7803.
48. Cao, Y.; Xu, J.; Lin, S.; Wei, F.; Hu, H. GCNet: Non-local networks meet squeeze-excitation networks and beyond. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, Seoul, Republic of Korea, 27 October–2 November 2019.
49. Wang, T.; Borji, A.; Zhang, L.; Zhang, P.; Lu, H. A stagewise refinement model for detecting salient objects in images. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 4019–4028.
50. Deng, Z.; Hu, X.; Zhu, L.; Xu, X.; Qin, J.; Han, G.; Heng, P.-A. R3Net: Recurrent residual refinement network for saliency detection. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, Stockholm, Sweden, 13–19 July 2018; pp. 684–690.
51. Qin, X.; Zhang, Z.; Huang, C.; Gao, C.; Dehghan, M.; Jagersand, M. BASNet: Boundary-aware salient object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 7479–7489.
52. Peng, C.; Zhang, X.; Yu, G.; Luo, G.; Sun, J. Large kernel matters—Improve semantic segmentation by global convolutional network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 4353–4361.
53. Amhaz, R.; Chambon, S.; Idier, J.; Baltazart, V. Automatic crack detection on two-dimensional pavement images: An algorithm based on minimal path selection. IEEE Trans. Intell. Transp. Syst. 2016, 17, 2718–2729.
54. Chambon, S.; Moliard, J.-M. Automatic road pavement assessment with image processing: Review and comparison. Int. J. Geophys. 2011, 2011, 989354.
Table 1. Quantitative comparison results on DeepCrack537 (%).

| Methods | P | R | F | IoU | mIoU |
|---|---|---|---|---|---|
| HED | 80.38 | 80.44 | 80.41 | 67.24 | 82.74 |
| RCF | 79.69 | 79.30 | 79.50 | 65.97 | 82.07 |
| U-Net | 84.52 | 80.22 | 82.32 | 69.95 | 84.20 |
| SegNet | 84.55 | 81.05 | 82.77 | 70.60 | 84.54 |
| DeepCrack18 | 85.53 | 82.68 | 84.08 | 72.54 | 85.57 |
| DeepCrack19 | 81.31 | 80.46 | 80.88 | 67.90 | 83.10 |
| FPHBN | 84.70 | 81.28 | 82.96 | 70.88 | 84.69 |
| FFEDN | 86.42 | 82.44 | 84.38 | 72.99 | 85.81 |
| Ours | 86.59 | 83.10 | 84.81 | 73.62 | 86.14 |
Table 2. Quantitative comparison results on CFD (%).

| Methods | P | R | F | IoU | mIoU |
|---|---|---|---|---|---|
| HED | 56.34 | 66.10 | 60.83 | 43.71 | 71.13 |
| RCF | 56.66 | 67.37 | 61.55 | 44.46 | 71.52 |
| U-Net | 64.17 | 69.64 | 66.80 | 50.14 | 74.48 |
| SegNet | 60.56 | 68.30 | 64.20 | 47.28 | 72.99 |
| DeepCrack18 | 62.30 | 73.45 | 67.42 | 50.85 | 74.82 |
| DeepCrack19 | 58.95 | 68.77 | 63.48 | 46.50 | 72.58 |
| FPHBN | 57.93 | 71.13 | 63.85 | 46.90 | 72.77 |
| FFEDN | 65.65 | 72.77 | 69.03 | 52.70 | 75.80 |
| Ours | 66.74 | 73.57 | 69.99 | 53.83 | 76.38 |
Table 3. Quantitative comparison results on AED (%).

| Methods | P | R | F | IoU | mIoU |
|---|---|---|---|---|---|
| HED | 53.79 | 66.24 | 59.37 | 42.22 | 70.39 |
| RCF | 53.62 | 68.28 | 60.07 | 42.93 | 70.75 |
| U-Net | 64.56 | 68.02 | 66.25 | 49.53 | 74.22 |
| SegNet | 60.22 | 69.66 | 64.60 | 47.71 | 73.25 |
| DeepCrack18 | 61.83 | 69.04 | 65.24 | 48.41 | 73.62 |
| DeepCrack19 | 52.87 | 65.40 | 58.47 | 41.31 | 69.92 |
| FPHBN | 59.33 | 68.46 | 63.57 | 46.59 | 72.68 |
| FFEDN | 64.84 | 70.80 | 67.69 | 51.16 | 75.05 |
| Ours | 65.28 | 71.42 | 68.21 | 51.76 | 75.35 |
Table 4. Generalization results (%).

| Methods | P | R | F | IoU | mIoU |
|---|---|---|---|---|---|
| HED | 32.96 | 43.17 | 37.38 | 22.98 | 60.36 |
| RCF | 34.59 | 46.25 | 39.58 | 24.67 | 61.23 |
| U-Net | 43.24 | 50.77 | 46.70 | 30.47 | 64.32 |
| SegNet | 40.51 | 43.45 | 41.93 | 26.52 | 62.32 |
| DeepCrack18 | 43.04 | 53.19 | 47.58 | 31.22 | 64.69 |
| DeepCrack19 | 34.13 | 50.10 | 40.60 | 25.47 | 61.58 |
| FPHBN | 35.13 | 55.13 | 42.92 | 27.32 | 62.51 |
| FFEDN | 48.39 | 50.59 | 49.47 | 32.86 | 65.62 |
| Ours | 49.46 | 59.35 | 53.96 | 36.95 | 67.68 |
Table 5. Ablation results on CFD (%).

| Methods | P | R | F | IoU | mIoU |
|---|---|---|---|---|---|
| SegNet | 60.56 | 68.30 | 64.20 | 47.28 | 72.99 |
| SegNet + Side | 62.91 | 72.19 | 67.23 | 50.64 | 74.72 |
| SegNet + BR | 63.09 | 73.38 | 67.85 | 51.34 | 75.08 |
| SegNet + ACIT | 65.15 | 72.81 | 68.77 | 52.40 | 75.64 |
| CTCD-Net | 66.74 | 73.57 | 69.99 | 53.83 | 76.38 |
Share and Cite
Zhang, C.; Chen, Y.; Tang, L.; Chu, X.; Li, C. CTCD-Net: A Cross-Layer Transmission Network for Tiny Road Crack Detection. Remote Sens. 2023, 15, 2185. https://doi.org/10.3390/rs15082185