Article

Detection of Crack Sealant in the Pretreatment Process of Hot In-Place Recycling of Asphalt Pavement via Deep Learning Method

Kai Zhao, Tianzhen Liu, Xu Xia and Yongli Zhao
1 School of Transportation, Southeast University, Nanjing 210096, China
2 Jinan City Planning and Design Institute, Jinan 250101, China
* Author to whom correspondence should be addressed.
Sensors 2025, 25(11), 3373; https://doi.org/10.3390/s25113373
Submission received: 10 April 2025 / Revised: 13 May 2025 / Accepted: 26 May 2025 / Published: 27 May 2025

Abstract

Crack sealant is commonly used to fill pavement cracks and improve the Pavement Condition Index (PCI). However, during asphalt pavement hot in-place recycling (HIR), irregular shapes and random distribution of crack sealants can cause issues like agglomeration and ignition. To address these problems, it is necessary to mill large areas containing crack sealant or pre-mark locations for removal after heating. Currently, detecting and recording crack sealant locations, types, and distributions is conducted manually, which significantly reduces efficiency. While deep learning-based object detection has been widely applied to distress detection, crack sealants present unique challenges. They often appear as wide black patches that overlap with cracks and potholes, and complex background noise further complicates detection. Additionally, no dataset specifically for crack sealant detection currently exists. To overcome these challenges, this paper presents a specialized dataset created from 1983 pavement images. A deep learning detection algorithm named YOLO-CS (You Only Look Once Crack Sealant) is proposed. This algorithm integrates the RepViT (Representation Learning with Visual Tokens) network to reduce computational complexity while capturing the global context of images. Furthermore, the DRBNCSPELAN (Dilated Reparam Block with Cross-Stage Partial and Efficient Layer Aggregation Networks) module is introduced to ensure efficient information flow, and a lightweight shared convolution (LSC) detection head is developed. The results demonstrate that YOLO-CS outperforms other algorithms, achieving a precision of 88.4%, a recall of 84.2%, and an mAP (mean average precision) of 92.1%. Moreover, YOLO-CS significantly reduces parameters and memory consumption. Integrating Artificial Intelligence-based algorithms into HIR significantly enhances construction efficiency.

1. Introduction

In the northwest region of China, asphalt pavement cracks are frequent and diverse [1]. Historically, due to economic constraints, crack filling has been a common treatment method, as shown in Figure 1. Crack sealant, a material used for repairing cracks, is typically heated to the high temperature recommended by the manufacturer (usually around 193 °C) until it becomes liquid, ensuring good adhesion and durability at the crack edges [2,3]. By filling the cracks, the pavement condition index (PCI) can be significantly improved in a short time. However, the lifespan of this treatment is generally short. Currently, the widespread use of hot in-place recycling (HIR) technology for recycling the surface layer of asphalt mixtures presents significant challenges when dealing with crack sealant [4,5,6]. During the heating of old pavements, the temperature can vary substantially because the heating machine heats the surface unevenly. If the heating temperature is insufficient, the crack sealant does not revert to a liquid state but instead forms agglomerates [7]. For small areas of crack sealant, these agglomerates can be manually removed after heating, but this significantly slows down the construction process. However, if large areas of crack sealant are not pre-milled, repeated heating by the machine can cause the sealant (asphalt-based above 204 °C, silicone-based above 300 °C) to ignite, posing safety hazards. Therefore, accurately and efficiently locating the position, type, and distribution of crack sealant during the pretreatment of the original pavement in HIR is crucial.
Traditional manual survey methods are still commonly used in HIR engineering to record the location and type of crack sealant through full-line inspections. Although accurate, these methods are subjective, time-consuming, and labor-intensive [8]. The results can vary significantly among technicians, and the process may cause traffic congestion and pose risks to survey personnel. To overcome these challenges, modern equipment is essential for capturing high-resolution pavement images. Line-scan cameras, known for their high resolution, stable imaging quality, and low cost, are widely used in road inspection. For example, Xiong et al. used a pavement inspection vehicle equipped with two line-scan cameras, each capturing a line image of 2048 × 1 pixels [9]. To address traffic congestion and the difficulty of observing long longitudinal cracks, Wang et al. [10] used UAV (Unmanned Aerial Vehicle) oblique photography to collect high-resolution pavement images and perform 3D reconstruction. Similarly, Zhang and Zhu used the UACV (Unmanned Aerial Camera Vehicle) method to construct a pavement distress database for non-destructive testing [11,12]. However, after obtaining pavement images with these methods, manually classifying and locating the crack sealant on a computer remains costly and inefficient.
In the field of computer vision, object detection identifies objects of interest in images, determining their categories and locations [13,14]. Applying this technology to detect crack sealant on asphalt pavements is therefore attractive. Traditional object detection algorithms, such as thresholding, edge detection, region growing, and clustering, rely on object features like color, shape, and texture [13]. However, their accuracy can be compromised when target shapes are complex, occluded, or set against strong background noise. Convolutional Neural Networks (CNNs) have significantly advanced deep learning applications in object detection by learning the mapping between inputs and outputs without explicit mathematical equations [15]. Deep learning-based object detection algorithms are categorized into two-stage and one-stage methods [16,17]. Two-stage methods involve feature extraction, region proposal (RP) generation, and classification/location regression. Representative algorithms include R-CNN, SPP-Net, Fast R-CNN, Faster R-CNN, and R-FCN. Matarneh et al. [18] introduced a method for asphalt pavement crack classification using the DenseNet201 model and the GWO optimizer, achieving 98.73% accuracy and good robustness. Liang et al. [19] proposed a detection method based on Faster R-CNN to automatically identify and locate pavement issues such as cracks, potholes, and asphalt spills. Although these region-based two-stage models are highly accurate, their detection speed is generally slow, making them unsuitable for fast and lightweight pavement detection [20]. One-stage methods skip the RP step and directly extract features in the network to predict the classification and location of objects. Notable algorithms include OverFeat, YOLO, SSD, and RetinaNet. YOLOv1, proposed by Redmon in 2016 [21], significantly improved detection speed by transforming the object detection task into a regression problem. YOLO has since been updated to version 10 and is widely used in pavement detection [22,23]. Several studies have compared different YOLO series models to identify the most suitable ones for pavement defect detection. For instance, Yao et al. improved YOLOv5 by incorporating the Space and Channel Squeeze-and-Excitation (SCSE) module and the Convolutional Block Attention Module (CBAM), considering different insertion positions and methods [24]. Comparative tests achieved 87% mAP@0.5:0.95 at a detection speed of 13.15 ms per image. Zhu et al. trained UAV datasets using Faster R-CNN, YOLOv3, and YOLOv4 models, finding that YOLOv3 performed best with an mAP of 56.6% [12]. Liu et al. proposed YOLO-SST based on YOLOv5, introducing the Shuffle attention mechanism and an additional detection layer. Ablation experiments and comparative tests showed that YOLO-SST increased accuracy by 1.2% and mAP by 3.1% [25]. Researchers have conducted extensive studies on rutting, potholes, cracks, and ground-penetrating radar images, establishing high-quality datasets and improving detection accuracy and speed through deep learning algorithms.
In summary, detecting and identifying crack sealant involves several challenges: (1) Crack sealant is typically a wider black patch, while cracks are usually narrower [26,27]. This difference affects the accuracy and robustness of detection algorithms. (2) Crack sealant often overlaps with cracks and potholes, and background noise further complicates pavement detection. (3) No researchers have established a high-quality dataset specifically for crack sealant [8,28]. Therefore, existing pavement distress detection algorithms are not suitable for crack sealant detection, leading to high rates of missed and false detections, slow detection speeds, and large memory usage. These issues make it difficult to meet the requirements for lightweight detection and mobile deployment in engineering applications. This study aims to address these challenges and achieve high-precision crack sealant detection during the HIR pre-treatment process. The approach involves several steps: First, high-resolution full-scale pavement images are collected using a detection vehicle equipped with two line-scanning cameras. Images containing crack sealant are then selected and cut to create a dataset. Next, based on the YOLOv8s algorithm, lightweight improvements are made using RepViT, DRBNCSPELAN, and LSC detection heads. Finally, the proposed improvements are verified through ablation and comparative tests, with the results visualized and analyzed.

2. Methodology

2.1. YOLOv8s Network Model

The object detection benchmark model selected in this study is YOLOv8, developed by the Ultralytics team, and known for its cutting-edge and advanced features [22,29]. YOLOv8 introduces new functions and improvements that enhance performance and flexibility. Compared to other one-stage detection models and previous versions of the YOLO series, YOLOv8 demonstrates superior performance in detecting pavement distress [30,31,32,33]. YOLOv8 offers five network structure models: YOLOv8n, YOLOv8s, YOLOv8m, YOLOv8l, and YOLOv8x, as shown in Table 1. Each is tailored to different deployment scenarios, ranging from resource-constrained embedded devices to high-performance GPU servers. This paper aims to improve accuracy while maintaining high processing speed, making YOLOv8s the chosen benchmark model for further enhancement.
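For readers reproducing the benchmark, the snippet below is a minimal sketch of loading the pretrained YOLOv8s model with the Ultralytics Python package; the image file name is an illustrative placeholder, not a file from this study.

```python
# Minimal sketch: load the YOLOv8s benchmark and run inference on one pavement tile.
# "pavement_tile.jpg" is an assumed placeholder file name.
from ultralytics import YOLO

model = YOLO("yolov8s.pt")              # pretrained small variant used as the benchmark
results = model("pavement_tile.jpg")    # returns a list of Results objects
results[0].show()                       # display predicted boxes, classes, and confidences
```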
The network structure of YOLOv8s, shown in Figure 2, consists of four main parts: Input, Backbone, Neck, and Head.
The Backbone is the network component responsible for extracting image features, transforming the original input image into a multi-layer feature map for subsequent target detection tasks. YOLOv8 employs the C3 module and ELAN design principles to create a C2f structure, which ensures lightweight operation while capturing richer gradient flow information [34,35]. The Neck component handles multi-scale feature fusion of the feature maps and passes these features to the prediction layer. YOLOv8 uses PAN-FPN, mimicking the Backbone in the PANet for the Neck part [36,37]. This involves organizing the FPN (feature pyramid network) with both down-sampling and up-sampling processes [38]. There are two cross-layer fusion connections between the up-sampling and down-sampling branches. The Head performs the final regression prediction. YOLOv8 employs a decoupled-head structure for separate regression learning of categories and bounding boxes, adopting the Anchor-Free concept [39].

2.2. Overview of YOLO-CS

This paper aims to achieve lightweight detection of sealant in the pretreatment process of HIR. To this end, a series of innovative improvements have been made to the YOLOv8s model as shown in Figure 3. Firstly, the RepViT backbone framework has been introduced, which enhances detection accuracy and reduces model complexity, thereby effectively decreasing computation time. RepViT enables the model to better capture feature information in the image, thus improving detection accuracy and achieving higher efficiency without additional computational burden.
Next, to further optimize model performance, the fusion of the Dilated Reparam Block (DRB) and Generalized ELAN has been employed to form the DRBNCSPELAN module. This module aims to enhance detection task performance by combining the benefits of DRB and Generalized ELAN. DRB expands the model’s receptive field through dilated convolution, helping to better capture long-range dependencies in the image, thereby improving detection accuracy. Generalized ELAN effectively handles fuzzy boundary conditions in specific scenarios, further enhancing the model’s robustness and generalization ability. By combining these two elements, the model better adapts to detection tasks in various scenarios, achieving superior performance.
Finally, to address the issue of the YOLOv8s detection head’s high computational demand, the LSC detection head has been developed. The LSC detection head uses a shared convolution design, which minimizes computational load and maintains accuracy by sharing the convolution kernel between multiple detection layers. This shared convolution layer extracts common feature information and shares it across multiple detection layers, reducing redundant calculations and lowering the model’s computational overhead. This design improves the model’s inference speed and efficiency while maintaining high detection accuracy, making the model more suitable for practical applications. Through these improvements and optimizations, YOLO-CS has significantly enhanced performance in the lightweight detection task of crack sealant, providing a more reliable and efficient solution for project applications.

2.3. RepViT Backbone Framework

While YOLOv8 demonstrates strong performance in accuracy and speed, its computational complexity and parameter quantity may result in slow reasoning speed and high power consumption on resource-constrained devices such as mobile or embedded devices. Additionally, YOLOv8s relies on the Darknet-53 backbone framework, which includes 52 layers of convolution plus an output layer, for detecting asphalt pavement sealant. However, this high structural complexity can limit its flexibility in deployment. Furthermore, YOLOv8’s universal design may not be as effective as specially designed lightweight models when handling specific tasks. Particularly with high-resolution input images, YOLOv8 might struggle to capture small and intricate sealant features. To address these limitations, this paper introduces the RepViT network architecture in Figure 4. Drawing inspiration from the Transformer and ViT (Vision Transformer) concepts, RepViT leverages self-attention mechanisms and global feature modeling to capture global context information in road images, thereby enhancing detection accuracy and robustness [40].
RepViT efficiently extracts both local and global features through a combination of token mixer and channel mixer. The Token Mixer employs deep convolution and point-by-point convolution for local feature extraction, enhancing the representation of local features. Meanwhile, the Channel Mixer facilitates the mixing and enhancement of different channels through point-by-point convolution and residual connection, ensuring information flow across scales and channels to capture more details and context information. Moreover, the modular design of RepViTBlock enables the model to dynamically adjust the number of layers and channels based on specific task requirements and resource constraints. This approach not only enhances the model’s adaptability but also optimizes its performance for diverse needs. Specifically, when the stride is set to 2, the model incorporates a depthwise separable convolution, Squeeze-and-Excite module, and pointwise convolution. Conversely, when the stride is 1, the RepVGGDW module is utilized for deep convolution operations along with the Squeeze-and-Excite module. This flexible design allows for the adjustment of model complexity and computational requirements while maintaining high performance.
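As an illustration of the stride-1 token mixer/channel mixer structure described above, the following PyTorch sketch shows a simplified RepViT-style block; the channel count, expansion ratio, and omission of the Squeeze-and-Excite branch are simplifying assumptions rather than the exact configuration used in YOLO-CS.

```python
# Simplified RepViT-style block: a depthwise 3x3 "token mixer" with residual,
# followed by a pointwise "channel mixer". Sizes are illustrative assumptions.
import torch
import torch.nn as nn

class RepViTBlockSketch(nn.Module):
    def __init__(self, dim: int, expansion: int = 2):
        super().__init__()
        # Token mixer: depthwise conv mixes spatial (local) information per channel.
        self.token_mixer = nn.Sequential(
            nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim, bias=False),
            nn.BatchNorm2d(dim),
        )
        # Channel mixer: pointwise convs mix information across channels.
        self.channel_mixer = nn.Sequential(
            nn.Conv2d(dim, dim * expansion, kernel_size=1, bias=False),
            nn.BatchNorm2d(dim * expansion),
            nn.GELU(),
            nn.Conv2d(dim * expansion, dim, kernel_size=1, bias=False),
            nn.BatchNorm2d(dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x + self.token_mixer(x)     # residual around spatial mixing
        x = x + self.channel_mixer(x)   # residual around channel mixing
        return x

x = torch.randn(1, 64, 80, 80)
print(RepViTBlockSketch(64)(x).shape)   # torch.Size([1, 64, 80, 80])
```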

2.4. DRBNCSPELAN Feature Fusion Module

While the C2f module in YOLOv8s effectively leverages both detailed and semantic information for enhanced accuracy and robustness in pavement distress detection, its feature fusion operation escalates computational complexity and parameter count. Consequently, this elevates the time cost associated with model training and reasoning. To address this challenge, this paper introduces the DRBNCSPELAN module, replacing the C2f module, utilizing the Dilated Reparam Block and Generalized ELAN, as depicted in Figure 5.
CSPNet is a network built upon stage-level gradient path-based architecture [41]. It divides the input into two segments via the conversion layer, processes them through any computational block, and subsequently reunites the branches through concatenation before passing them through the conversion layer again. Meanwhile, ELAN, a gradient path-oriented network, enhances the network’s gradient length by employing a stack structure within its blocks [42]. Through stacked convolution layers, each layer’s output is combined with the input of the subsequent layer for convolution processing. GELAN, inspired by CSPNet’s segmentation and reassembly concept and ELAN’s hierarchical convolution processing, integrates these elements into its design, allowing flexible utilization of computational blocks as shown in Figure 6 [23]. By facilitating efficient information flow and optimizing parameter utilization, GELAN reduces computing resource requirements while potentially enhancing detection accuracy and model generalization.
The DRB (Dilated Reparam Block) aims to enhance model performance by combining large-kernel convolutions with dilated small-kernel convolutions as shown in Figure 7 [43].
It captures fine-scale features via parallel small-core convolutional layers and sparse features via dilated convolutional layers, thereby enriching feature extraction efficiency. During training, these parallel branch convolution layers are each batch normalized (BN), and their outputs are aggregated. In the inference stage, structural reparameterization combines these convolutional layers and batch normalization layers into an equivalent large kernel convolutional layer, reducing computational overhead. A notable innovation of this module is converting dilated convolutional layers into non-expansive sparse large kernel convolutional layers. Specifically, by introducing zero entries into the convolution kernel, expanded convolution layers can be transformed into sparse non-expanded large kernel convolution layers. This approach preserves the effectiveness of original dilated convolutions while simplifying calculations during inference, achieved through transpose convolution. The integration of BN layers and dilated convolution layers enables the entire DRB to be converted into a single non-expanded large kernel convolution layer during inference, significantly boosting inference speed while maintaining efficient feature extraction.
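The key reparameterization step, converting a dilated small-kernel branch into an equivalent non-dilated sparse large-kernel convolution, can be verified numerically. The sketch below is illustrative only and omits the BN merging and multi-branch summation of the full DRB.

```python
# Numerical check: a dilated small-kernel conv equals a non-dilated conv whose larger
# kernel has zeros inserted between the original weights.
import torch
import torch.nn.functional as F

k, r = 3, 2                              # small kernel size and dilation rate
w_small = torch.randn(1, 1, k, k)        # dilated-branch weights
K = (k - 1) * r + 1                      # equivalent non-dilated kernel size (here 5)

w_large = torch.zeros(1, 1, K, K)
w_large[:, :, ::r, ::r] = w_small        # place original weights at dilated positions

x = torch.randn(1, 1, 32, 32)
y_dilated = F.conv2d(x, w_small, dilation=r, padding=r * (k - 1) // 2)
y_sparse  = F.conv2d(x, w_large, padding=(K - 1) // 2)
print(torch.allclose(y_dilated, y_sparse, atol=1e-6))  # True: the two branches match
```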

2.5. Lightweight Shared Convolutional Detection Head

The Head of YOLOv8s splits into two branches, each comprising two CBS convolution blocks followed by a Conv2d layer, as shown in Figure 8. Classification loss and Bbox loss are then computed separately. YOLOv8s adopts the decoupled-head structure to separate the classification and regression heads. Moreover, following the Distribution Focal Loss (DFL) concept, the regression head’s channel count becomes 4 × reg_max (reg_max defaults to 16). Despite enhancing detection accuracy, the independent convolution operations applied to each detection layer’s feature map lead to redundant computational overhead. This approach underutilizes shared information within the feature maps, thereby increasing the model’s computational cost. Additionally, during inference, the need to concatenate multiple convolution outputs and perform intricate post-processing increases computational complexity and memory usage.
To enhance detection efficiency, this paper proposes a Lightweight Shared Convolutional (LSC) Detection Head, outlined in Figure 9.
This structure enhances model performance and efficiency by combining shared and independent convolutional layers with the Distribution Focal Loss (DFL) module. The core of the design is a shared convolutional stage, composed of two 3 × 3 convolutional layers that extract general feature information, followed by independent 1 × 1 convolutional layers for the regression and classification outputs. GroupNorm, proven effective in enhancing detection head performance, replaces the BN normalization layer in Conv to counteract the weakening of feature extraction capability in lightweight settings [44]. This design allows common features to be extracted once and shared among multiple detection layers, thereby reducing redundant calculations, lowering computational overhead, and preserving high detection accuracy. The DFL module further improves bounding box regression accuracy, enabling more precise object localization in practical applications.
Beyond structural optimization, the LSC boasts significant advantages in forward propagation. The presence of shared convolution layers streamlines the model’s forward propagation path, reducing concatenation and post-processing steps, and thereby boosting reasoning speed and efficiency. This streamlined forward propagation path not only accelerates model inference but also diminishes computing resource requirements, rendering the model more suitable for real-time scenarios. To address inconsistent target scales across detectors, the Scale layer is employed alongside shared convolution. In summary, LSC not only boosts inference speed and efficiency while maintaining detection accuracy but also excels in resource-constrained environments, promising broad application potential.
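To make the shared-convolution idea concrete, the sketch below shows a simplified head in PyTorch in which one GroupNorm-equipped convolution stack is reused across all feature levels, with a learnable per-level Scale applied to the regression branch; the channel count, reg_max, and number of classes are illustrative assumptions rather than the exact LSC configuration.

```python
# Simplified shared-convolution detection head: one feature-extraction stack is reused
# for every pyramid level; 1x1 heads produce class and DFL-style box outputs.
import torch
import torch.nn as nn

class LSCHeadSketch(nn.Module):
    def __init__(self, ch: int = 256, num_classes: int = 3, reg_max: int = 16, levels: int = 3):
        super().__init__()
        # Shared feature extractor: the same weights process every detection level.
        self.shared = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1, bias=False), nn.GroupNorm(16, ch), nn.SiLU(),
            nn.Conv2d(ch, ch, 3, padding=1, bias=False), nn.GroupNorm(16, ch), nn.SiLU(),
        )
        self.cls_head = nn.Conv2d(ch, num_classes, 1)   # classification branch
        self.reg_head = nn.Conv2d(ch, 4 * reg_max, 1)   # DFL-style box regression branch
        self.scales = nn.Parameter(torch.ones(levels))  # per-level rescaling of box outputs

    def forward(self, feats):
        outputs = []
        for i, f in enumerate(feats):                   # feats: list of P3/P4/P5 maps
            f = self.shared(f)
            outputs.append((self.cls_head(f), self.reg_head(f) * self.scales[i]))
        return outputs

feats = [torch.randn(1, 256, s, s) for s in (80, 40, 20)]
outs = LSCHeadSketch()(feats)
print([tuple(cls.shape) for cls, _ in outs])  # [(1, 3, 80, 80), (1, 3, 40, 40), (1, 3, 20, 20)]
```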

3. Dataset Construction and Experiment Setup

3.1. Dataset Construction

To verify the effectiveness of the proposed YOLO-CS in accurately locating and classifying pavement crack sealant, a high-quality dataset is essential. Current public datasets primarily focus on various pavement cracks, pits, and repairs, and often lack sufficient clarity. To address this, a pavement detection vehicle (LUPRES-T) equipped with two line-scan cameras was used to capture 2512 × 3140 pixel images, as shown in Figure 10. These images were taken on an expressway maintenance section in Northwest China; fully enclosed construction allowed the detection vehicle to travel at a constant speed of 40 km/h during shooting.
Since the input image size for YOLOv8s is 640 × 640 pixels, directly inputting the original image would result in excessive downscaling, leading to loss of detail and impairing accurate classification and positioning. Therefore, the original images were proportionally cropped into 628 × 628 pixel tiles. A total of 1983 pavement images containing transverse, longitudinal, and block crack sealants were selected, as detailed in Table 2.
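Because 2512 and 3140 are exact multiples of 628 (4 × 628 and 5 × 628), each full-scale image divides evenly into a 4 × 5 grid of tiles. The sketch below illustrates this cropping with Pillow; the file name is an assumed placeholder.

```python
# Cut a 2512 x 3140 full-scale line-scan image into 628 x 628 tiles.
from PIL import Image

img = Image.open("full_scale.png")            # assumed 2512 x 3140 pavement image
tile = 628
tiles = []
for top in range(0, img.height, tile):        # 5 rows
    for left in range(0, img.width, tile):    # 4 columns
        tiles.append(img.crop((left, top, left + tile, top + tile)))
print(len(tiles))                             # 20 tiles per full-scale image
```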
Using Make Sense software (https://www.makesense.ai/), various sealants were marked with rectangular bounding boxes in the asphalt pavement images. The marking information, including type, center point coordinates, width, and height, was saved directly in the text format compatible with the YOLO model. After labeling and expert review, the dataset comprised 1983 road images and 2447 labeled crack sealants. The dataset was randomly divided into training, validation, and test sets in a 7:1:2 ratio, as shown in Table 3. This methodology ensures a robust and precise dataset for evaluating the performance of YOLO-CS.
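The random 7:1:2 split can be reproduced with a simple shuffle, as in the sketch below; the directory layout and random seed are illustrative assumptions. Each image's YOLO label file shares its stem and holds one "class cx cy w h" line per box, with coordinates normalized to the image size.

```python
# Random 7:1:2 train/validation/test split over the tiled pavement images.
import random
from pathlib import Path

random.seed(0)                                    # assumed seed for reproducibility
images = sorted(Path("dataset/images").glob("*.jpg"))  # assumed directory layout
random.shuffle(images)

n = len(images)
n_train, n_val = int(0.7 * n), int(0.1 * n)
splits = {
    "train": images[:n_train],
    "val": images[n_train:n_train + n_val],
    "test": images[n_train + n_val:],
}
for name, files in splits.items():
    # Write one image path per line; YOLO tools read these lists directly.
    with open(f"{name}.txt", "w") as f:
        f.write("\n".join(str(p) for p in files))
print({k: len(v) for k, v in splits.items()})
```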

3.2. Experiment Settings

The training, testing, and validation of the YOLO series models are conducted on a computer workstation equipped with a 16 GB NVIDIA GeForce RTX 3060 GPU. The main training parameters are as follows. Cache determines whether data are cached when loading to speed up reading; in this paper, cache is set to False. Epochs refer to the number of training rounds; more epochs allow the model to learn the data more thoroughly but also increase training time and the risk of overfitting. After extensive experimentation, this paper sets the epochs to 500. Batch size refers to the number of images in each batch; a larger batch size better represents the dataset distribution and improves learning but requires more graphics memory. This paper sets the batch size to 16. Mosaic data augmentation enriches image backgrounds and effectively increases the batch size by splicing several images into one; in this paper, mosaic augmentation is disabled for the last 10 epochs of training (close_mosaic is set to 10). Workers refer to the number of worker threads used when loading data; this paper sets the number of workers to 8. Device refers to the hardware used for training; since a GPU is used to accelerate training, the device is set to 0. The optimizer is the algorithm used to adjust model parameters to minimize the loss function; YOLOv8s offers several optimizers, including SGD, Adam, and RMSProp, and this paper consistently uses SGD. All remaining training parameters are left at their default settings.
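Under the Ultralytics training API, the settings listed above translate roughly into the call sketched below; the dataset YAML path is an assumed placeholder.

```python
# Training configuration sketch matching the listed hyperparameters.
from ultralytics import YOLO

model = YOLO("yolov8s.pt")
model.train(
    data="crack_sealant.yaml",  # assumed dataset config listing train/val/test paths
    epochs=500,                 # training rounds
    batch=16,                   # images per batch
    imgsz=640,                  # input resolution
    cache=False,                # do not cache images when loading
    close_mosaic=10,            # disable mosaic augmentation for the last 10 epochs
    workers=8,                  # data-loading threads
    device=0,                   # first GPU
    optimizer="SGD",            # optimizer choice
)
```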

3.3. Evaluation Metrics

FPS (Frames Per Second) indicates the number of images processed per second, used to evaluate the model’s processing speed on a given hardware setup. FLOPs (Floating Point Operations) measure the number of operations required to process an image, providing a hardware and software-independent metric.
Precision measures the proportion of correct positive predictions among all positive predictions, assessing the accuracy of the model in positive prediction scenarios. Recall measures the proportion of correct positive predictions among all actual positives, indicating the model’s ability to identify positive examples.
$$\mathrm{Precision} = \frac{TP}{TP + FP}$$
$$\mathrm{Recall} = \frac{TP}{TP + FN}$$
where: True Positive (TP) refers to correctly predicting the positive class. False Positive (FP) refers to incorrectly predicting the negative class as positive. False Negative (FN) refers to incorrectly predicting the positive class as negative.
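As a quick worked example of the two definitions, using made-up counts rather than results from this study:

```python
# Hypothetical counts for illustration only.
tp, fp, fn = 42, 8, 10
precision = tp / (tp + fp)   # 42 / 50 = 0.840
recall = tp / (tp + fn)      # 42 / 52 ≈ 0.808
print(f"precision={precision:.3f}, recall={recall:.3f}")
```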
Average Precision (AP) calculates the average accuracy for different categories:
$$AP = \sum_{k=0}^{n-1} \left[\mathrm{Recalls}(k) - \mathrm{Recalls}(k+1)\right] \times \mathrm{Precisions}(k)$$
where $\mathrm{Recalls}(n) = 0$, $\mathrm{Precisions}(n) = 1$, and $n$ is the number of thresholds.
Mean Average Precision (mAP) is the average of APs across all categories. mAP50 represents the average mAP with an Intersection over Union (IOU) threshold greater than 0.5, while mAP50-95 represents the average mAP at various IOU thresholds, ranging from 0.5 to 0.95 in steps of 0.05.
$$mAP = \frac{1}{n} \sum_{k=1}^{n} AP_k$$
where $AP_k$ is the AP of class $k$ and $n$ is the number of classes.
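The AP summation can be computed directly from a sampled precision-recall curve, and mAP is then the mean of the per-class APs; the sketch below uses made-up values purely to illustrate the bookkeeping.

```python
# AP as a sum over decreasing-recall steps of an illustrative precision-recall curve.
import numpy as np

def average_precision(recalls: np.ndarray, precisions: np.ndarray) -> float:
    # recalls are sorted in decreasing order, matching Recalls(k) >= Recalls(k+1)
    return float(np.sum((recalls[:-1] - recalls[1:]) * precisions[:-1]))

recalls = np.array([1.0, 0.8, 0.6, 0.3, 0.0])      # Recalls(n) = 0 at the last threshold
precisions = np.array([0.5, 0.7, 0.8, 0.9, 1.0])   # Precisions(n) = 1 at the last threshold
ap_per_class = [average_precision(recalls, precisions)] * 3  # e.g. three sealant classes
print(sum(ap_per_class) / len(ap_per_class))        # mAP = mean of the class APs
```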

4. Results and Discussion

4.1. Algorithm Ablation Experiment

To investigate the impact of RepViT, DRBNCSPELAN, and LSC detection heads on the performance of YOLOv8s, ablation experiments were conducted. The results are presented in Table 4. Using YOLOv8s as the benchmark model, a “√” tick symbol indicates the inclusion of the corresponding module. Introducing the RepViT backbone network structure, which captures global context information in pavement images, resulted in Precision increasing by 4%, Recall by 3.9%, mAP50 by 3.8%, and mAP50-95 by 7%. The DRBNCSPELAN module, based on the innovative fusion of the Dilated Reparam Block and Generalized ELAN, improves information flow and the model’s generalization ability compared to the C2f feature fusion module. Precision increased by 6%, Recall by 1.6%, mAP50 by 2.8%, and mAP50-95 by 8.9%. Additionally, the number of parameters decreased from 11,126,745 to 7,666,569, significantly reducing the model’s memory usage to just 15.2 MB. The LSC detection head, developed to enhance detection performance with fewer parameters and computations, resulted in a 4.6% increase in Precision, a 3.2% increase in Recall, and increases of 3.9% in mAP50 and 7.4% in mAP50-95. Furthermore, the parameters and memory were reduced to 84.7% of the original, and FPS increased to 156.4 f·s−1.
Combining these models further enhanced performance. The integration of the RepViT framework with DRBNCSPELAN and LSC detection heads increased mAP50 by 3.5% and 1.8%, respectively, and mAP50-95 by 7.6% and 5.2%, respectively. The combination of DRBNCSPELAN and the LSC detection head achieved the minimum number of parameters (5,970,054) and memory usage (11.9 MB), with FLOPs at only 16.9. Additionally, mAP50 and mAP50-95 increased by 3.1% and 7.6%, respectively. The optimal improvement was achieved by combining all three strategies. This approach increased Precision by 5.4%, Recall by 1.7%, mAP50 by 4%, and mAP50-95 by 9.1%, while reducing parameters and memory by 30.2% and 27.9%, respectively. These improvements facilitate model lightweighting and significantly enhance the detection performance for asphalt pavement crack sealants. These ablation experiment results confirm the effectiveness of the proposed improvement strategies.

4.2. Model Comparative Analysis

To further evaluate the performance of the model, this study compares the YOLO-CS model with commonly used YOLO series models (YOLOv3-tiny, YOLOv5s, YOLOv6s, YOLOv8s) on the same asphalt pavement crack sealant dataset. The test results (Table 5) are as follows:
The YOLO-CS model outperforms other YOLO series models in detecting asphalt pavement crack sealant. YOLO-CS achieves 88.4% Precision, 84.2% Recall, 92.1% mAP50, and 71.2% mAP50-95. Additionally, the model’s FLOPs and size are 23.2 and 15.5 MB, respectively. Compared to YOLOv5s, YOLOv6s, and YOLOv8s, YOLO-CS’s mAP50 increased by 1.8%, 1.8%, and 4%, respectively, while mAP50-95 increased by 6.8%, 2.7%, and 9.1%, respectively. Furthermore, YOLO-CS has the smallest number of parameters and memory footprint, with only 7,764,542 parameters and a size of 15.5 MB. This makes it highly suitable for lightweight deployment under resource-constrained conditions. In summary, by utilizing the RepViT backbone network architecture and the DRBNCSPELAN feature fusion module, combined with the self-developed LSC lightweight detection head, the study significantly enhances high-performance detection of crack sealant during HIR while greatly reducing the number of parameters and memory usage.

4.3. Visualization Analysis

To assess the practicality of the YOLO-CS model in asphalt pavement HIR pretreatment, a visual analysis of crack sealant detection was conducted, as depicted in Figure 11. In Figure 11a, most of the transverse crack sealant detections exhibit confidence levels above 0.8. Notably, the micro-surface pavement enhances detection accuracy due to better background distinction. This type of crack sealant is typically employed for low-temperature transverse crack repairs or reflective cracks, with a wide distribution interval, making excavation and collection straightforward. Figure 11b illustrates the detection of longitudinal sealant, with confidence ranging from 0.7 to 0.9, effectively distinguished from cracks. This sealant addresses fatigue longitudinal or minor network cracks and is suitable for expansive construction cracks, requiring similar treatment measures to transverse sealant. In Figure 11c, block sealant is predominantly detected with confidence exceeding 0.9, largely unaffected by the pavement background. This type often necessitates milling during pretreatment due to structural pavement damage, where HIR alone cannot restore mixture performance. The dense distribution of such sealant poses manual handling challenges and risks fire hazards during heating. Figure 11d exhibits a mixed distribution of various sealant types. YOLO-CS accurately and promptly identifies and classifies sealants across diverse pavement backgrounds, significantly reducing human resource requirements. Its lightweight design facilitates deployment on mobile terminals, fostering engineering applications.
To further compare the advantages of YOLO-CS over YOLOv8s, the test set is analyzed. In Figure 12, correct detections are indicated by green boxes, missed detections by red boxes, and false detections by blue boxes. Analysis of Figure 12a,b reveals that when the surface lacks a micro-surfacing layer, leading to low background discrimination, YOLOv8s struggles to differentiate cracks from block crack sealant, resulting in missed detections. YOLOv8s also misses block crack sealant in the second row and generates multiple prediction boxes for transverse crack sealant, again leading to missed detections. Conversely, YOLO-CS accurately locates the position and type of crack sealant. In Figure 12c,d, false detections by YOLOv8s are attributed to multiple prediction boxes for the same crack sealant. YOLO-CS demonstrates superior learning and detection capabilities under varied conditions, accurately locating crack sealant positions with high confidence.
The statistical results of the test set are presented in Table 6, revealing a 14% decrease in missed detection rate and an 18% decrease in false detection rate with YOLO-CS. This underscores how the proposed YOLO-CS algorithm enhances detection accuracy and efficiency while achieving lightweight detection of crack sealant, effectively addressing challenges like strong background noise and multi-type target occlusion.

5. Conclusions

To achieve lightweight and precise positioning and classification of crack sealant during the pretreatment of hot in-place recycling for asphalt pavement, this paper introduces YOLO-CS, based on YOLOv8s. Firstly, it replaces Darknet-53 with the RepViT lightweight backbone network structure, which effectively reduces the convolutional neural network’s parameters and better processes long-range dependencies in images through a self-attention mechanism and global feature modeling. Secondly, it fuses the Dilated Reparam Block and GELAN to create the DRBNCSPELAN module, replacing the C2f feature fusion module to optimize parameter utilization and facilitate efficient information flow. Finally, a Lightweight Shared Convolutional (LSC) detection head is developed to enhance reasoning speed and efficiency while maintaining detection accuracy through shared convolution. Compared to the YOLOv8s benchmark model, YOLO-CS achieves a 5.4% increase in Precision, 1.7% increase in Recall, and improvements of 4% and 9.1% in mAP50 and mAP50-95, respectively, with a reduction in parameters and memory consumption by 30.2% and 27.9%, respectively. Its FPS is only slightly decreased. This model exhibits clear advantages over existing YOLO series object detection algorithms while maintaining superior detection performance and achieving lightweight deployment.
Furthermore, this paper establishes a dataset of asphalt pavement crack sealant, comprising 1983 pavement images containing transverse, longitudinal, and block crack sealant, with micro-surface and non-micro-surface sections in the pavement background. The images are clear at 628 × 628 pixels. The YOLO-CS model trained on this dataset demonstrates broad applicability. In summary, the YOLO-CS model developed in this study addresses the reliance on manual recording for crack sealant detection in the current pretreatment process, significantly reducing costs and improving detection efficiency. Implementing appropriate treatment measures based on the detection of different types of crack sealant can enhance the performance of recycled pavement.

6. Future Work

In future research, it is essential to evaluate the YOLO-CS model’s detection efficiency under various external factors, such as the presence of water and snow on the pavement surface, and to assess the impact of weather conditions during image capture. Given that crack sealant may deteriorate 1–3 years after application due to climate and load conditions, it is crucial to study whether the sealant’s condition affects detection accuracy. Specifically, it is important to determine if the model can accurately detect severely deteriorated crack sealant (e.g., settlement versus adhesive and cohesive failures). Further expansion of the crack sealant dataset is necessary to enhance the model’s robustness. Additionally, the development of software or mobile apps will promote the practical application of this method in engineering projects, thereby improving on-site construction efficiency.

Author Contributions

Methodology, K.Z. and X.X.; Software, K.Z. and X.X.; Validation, T.L.; Formal analysis, T.L.; Investigation, X.X.; Writing—original draft, K.Z. and X.X.; Supervision, Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China [No. 52078132], and the Fundamental Research Funds for the Central Universities [No. 2242022k30058]. The authors gratefully acknowledge their financial support.

Data Availability Statement

Dataset available on request from the authors.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Xia, X.; Han, D.; Zhao, Y.; Xie, Y.; Zhou, Z.; Wang, J. Investigation of Asphalt Pavement Crack Propagation Based on Micromechanical Finite Element: A Case Study. Case Stud. Constr. Mater. 2023, 19, e02247. [Google Scholar] [CrossRef]
  2. Yang, C.; Cao, L.; Ullah, S.; Dong, Z.; Zhang, X.; Wei, D. Performance Evolution and Mechanism of Asphalt Crack Sealant under UV Aging: A Continuity Study. Constr. Build. Mater. 2024, 431, 136539. [Google Scholar] [CrossRef]
  3. Wu, S.; Liu, Q.; Yang, J.; Yang, R.; Zhu, J. Study of Adhesion between Crack Sealant and Pavement Combining Surface Free Energy Measurement with Molecular Dynamics Simulation. Constr. Build. Mater. 2020, 240, 117900. [Google Scholar] [CrossRef]
  4. Xiao, F.; Xu, L.; Zhao, Z.; Hou, X. Recent Applications and Developments of Reclaimed Asphalt Pavement in China, 2010–2021. Sustain. Mater. Technol. 2023, 37, e00697. [Google Scholar] [CrossRef]
  5. Xia, X.; Zhao, Y.; Tang, D. The State-of-the-Art Review on the Utilization of Reclaimed Asphalt Pavement via Hot in-Place Recycling Technology. J. Clean. Prod. 2025, 492, 144887. [Google Scholar] [CrossRef]
  6. Abedin Khan, Z.; Balunaini, U.; Costa, S.; Nguyen, N.H.T. A Review on Sustainable Use of Recycled Construction and Demolition Waste Aggregates in Pavement Base and Subbase Layers. Clean. Mater. 2024, 13, 100266. [Google Scholar] [CrossRef]
  7. Yin, J.; Pang, Q.; Wu, H.; Song, W. Using a Polymer-Based Sealant Material to Make Crack Repair of Asphalt Pavement. J. Test. Eval. 2018, 46, 2056–2066. [Google Scholar] [CrossRef]
  8. Yang, X.; Zhang, J.; Liu, W.; Jing, J.; Zheng, H.; Xu, W. Automation in Road Distress Detection, Diagnosis and Treatment. J. Road Eng. 2024, 4, 1–26. [Google Scholar] [CrossRef]
  9. Xiong, X.; Tan, Y. Pixel-Level Patch Detection from Full-Scale Asphalt Pavement Images Based on Deep Learning. Int. J. Pavement Eng. 2023, 24, 2180639. [Google Scholar] [CrossRef]
  10. Wang, J. Morphological Classification Method and Cause Analysis of Asphalt Pavement Cracks of Shanxi Expressway. Master’s Thesis, Southeast University, Nanjing, China, 2021. [Google Scholar]
  11. Zhang, Y.; Zuo, Z.; Xu, X.; Wu, J.; Zhu, J.; Zhang, H.; Wang, J.; Tian, Y. Road Damage Detection Using UAV Images Based on Multi-Level Attention Mechanism. Autom. Constr. 2022, 144, 104613. [Google Scholar] [CrossRef]
  12. Zhu, J.; Zhong, J.; Ma, T.; Huang, X.; Zhang, W.; Zhou, Y. Pavement Distress Detection Using Convolutional Neural Networks with Images Captured via UAV. Autom. Constr. 2022, 133, 103991. [Google Scholar] [CrossRef]
  13. Cao, W.; Liu, Q.; He, Z. Review of Pavement Defect Detection Methods. IEEE Access 2020, 8, 14531–14544. [Google Scholar] [CrossRef]
  14. Zhong, J.; Huyan, J.; Zhang, W.; Cheng, H.; Zhang, J.; Tong, Z.; Jiang, X.; Huang, B. A Deeper Generative Adversarial Network for Grooved Cement Concrete Pavement Crack Detection. Eng. Appl. Artif. Intell. 2023, 119, 105808. [Google Scholar] [CrossRef]
  15. Zhong, J.; Zhu, J.; Huyan, J.; Ma, T.; Zhang, W. Multi-Scale Feature Fusion Network for Pixel-Level Pavement Distress Detection. Autom. Constr. 2022, 141, 104436. [Google Scholar] [CrossRef]
  16. Wang, S.; Chen, X.; Dong, Q. Detection of Asphalt Pavement Cracks Based on Vision Transformer Improved YOLO V5. J. Transp. Eng. Part B Pavements 2023, 149, 04023004. [Google Scholar] [CrossRef]
  17. Zhang, Z.; Liu, F.; Huang, Y.; Hou, Y. Detection and Statistics System of Pavement Distresses Based on Street View Videos. IEEE Trans. Intell. Transp. Syst. 2024, 25, 1–10. [Google Scholar] [CrossRef]
  18. Matarneh, S.; Elghaish, F.; Pour Rahimian, F.; Abdellatef, E.; Abrishami, S. Evaluation and Optimisation of Pre-Trained CNN Models for Asphalt Pavement Crack Detection and Classification. Autom. Constr. 2024, 160, 105297. [Google Scholar] [CrossRef]
  19. Song, L.; Wang, X. Faster Region Convolutional Neural Network for Automated Pavement Distress Detection. Road Mater. Pavement Des. 2021, 22, 23–41. [Google Scholar] [CrossRef]
  20. Fan, L.; Wang, D.; Wang, J.; Li, Y.; Cao, Y.; Liu, Y.; Chen, X.; Wang, Y. Pavement Defect Detection With Deep Learning: A Comprehensive Survey. IEEE Trans. Intell. Veh. 2024, 9, 4292–4311. [Google Scholar] [CrossRef]
  21. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar]
  22. Hussain, M. YOLO-v1 to YOLO-v8, the Rise of YOLO and Its Complementary Nature toward Digital Manufacturing and Industrial Defect Detection. Machines 2023, 11, 677. [Google Scholar] [CrossRef]
  23. Wang, C.-Y.; Yeh, I.-H.; Liao, H.-Y.M. YOLOv9: Learning What You Want to Learn Using Programmable Gradient Information. In Proceedings of the European Conference on Computer Vision, Milan, Italy, 29 September–4 October 2024. [Google Scholar]
  24. Yao, H.; Liu, Y.; Li, X.; You, Z.; Feng, Y.; Lu, W. A Detection Method for Pavement Cracks Combining Object Detection and Attention Mechanism. IEEE Trans. Intell. Transp. Syst. 2022, 23, 22179–22189. [Google Scholar] [CrossRef]
  25. Liu, Y.; Liu, F.; Liu, W.; Huang, Y. Pavement Distress Detection Using Street View Images Captured via Action Camera. IEEE Trans. Intell. Transp. Syst. 2024, 25, 738–747. [Google Scholar] [CrossRef]
  26. Ma, B.; Hu, Y.; Liu, F.; Si, W.; Wei, K.; Wang, X.; Kang, X.; Chang, X. Performance of a Novel Epoxy Crack Sealant for Asphalt Pavements. Int. J. Pavement Eng. 2022, 23, 3068–3081. [Google Scholar] [CrossRef]
  27. Hu, K.; Chen, Y.; Qin, M.; Hu, R.; Hu, X.; Tao, X. An Eco-Friendly Crack Sealant Approach for Asphalt Pavement by Using Laboratory Tests and Molecular Dynamic Simulation. Mater. Today Sustain. 2024, 27, 100893. [Google Scholar] [CrossRef]
  28. Zheng, L.; Xiao, J.; Wang, Y.; Wu, W.; Chen, Z.; Yuan, D.; Jiang, W. Deep Learning-Based Intelligent Detection of Pavement Distress. Autom. Constr. 2024, 168, 105772. [Google Scholar] [CrossRef]
  29. Xiong, C.; Zayed, T.; Abdelkader, E.M. A Novel YOLOv8-GAM-Wise-IoU Model for Automated Detection of Bridge Surface Cracks. Constr. Build. Mater. 2024, 414, 135025. [Google Scholar] [CrossRef]
  30. Jinbo, G.; Shenghuai, W.; Xiaohui, C.; Chen, W.; Wei, Z. QL-YOLOv8s: Precisely Optimized Lightweight YOLOv8 Pavement Disease Detection Model. IEEE Access 2024, 12, 128392–128403. [Google Scholar] [CrossRef]
  31. Wang, H.; Han, X.; Song, X.; Su, J.; Li, Y.; Zheng, W.; Wu, X. Research on Automatic Pavement Crack Identification Based on Improved YOLOv8. Int. J. Interact. Des. Manuf. (Ijidem) 2024, 18, 3773–3783. [Google Scholar] [CrossRef]
  32. Wang, S.; Cai, B.; Wang, W.; Li, Z.; Hu, W.; Yan, B.; Liu, X. Automated Detection of Pavement Distress Based on Enhanced YOLOv8 and Synthetic Data with Textured Background Modeling. Transp. Geotech. 2024, 48, 101304. [Google Scholar] [CrossRef]
  33. Wang, X.; Gao, H.; Jia, Z.; Li, Z. BL-YOLOv8: An Improved Road Defect Detection Model Based on YOLOv8. Sensors 2023, 23, 8361. [Google Scholar] [CrossRef]
  34. Li, H.; Wu, A.; Jiang, Z.; Liu, F.; Luo, M. Improving Object Detection in YOLOv8n with the C2f-f Module and Multi-Scale Fusion Reconstruction. In Proceedings of the 2024 IEEE 6th Advanced Information Management, Communicates, Electronic and Automation Control Conference (IMCEC), Chongqing, China, 24–26 May 2024; Volume 6, pp. 374–379. [Google Scholar]
  35. Zhu, J.; Hu, T.; Zheng, L.; Zhou, N.; Ge, H.; Hong, Z. YOLOv8-C2f-Faster-EMA: An Improved Underwater Trash Detection Model Based on YOLOv8. Sensors 2024, 24, 2483. [Google Scholar] [CrossRef] [PubMed]
  36. Nie, H.; Pang, H.; Ma, M.; Zheng, R. A Lightweight Remote Sensing Small Target Image Detection Algorithm Based on Improved YOLOv8. Sensors 2024, 24, 2952. [Google Scholar] [CrossRef]
  37. Zhong, J.; Qian, H.; Wang, H.; Wang, W.; Zhou, Y. Improved Real-Time Object Detection Method Based on YOLOv8: A Refined Approach. J. Real-Time Image Process. 2024, 22, 4. [Google Scholar] [CrossRef]
  38. Li, J.; Zhang, J.; Shao, Y.; Liu, F. SRE-YOLOv8: An Improved UAV Object Detection Model Utilizing Swin Transformer and RE-FPN. Sensors 2024, 24, 3918. [Google Scholar] [CrossRef]
  39. Zhang, Y.; Zhang, H.; Huang, Q.; Han, Y.; Zhao, M. DsP-YOLO: An Anchor-Free Network with DsPAN for Small Object Detection of Multiscale Defects. Expert Syst. Appl. 2024, 241, 122669. [Google Scholar] [CrossRef]
  40. Wang, A.; Chen, H.; Lin, Z.; Han, J.; Ding, G. RepViT: Revisiting Mobile CNN From ViT Perspective. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–22 June 2024; pp. 15909–15920. [Google Scholar]
  41. Wang, C.-Y.; Mark Liao, H.-Y.; Wu, Y.-H.; Chen, P.-Y.; Hsieh, J.-W.; Yeh, I.-H. CSPNet: A New Backbone That Can Enhance Learning Capability of CNN. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA, 14–19 June 2020; pp. 390–391. [Google Scholar]
  42. Wang, C.-Y.; Liao, H.-Y.M.; Yeh, I.-H. Designing Network Design Strategies Through Gradient Path Analysis. arXiv 2022, arXiv:2211.04800. [Google Scholar]
  43. Ding, X.; Zhang, Y.; Ge, Y.; Zhao, S.; Song, L.; Yue, X.; Shan, Y. UniRepLKNet: A Universal Perception Large-Kernel ConvNet for Audio, Video, Point Cloud, Time-Series and Image Recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–22 June 2024; pp. 5513–5524. [Google Scholar]
  44. Tian, Z.; Shen, C.; Chen, H.; He, T. FCOS: Fully Convolutional One-Stage Object Detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 9627–9636. [Google Scholar]
Figure 1. Crack sealant.
Figure 2. YOLOv8s network model. Note: Conv (Convolution); C2f (CSP-to-Fusion).
Figure 3. YOLO-CS network model.
Figure 4. Overview of RepViT architecture.
Figure 5. DRBNCSPELAN.
Figure 6. GELAN.
Figure 7. DRB.
Figure 8. YOLOv8s decoupled head.
Figure 9. LSC detection head.
Figure 10. Pavement full-scale image.
Figure 11. YOLO-CS visualization results.
Figure 12. Visualization comparison between YOLO-CS and YOLOv8s.
Table 1. YOLOv8 series model.

Model                  Depth   Width   Max Channels   Size (pixels)
YOLOv8n (nano)         0.33    0.25    1024           640
YOLOv8s (small)        0.33    0.50    1024           640
YOLOv8m (medium)       0.67    0.75    768            640
YOLOv8l (large)        1.00    1.00    512            640
YOLOv8x (extra large)  1.00    1.25    512            640
Table 2. Types of crack sealant for asphalt pavement.

Type                          Corresponding Figure
Transverse crack sealant      (example images)
Longitudinal crack sealant    (example images)
Block crack sealant           (example images)
Table 3. Crack sealant dataset.

Dataset      Transverse crack sealant   Longitudinal crack sealant   Block crack sealant
             Images    Objects          Images    Objects            Images    Objects
Training     554       581              536       582                528       552
Validation   85        87               71        76                 72        72
Test         176       183              155       172                138       142
Sum          815       851              762       830                738       766
Table 4. Ablation study.

(a) Per-class results

Baseline   Rep   DRB   LSC   P/% (TC/LC/BC)    R/% (TC/LC/BC)    mAP50/% (TC/LC/BC)   mAP50-95/% (TC/LC/BC)
YOLOv8s    -     -     -     85.8/84.9/78.4    92.2/78.6/76.8    95.4/84.6/84.2       64.2/56.5/65.6
YOLOv8s    √     -     -     90.2/85.4/85.4    90.1/85.5/83.8    93.8/91.1/90.8       69.4/64.9/73.1
YOLOv8s    -     √     -     92.3/87.3/87.4    91.3/83.7/77.5    94.5/90.4/87.9       71.5/65.6/76.0
YOLOv8s    -     -     √     91.0/86.1/85.7    93.5/86.2/77.5    96.6/88.2/91.1       71.2/61.7/75.6
YOLOv8s    √     √     -     92.9/84.3/90.9    89.1/82.0/77.2    94.6/89.2/91.0       69.5/63.2/76.4
YOLOv8s    √     -     √     92.2/82.3/78.4    90.5/83.7/87.3    94.5/85.6/89.5       68.8/60.6/72.6
YOLOv8s    -     √     √     93.6/85.5/84.3    86.9/81.4/83.8    94.5/88.7/90.5       70.5/63.1/75.5
YOLOv8s    √     √     √     88.5/87.2/89.7    91.3/86.0/75.4    95.3/90.7/90.2       71.2/65.7/76.6

(b) Overall results

Baseline   Rep   DRB   LSC   Precision/%   Recall/%   mAP50/%   mAP50-95/%   Parameters    FLOPs   Size     FPS
YOLOv8s    -     -     -     83.0          82.5       88.1      62.1         11,126,745    28.4    21.5 MB  137.4
YOLOv8s    √     -     -     87.0          86.4       91.9      69.1         11,181,281    29.6    22.0 MB  106.9
YOLOv8s    -     √     -     89.0          84.1       90.9      71.0         7,666,569     19.6    15.2 MB  124.9
YOLOv8s    -     -     √     87.6          85.7       92.0      69.5         9,430,230     25.8    18.2 MB  156.4
YOLOv8s    √     √     -     89.4          82.7       91.6      69.7         9,461,057     25.9    18.8 MB  101.8
YOLOv8s    √     -     √     84.3          87.2       89.9      67.3         9,484,766     26.9    18.7 MB  109.9
YOLOv8s    -     √     √     87.8          84.0       91.2      69.7         5,970,054     16.9    11.9 MB  128.3
YOLOv8s    √     √     √     88.4          84.2       92.1      71.2         7,764,542     23.2    15.5 MB  102.8
Where: TC = Transverse crack sealant; LC = Longitudinal crack sealant; BC = Block crack sealant.
Table 5. Comparative experiments.

Model        Precision/%   Recall/%   mAP50/%   mAP50-95/%   Parameters    FLOPs   Size     FPS
YOLOv3-tiny  59.5          71.8       67.8      29.0         12,129,206    18.9    23.2 MB  232.9
YOLOv5s      86.9          82.7       90.3      64.4         9,112,697     23.8    17.7 MB  157.4
YOLOv6s      86.2          84.7       90.3      68.5         16,298,009    44.0    31.4 MB  131.4
YOLOv8s      83.0          82.5       88.1      62.1         11,126,745    28.4    21.5 MB  137.4
YOLO-CS      88.4          84.2       92.1      71.2         7,764,542     23.2    15.5 MB  102.8
Table 6. Test result.

Model      Right   Missing   Error (Misclassification)
YOLOv8s    445     42        172
YOLO-CS    449     36        141
