Article

Onboard LiDAR–Camera Deployment Optimization for Pavement Marking Distress Fusion Detection

1 Department of Traffic Information and Control Engineering, Jilin University, Changchun 130022, China
2 Jilin Engineering Research Center for Intelligent Transportation System, Changchun 130022, China
3 Department of Civil, Environmental and Construction Engineering, Texas Tech University, Lubbock, TX 79409, USA
* Author to whom correspondence should be addressed.
Sensors 2025, 25(13), 3875; https://doi.org/10.3390/s25133875
Submission received: 25 May 2025 / Revised: 17 June 2025 / Accepted: 20 June 2025 / Published: 21 June 2025
(This article belongs to the Section Sensing and Imaging)

Abstract

Pavement markings, as a crucial component of traffic guidance and safety facilities, degrade and wear after a period of service. To ensure traffic safety, their retroreflectivity and diffuse illumination performance must remain above minimum thresholds, and the markings must be inspected periodically. Therefore, an onboard light detection and ranging (LiDAR) and camera deployment optimization method is proposed for pavement marking distress detection under complex traffic conditions, such as shadows and changing light. First, the detection capability of the LiDAR and camera sensors was assessed based on the sensors' built-in features. Then, the LiDAR–camera deployment problem was mathematically formulated for pavement marking distress fusion detection. Finally, an improved red fox optimization (RFO) algorithm was developed to solve the deployment optimization problem by incorporating a multi-dimensional trap mechanism and an improved prey position update strategy. The experimental results show that the proposed method achieves 5127 LiDAR points falling on a 0.58 m pavement marking per data frame for distress fusion detection, with a relative error of less than 7% between the mathematical calculation and the field test measurements. This empirical accuracy underscores the proposed method's robustness in real-world scenarios, effectively mitigating the challenges posed by environmental interference.

1. Introduction

Pavement markings play an essential role in enhancing traffic safety, providing guidance information, and regulating driver behavior [1,2]. Furthermore, pavement markings serve as a foundational input in advanced driver assistance system (ADAS) development, supporting functions such as lane-keeping, fatigue driving detection, and adaptive cruise control [3,4].
However, pavement markings may undergo distress, such as cracks, wear, and aging due to environmental erosion and vehicle loading [5,6]. To check whether pavement markings meet the minimum retroreflectivity requirements, traffic engineers usually use handheld laser reflectometers for on-site inspections. However, this human-controlled method often depends on engineers’ experience and suffers from low efficiency and high risk in field testing [7,8].
In existing research, camera sensors are used in pavement marking detection due to their high resolution and low cost, enabling the precise extraction of marking features through image processing techniques. However, they are sensitive to variations in ambient lighting, making them susceptible to changes in environmental illumination [9]. On the other hand, the light detection and ranging (LiDAR) sensor demonstrates strong robustness to external lighting changes and provides broader scanning coverage, but its sparse point cloud limits its effectiveness in fine feature detection applications, such as pavement marking distress detection [10,11,12]. To improve the coverage and point cloud density of LiDAR in pavement marking detection, onboard-mounted LiDAR parameters were optimized by constructing a deployment optimization model in our early work [11], and a marking distress detection method [6] was proposed based on high-quality point clouds. However, the method has a limited effect on the detection of distress signs with tiny dimensions (fine cracks, minor spalling, etc.). Given the advantages of vision cameras in capturing details and distinguishing small targets, this study further introduces cameras into the deployment optimization system to determine the optimal deployment strategy for multi-sensor fusion [13,14].
Synergistically optimizing the relative positions and viewing angles of the LiDAR and camera can overcome the limitations of each individual sensor and leverage their complementary strengths in detecting both large-scale marking distress and small cracks in complex road environments. This is considered a cost-effective way to achieve robustness and accuracy. However, how to jointly optimize the deployment of heterogeneous sensors under a multi-sensor fusion framework to maximize pavement marking distress detection performance remains a key challenge.
Therefore, an onboard LiDAR–camera deployment optimization method was proposed for pavement marking distress fusion detection. First, the onboard LiDAR and camera sensors' detection ranges on the pavement were evaluated based on the sensors' built-in characteristics. Then, the LiDAR–camera deployment problem was mathematically formulated to achieve the largest overlapping detection area and the greatest number of laser points falling on the pavement marking for fusion detection. Finally, an improved red fox optimization (RFO) algorithm was developed for LiDAR–camera mounting parameter optimization, leveraging a multi-dimensional trap mechanism to bound the search space and enforce physical deployment constraints, and an improved prey position update strategy to ensure monotonic convergence toward globally optimal configurations.
The main contributions of this paper can be summarized as follows:
(1) An onboard LiDAR–camera co-deployment mathematical model was proposed for pavement marking distress fusion detection to maximize the fusion detection region and point cloud density.
(2) An improved red fox optimization (RFO) algorithm was developed to adapt to the constraints in onboard sensor deployment and to speed up the convergence rate of the problem solution.
(3) The experimental results show that the proposed method achieves a relative error below 7% between mathematical evaluation and field measurements, and they illustrate the robustness of pavement marking distress detection in environments with shadows and changing light.
The rest of this paper is organized as follows: Section 2 reviews the related research on pavement marking detection and sensor deployment optimization. Section 3 presents the onboard LiDAR–camera deployment optimization problem and solution methodology. Section 4 presents the experiments that were conducted to illustrate the performance of the proposed method. Section 5 concludes this study with a summary and possible future work.

2. Related Works

To draw a clear distinction between this study and previous research, the literature on pavement marking detection and sensor deployment optimization is reviewed in this section.

2.1. Pavement Marking Detection

The existing research on pavement marking detection can be divided into studies focusing on image-based methods and LiDAR-based methods according to the sensor used for data collection [15,16].
Image-based methods can be further categorized into feature-based [17,18], model-based [3,4], and learning-based [19,20] methods. For instance, Borkar [17] converted an image into a grayscale map and used marking edges as features through Canny edge extraction for pavement marking detection. Zhou [21] established a geometric detection model for pavement marking detection based on the local Hough transform and model matching. Alzraiee [20] proposed a deep learning framework based on Faster R-CNN for pavement marking defect detection. However, these methods focus on pavement marking outline detection and cannot be used for pavement marking distress detection. In addition, the accuracy and robustness of image-based methods are affected by lighting conditions and shadows.
In existing studies, LiDAR sensors have also been used for pavement marking distress detection. Three-dimensional (3D) point cloud data extracted from the LiDAR sensor are projected onto a two-dimensional (2D) plane to form an image, and pavement marking distress is then detected by leveraging image-based algorithms [6,22]. For instance, Li [12] utilized a laser scanner to obtain high-resolution laser images, identifying lane marking contours through closing operations and a marching squares algorithm, and ultimately employed a linear support vector machine for geometric feature extraction and the reconstruction of pavement markings. Zhang [22] mapped the three-dimensional data of the identified areas onto a two-dimensional image and accurately extracted the crosswalk marking regions using a convolutional neural network.
Some researchers use a point cloud to reconstruct a 3D pavement marking map and recognize the distress based on the point cloud features [23]. For instance, Rastiveis [24] identified pavement marking areas using the Hough transform and fuzzy inference methods based on laser intensity information and point cloud positions. Cheng [23] used the intensity information of laser point clouds as features and constructed an unsupervised learning network, U-Net, to detect pavement markings.
However, sparse point clouds make it difficult to detect various types and sizes of pavement marking distress due to their low resolution.

2.2. Sensor Deployment Optimization

The rich visual information provided by the camera can help address the limitations of LiDAR in surface texture perception, while the LiDAR sensor can compensate for the limitations of images with shadows. LiDAR–camera data fusion can achieve more comprehensive and accurate pavement marking distress detection. However, the research on LiDAR–camera fusion for pavement marking distress detection is limited.
To leverage the sensor’s performance to the maximum extent, its deployment position and pose should be optimized according to the objective application. In existing research, the deployment of homogeneous sensors in a network was optimized to obtain the best location and pose to expand the coverage, improve accuracy, and reduce the investment cost. Hörster [25] proposed a linear programming-based approach to optimize multi-camera deployments to maximize the detection range by considering the sensor number, location, and cost. Geissler [26] used genetic algorithms to maximize camera sensors’ FOV in dynamic occlusion scenarios. Du [27] proposed a sensor deployment method for vehicle detection and tracking under different traffic densities and vehicle compositions, which can reduce the sensor’s investment cost and expand detection coverage.
Heterogeneous multi-sensor deployment optimization always focuses on the strength of the sensing ability. Dey [28] proposed a VESPA framework to optimize the deployment position and orientation of onboard cameras and radars to extend the coverage and enhance detection accuracy and reliability. Cao [29] proposed a method for the optimal deployment of cameras and radars on road networks using the bisection method and the AO–GI–MO algorithm, which effectively improved traffic accident detection. Li [30] proposed an optimal deployment method for roadside LiDAR sensors and cameras by comparing experimental data from multiple application scenarios and performing object detection and recognition using YOLOv5s and PointPillars, significantly improving the accuracy of blind spot detection and vehicle occlusion issues.
However, to the best of our knowledge, the problem of onboard LiDAR–camera deployment optimization for pavement marking distress detection has not been addressed.

3. Methodology

3.1. Problem Formulation

Assuming that the projected coordinate origin on the ground is $O(0,0)$, the camera's scanning range on the road pavement can be defined as $S_{AR}^{C} = \{A, B, C, D\}$, as shown in Figure 1.
According to the camera sensor's built-in field of view (FOV), denoted as $(\gamma_{min}^{c}, \gamma_{max}^{c})$, and its deployment height $h$, the camera's scanning range can be rewritten as

$$S_{AR}^{C} = \begin{Bmatrix} (x_{max}^{c}, y_{max}^{c}) \\ (x_{min}^{c}, y_{max}^{c}) \\ (x_{min}^{c}, y_{min}^{c}) \\ (x_{max}^{c}, y_{min}^{c}) \end{Bmatrix} = \begin{Bmatrix} A\left(h \cdot \tan\dfrac{\gamma_{max}^{c}}{2},\ h \cdot \tan\dfrac{\gamma_{min}^{c}}{2}\right) \\ B\left(-h \cdot \tan\dfrac{\gamma_{max}^{c}}{2},\ h \cdot \tan\dfrac{\gamma_{min}^{c}}{2}\right) \\ C\left(-h \cdot \tan\dfrac{\gamma_{max}^{c}}{2},\ -h \cdot \tan\dfrac{\gamma_{min}^{c}}{2}\right) \\ D\left(h \cdot \tan\dfrac{\gamma_{max}^{c}}{2},\ -h \cdot \tan\dfrac{\gamma_{min}^{c}}{2}\right) \end{Bmatrix}$$

where $[x_{min}^{c}, x_{max}^{c}]$ and $[y_{min}^{c}, y_{max}^{c}]$ are the lower and upper bounds of the camera's detection range along the X and Y axes, respectively.
The pavement marking detection length $L$ of the camera can be mathematically formulated as

$$L = \left[ 2h \cdot \tan\frac{\gamma_{min}^{c}}{2},\ 2h \cdot \tan\frac{\gamma_{max}^{c}}{2} \right]$$
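To make the footprint geometry concrete, the following minimal Python sketch evaluates the two expressions above for a nadir-mounted camera. The function name, the assignment of the larger FOV angle to the X (travel) direction, and the example mounting height of 0.53 m are illustrative assumptions, not values fixed by the formulation.

```python
import math

def camera_footprint(h, gamma_min_c_deg, gamma_max_c_deg):
    """Ground footprint corners A-D and detection length of a nadir-looking camera.

    h: mounting height (m); gamma_min_c_deg / gamma_max_c_deg: the camera's two
    field-of-view angles (deg). The larger angle is assumed to span the X
    (travel) direction and the smaller one the Y direction.
    """
    half_x = h * math.tan(math.radians(gamma_max_c_deg) / 2.0)
    half_y = h * math.tan(math.radians(gamma_min_c_deg) / 2.0)
    corners = {"A": (half_x, half_y), "B": (-half_x, half_y),
               "C": (-half_x, -half_y), "D": (half_x, -half_y)}
    detection_length = (2.0 * half_y, 2.0 * half_x)
    return corners, detection_length

# Example with the camera in Table 1 (70.2 deg x 82.6 deg) mounted at 0.53 m:
corners, L = camera_footprint(0.53, 70.2, 82.6)
```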
The distance between the LiDAR sensor and the camera is $d_1$, so the LiDAR sensor's projected origin on the ground is located at $(d_1, 0)$, as shown in Figure 2.
According to the LiDAR sensor's horizontal FOV $\left(-\frac{\alpha}{2}, \frac{\alpha}{2}\right)$, angular resolution $\theta$, and deployment height $h$, the LiDAR sensor's ground scanning points can be expressed as

$$x_{i,j}^{l} = \frac{h \cdot \tan\left(\gamma_{i}^{l} + \beta\right)}{\cos\left(-\frac{\alpha}{2} + j\theta\right)} + d_1, \qquad y_{i,j}^{l} = h \cdot \tan\left(-\frac{\alpha}{2} + j\theta\right)$$

where $i$ is the ID of the LiDAR laser beam, $j$ is the index of the point within the horizontal scanning sequence, and $(x_{i,j}^{l}, y_{i,j}^{l})$ are the ground coordinates of the laser point.
The angular corner points of the LiDAR sensor's scanning range $S_{AR}^{L}$ can be expressed as

$$S_{AR}^{L} = \begin{Bmatrix} (x_{max}^{l}, y_{max}^{l}) \\ (x_{min}^{l}, y_{max}^{l}) \\ (x_{min}^{l}, y_{min}^{l}) \\ (x_{max}^{l}, y_{min}^{l}) \end{Bmatrix} = \begin{Bmatrix} A\left(\dfrac{h \cdot \tan(\gamma_{max}^{l} + \beta)}{\cos\frac{\alpha}{2}} + d_1,\ h \cdot \tan\dfrac{\alpha}{2}\right) \\ B\left(\dfrac{h \cdot \tan(\gamma_{min}^{l} + \beta)}{\cos\frac{\alpha}{2}} + d_1,\ h \cdot \tan\dfrac{\alpha}{2}\right) \\ C\left(\dfrac{h \cdot \tan(\gamma_{min}^{l} + \beta)}{\cos\frac{\alpha}{2}} + d_1,\ -h \cdot \tan\dfrac{\alpha}{2}\right) \\ D\left(\dfrac{h \cdot \tan(\gamma_{max}^{l} + \beta)}{\cos\frac{\alpha}{2}} + d_1,\ -h \cdot \tan\dfrac{\alpha}{2}\right) \end{Bmatrix}$$

where $[x_{min}^{l}, x_{max}^{l}]$ and $[y_{min}^{l}, y_{max}^{l}]$ are the lower and upper bounds of the LiDAR sensor's detection range along the X and Y axes, respectively.
The overlap area $S_{OA}$ of the LiDAR–camera scanning ranges can be formulated as

$$S_{OA} = S_{AR}^{C} \cap S_{AR}^{L} = \begin{cases} x_{min}^{F} = \max\left(x_{min}^{l}, x_{min}^{c}\right) \\ x_{max}^{F} = \min\left(x_{max}^{l}, x_{max}^{c}\right) \\ y_{min}^{F} = \max\left(y_{min}^{l}, y_{min}^{c}\right) \\ y_{max}^{F} = \min\left(y_{max}^{l}, y_{max}^{c}\right) \end{cases}$$

where $[x_{min}^{F}, x_{max}^{F}]$ and $[y_{min}^{F}, y_{max}^{F}]$ are the lower and upper bounds of the LiDAR–camera detection overlap area along the X and Y axes, respectively.
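A short Python sketch of the LiDAR ground-point geometry and the fused overlap rectangle may help here; it assumes the side-mounted scanning geometry implied by the expressions above, and all function and variable names are illustrative rather than taken from the paper.

```python
import math

def lidar_ground_point(h, gamma_i_deg, beta_deg, phi_deg, d1):
    """Ground coordinates of one laser point for a channel at vertical angle
    gamma_i, mounting inclination beta, horizontal scan angle
    phi = -alpha/2 + j*theta, and LiDAR-camera offset d1 along X."""
    g = math.radians(gamma_i_deg + beta_deg)
    phi = math.radians(phi_deg)
    x = h * math.tan(g) / math.cos(phi) + d1
    y = h * math.tan(phi)
    return x, y

def overlap_area(cam_box, lidar_box):
    """Overlap rectangle S_OA of two footprints given as (x_min, x_max, y_min, y_max);
    returns None when the footprints do not overlap."""
    x_min = max(cam_box[0], lidar_box[0])
    x_max = min(cam_box[1], lidar_box[1])
    y_min = max(cam_box[2], lidar_box[2])
    y_max = min(cam_box[3], lidar_box[3])
    if x_min >= x_max or y_min >= y_max:
        return None
    return x_min, x_max, y_min, y_max
```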
To improve the efficiency and accuracy of pavement marking distress detection and evaluation, the onboard LiDAR and camera sensor’s overlap scanning area for pavement marking should meet the following criteria:
1. The pavement marking coverage area for data fusion should be as large as possible. Since the width of the pavement marking is fixed, the detection length $L$ for the pavement marking should be as long as possible:

$$\max f_1(h, \alpha, \beta) = \min\left(x_{max}^{l}, x_{max}^{c}\right) - \max\left(x_{min}^{l}, x_{min}^{c}\right)$$
2. As many laser points as possible should fall on the pavement marking:

$$\max f_2(h, \alpha, \beta) = \sum_{i=1}^{m} \sum_{j=0}^{\alpha / \theta} n_{i,j}$$

$$n_{i,j} = \begin{cases} 0, & x_{i,j}^{l} \notin \left[x_{min}^{F}, x_{max}^{F}\right] \ \text{or} \ y_{i,j}^{l} \notin \left[y_{min}^{F}, y_{max}^{F}\right] \\ 1, & x_{i,j}^{l} \in \left[x_{min}^{F}, x_{max}^{F}\right] \ \text{and} \ y_{i,j}^{l} \in \left[y_{min}^{F}, y_{max}^{F}\right] \end{cases}$$

where $n_{i,j}$ indicates whether the laser point $(x_{i,j}^{l}, y_{i,j}^{l})$ falls within the overlap area, and $m$ is the number of LiDAR channels.
According to the sensor's mechanical structure and working principle, the pavement marking scanning length increases as the sensor's installation height $h$ increases. Conversely, the point cloud density and pixel resolution on the pavement marking decrease as the installation height $h$ increases, as shown in Figure 3.
The LiDAR sensor’s rotation angle α directly determines the scanning width of the pavement when the LiDAR sensor is installed vertically. A smaller α value results in a narrower scanning width, which may lead to incomplete coverage of the markings. On the other hand, an increase in α broadens the scanning width, leading to sparser point clouds in pavement marking and more invalid point clouds in data processing, as shown in Figure 4.
To maximize the LiDAR–camera sensor’s overlap scanning area, the LiDAR sensor should be installed with an inclination angle β due to the deployment distance between the LiDAR sensor and camera, as shown in Figure 5.
Considering the field application in sensor deployment and data acquisition, sensor deployment should comply with the following constraints:
Constraint 1: The minimum length $l_{min}^{c}$ of the camera's detection range should be greater than the width $W_m$ of the pavement marking:

$$2h \cdot \tan\frac{\gamma_{min}^{c}}{2} > W_m$$
Constraint 2: The sensors' mounting height $h$ should be higher than half the vehicle's tire height $h_{min}$ and lower than the height $h_{max}$ of the vehicle's mirrors:

$$h_{min} < h < h_{max}$$
Constraint 3: When the LiDAR sensor is mounted with an inclination $\beta$, the laser beams of the lowest and highest channels should reach the ground:

$$-90^{\circ} < \gamma_{min}^{l} + \beta < \gamma_{max}^{l} + \beta < 90^{\circ}$$
Constraint 4: The rotation angle $\alpha$ of the LiDAR sensor should be less than $180^{\circ}$ to prevent laser beams from firing toward the sky, and the LiDAR sensor's horizontal scanning width should be greater than the width $W_m$ of the pavement marking:

$$0^{\circ} < \alpha < 180^{\circ}$$

$$2h \cdot \tan\frac{\alpha}{2} > W_m$$
Constraint 5: The width $l_{min}^{F}$ of the fused LiDAR–camera overlap area should be greater than the pavement marking width $W_m$:

$$\min\left(y_{max}^{l}, y_{max}^{c}\right) - \max\left(y_{min}^{l}, y_{min}^{c}\right) > W_m$$
Therefore, the final multi-objective optimization model can be expressed as

$$\begin{aligned} \max f_1(h, \alpha, \beta) &= \min\left(x_{max}^{l}, x_{max}^{c}\right) - \max\left(x_{min}^{l}, x_{min}^{c}\right) \\ \max f_2(h, \alpha, \beta) &= \sum_{i=1}^{32} \sum_{j=0}^{\alpha / \theta} n_{i,j} \end{aligned}$$
To obtain a unique set of optimal decision variables $(h, \alpha, \beta)$ instead of a Pareto-optimal frontier, the objective functions $f_1$ and $f_2$ are multiplied to form a single objective. Therefore, the LiDAR–camera deployment problem for pavement marking distress fusion detection can be described as

$$\begin{aligned} \max_{h, \alpha, \beta} \quad & \left[\min\left(x_{max}^{l}, x_{max}^{c}\right) - \max\left(x_{min}^{l}, x_{min}^{c}\right)\right] \times \sum_{i=1}^{m} \sum_{j=0}^{\alpha / \theta} n_{i,j} \\ \text{s.t.} \quad & 2h \cdot \tan\frac{\gamma_{min}^{c}}{2} > W_m \\ & h_{min} < h < h_{max} \\ & -90^{\circ} < \gamma_{min}^{l} + \beta < \gamma_{max}^{l} + \beta < 90^{\circ} \\ & 0^{\circ} < \alpha < 180^{\circ} \\ & 2h \cdot \tan\frac{\alpha}{2} > W_m \\ & \min\left(y_{max}^{l}, y_{max}^{c}\right) - \max\left(y_{min}^{l}, y_{min}^{c}\right) > W_m \end{aligned}$$
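The complete single-objective model can be prototyped as a penalty-style fitness function. The sketch below is a hedged illustration: the sensor angles follow Table 1, whereas the marking width W_m, the LiDAR–camera offset d1, the height bounds, and the uniform channel spacing are placeholder assumptions, not values reported in the paper.

```python
import math

def deployment_fitness(h, alpha_deg, beta_deg,
                       cam_fov=(70.2, 82.6), lidar_vfov=(-55.0, 15.0),
                       theta_deg=0.2, d1=0.2, W_m=0.15,
                       h_min=0.30, h_max=1.00, channels=32):
    """Single-objective fitness f1 * f2 for a candidate deployment (h, alpha, beta).

    Infeasible candidates simply return 0. Uniform channel spacing is assumed
    for illustration; the real RS-Helios-5515 spacing is non-uniform.
    """
    gamma_min_c, gamma_max_c = cam_fov
    gamma_min_l, gamma_max_l = lidar_vfov

    # Constraints 1-5 from the model above.
    if not (h_min < h < h_max):
        return 0.0
    if not (0.0 < alpha_deg < 180.0):
        return 0.0
    if not (-90.0 < gamma_min_l + beta_deg < gamma_max_l + beta_deg < 90.0):
        return 0.0
    if 2.0 * h * math.tan(math.radians(gamma_min_c) / 2.0) <= W_m:
        return 0.0
    if 2.0 * h * math.tan(math.radians(alpha_deg) / 2.0) <= W_m:
        return 0.0

    # Camera footprint (x along travel direction, y across the lane).
    cx = h * math.tan(math.radians(gamma_max_c) / 2.0)
    cy = h * math.tan(math.radians(gamma_min_c) / 2.0)

    # LiDAR footprint corners.
    half = math.radians(alpha_deg) / 2.0
    lx = [h * math.tan(math.radians(g + beta_deg)) / math.cos(half) + d1
          for g in (gamma_min_l, gamma_max_l)]
    ly = h * math.tan(half)

    # Fused overlap rectangle (constraint 5 applies to its width).
    x_min, x_max = max(-cx, min(lx)), min(cx, max(lx))
    y_min, y_max = max(-cy, -ly), min(cy, ly)
    if x_max <= x_min or (y_max - y_min) <= W_m:
        return 0.0
    f1 = x_max - x_min

    # f2: count the laser points that fall inside the overlap rectangle.
    f2 = 0
    n_az = int(alpha_deg / theta_deg)
    for i in range(channels):
        g = gamma_min_l + i * (gamma_max_l - gamma_min_l) / (channels - 1)
        for j in range(n_az + 1):
            phi = math.radians(-alpha_deg / 2.0 + j * theta_deg)
            x = h * math.tan(math.radians(g + beta_deg)) / math.cos(phi) + d1
            y = h * math.tan(phi)
            if x_min <= x <= x_max and y_min <= y <= y_max:
                f2 += 1
    return f1 * f2

# Example: evaluate the deployment reported in Table 2 under these illustrative parameters.
print(deployment_fitness(0.53, 60.0, 12.0))
```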

3.2. Solution Algorithm

The onboard LiDAR–camera deployment optimization for pavement marking distress detection is a high-dimensional constrained optimization problem. The RFO algorithm has shown superior capability in balancing global exploration and local exploitation for such problems [31,32]. Specifically, the RFO algorithm's dynamic step-size adjustment mechanism, derived from scent-tracking and ambush behaviors, enables adaptive switching between wide-area search for individual sensors and precision refinement for multi-sensor fusion, making it well suited to the non-convex objective landscape of the proposed LiDAR–camera deployment optimization problem [33].
To address the LiDAR–camera deployment optimization problem, multi-dimensional traps were introduced to bound the solution space and guide the search path of the red fox agent. Each decision variable was conceptualized as prey within the optimization landscape, while constraints were modeled as one-dimensional traps. The red fox navigates the constrained search space to capture prey, ensuring alignment with system requirements.
During the prey position update phase, a dynamic mechanism ensures the iterative refinement of solutions. Specifically, if the updated prey position (new candidate solution) yields a superior objective value compared to the current optimal prey, the latter is replaced by the former. This strategy not only maintains solution diversity but also accelerates convergence toward globally optimal configurations. By integrating multi-dimensional constraints into the hunting metaphor, the algorithm effectively balances exploration and exploitation, avoiding local optima while adhering to physical and operational deployment limits. The processes of the proposed improved RFO algorithm are as follows:
Step 1: The red fox global search phase. In the global search phase, each candidate solution $x_{i,t}$ operates independently within the prey population. The population is first ranked according to its objective function values to identify the optimal candidate solution $x_{best,t}$, which serves as a dynamic reference point for subsequent iterations. For each candidate solution $x_{i,t}$, the Euclidean distance to $x_{best,t}$ is computed as

$$d_{i,t} = \left\| x_{i,t} - x_{best,t} \right\|$$
Then, the dimension number $D$ is set, and the scaling parameters are randomly chosen as

$$\alpha_d \in \left(0, d_{i,t}\right), \quad d = 1, 2, \ldots, D$$

If the candidate point $x_{i,t,global}$ satisfies the constraints, the red fox individual moves to it:

$$x_{i,t,global} = \left(x_{i,t,global}^{1}, x_{i,t,global}^{2}, \ldots, x_{i,t,global}^{D}\right)$$

$$x_{i,t,global}^{d} = x_{i,t}^{d} + \alpha_d \cdot \mathrm{sign}\left(x_{best,t}^{d} - x_{i,t}^{d}\right)$$
If $x_{i,t,global}$ does not satisfy the constraints, $x_{best,t}$ is updated by setting the second-ranked solution in the sorted population as the new $x_{best,t}$.
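A compact Python sketch of this global search phase is given below; it is a hedged reading of the step rather than the authors' implementation, and fitness and is_feasible stand for the deployment objective and constraint check defined earlier. The population pop is assumed to be an (N, D) NumPy array.

```python
import numpy as np

def global_search(pop, fitness, is_feasible):
    """Red fox global search phase (Step 1): every fox steps toward the best
    fox by a random fraction of its distance to it; an infeasible move falls
    back to the second-best fox as the reference point."""
    scores = np.array([fitness(x) for x in pop])
    order = np.argsort(-scores)                  # descending: best fox first
    best, second = pop[order[0]], pop[order[1]]
    new_pop = pop.copy()
    for i, x in enumerate(pop):
        for target in (best, second):            # constraint-driven fallback
            alpha_d = np.random.uniform(0.0, np.linalg.norm(x - target), size=x.shape)
            candidate = x + alpha_d * np.sign(target - x)
            if is_feasible(candidate):
                new_pop[i] = candidate
                break
    return new_pop
```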
Step 2: The red fox localized search phase. The local search phase is initiated immediately after the global search phase and employs a modified snail helix equation to simulate the red fox's stealthy approach toward its prey. The execution of this phase is governed by a stochastic threshold mechanism: for each candidate solution $x_{i,t}$, a uniformly distributed random value $\mu_{i,t} \in (0, 1)$ is generated. If $\mu_{i,t} < 0.75$, the local search phase is omitted. This probabilistic bypass mimics scenarios where the red fox is detected by its prey, prompting a strategic pause to avoid premature pursuit and conserve energy. Conversely, if $\mu_{i,t} \geq 0.75$, the algorithm simulates the red fox advancing toward the prey, refining the solution through localized exploitation. This behavior is mathematically expressed as

$$x_{i,t+1} = x_{i,t} + \Delta x \cdot \cos\theta$$

where $\Delta x$ and $\theta$ are the step size and angular displacement, respectively. By adaptively triggering local search based on $\mu_{i,t}$, the algorithm balances exploration and exploitation, ensuring robust convergence while avoiding local optima.
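The stochastic bypass and the spiral-like step can be sketched as follows; the step-size bound is an illustrative parameter, not one reported in the paper, and pop is again assumed to be an (N, D) NumPy array.

```python
import numpy as np

def local_search(pop, max_step=0.05):
    """Red fox localized search phase (Step 2): with probability 0.75 the fox
    has been noticed and stays put; otherwise it creeps toward the prey with
    x_{i,t+1} = x_{i,t} + dx * cos(theta)."""
    new_pop = pop.copy()
    for i, x in enumerate(pop):
        mu = np.random.uniform()
        if mu < 0.75:                            # local phase omitted for this fox
            continue
        theta = np.random.uniform(0.0, 2.0 * np.pi, size=x.shape)
        dx = np.random.uniform(0.0, max_step, size=x.shape)
        new_pop[i] = x + dx * np.cos(theta)
    return new_pop
```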
Step 3: The prey location update phase. During the predation phase, the algorithm iteratively refines the candidate solution population to enhance convergence efficiency. The lowest 5% of solutions, ranked by fitness value, are systematically eliminated to remove suboptimal candidates. These positions are then replenished with new candidate solutions generated stochastically, ensuring population diversity and preventing premature stagnation.
Subsequently, a dynamic spherical search subspace is defined to focus exploration on promising regions. The subspace is characterized by its centroid $x_{center,t}$, derived from the current population's spatial distribution, and its adaptive diameter $d_{area,t}$, which scales with the problem's dimensionality and convergence progress. Mathematically, the spherical subspace $S_t$ is expressed as

$$S_t = \left\{ x \in \mathbb{R}^{n} \ \middle|\ \left\| x - x_{center,t} \right\| \leq \frac{d_{area,t}}{2} \right\}$$

$$d_{area,t} = \left\| x_{best1,t} - x_{best2,t} \right\|$$
This adaptive bounding mechanism ensures a balanced trade-off between global exploration and local exploitation, guiding the search toward high-fitness regions while maintaining algorithmic robustness in complex optimization landscapes.
For each newly generated candidate solution $x_{j,t}$, a uniformly distributed random value $k_{j,t} \in (0, 1)$ is independently sampled to govern its placement. This probabilistic threshold mechanism balances exploration and exploitation as follows:
(1) Exploration-Driven Placement ($k_{j,t} > 0.45$): The candidate solution $x_{j,t}$ is randomly positioned within the global search space, explicitly excluding the predefined spherical subspace $S_t$. This ensures a broad exploration of unexplored regions while avoiding redundancy in localized search areas.
(2) Exploitation-Driven Placement ($k_{j,t} \leq 0.45$): The candidate solution $x_{j,t}$ is derived from a weighted combination of the two elite solutions, $x_{best1,t}$ and $x_{best2,t}$, to intensify search efforts near high-fitness regions:

$$x_{j,t} = k_{j,t} \cdot \frac{x_{best1,t} + x_{best2,t}}{2}$$
where $k_{j,t}$ introduces stochasticity to diversify exploitation, while the elite solutions guide convergence toward optimal regions. The candidate $x_{j,t}$ must also satisfy the deployment constraints.
The calibrated threshold of 0.45 ensures a bias toward exploration (55% probability) while retaining targeted exploitation (45% probability), fostering a robust balance between global diversification and local intensification.
Upon generating new candidate solutions (prey), their fitness values are evaluated against the current optimal solution. If the fitness of the newly generated prey surpasses that of the incumbent optimal solution (i.e., the new solution exhibits improved performance), the position of the current optimal prey is updated to reflect the superior candidate:
$$x_{best,t} \leftarrow x_{j,t} \quad \text{if} \ f\left(x_{j,t}\right) < f\left(x_{best,t}\right)$$
where f · is the objective function. This elitist strategy ensures that the algorithm retains the best-known solution at each iteration, progressively refining convergence toward the global optimum while maintaining computational efficiency.
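Putting the pieces of Step 3 together, a hedged sketch of the prey-position update (culling the worst 5%, respawning inside or outside the spherical habitat, and the elitist replacement rule) might look as follows. The variable bounds, the rejection-sampling limit, and taking the habitat centre as the midpoint of the two elites are assumptions made for illustration; pop is an (N, D) NumPy array and fitness is maximized.

```python
import numpy as np

def prey_update(pop, fitness, is_feasible, bounds, incumbent_best):
    """Step 3: replace the worst 5% of foxes and update the incumbent best
    deployment only when a strictly better candidate appears (elitist rule).
    bounds is a list of (low, high) pairs for (h, alpha, beta)."""
    lows = np.array([b[0] for b in bounds])
    highs = np.array([b[1] for b in bounds])

    scores = np.array([fitness(x) for x in pop])
    order = np.argsort(-scores)                    # maximisation: best first
    pop = pop[order]
    best1, best2 = pop[0], pop[1]

    center = (best1 + best2) / 2.0                 # habitat centre (assumed midpoint)
    d_area = np.linalg.norm(best1 - best2)         # habitat diameter

    n_replace = max(1, int(0.05 * len(pop)))       # worst 5% are culled
    for idx in range(len(pop) - n_replace, len(pop)):
        for _ in range(100):                       # rejection sampling for feasibility
            if np.random.uniform() > 0.45:         # exploration: outside the habitat S_t
                cand = np.random.uniform(lows, highs)
                if np.linalg.norm(cand - center) <= d_area / 2.0:
                    continue
            else:                                  # exploitation: k * (best1 + best2) / 2
                cand = np.random.uniform() * (best1 + best2) / 2.0
            if is_feasible(cand):
                pop[idx] = cand
                break

    # Elitist update of the incumbent best deployment.
    scores = np.array([fitness(x) for x in pop])
    j = int(np.argmax(scores))
    if scores[j] > fitness(incumbent_best):
        incumbent_best = pop[j].copy()
    return pop, incumbent_best
```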

4. Case Study

4.1. Experimental Setup

An RS-Helios-5515 LiDAR sensor (RoboSense, Shenzhen, China) and an MV-CU03-A0GM/GC camera (Hikvision, Hangzhou, China) were deployed on a Tiguan SUV (SAIC Volkswagen, Shanghai, China) to test the effectiveness of the proposed optimized deployment method, as shown in Figure 6.
The built-in and setting parameters of the LiDAR and camera sensors in the experiments are shown in Table 1.

4.2. Comparison of Solution Algorithms

To evaluate the performance of the proposed solution algorithm, the original RFO algorithm, a genetic algorithm (GA) [34], particle swarm optimization (PSO) [35], and a global traversal algorithm were used for comparison of the solution results and convergence rates. In the experiments, the GA's crossover and mutation probabilities were set to 0.8 and 0.3, respectively, according to Ref. [34]. The cognitive and social weights of PSO were set to 0.5 and 1 according to Ref. [35].
The design principles and mechanisms of the RFO algorithm provide it with unique advantages when addressing complex optimization problems. For instance, this algorithm exhibits strong global search capabilities while maintaining good accuracy during local searches. This characteristic enables the RFO algorithm to effectively avoid becoming trapped in local optima when confronted with multimodal and nonlinear optimization problems. By comparing it with the GA and PSO, we aim to illustrate the potential advantages of the RFO algorithm in terms of solution quality and convergence speed, thereby providing strong support for its application in sensor layout optimization. The comparison results are shown in Table 2 and Figure 7.
The improved RFO algorithm, augmented with multi-dimensional trap constraints, exhibits accelerated convergence compared to baseline methods. By the 20th iteration, the algorithm approaches near-optimal solutions, achieving rapid progress toward global optima. Furthermore, the improved RFO guarantees monotonic improvement across iterations: each subsequent prey update is constrained to yield a solution no worse than the current optimal prey. This elitist update mechanism ensures that iterative results strictly dominate prior solutions, a property not inherently upheld by the RFO algorithm, GA, or PSO.
Notably, while the improved RFO algorithm produces optimization results equivalent to exhaustive global traversal (i.e., achieving the highest fitness values), it accomplishes this with a significantly reduced computational time. Although PSO retains a marginal speed advantage, its solutions exhibit markedly inferior fitness values compared to the improved RFO algorithm. This underscores a critical trade-off: the enhanced RFO algorithm prioritizes solution quality over raw computational speed, making it preferable for applications demanding high precision.

4.3. Field Test with Optimum Solution

Before the onboard LiDAR–camera platform was installed on the vehicle to collect pavement marking data, the LiDAR and camera data streams were spatiotemporally calibrated in the robot operating system (ROS) to achieve timestamp synchronization and coordinate alignment, leveraging the sensors' intrinsic and extrinsic parameters [36], as shown in Figure 8.
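For context, coordinate alignment ultimately amounts to projecting each LiDAR point into the image plane with the calibrated extrinsic and intrinsic parameters. The sketch below is a generic pinhole projection, not the authors' ROS pipeline; T_cam_lidar and K are assumed to come from the calibration procedure in [36], and all names are illustrative.

```python
import numpy as np

def project_lidar_to_image(points_xyz, T_cam_lidar, K):
    """Project LiDAR points into the camera image.

    points_xyz: (N, 3) points in the LiDAR frame; T_cam_lidar: 4x4 extrinsic
    transform from the LiDAR frame to the camera frame; K: 3x3 intrinsic matrix.
    Returns pixel coordinates for the points in front of the lens and the mask.
    """
    pts = np.asarray(points_xyz, float)                        # (N, 3)
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])           # homogeneous (N, 4)
    cam = (T_cam_lidar @ pts_h.T).T[:, :3]                     # points in camera frame
    in_front = cam[:, 2] > 0                                   # keep points ahead of the lens
    uv = (K @ cam[in_front].T).T
    uv = uv[:, :2] / uv[:, 2:3]                                # pixel coordinates (u, v)
    return uv, in_front
```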
In terms of pavement materials, there are significant differences between asphalt and concrete roads in optical properties and surface roughness. Asphalt roads typically have a higher surface roughness and lower reflectivity, which makes the contrast with pavement markings more pronounced, facilitating the extraction and detection of these markings and thus improving detection accuracy. Conversely, concrete roads tend to be smoother and have higher reflectivity, which can hinder the extraction and detection of line markings.
Secondly, the differences between thermoplastic pavement markings and paint film pavement markings should not be overlooked. Thermoplastic line markings possess strong wear resistance and better reflective properties, allowing them to maintain high visibility under various weather conditions. This enables LiDAR sensors to more clearly identify the markings during detection, thereby reducing the likelihood of misjudgment. In contrast, paint film pavement markings may be inferior to thermoplastic materials regarding durability and reflectivity, particularly under long-term use and high traffic conditions, which could lead to a gradual decline in performance and consequently reduce the reliability of LiDAR sensor detection.
Experiments were conducted to verify the effectiveness of the proposed LiDAR–camera deployment optimization method for pavement marking distress fusion detection. The field tests were carried out on Eco Street in Changchun, China, which features asphalt pavement and thermoplastic markings, under partial shadowing and no precipitation; the road markings were mostly intact, although some were damaged, as shown in Figure 9.
To evaluate the reliability of the proposed LiDAR–camera deployment optimization method for pavement marking distress fusion detection, the pavement marking detection length ($f_1$) and the number of laser points falling on the pavement marking ($f_2$) obtained from the field tests were compared with those predicted by the proposed mathematical model. The results are shown in Table 3 and Figure 10.
In the field test, data were collected on white and yellow pavement markings under shadows and varying lighting conditions. Such conditions degrade the accuracy of image-based detection methods due to reduced contrast and false feature extraction. However, integrating LiDAR point clouds mitigates shadow-induced interference by supplementing 3D spatial data, which remains invariant to lighting variations.
In addition, LiDAR–camera data fusion can capture richer marking data while demonstrating strong anti-interference capability. This synergistic approach ensures the reliable detection of pavement marking distress and provides a robust foundation for subsequent fusion pavement marking distress detection algorithms under challenging environments.
Furthermore, the results of the field test and mathematical model comparison reveal a relative error of less than 7%, demonstrating a high degree of concordance. This narrow margin of error empirically validates the effectiveness of the proposed method in real-world scenarios, confirming its practical applicability for pavement marking distress fusion detection.
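For reference, taking the mathematical-model values in Table 3 as the denominators reproduces the reported errors:

$$\varepsilon_{L} = \frac{|0.62 - 0.58|}{0.62} \approx 6.45\%, \qquad \varepsilon_{N} = \frac{|5495 - 5127|}{5495} \approx 6.70\%$$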

5. Conclusions

In this study, an onboard LiDAR–camera deployment optimization method was proposed to address shadows and light changes in pavement marking distress detection by leveraging the strengths of each individual sensor. First, the LiDAR and camera sensors' scanning ranges were assessed according to their inherent characteristics. Then, the LiDAR–camera deployment optimization problem was mathematically formulated to achieve the largest overlapping scanning area and the highest point cloud density on pavement markings for distress fusion detection. Finally, an improved RFO algorithm was developed to speed up the convergence rate and optimize the solution by leveraging a multi-dimensional trap mechanism and an improved prey position update strategy.
The experimental results show that the proposed solution algorithm obtains the optimal deployment parameters in 22.50 s, with a mounting height, LiDAR rotation angle, and LiDAR installation inclination of 0.53 m, 60°, and 12°, respectively. With the optimized deployment, the field test results show that the relative error between the mathematical model evaluation and the field test measurements is less than 7%, with 5127 LiDAR points falling on a 0.58 m pavement marking per data frame for fusion distress detection. The experiments also demonstrated the reliability and robustness of LiDAR–camera fusion detection in real-world scenarios under different shadow and light conditions.
However, this study is not free from limitations. (1) The proposed method was tested on straight pavement markings. However, there are curved roads and roundabouts in the real world. Therefore, the proposed method should be further tested on curved pavement markings. (2) In the experiments, the LiDAR sensor RS-Helios-5515 and the camera sensor MV-CU03-A0GM/GC were used to test and evaluate their performance. In future studies, more types and brands of LiDAR and camera sensors should be tested to determine a cost-effective sensor for pavement marking distress fusion detection. (3) LiDAR–camera fusion data for pavement marking detection in different shadow and light conditions was shown in this study. More pavement marking distress fusion detection and evaluation algorithms will be developed in future work. (4) Another limitation is that this study was conducted using a specific experimental vehicle. In future studies, we will consider conducting testing on other types of experimental vehicles to change the constraints of the mounting brackets and installation heights, enabling broader detection capabilities.

Author Contributions

The authors’ contributions to this paper are as follows: study conception and design: C.L. and B.G.; model design and implementation: W.S. and G.S.; analysis and interpretation of results: H.L., G.S. and C.L.; draft manuscript preparation: B.G., W.S. and C.L.; funding acquisition: B.G. and C.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work is partially supported by the Scientific and Technological Development Project of Jilin Province (Grant No. 20240304143SF).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Carlson, P.J.; Park, E.S.; Andersen, C.K. Benefits of Pavement Markings A Renewed Perspective Based on Recent and Ongoing Research. Transp. Res. Rec. 2009, 2107, 59–68. [Google Scholar] [CrossRef]
  2. Masliah, M.; Bahar, G.; Hauer, E. Application of innovative time series methodology to relationship between retroreflectivity of pavement markings and crashes. Transp. Res. Rec. 2007, 2019, 119–126. [Google Scholar] [CrossRef]
  3. Lin, H.Y.; Chang, C.K.; Tran, V.L. Lane detection networks based on deep neural networks and temporal information. Alex. Eng. J. 2024, 98, 10–18. [Google Scholar] [CrossRef]
  4. Lu, F.F.; Sun, G.X.; Yu, H.Q.; Huang, Y.J.; Zhou, T.; Yao, S.Y. LPCNet: End-to-end lane detection with PnP compression and condition DETR. Displays 2025, 87, 102902. [Google Scholar] [CrossRef]
  5. Seo, H.; Shi, Y.F.; Fu, L. Automatic Damage Detection of Pavement through DarkNet Analysis of Digital, Infrared, and Multi-Spectral Dynamic Imaging Images. Sensors 2024, 24, 464. [Google Scholar] [CrossRef]
  6. Lin, C.Y.; Sun, G.H.; Gong, B.W.; Liu, H.; Liu, H.C. Pavement Marking Worn Identification and Classification Using Low-Channel LiDAR. IEEE Trans. Instrum. Meas. 2025, 74, 5006010. [Google Scholar] [CrossRef]
  7. Guan, Y.Y.; Hu, J.B.; Wang, R.H.; Cao, Q.Y.; Xie, F.C. Research on the nighttime visibility of white pavement markings. Heliyon 2024, 10, e36533. [Google Scholar] [CrossRef]
  8. Mazzoni, L.N.; Vasconcelos, K.; Albarracín, O.; Bernucci, L.; Linhares, G. Field Data Analysis of Pavement Marking Retroreflectivity and Its Relationship with Paint and Glass Bead Characteristics. Appl. Sci. 2024, 14, 4205. [Google Scholar] [CrossRef]
  9. Lu, C.H.; Shao, B.E. Environment-Aware Multiscene Image Enhancement for Internet of Things Enabled Edge Cameras. IEEE Syst. J. 2021, 15, 3439–3449. [Google Scholar] [CrossRef]
  10. Chimba, D.; Kidando, E.; Onyango, M. Evaluating the Service Life of Thermoplastic Pavement Markings: Stochastic Approach. J. Transp. Eng. Part B-Pavements 2018, 144, 04018029. [Google Scholar] [CrossRef]
  11. Lin, C.Y.; Sun, G.H.; Tan, L.D.; Gong, B.W.; Wu, D.Y. Mobile LiDAR Deployment Optimization: Towards Application for Pavement Marking Stained and Worn Detection. IEEE Sens. J. 2022, 22, 3270–3280. [Google Scholar] [CrossRef]
  12. Li, L.; Luo, W.T.; Wang, K.C.P. Lane Marking Detection and Reconstruction with Line-Scan Imaging Data. Sensors 2018, 18, 1635. [Google Scholar] [CrossRef]
  13. Wang, H.; Liu, J.H.; Dong, H.R.; Shao, Z. A Survey of the Multi-Sensor Fusion Object Detection Task in Autonomous Driving. Sensors 2025, 25, 2794. [Google Scholar] [CrossRef]
  14. Yuan, X.Y.; Liu, Y.; Xiong, T.F.; Zeng, W.; Wang, C. Semantic Fusion Algorithm of 2D LiDAR and Camera Based on Contour and Inverse Projection. Sensors 2025, 25, 2526. [Google Scholar] [CrossRef]
  15. McCall, J.C.; Trivedi, M.M. Video-based lane estimation and tracking for driver assistance: Survey, system, and evaluation. IEEE Trans. Intell. Transp. Syst. 2006, 7, 20–37. [Google Scholar] [CrossRef]
  16. Eckelmann, S.; Trautmann, T.; Zhang, X.Y.; Michler, O. Empirical Evaluation of a Novel Lane Marking Type for Camera and LiDAR Lane Detection. In Proceedings of the 18th International Conference on Informatics in Control, Automation and Robotics (ICINCO), held online, 6–8 July 2021; pp. 69–77. [Google Scholar]
  17. Borkar, A.; Hayes, M.; Smith, M.T. A Novel Lane Detection System With Efficient Ground Truth Generation. IEEE Trans. Intell. Transp. Syst. 2012, 13, 365–374. [Google Scholar] [CrossRef]
  18. Son, J.; Yoo, H.; Kim, S.; Sohn, K. Real-time illumination invariant lane detection for lane departure warning system. Expert Syst. Appl. 2015, 42, 1816–1824. [Google Scholar] [CrossRef]
  19. Sulistyaningrum, D.R.; Oranova, D.; Pramestya, R.H.; Mukhlash, I.; Setiyono, B.; Ahyudanari, E. Pavement Distress Classification Using Deep Learning Method Based on Digital Image. In Proceedings of the 2nd International Symposium on Transportation Studies in Developing Countries (ISTSDC), Kendari, Indonesia, 1–3 November 2019; pp. 188–190. [Google Scholar]
  20. Alzraiee, H.; Ruiz, A.L.; Sprotte, R. Detecting of Pavement Marking Defects Using Faster R-CNN. J. Perform. Constr. Facil. 2021, 35, 04021035. [Google Scholar] [CrossRef]
  21. Zhou, S.Y.; Jiang, Y.H.; Xi, J.Q.; Gong, J.W.; Xiong, G.M.; Chen, H.Y. A Novel Lane Detection based on Geometrical Model and Gabor Filter. In Proceedings of the IEEE Intelligent Vehicles Symposium (IV), San Diego (UCSD), San Diego, CA, USA, 21–24 June 2010; pp. 59–64. [Google Scholar]
  22. Zhang, A.; Wang, K.C.P.; Yang, E.; Li, J.Q.; Chen, C.; Qiu, Y. Pavement lane marking detection using matched filter. Measurement 2018, 130, 105–117. [Google Scholar] [CrossRef]
  23. Cheng, Y.-T.; Patel, A.; Wen, C.; Bullock, D.; Habib, A. Intensity Thresholding and Deep Learning Based Lane Marking Extraction and Lane Width Estimation from Mobile Light Detection and Ranging (LiDAR) Point Clouds. Remote Sens. 2020, 12, 1379. [Google Scholar] [CrossRef]
  24. Rastiveis, H.; Shams, A.; Sarasua, W.A.; Li, J. Automated extraction of lane markings from mobile LiDAR point clouds based on fuzzy inference. Isprs J. Photogramm. Remote Sens. 2020, 160, 149–166. [Google Scholar] [CrossRef]
  25. Hörster, E.; Lienhart, R. On the Optimal Placement of Multiple Visual Sensors; University of Augsburg: Augsburg, Germany, 2006. [Google Scholar]
  26. Geissler, F.; Gräfe, R. Optimized sensor placement for dependable roadside infrastructures. In Proceedings of the IEEE Intelligent Transportation Systems Conference (IEEE-ITSC), Auckland, New Zealand, 27–30 October 2019; pp. 2408–2413. [Google Scholar]
  27. Du, Y.C.A.; Wang, F.Q.; Zhao, C.; Zhu, Y.F.; Ji, Y.X. Quantifying the performance and optimizing the placement of roadside sensors for cooperative vehicle-infrastructure systems. Iet Intell. Transp. Syst. 2022, 16, 908–925. [Google Scholar] [CrossRef]
  28. Dey, J.; Taylor, W.; Pasricha, S. VESPA: A Framework for Optimizing Heterogeneous Sensor Placement and Orientation for Autonomous Vehicles. IEEE Consum. Electron. Mag. 2021, 10, 16–26. [Google Scholar] [CrossRef]
  29. Cao, Q.; Li, Z.H.; Tao, P.F.; Zhao, Y.H. Reallocation of Heterogeneous Sensors on Road Networks for Traffic Accident Detection. IEEE Trans. Instrum. Meas. 2023, 72, 1006911. [Google Scholar] [CrossRef]
  30. Li, Y.; Zhang, H.; Cheng, Z.H.; Wang, Z.J.; Zhang, Z.Y.; Yao, X.P. Optimal Layout Method for Roadside LiDAR and Camera. IEEE Access 2024, 12, 26905–26918. [Google Scholar] [CrossRef]
  31. Polap, D.; Wozniak, M. Red fox optimization algorithm. Expert Syst. Appl. 2021, 166, 114107. [Google Scholar] [CrossRef]
  32. Zaborski, M.; Wozniak, M.; Mandziuk, J. Multidimensional Red Fox meta-heuristic for complex optimization. Appl. Soft Comput. 2022, 131, 109774. [Google Scholar] [CrossRef]
  33. Lanszki, J.; Bende, Z.; Nagyapáti, N.; Lanszki, Z.; Pongrácz, P. Optimal prey for red fox cubs-An example of dual optimizing foraging strategy in foxes from a dynamic wetland habitat. Ecol. Evol. 2023, 13, e10033. [Google Scholar] [CrossRef]
  34. Zheng, C.C.; Li, X.S.; You, J.; Liu, Y.; Bai, Y.R.; Hu, T. Ellipse Fitting Method Based on Pendulum Genetic Algorithm Particle Swarm Optimization for Well Diameter Measurement by Laser Distance Sensor. IEEE Sens. J. 2025, 25, 587–600. [Google Scholar] [CrossRef]
  35. Wang, M.H.; Wang, X.B.; Jiang, K.W.; Fan, B. Reinforcement Learning-Enabled Resampling Particle Swarm Optimization for Sensor Relocation in Reconfigurable WSNs. IEEE Sens. J. 2022, 22, 8257–8267. [Google Scholar] [CrossRef]
  36. Tsai, D.; Worrall, S.; Shan, M.; Lohr, A.; Nebot, E. Optimising the selection of samples for robust lidar camera calibration. In Proceedings of the IEEE Intelligent Transportation Systems Conference (ITSC), Indianapolis, IN, USA, 19–22 September 2021; pp. 2631–2638. [Google Scholar]
Figure 1. The schematic diagram of the camera’s scanning range.
Figure 2. The schematic diagram of the LiDAR sensor’s scanning range.
Figure 3. The schematic diagram of the inverse relationship between the sensor’s installation height and the pavement marking detection length.
Figure 4. The schematic diagram of the inverse relationship between the LiDAR sensor’s rotation angle α and the pavement scanning width.
Figure 5. The schematic diagram showing the side view of the LiDAR–camera sensor’s overlap scanning area: (a) the LiDAR sensor mounted vertically; (b) the LiDAR sensor mounted with an inclination β .
Figure 6. The snapshot of onboard LiDAR–camera sensor deployment.
Figure 7. The convergence rates of different algorithms.
Figure 8. The snapshot of LiDAR–camera spatiotemporal calibration in robot operating system.
Figure 9. The snapshot of field test experiments: (a) the field test route in the experiments; (b) the onboard LiDAR–camera deployment based on optimization results.
Figure 10. The pavement marking detection results under different shadow and light conditions: (a1d1) images extracted from the camera; (a2d2) the point clouds extracted from the LiDAR sensor; (a3d3) LiDAR–camera data fusion for pavement marking distress detection.
Table 1. The built-in and setting parameters of the LiDAR and camera sensors in the experiments.
Sensor | Feature | Parameter
Camera | Frame Rate | 201.4 FPS
Camera | Resolution | 1280 × 1024
Camera | Focal Length | 3.5 mm
Camera | Angle of View | H: 82.6°; V: 70.2°
LiDAR | Horizontal FOV | 360°
LiDAR | Vertical FOV | −55° to +15°
LiDAR | Frame Rate | 10–20 Hz
LiDAR | Channels | 32
LiDAR | Vertical Resolution | 1.44°–2°
LiDAR | Horizontal Resolution | 0.2°–0.4°
Table 2. The LiDAR and camera sensor’s deployment parameters solved using different algorithms.
Algorithm | h (m) | α (°) | β (°) | Fitness | Running Time (s)
Global traversal | 0.53 | 60 | 12 | 3406.9 | 1127.38
GA | 0.55 | 60 | 12 | 3207.2 | 105.38
PSO | 0.59 | 60 | 6 | 2502.8 | 18.06
RFO | 0.54 | 60 | 10 | 3195.4 | 37.1
Improved RFO | 0.53 | 60 | 12 | 3406.9 | 22.50
Table 3. The comparison of the optimization objective between the field tests and mathematical model.
Objective | Field Test | Mathematical Model | Error (%)
Length (m) | 0.58 | 0.62 | 6.45
Point Number | 5127 | 5495 | 6.70
