Article

Placement Method of Multiple Lidars for Roadside Infrastructure in Urban Environments

1 Research and Development Department, Korea Intelligent Automotive Parts Promotion Institute (KIAPI), Daegu 43011, Republic of Korea
2 Department of Control and Robot Engineering, Chungbuk National University, Cheongju 28644, Republic of Korea
3 Department of Intelligent Systems and Robotics, Chungbuk National University, Cheongju 28644, Republic of Korea
* Author to whom correspondence should be addressed.
Sensors 2023, 23(21), 8808; https://doi.org/10.3390/s23218808
Submission received: 25 September 2023 / Revised: 27 October 2023 / Accepted: 28 October 2023 / Published: 29 October 2023
(This article belongs to the Special Issue Sensors and Systems for Automotive and Road Safety (Volume 2))

Abstract

Sensors on autonomous vehicles have inherent physical constraints. To address these limitations, several studies have been conducted to enhance sensing capabilities by establishing wireless communication between infrastructure and autonomous vehicles. Various sensors are strategically positioned within the road infrastructure, providing essential sensory data to these vehicles. The primary challenge lies in sensor placement, as it necessitates identifying optimal locations that minimize blind spots while maximizing the sensor’s coverage area. Therefore, to solve this problem, a method for positioning multiple sensor systems in road infrastructure is proposed. By introducing a voxel grid, the problem is formulated as an optimization challenge, and a genetic algorithm is employed to find a solution. Experimental findings using lidar sensors are presented to demonstrate the efficacy of this proposed approach.

1. Introduction

In the realm of autonomous driving research, sensors play a pivotal role. Cameras, for instance, perceive incoming light through their lenses and image sensors. Radar, short for “radio detection and ranging”, employs electromagnetic waves to estimate position and relative velocity.
Lidar, or “light detection and ranging”, stands out as a key sensor in autonomous driving research. It gauges position by measuring the time it takes for a laser beam to be emitted, reflect off an object, and return. Lidar, especially the time-of-flight (TOF) variant, is widely used in autonomous vehicles to acquire point cloud data, aiding in object detection, localization, and map construction. Integrating lidar into roadside infrastructure is an emerging direction, and several studies have investigated roadside lidar [1,2,3,4,5,6,7,8,9,10].
A high-channel lidar, typically having 64 channels or more, offers advantages such as narrower beam spacing, facilitating feature extraction and the detection of distant objects. However, these sensors are expensive, making it difficult to install multiple units in one space. Moreover, they are constrained by their limited measurement range and susceptibility to occlusion.
Research on the placement of multiple lidars has primarily focused on vehicles. S. Roos et al. [11] placed multiple lidars and assessed their performance using the CARLA simulator. T. Kim et al. [12] introduced a multi-lidar placement approach that takes into account blind spots created by other vehicles. They placed a 2D occupancy grid board at a specified distance and calculated occupancy using the point cloud reflected on the board. Furthermore, various studies have proposed techniques for placing multiple sensors within a single vehicle [13,14,15,16,17].
Existing research on lidar placement primarily focuses on vehicles and does not consider the placement problem for infrastructure sensors. Urban environments often feature numerous obstructions, such as streetlights and traffic signals, at intersections. These obstructions can degrade the quality of raw data through occlusion. Figure 1 shows a point cloud acquired from a typical intersection. The quality of these raw data varies depending on lidar placement, leading to a drop in object detection performance. Therefore, the optimization of sensor placement in urban settings emerges as a crucial challenge. Infrastructure lidar placement offers a higher degree of freedom, requiring consideration of both the xyz positions and the roll, pitch, and yaw angles [18]. X. Cai et al. [18] placed multiple lidars at an intersection and analyzed the impact of their placements on recognition performance. S. Jin et al. [19] put forth an evaluation method for lidar placement, but their approach does not systematically model the real environment, including the blind spots that obstructions create.
A. Qu et al. [20] proposed implementing the environment in a simulation and placing sensors, designating candidate locations, and optimizing them using the point cloud projected on the road surface while excluding buildings, sidewalks, etc. L. Kloeker et al. [21] proposed optimizing lidar placement in infrastructure by utilizing a 3D digital map. They divided the road into triangles based on the OpenDRIVE [22] format map and optimized lidar placement by considering the number of points projected onto these triangles.
Several studies have used multiple lidars. However, they often fail to reflect the distinctive features of urban roads, which are characterized by numerous buildings, poles, and other obstructions, or they rely primarily on simulations. In this study, we propose a method for placing multiple sensors that identifies the optimal position and orientation maximizing the data detection range. The proposed method takes into account blind spots arising from the road environment. To quantify the parameters for sensor placement, we introduce a novel approach that evaluates occupancy through voxelization of a map. Additionally, we employ a genetic algorithm to solve the sensor placement problem: sensor placements are represented by chromosomes, and new chromosomes are generated through crossover and mutation operators. The experiments of this study were conducted in a simulation replicating the real environment, and the results affirm the effectiveness of the proposed method. The key contributions of this paper are as follows:
  • Modeling the lidar placement environment based on real-world data rather than simulation.
  • Converting the point cloud map and lidar beams into a computable discrete signal format (voxel) for quantitative evaluation.
  • Optimizing the positions and directions of the lidars using genetic algorithm chromosomes and introducing a 2-opt local optimization method.
  • Proposing a placement method for multiple lidars (two or more) on infrastructure, replicating and validating the simulation placement results in a real environment.
This paper is structured as follows. Section 2 introduces a multiple lidar system in an urban environment, and Section 3 provides a mathematical description of the problem. Section 4 describes the optimization algorithm, while Section 5 presents the experimental results.

2. Multiple Lidar System in an Urban Environment

The mechanically rotating 3D lidar radiates $N_{bm,\theta}$ channels of laser beams in the vertical direction, forming a rotating laser beam array in the horizontal direction. The angle between adjacent beams in the vertical or horizontal direction is referred to as the resolution, denoted by $ang_{bm,\theta}^{\alpha}$ ($\alpha = 1, \ldots, N_{bm,\theta}$) and $ang_{bm,\phi}$, respectively. The lowest and highest angles in the vertical direction are represented as $ang_{bm,\theta}^{-}$ and $ang_{bm,\theta}^{+}$, respectively.
The lidar sensor calculates distances by measuring the time taken for emitted laser beams to reflect off objects and return. This information is represented as points. Figure 2 shows the resolution and reflection points of the lidar.
When a lidar is placed along an urban road, it may encounter obstacles like buildings that can block the laser beams, causing what is commonly known as a “dead zone”. To minimize dead zones, multiple lidars are strategically positioned at intersections. Each lidar covers a specific area, resulting in overlapping coverage regions. Figure 3 provides an example of multiple lidar systems, their coverage areas, and the overlapping coverage regions. Figure 3a displays the positions of three lidars placed at an intersection, with the cyan polygons representing buildings that replicate the urban environment and create blind spots in sensing. Figure 3b shows the coverage of each lidar in relation to the blind spots caused by the buildings. Figure 3c demonstrates the overlapping coverage provided by the three lidars, effectively reducing blind spots at intersections through their strategic deployment. Almost all areas around intersections are covered by lidar.
In a multiple lidar system containing $N_{ld}$ lidars, the position of the $l$-th lidar is as follows:

$$p_{ld}^{l} = (x_{ld}^{l},\ y_{ld}^{l},\ z_{ld}^{l},\ r_{ld}^{l},\ p_{ld}^{l}), \qquad p_{ld}^{l} \in S \quad (l = 1, \ldots, N_{ld})$$

where $S$ is the set of potential positions for the placement of lidars. The position $p_{ld}^{1}$ of the first (reference) lidar is fixed by the user, while the positions of the remaining lidars are determined by the proposed method.
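For concreteness, a lidar placement and the candidate set $S$ can be held in a small data structure. The following is a minimal C++ sketch, not code from the paper; the reading of $r_{ld}^{l}$ and $p_{ld}^{l}$ as roll and pitch follows the five chromosome sections introduced in Section 4, and all names are illustrative.

```cpp
#include <vector>

// One lidar placement p_ld^l = (x, y, z, r, p): a position plus roll
// and pitch. Yaw is presumably omitted because a mechanically rotating
// lidar covers 360 degrees horizontally.
struct LidarPose {
    double x, y, z;   // position [m]
    double roll;      // rotation about the x-axis [rad]
    double pitch;     // rotation about the y-axis [rad]
};

// The set S of admissible placements (e.g., poses sampled along the
// roadside) can be kept as an explicit list of candidates.
using CandidateSet = std::vector<LidarPose>;
```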
The number and density of lidar points increase in areas where the coverage overlaps. Efficient placement of multiple lidars reduces dead zones at urban intersections, as shown in Figure 4, and enables the acquisition of high-resolution data, similar to that achievable with high-channel lidars.

3. Problem Formulation

The main problem of multiple lidar placement in urban environments is minimizing blind spots while maximizing point density. A further constraint is that the locations for lidar placement are restricted to specific areas, such as the roadside, so as not to interfere with the road on which vehicles operate. Therefore, an optimization method is required that models the real environment and identifies the placement that minimizes blind spots.
In this study, we introduce the concept of Lidar Occupancy Space (LOS) for optimization. The LOS is composed of voxels; voxelization down-samples the point cloud and converts it into a normalized, discrete signal format. This makes the real environment computationally tractable and reduces the computational load of placement optimization. An LOS is generated to match the size of the user’s region of interest $(LOS_x,\ LOS_y,\ LOS_z)$. Since the point cloud map, derived from real lidar data, accurately reflects the actual environment, the LOS is weighted using this map, and the occupancy rate is determined as the lidar beams traverse the LOS.
Additionally, we propose a Lidar Occupancy Voxel (LOV) grid to assess the distribution of lidar beams. The LOV consists of unit-length cubes ($U_{LOV}$). The LOS is divided into $W_{LOS} \times D_{LOS} \times H_{LOS}$ voxel grids, determined by the size of the LOS and the unit length $L_{LOV}$ of the LOV. Figure 5 shows the LOV and the LOS, where the LOS is an aggregation of LOVs.
To ensure the accurate weighting of the LOS using the point cloud map, the point cloud map must undergo preprocessing (filtering) to match the size of $(LOS_x,\ LOS_y,\ LOS_z)$. When the LOS and the point cloud map overlap, the distribution of points within each LOV varies, as shown in Figure 6. Figure 6a presents the point cloud map, while Figure 6b shows the LOS generated from this map. The weight of each LOV comprising the LOS is determined by the number of points contained within the corresponding voxel, with yellow indicating the presence of weight. For the voxel grid $LOV(i,\ j,\ k)$, the weight $LOV_{wg}(i,\ j,\ k)$ is defined as follows:

$$LOV_{wg}(i,\ j,\ k) = \begin{cases} 1, & N_{wg}(i,\ j,\ k) \ge 1 \\ 0, & N_{wg}(i,\ j,\ k) = 0 \end{cases}$$

where $N_{wg}(i,\ j,\ k)$ is the number of points of the point cloud map included in $LOV(i,\ j,\ k)$. $LOV_{wg}(i,\ j,\ k)$ denotes whether the grid is occupied by the point cloud map; the grid is weighted if it is occupied.
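The weighting step can be sketched in C++ as follows, assuming the point cloud map is given as a plain array of points and that the LOS origin coincides with its minimum corner. This is a sketch with hypothetical names, not the authors’ implementation.

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

struct Point { double x, y, z; };

// Lidar occupancy space: a W x D x H grid of voxels with edge length
// L_LOV. weight holds the binary LOV_wg(i, j, k) values.
struct LOS {
    double origin[3];             // minimum corner of the region of interest
    double L;                     // voxel edge length L_LOV [m]
    int W, D, H;                  // grid dimensions W_LOS x D_LOS x H_LOS
    std::vector<uint8_t> weight;  // W * D * H cells, row-major

    int index(int i, int j, int k) const { return (k * D + j) * W + i; }

    // LOV_wg(i, j, k) = 1 if at least one map point falls inside the
    // voxel (N_wg >= 1), and 0 otherwise.
    void assignWeights(const std::vector<Point>& map) {
        weight.assign(static_cast<size_t>(W) * D * H, 0);
        for (const Point& p : map) {
            int i = static_cast<int>(std::floor((p.x - origin[0]) / L));
            int j = static_cast<int>(std::floor((p.y - origin[1]) / L));
            int k = static_cast<int>(std::floor((p.z - origin[2]) / L));
            if (i >= 0 && i < W && j >= 0 && j < D && k >= 0 && k < H)
                weight[index(i, j, k)] = 1;
        }
    }
};
```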
In the proposed method, occupancy is evaluated based on whether a lidar beam passes through the voxel grids $LOV(i,\ j,\ k)$ that constitute the LOS, determined as follows (a code sketch follows the steps below):
  • Step 1. Set the lidar counter $l$ to 1.
  • Step 2. Set the horizontal angle counter $\beta$ to 0.
  • Step 3. Set the vertical angle counter $\alpha$ to 1, and set the vertical angle variable $\hat{ang}_{bm,\theta}$ to the lowest vertical angle $ang_{bm,\theta}^{-}$.
  • Step 4. Calculate the intersection $p_{bm}(\alpha,\ \beta)$ of the six outermost surfaces of the LOS and the lidar beam $bm(\alpha,\ \beta)$ as follows:
    $$x_{bm}(\alpha,\ \beta) = r \cos(\hat{ang}_{bm,\theta}) \cos(\beta \, ang_{bm,\phi})$$
    $$y_{bm}(\alpha,\ \beta) = r \cos(\hat{ang}_{bm,\theta}) \sin(\beta \, ang_{bm,\phi})$$
    $$z_{bm}(\alpha,\ \beta) = r \sin(\hat{ang}_{bm,\theta})$$
    where $r = \lVert p_{bm}(\alpha,\ \beta) - p_{ld}^{l} \rVert$ is the distance between the lidar and the intersection.
  • Step 5. Using the Bresenham algorithm [23], store the index $idx_\gamma = (i,\ j,\ k)$ of each voxel grid $LOV(i,\ j,\ k)$ included in the line segment between $p_{ld}^{l}$ and $p_{bm}(\alpha,\ \beta)$ in the array $IDX$:
    $$IDX = \{ idx_0,\ \ldots,\ idx_{N_{IDX}} \}$$
    where $N_{IDX}$ is the number of voxel grids included in the line segment. The Bresenham algorithm is a computer graphics algorithm for drawing straight lines using integer calculations exclusively, avoiding the complexity and slowness of floating-point arithmetic. Because computer screens consist of integer-addressed pixels, a line drawn from its real-valued equation must be approximated across multiple pixels. In this study, we generated the LOS by introducing voxels and efficiently approximated the beam segments within the LOVs that constitute the LOS.
  • Step 6. Set the index search counter $\gamma$ to 0.
  • Step 7. If $LOV_{wg}(idx_\gamma) = 1$, set the voxels $LOV(idx_0),\ \ldots,\ LOV(idx_\gamma)$ to 1 and move to Step 9.
  • Step 8. Increment the index search counter $\gamma$ and move to Step 7.
  • Step 9. Increment the vertical angle counter $\alpha$ and update $\hat{ang}_{bm,\theta}$ by the following equation. If $\hat{ang}_{bm,\theta} \le ang_{bm,\theta}^{+}$, move to Step 4:
    $$\hat{ang}_{bm,\theta} = \hat{ang}_{bm,\theta} + ang_{bm,\theta}^{\alpha}$$
  • Step 10. Increment the horizontal angle counter $\beta$. If $\beta \le 2\pi / ang_{bm,\phi}$, move to Step 3.
  • Step 11. Increment the lidar counter $l$. If $l \le N_{ld}$, move to Step 2.
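The beam-casting loop of Steps 2–10 can be sketched in C++, reusing the LOS structure from the sketch above. This is a simplified reading rather than the authors’ code: the ray is sampled at half-voxel steps instead of the integer Bresenham traversal of [23], roll and pitch are ignored for brevity, and the BeamModel type and all other names are hypothetical.

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Per-lidar beam geometry: one vertical angle per channel plus the
// horizontal resolution ang_bm,phi.
struct BeamModel {
    std::vector<double> elevation;  // vertical beam angles [rad]
    double azimuthStep;             // horizontal resolution [rad]
};

// Cast every beam (alpha, beta) from the lidar at (lx, ly, lz); when a
// beam reaches a weighted voxel, mark the traversed voxels
// LOV(idx_0) .. LOV(idx_gamma) as occupied (Step 7) and stop the beam.
// lov must have the same size as los.weight. Returns the number of
// newly occupied voxels contributed by this lidar.
long evaluateOccupancy(const LOS& los, std::vector<uint8_t>& lov,
                       double lx, double ly, double lz,
                       const BeamModel& bm, double maxRange) {
    const double kTwoPi = 6.283185307179586;
    long newlyOccupied = 0;
    std::vector<int> traversed;  // IDX array for the current beam
    for (double az = 0.0; az < kTwoPi; az += bm.azimuthStep) {
        for (double el : bm.elevation) {
            double dx = std::cos(el) * std::cos(az);  // beam direction
            double dy = std::cos(el) * std::sin(az);
            double dz = std::sin(el);
            traversed.clear();
            for (double r = 0.0; r < maxRange; r += 0.5 * los.L) {
                int i = static_cast<int>(std::floor((lx + r * dx - los.origin[0]) / los.L));
                int j = static_cast<int>(std::floor((ly + r * dy - los.origin[1]) / los.L));
                int k = static_cast<int>(std::floor((lz + r * dz - los.origin[2]) / los.L));
                if (i < 0 || i >= los.W || j < 0 || j >= los.D || k < 0 || k >= los.H)
                    continue;  // sample lies outside the LOS
                int idx = los.index(i, j, k);
                if (!traversed.empty() && traversed.back() == idx)
                    continue;  // still inside the previous voxel
                traversed.push_back(idx);
                if (los.weight[idx]) {  // first weighted voxel: beam stops
                    for (int t : traversed)
                        if (!lov[t]) { lov[t] = 1; ++newlyOccupied; }
                    break;
                }
            }
        }
    }
    return newlyOccupied;  // summing over all lidars yields LO
}
```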
The total lidar occupancy $LO$ of the lidar beams relative to the LOV, as shown in Figure 7, is the sum over all voxel grids and is calculated as follows:

$$LO = \sum_{k=1}^{H_{LOS}} \sum_{j=1}^{D_{LOS}} \sum_{i=1}^{W_{LOS}} LOV(i,\ j,\ k)$$

The lidar occupancy rate $\%LO$ is the ratio of $LO$ to the total number of voxel grids:

$$\%LO = \frac{LO}{W_{LOS} \times D_{LOS} \times H_{LOS}}$$

The lidar placement problem aims to determine the lidar locations $p_{ld}^{l} \in S$ ($l = 1, \ldots, N_{ld}$) that maximize the lidar occupancy $LO$. Since the voxel grid values depend on the lidar placement, the problem is defined mathematically as follows:

$$p_{ld}^{l,*} = \operatorname*{argmax}_{p_{ld}^{l} \in S} \sum_{k=1}^{H_{LOS}} \sum_{j=1}^{D_{LOS}} \sum_{i=1}^{W_{LOS}} LOV(i,\ j,\ k)$$
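Continuing the sketches above, the objective is then just the voxel sum; a minimal helper pair (hypothetical names) computing $LO$ and $\%LO$:

```cpp
// Total lidar occupancy LO: the sum of LOV(i, j, k) over the grid.
long totalLO(const std::vector<uint8_t>& lov) {
    long lo = 0;
    for (uint8_t v : lov) lo += v;
    return lo;
}

// Occupancy rate %LO = LO / (W_LOS * D_LOS * H_LOS).
double occupancyRate(long lo, const LOS& los) {
    return static_cast<double>(lo) /
           (static_cast<double>(los.W) * los.D * los.H);
}
```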

4. Optimization

This section describes the placement optimization algorithm. The placement algorithm is designed to find a near-optimal solution. The genetic algorithm is a search method that identifies the optimal solution by simulating the way organisms evolve and adapt to their environment. This algorithm operates by selecting the chromosome with the best fitness from a set of chromosomes and iteratively refining the search in the direction of the optimal solution. The sensor placement algorithm is as follows:
  • Step 1. Place the first (reference) lidar.
  • Step 2. Create the lidar occupancy space (LOS) using the point cloud map.
  • Step 3. Assign the weights $LOV_{wg}(i,\ j,\ k)$ to the voxel grid.
  • Step 4. Set the lidar counter $l$ to 2.
  • Step 5. Find the $p_{ld}^{l,*}$ that maximizes the lidar occupancy ($LO$) using a genetic algorithm.
  • Step 6. Increment the lidar counter. If $l \le N_{ld}$, go to Step 5.
In Step 5, a genetic algorithm (GA) is applied to determine the placement of the sensors. The chromosome $pop$ is a binary string divided into five sections:

$$pop = \langle X_{pop},\ Y_{pop},\ Z_{pop},\ R_{pop},\ P_{pop} \rangle$$
$$X_{pop} = \langle s_{x1},\ s_{x2},\ \ldots,\ s_{xN_x} \rangle, \quad s_x \in \{0, 1\}$$
$$Y_{pop} = \langle s_{y1},\ s_{y2},\ \ldots,\ s_{yN_y} \rangle, \quad s_y \in \{0, 1\}$$
$$Z_{pop} = \langle s_{z1},\ s_{z2},\ \ldots,\ s_{zN_z} \rangle, \quad s_z \in \{0, 1\}$$
$$R_{pop} = \langle s_{r1},\ s_{r2},\ \ldots,\ s_{rN_r} \rangle, \quad s_r \in \{0, 1\}$$
$$P_{pop} = \langle s_{p1},\ s_{p2},\ \ldots,\ s_{pN_p} \rangle, \quad s_p \in \{0, 1\}$$

where $X_{pop}$, $Y_{pop}$, $Z_{pop}$, $R_{pop}$, and $P_{pop}$ encode the components of the lidar placement $p_{ld}^{l} = (x_{ld}^{l},\ y_{ld}^{l},\ z_{ld}^{l},\ r_{ld}^{l},\ p_{ld}^{l})$, respectively.
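The paper does not specify how a binary section is decoded into a real-valued coordinate or angle. A common convention, shown here purely as an assumption, is to read the section’s bits as an unsigned integer and scale it linearly into the admissible interval defined by $S$:

```cpp
#include <cstdint>
#include <vector>

// Decode one chromosome section (e.g., X_pop) into a real value in
// [lo, hi]. Assumed scheme, not taken from the paper: interpret the N
// bits as an unsigned integer and scale by (hi - lo) / (2^N - 1).
// Assumes N <= 63 so the integer fits in 64 bits.
double decodeSection(const std::vector<uint8_t>& bits, double lo, double hi) {
    uint64_t v = 0;
    for (uint8_t b : bits) v = (v << 1) | (b & 1u);
    uint64_t vmax = (uint64_t(1) << bits.size()) - 1;  // 2^N - 1
    return lo + (hi - lo) * static_cast<double>(v) / static_cast<double>(vmax);
}
```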
The initial population, denoted as $POP_g = \langle pop_{g,1},\ pop_{g,2},\ \ldots,\ pop_{g,N_{POP}} \rangle$, is generated by random number generation. The next population is formed by the selection operator: remainder stochastic sampling [24] reproduces chromosomes with higher fitness, where fitness is defined as the total lidar occupancy $LO$ (a sketch of this operator follows below).
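Remainder stochastic sampling itself is a standard selection operator [24]; a compact sketch, assuming strictly positive fitness values (function and variable names are illustrative):

```cpp
#include <random>
#include <vector>

// Remainder stochastic sampling: each chromosome i is first copied
// floor(f_i / f_mean) times; the remaining population slots are then
// filled by Bernoulli trials on the fractional parts. Returns the
// indices of the selected parents.
std::vector<size_t> remainderStochasticSampling(
        const std::vector<double>& fitness, std::mt19937& rng) {
    const size_t n = fitness.size();
    double mean = 0.0;
    for (double f : fitness) mean += f;
    mean /= static_cast<double>(n);  // assumes fitness sums to > 0

    std::vector<size_t> selected;
    std::vector<double> frac(n);
    for (size_t i = 0; i < n; ++i) {
        double e = fitness[i] / mean;                  // expected copies
        size_t whole = static_cast<size_t>(e);
        frac[i] = e - static_cast<double>(whole);
        for (size_t c = 0; c < whole && selected.size() < n; ++c)
            selected.push_back(i);                     // deterministic part
    }
    std::uniform_real_distribution<double> u(0.0, 1.0);
    for (size_t i = 0; selected.size() < n; i = (i + 1) % n)
        if (u(rng) < frac[i]) selected.push_back(i);   // fractional part
    return selected;
}
```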
The crossover and mutation operators create a range of new chromosomes, enhancing the optimization of lidar placements. Chromosomes are randomly selected based on the crossover probability ($PB_{cs}$), and the crossover point is randomly chosen at a boundary between sections, as shown in Figure 8. New chromosomes $\overline{pop}_{g,1}$ and $\overline{pop}_{g,2}$ are generated through the crossover operation on $pop_{g,1}$ and $pop_{g,2}$. Similarly, chromosomes are randomly selected according to the mutation probability ($PB_{mt}$). For chromosome $\overline{pop}_{g,1}$, one bit from each section ($X_{pop},\ Y_{pop},\ Z_{pop},\ R_{pop},\ P_{pop}$) is selected, and a new chromosome $\overline{\overline{pop}}_{g,1}$ is generated by inverting the selected bits, as depicted in Figure 9.
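The two operators can be sketched as follows, with the five section boundaries supplied as cumulative bit offsets; this mirrors Figures 8 and 9, but all names and details are illustrative:

```cpp
#include <cstdint>
#include <random>
#include <vector>

// A chromosome is the concatenated bit string <X|Y|Z|R|P>; sectionEnds
// holds the cumulative bit offsets of the five sections, ending at the
// total length.
using Chromosome = std::vector<uint8_t>;

// Single-point crossover: the cut point is drawn from the internal
// section boundaries (as in Figure 8) and the tails are swapped.
void crossover(Chromosome& a, Chromosome& b,
               const std::vector<size_t>& sectionEnds, std::mt19937& rng) {
    std::uniform_int_distribution<size_t> pick(0, sectionEnds.size() - 2);
    size_t cut = sectionEnds[pick(rng)];  // boundary between two sections
    for (size_t i = cut; i < a.size(); ++i) std::swap(a[i], b[i]);
}

// Mutation: invert one randomly chosen bit in each of the five
// sections (as in Figure 9).
void mutate(Chromosome& c, const std::vector<size_t>& sectionEnds,
            std::mt19937& rng) {
    size_t begin = 0;
    for (size_t end : sectionEnds) {
        std::uniform_int_distribution<size_t> pick(begin, end - 1);
        c[pick(rng)] ^= 1;  // flip the selected bit
        begin = end;
    }
}
```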
Our genetic algorithm is summarized as follows:
  • Step 5-1. Generate the initial population $POP_0 = \langle pop_{0,1},\ pop_{0,2},\ \ldots,\ pop_{0,N_{POP}} \rangle$ randomly.
  • Step 5-2. Set the generation counter $g$ to 1.
  • Step 5-3. Calculate the fitness and reproduce chromosomes by remainder stochastic sampling.
  • Step 5-4. Randomly select pairs of chromosomes and perform the crossover operation between them.
  • Step 5-5. Randomly select chromosomes and perform the mutation operation.
  • Step 5-6. Find the best chromosome and vary it by 2-opt improvement [25]. The 2-opt method is a local search algorithm that examines feasible pairwise swaps and keeps those that improve the solution.
  • Step 5-7. Increment the generation counter $g$. If the exit condition is not satisfied, go to Step 5-3.
  • Step 5-8. Decode the final best chromosome into the lidar placement $p_{ld}^{l,*}$.

5. Experiments

5.1. Experimental Setup

The proposed sensor placement algorithm was validated using 3D lidars. Two or three forty-channel lidars (HESAI Pandar40P, Shanghai, China) were simulated and used for placement optimization. For performance comparison, a single 128-channel lidar (Velodyne VLS-128, San Jose, CA, USA) was utilized. The algorithm was executed on a workstation running Linux Ubuntu 20.04 and ROS Noetic, with the program developed in the C++ language. The specifications of the lidars used in the experiment are detailed in Table 1.
The test area selected for the experiment was the proving ground of the Korea Automotive Parts Promotion Institute (KIAPI). KIAPI’s proving ground encompasses various test facilities, including an autonomous vehicle test road, a multipurpose test track, and a high-speed circuit. This experiment specifically focused on the autonomous vehicle test road, which features two four-way intersections. These intersections replicate an urban environment and exhibit blind spots created by buildings, as shown in Figure 10.
Table 2 provides a comprehensive overview of the experiments. In the simulation, the placement optimization results of two or three forty-channel lidars and one 128-channel lidar were compared at Intersection #1 (S1–S3) and Intersection #2 (S4–S6). The lidar placement was optimized under varying crossover and mutation probabilities.
The optimal placements obtained in the simulator for Intersection #1 were subsequently replicated in a real environment (R1–R6). In the real environment, a vehicle was driven and detected from the data acquired by the placed lidars, and the detection ranges were compared (R1, R2). Pedestrians walking near the intersection were likewise detected from the lidar data, and their detection ranges were compared (R3, R4). Finally, two vehicles were driven such that one was partially obscured by the other, and the detection ranges for the occluded vehicle were compared (R5, R6). The experiment was conducted at two intersections with different environments in order to place lidars in the simulator using the proposed method and verify their performance. Furthermore, by optimally placing multiple 40-channel lidars, which are relatively cost-effective, we aimed to reduce the blind spots at the intersection while maintaining detection performance. We sought to validate the effectiveness of the proposed method by applying the simulation results to a real environment, and to verify its utility and reproducibility by evaluating the detection performance for objects commonly observed from infrastructure, such as vehicles and pedestrians.

5.2. Experimental Results

5.2.1. Placement Optimization Simulation

The parameters for the genetic algorithm were set as follows: a population of 100 and 300 iterations. The crossover probability ranged from 0.4 to 0.1, and the mutation probability from 0.2 to 0.02 (Tables 3 and 4). The size of the LOS was (160 m, 72 m, 10 m), with the unit distance of the LOV set to 0.4 m, as depicted in Figure 11. In the experiments involving two and three forty-channel lidars at Intersections #1 and #2, the first (reference) lidar was placed at (−7.9, −9.1, 2.9, −0.4, 0.5).
The first experiment (S1–S3) was conducted at Intersection #1. Table 3 presents the LO for two forty-channel lidars (S1), three forty-channel lidars (S2), and one 128-channel lidar (S3), and Figure 12 displays the placement optimization results for each configuration. In the case of two forty-channel lidars (S1), the placement of the second lidar was determined using the proposed method. For three forty-channel lidars (S2), the location of the third lidar was found after the optimal placement from S1 was fixed. Figure 12b,c validate the improvement in measurements achieved by the proposed method. Table 3 shows that the proposed method exhibits performance comparable to that of a 128-channel lidar sensor; with three forty-channel lidars, the mean and maximum LO values exceed those of the 128-channel lidar.
The second experiment (S4–S6) was conducted at Intersection #2. Table 4 shows the LO at Intersection #2, and Figure 13 shows the results of placement optimization. As evident from Figure 13b,c, the proposed method improves measurements, and the 40-channel multi-lidar placement demonstrates performance similar to that of the 128-channel lidar. The average and maximum LO values are similar to or improved over those of the 128-channel lidar.

5.2.2. Placement and Evaluation in Real Environment

Finally, we placed lidars in the real environment and acquired point cloud data. Among the simulation results for Intersection #1, the optimal placements for two forty-channel lidars (S1) and one 128-channel lidar (S3) were recreated in the real environment. Figure 14 shows the placement of the lidars in the actual environment. Three lidars (two forty-channel lidars and one 128-channel lidar) were placed simultaneously, and point clouds were obtained while vehicles and pedestrians were in motion. The point clouds acquired from the two forty-channel lidars and the 128-channel lidar were input into a deep learning-based detection algorithm [28] to compare the positions of detected vehicles and pedestrians. The detection algorithm receives the lidar point cloud, detects objects such as vehicles and pedestrians based on artificial intelligence, and outputs the type, location, and size of each object in 3D space. Figure 15 shows the detailed scenarios, which are as follows.
  • R1, R2: Point clouds were acquired while vehicles were driven on the road near an intersection. Using a deep learning algorithm, vehicles were detected, and their detection ranges were compared (Figure 15a).
  • R3, R4: Point clouds were acquired as people walked on the sidewalk near the intersection. Pedestrians were detected based on a deep learning algorithm, and the recognized ranges were compared (Figure 15b).
  • R5, R6: Point clouds were acquired as two vehicles were driven on the road near the intersection. While driving, one vehicle was positioned so that it was partially obscured by the other. Using a deep learning algorithm, both vehicles were detected, and the recognition of obscured areas was compared (Figure 15c).
Figure 16 shows the vehicle detection results from the acquired point clouds. Figure 16a shows the result of vehicle detection from the point cloud acquired using the proposed method with two forty-channel lidars, and Figure 16b the result from a single 128-channel lidar. Figure 16c,d compare the trajectories along which vehicles were detected. Table 5 shows the maximum detection position (x, y) and distance. The experiment revealed that the maximum detection position and distance were similar in both cases; however, in areas with sensing blind spots caused by buildings, the optimally placed low-channel multiple lidars yielded superior detection results.

The results of pedestrian detection from the acquired data are shown in Figure 17. Figure 17a shows the result of pedestrian detection using the proposed method with two forty-channel lidars, and Figure 17b the result from a single 128-channel lidar. Figure 17c,d compare the trajectories along which pedestrians were detected. Table 6 presents the maximum detection position (x, y) and distance; again, both setups yielded similar results.

Finally, Figure 18 shows the results of detecting a partially obscured moving vehicle. Figure 18a shows the result of occluded vehicle detection using the proposed method with two forty-channel lidars, and Figure 18b the result from a single 128-channel lidar. Figure 18c,d compare the trajectories along which the occluded vehicle was detected, and Table 7 shows the maximum detection position and distance. Vehicle #1 changes lanes in front of vehicle #2, temporarily obscuring vehicle #2 during the lane change. In this interval, the occluded vehicle could be detected only with the proposed placement, confirming that the proposed multi-sensor placement method is effective in handling occlusions between objects.
The experimental results in the real environment were similar to the simulation results, demonstrating the feasibility of the proposed method. Lidar occupancy was compared by implementing the proposed method in simulation: multiple lidars placed to reduce blind spots at urban intersections achieved a higher occupancy rate than a single high-cost, high-channel lidar. The multiple lidar placements were then replicated in a real environment, and vehicles and pedestrians were detected from the acquired point clouds. The reproducibility and effectiveness of the proposed method were validated by comparing the detection ranges of the acquired data; in the real environment, the multiple lidar placement exhibited a detection range similar to that of a 128-channel lidar.

For the placement of lidars in an actual urban environment, multiple lidars can be placed efficiently by generating a point cloud map of the environment and applying the proposed method, as the experimental results demonstrate. Our method shows that environmental information can be acquired from infrastructure. This information can be processed to extend a vehicle’s detection range and transmitted to connected vehicles over V2X (vehicle-to-everything) communication. Such an approach not only reduces the sensor blind spots of autonomous vehicles, enhancing the safety of other vehicles and pedestrians, but also opens promising directions for further research.

6. Conclusions

In this paper, the lidar placement problem was defined as determining the lidar placement in an urban environment that minimizes blind spots and optimizes the number of beams reaching the point cloud. To formalize this problem mathematically, a point cloud map was processed and defined as the lidar occupancy space (LOS). Experimental results demonstrated that performance can be enhanced through our lidar placement method. Future work will extend the proposed method to systems utilizing multiple lidars, radars, and cameras, and integrate it with edge–cloud infrastructure and V2X communication technology.

Author Contributions

Methodology, T.-H.K. and T.-H.P.; Software, T.-H.K. and G.-H.J.; Validation, T.-H.K., G.-H.J. and H.-S.Y.; Resources, H.-S.Y.; Data curation, G.-H.J. and H.-S.Y.; Writing—original draft, T.-H.K.; Writing—review & editing, K.-S.Y. and T.-H.P.; Visualization, T.-H.K.; Supervision, T.-H.P.; Project administration, K.-S.Y. and T.-H.P.; Funding acquisition, K.-S.Y. All authors have read and agreed to the published version of the manuscript.

Funding

Institute of Information & Communications Technology Planning & Evaluation (IITP): 2021-0-01314; MSIT (Ministry of Science and ICT), Korea: IITP-2023-2020-0-01462.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data will be made available on request.

Acknowledgments

This work was supported by the Institute of Information & communications Technology Planning & Evaluation (IITP), funded by the Korean government (MSIT) under grant No. 2021-0-01314, for the development of driving environment data stitching technology to provide data on shaded areas for autonomous vehicles. Additionally, this research was supported by the Ministry of Science and ICT (MSIT), Korea, under the Grand Information Technology Research Center support program (IITP-2023-2020-0-01462) supervised by the IITP.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ren, H.; Zhang, Y. TrajMatch: Towards Automatic Spatio-temporal Calibration for Roadside LiDARs through Trajectory Matching. arXiv 2023, arXiv:2302.02157.
  2. Lan, X.; Wang, C.; Lv, B.; Li, J.; Zhang, M.; Zhang, Z. 3D Point Cloud Stitching for Object Detection with Wide FoV Using Roadside LiDAR. Electronics 2023, 12, 703.
  3. Zhang, Z.; Zhang, J.; Tao, Y.; Xiao, Y.; Yu, S.; Asiri, S.; Li, J.; Li, T. Traffic Sign Based Point Cloud Data Registration with Roadside LiDARs in Complex Traffic Environments. Electronics 2022, 11, 1559.
  4. Guan, F.; Xu, H.; Tian, Y. Evaluation of Roadside LiDAR-Based and Vision-Based Multi-Model All-Traffic Trajectory Data. Sensors 2023, 23, 5377.
  5. Wen, X.; Hu, J.; Chen, H.; Huang, S.; Hu, H.; Zhang, H. Research on an Adaptive Method for the Angle Calibration of Roadside LiDAR Point Clouds. Sensors 2023, 23, 7542.
  6. Lin, C.; Sun, G.; Wu, D.; Xie, C. Vehicle Detection and Tracking with Roadside LiDAR Using Improved ResNet18 and the Hungarian Algorithm. Sensors 2023, 23, 8143.
  7. Liu, H.; Lin, C.; Gong, B.; Liu, H. Lane-Level and Full-Cycle Multivehicle Tracking Using Low-Channel Roadside LiDAR. IEEE Trans. Instrum. Meas. 2023, 72.
  8. Zhou, S.; Xu, H.; Zhang, G.; Ma, T.; Yang, Y. Leveraging deep convolutional neural networks pre-trained on autonomous driving data for vehicle detection from roadside LiDAR data. IEEE Trans. Intell. Transp. Syst. 2022, 23, 22367–22377.
  9. Zhang, J.; Xiao, W.; Mills, J. Optimizing Moving Object Trajectories from Roadside Lidar Data by Joint Detection and Tracking. Remote Sens. 2022, 14, 2124.
  10. Wang, L.; Lan, J. Adaptive Polar-Grid Gaussian-Mixture Model for Foreground Segmentation Using Roadside LiDAR. Remote Sens. 2022, 14, 2522.
  11. Roos, S. A Framework for Simulative Evaluation and Optimization of Point Cloud-Based Automotive Sensor Sets. In Proceedings of the 2021 IEEE Intelligent Transportation Systems Conference, Indianapolis, IN, USA, 19–22 September 2021.
  12. Kim, T. Placement optimization of multiple lidar sensors for autonomous vehicles. IEEE Trans. Intell. Transp. Syst. 2019, 21, 2139–2145.
  13. Mou, S. An Optimal LiDAR Configuration Approach for Self-Driving Cars. arXiv 2018, arXiv:1805.07843.
  14. Berens, F. Genetic Algorithm for the Optimal LiDAR Sensor Configuration on a Vehicle. IEEE Sens. J. 2021, 22, 2735–2743.
  15. Hu, H. Investigating the impact of multi-lidar placement on object detection for autonomous driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19–24 June 2022.
  16. Gu, J. Range Sensor Overview and Blind-Zone Reduction of Autonomous Vehicle Shuttles. IOP Conf. Ser. Mater. Sci. Eng. 2021, 1140, 012006.
  17. Liu, Z. Where should we place lidars on the autonomous vehicle? An optimal design approach. In Proceedings of the 2019 International Conference on Robotics and Automation, Montreal, QC, Canada, 20–24 May 2019.
  18. Cai, X. Analyzing Infrastructure LiDAR Placement with Realistic LiDAR Simulation Library. arXiv 2022, arXiv:2211.15975.
  19. Jin, S. A Novel Information Theory-Based Metric for Evaluating Roadside LiDAR Placement. IEEE Sens. J. 2022, 22, 21009–21023.
  20. Qu, A. SEIP: Simulation-based Design and Evaluation of Infrastructure-based Collective Perception. arXiv 2023, arXiv:2305.17892.
  21. Kloeker, L. Generic Approach to Optimized Placement of Smart Roadside Infrastructure Sensors Using 3D Digital Maps. In Proceedings of the 2022 IEEE International Conference on Intelligent Transportation Systems, Macau, China, 8–12 October 2022.
  22. OpenDrive. Available online: https://www.opendrive.com (accessed on 23 October 2023).
  23. Bresenham, J.E. Algorithm for computer control of a digital plotter. IBM Syst. J. 1965, 4, 25–30.
  24. Brindle, A. Genetic Algorithm for Function Optimization. Ph.D. Dissertation, Department of Computer Science, University of Alberta, Edmonton, AB, Canada, 1980.
  25. Croes, G.A. A method for solving traveling-salesman problems. Oper. Res. 1958, 6, 791–812.
  26. HESAI. Available online: https://www.hesaitech.com/product/pandar40p (accessed on 23 October 2023).
  27. Velodyne. Available online: https://velodynelidar.com/products/alpha-prime (accessed on 23 October 2023).
  28. Autoware. Available online: https://autoware.org (accessed on 23 October 2023).
Figure 1. Differences in quality of raw data depending on lidar placement. (a) Placement #1. (b) Placement #2.
Figure 2. (a) Vertical angular resolution of lidar. (b) Horizontal angular resolution of lidar.
Figure 3. Multiple lidar system in an urban environment. (a) Placement of three lidars. (b) Coverage of each lidar. (c) Total coverage.
Figure 4. (a) Placement of single lidar. (b) Placement of dual lidars. (red: point cloud of lidar #1, green: point cloud of lidar #2).
Figure 5. Lidar Occupancy Space (LOS) and Lidar Occupancy Voxel grid (LOV).
Figure 6. (a) Point cloud map. (b) Weight for lidar occupancy space.
Figure 7. Total lidar occupancy (red voxels).
Figure 8. Generation of new chromosomes via the crossover operation.
Figure 9. Generation of new chromosomes via the mutation operation.
Figure 10. Experimental environment. (a) Satellite map. (b) 4-way intersection (#1).
Figure 11. Point cloud map and lidar occupancy space. (a,b) Intersection #1. (c,d) Intersection #2.
Figure 12. Lidar occupancy space and lidar beams at Intersection #1. (a,b) One one-hundred-and-twenty-eight-channel lidar (S3). (c,d) Two forty-channel lidars (S1). (e,f) Three forty-channel lidars (S2).
Figure 13. Lidar occupancy space and lidar beams at Intersection #2. (a,b) One one-hundred-and-twenty-eight-channel lidar (S6). (c,d) Two forty-channel lidars (S4). (e,f) Three forty-channel lidars (S5).
Figure 14. Lidars placed on real environment. (a) First 40-channel lidar. (b) Second 40-channel lidar. (c) The 128-channel lidar.
Figure 15. The detailed scenarios. (a) Comparison of driving vehicle detection range. (b) Comparison of walking pedestrian detection range. (c) Comparison of detection range for blocked vehicles.
Figure 16. Comparison of vehicle detection range. (a) Two forty-channel lidars. (b) One one-hundred-and-twenty-eight-channel lidar. (c) Visualization of vehicle trajectory. (d) Graph of vehicle trajectory.
Figure 17. Comparison of pedestrian detection range. (a) Two forty-channel lidars. (b) One one-hundred-and-twenty-eight-channel lidar. (c) Visualization of pedestrian trajectory. (d) Graph of pedestrian trajectory.
Figure 18. Comparison of occluded vehicle detection range. (a) Two forty-channel lidars. (b) One one-hundred-and-twenty-eight-channel lidar. (c) Visualization of vehicle trajectory. (d) Graph of vehicle trajectory.
Table 1. Lidar specifications [26,27].
| Item | Unit | Pandar40P | VLS-128 |
|---|---|---|---|
| Scan planes | Channel | 40 | 128 |
| Range | m | Up to 200 | Up to 245 |
| Range accuracy | cm | ±2 | ±3 |
| FOV (vertical) | Degree | 40 (−25 to +15) | 40 (−25 to +15) |
| Resolution (vertical) | Degree | 0.33 (non-linear) | 0.11 (non-linear) |
| FOV (horizontal) | Degree | 360 | 360 |
| Resolution (horizontal) | Degree | 0.2 (10 Hz), 0.4 (20 Hz) | 0.2 (10 Hz), 0.4 (20 Hz) |
| Frame rate | Hz | 10, 20 | 5 to 20 |
Table 2. Experimental setup.
| Config | Environment | Intersection | Sensor Model | Number | Description |
|---|---|---|---|---|---|
| S1 | Simulator | #1 | Pandar40P | 2 | Lidar placement optimization simulation at Intersection #1 |
| S2 | Simulator | #1 | Pandar40P | 3 | Lidar placement optimization simulation at Intersection #1 |
| S3 | Simulator | #1 | VLS-128 | 1 | Lidar placement optimization simulation at Intersection #1 |
| S4 | Simulator | #2 | Pandar40P | 2 | Lidar placement optimization simulation at Intersection #2 |
| S5 | Simulator | #2 | Pandar40P | 3 | Lidar placement optimization simulation at Intersection #2 |
| S6 | Simulator | #2 | VLS-128 | 1 | Lidar placement optimization simulation at Intersection #2 |
| R1 | Real | #1 | Pandar40P | 2 | Reproducing optimal placement and comparing vehicle detection ranges |
| R2 | Real | #1 | VLS-128 | 1 | Reproducing optimal placement and comparing vehicle detection ranges |
| R3 | Real | #1 | Pandar40P | 2 | Reproducing optimal placement and comparing pedestrian detection ranges |
| R4 | Real | #1 | VLS-128 | 1 | Reproducing optimal placement and comparing pedestrian detection ranges |
| R5 | Real | #1 | Pandar40P | 2 | Reproducing optimal placement and comparing detection ranges for occluded vehicles |
| R6 | Real | #1 | VLS-128 | 1 | Reproducing optimal placement and comparing detection ranges for occluded vehicles |
Table 3. Lidar occupancy according to each parameter (in Intersection #1).
| Crossover | Mutation | LO, one 128-channel lidar (S3) | LO, two 40-channel lidars (S1) | LO, three 40-channel lidars (S2) |
|---|---|---|---|---|
| 0.4 | 0.2 | 154,522 | 138,903 | 174,774 |
| 0.4 | 0.1 | 146,039 | 157,905 | 166,702 |
| 0.4 | 0.05 | 151,800 | 147,529 | 169,019 |
| 0.4 | 0.02 | 148,972 | 152,257 | 168,000 |
| 0.3 | 0.2 | 146,201 | 137,886 | 192,507 |
| 0.3 | 0.1 | 157,397 | 141,684 | 175,029 |
| 0.3 | 0.05 | 151,451 | 140,585 | 165,479 |
| 0.3 | 0.02 | 150,172 | 141,407 | 169,823 |
| 0.2 | 0.2 | 146,840 | 145,036 | 172,465 |
| 0.2 | 0.1 | 155,187 | 147,487 | 168,328 |
| 0.2 | 0.05 | 144,968 | 148,546 | 166,546 |
| 0.2 | 0.02 | 150,810 | 152,469 | 180,816 |
| 0.1 | 0.2 | 150,838 | 143,189 | 170,464 |
| 0.1 | 0.1 | 157,053 | 158,553 | 169,239 |
| 0.1 | 0.05 | 140,229 | 152,964 | 172,629 |
| 0.1 | 0.02 | 162,423 | 141,022 | 169,309 |
| Mean LO | | 150,931 | 146,714 | 171,946 |
| Max LO | | 162,423 | 158,553 | 192,507 |

Best placements: $p_{ld}^{1,*}$ = (−1.5, 0.3, 2.8, −1.6, −1.0) for S3; $p_{ld}^{2,*}$ = (6.8, 6.0, 4.9, −0.9, 6.0) for the second lidar in S1; $p_{ld}^{3,*}$ = (6.5, −21.1, 3.8, 12.4, −12.4) for the third lidar in S2.
Table 4. Lidar occupancy according to each parameter (in intersection #2).
| Crossover | Mutation | LO, one 128-channel lidar (S6) | LO, two 40-channel lidars (S4) | LO, three 40-channel lidars (S5) |
|---|---|---|---|---|
| 0.4 | 0.2 | 144,519 | 144,376 | 197,441 |
| 0.4 | 0.1 | 143,080 | 140,789 | 194,700 |
| 0.4 | 0.05 | 140,606 | 149,373 | 189,926 |
| 0.4 | 0.02 | 146,431 | 145,034 | 194,462 |
| 0.3 | 0.2 | 146,760 | 149,409 | 192,075 |
| 0.3 | 0.1 | 143,894 | 152,930 | 197,398 |
| 0.3 | 0.05 | 151,688 | 152,350 | 203,682 |
| 0.3 | 0.02 | 145,399 | 166,184 | 191,239 |
| 0.2 | 0.2 | 142,365 | 149,438 | 191,872 |
| 0.2 | 0.1 | 148,304 | 147,046 | 192,287 |
| 0.2 | 0.05 | 139,525 | 148,515 | 194,342 |
| 0.2 | 0.02 | 141,120 | 164,734 | 187,869 |
| 0.1 | 0.2 | 143,224 | 146,827 | 187,606 |
| 0.1 | 0.1 | 142,970 | 151,856 | 174,500 |
| 0.1 | 0.05 | 153,015 | 155,803 | 174,708 |
| 0.1 | 0.02 | 155,200 | 159,735 | 169,895 |
| Mean LO | | 145,506 | 151,525 | 189,625 |
| Max LO | | 155,200 | 166,184 | 203,682 |

Best placements: $p_{ld}^{1,*}$ = (−6.9, −5.5, 4.7, 1.7, −0.9) for S6; $p_{ld}^{2,*}$ = (6.4, 4.7, 4.1, −2.2, −1.7) for the second lidar in S4; $p_{ld}^{3,*}$ = (7.7, −19.6, 4.9, 3.5, −6.7) for the third lidar in S5.
Table 5. Comparison of maximum detection position and distance.
| Direction | X (R2) | Y (R2) | Max. distance (R2) | X (R1) | Y (R1) | Max. distance (R1) |
|---|---|---|---|---|---|---|
| Easting | −95.3 | −42.7 | 95.5 | −94.8 | −41.0 | 95.9 |
| Northing | 92.2 | 32.4 | | 95.7 | 32.4 | |
| Sum | 187.5 | 75.1 | | 190.5 | 73.4 | |

R2: one 128-channel lidar; R1: two 40-channel lidars.
Table 6. Comparison of maximum detection position and distance.
| Direction | X (R4) | Y (R4) | Max. distance (R4) | X (R3) | Y (R3) | Max. distance (R3) |
|---|---|---|---|---|---|---|
| Easting | −86.4 | −35.8 | 86.7 | −85.1 | −35.4 | 85.5 |
| Northing | 31.6 | 28.0 | | 37.7 | 27.1 | |
| Sum | 118.0 | 63.8 | | 122.8 | 62.5 | |

R4: one 128-channel lidar; R3: two 40-channel lidars.
Table 7. Comparison of maximum detection position and distance.
| Vehicle | Direction | X (R6) | Y (R6) | Max. distance (R6) | X (R5) | Y (R5) | Max. distance (R5) |
|---|---|---|---|---|---|---|---|
| #1 | Easting | −89.2 | 1.1 | 91.8 | −90.3 | 0.9 | 97.8 |
| #1 | Northing | 91.6 | 6.4 | | 97.6 | 6.5 | |
| #1 | Sum | 180.8 | 5.3 | | 187.9 | 5.6 | |
| #2 | Easting | −89.9 | 3.6 | 99.7 | −87.8 | 2.7 | 99.2 |
| #2 | Northing | 99.5 | 6.6 | | 99.1 | 6.8 | |
| #2 | Sum | 189.4 | 3.0 | | 186.9 | 4.1 | |

R6: one 128-channel lidar; R5: two 40-channel lidars.