Article

A Method for the Automatic Extraction of Support Devices in an Overhead Catenary System Based on MLS Point Clouds

1 School of Remote Sensing and Information Engineering, Wuhan University, Wuhan 430079, China
2 China Transport Telecommunications & Information Center, Beijing 100011, China
3 Center for Health Statistics and Information, National Health Commission of the People’s Republic of China, Beijing 100044, China
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(23), 5915; https://doi.org/10.3390/rs14235915
Submission received: 19 October 2022 / Revised: 15 November 2022 / Accepted: 19 November 2022 / Published: 22 November 2022

Abstract
A mobile laser scanning (MLS) system can acquire railway scene information quickly and provide a data foundation for regular railway inspections. The location of the catenary support device in an electrified railway system has a direct impact on the regular operation of the power supply system. However, multi-type support device data account for only a tiny proportion of the whole railway scene, resulting in poor characteristic expression in the scene. Therefore, it is difficult to effectively segment and extract the support device using traditional point cloud filtering or point cloud segmentation methods alone. As a result, this paper proposes an automatic extraction algorithm for complex railway support devices based on MLS point clouds. First, the algorithm stratifies the railway scene by height to obtain the pillar point clouds and the support device point clouds and then filters out the noise points in the scene. Next, the center point of each pillar device is retrieved from the pillar corridor by a neighborhood search, and the locating and initial extraction of the support device are realized based on the relatively stable spatial topological relationship between the pillar and the support device. Finally, a post-processing optimization method integrating the pillar filter and the voxelized projection filter is designed to achieve the accurate and efficient extraction of the support device based on the feature differences between the support device and other devices in the initial extraction results. Furthermore, in the experimental part, we evaluate the performance of the algorithm on six types of support devices, three types of support device distribution scenes, and two types of railway units.
The experimental results show that the average extraction IoU of the multi-type support device, support device distribution scenes, and railway unit were 97.20%, 94.29%, and 96.11%, respectively. In general, the proposed algorithm can achieve the accurate and efficient extraction of various support devices in different scenes, and the influence of the algorithm parameters on the extraction accuracy and efficiency is elaborated in the discussion section.

1. Introduction

The efficient automatic extraction of railway device data is critical to developing railway digital twins [1,2]. Currently, this is accomplished by constructing a railway digital twin [3] model based on mobile measuring equipment, such as mobile laser scanning (MLS) [4], and then realizing the regular inspection of the railway system by measuring the geometric parameters of railway device models [5,6]. Support devices are widely employed in railway electrification projects as the bearing devices of the railway power supply system [7,8]. However, support devices differ significantly across system types while accounting for only a small proportion of the whole scene (Figure 1a). Meanwhile, key railway parameters such as the conductive height and pull value can be calculated from the support device data (Figure 1b). As a result, the fully automatic, efficient, and accurate extraction of support devices in railway scenes is of great value and significance.
Extracting a support device in a railway scene is part of object extraction research with the point cloud. As a result, this paper seeks solutions to support device extraction by discussing and analyzing relevant research on specific object extraction methods in point cloud scenes [9,10]. Some studies have acquired railway scene support device information using photogrammetry [7]. However, the maintenance of railway line-related devices is often carried out at night, which makes illumination inevitably affect the collected data. MLS technology can efficiently obtain massive point cloud scene data and is not affected by lighting conditions. Currently, the solutions for object extraction in point cloud scenes can be divided into point cloud filtering [11] and point cloud segmentation [12,13].
Point cloud filtering removes non-target points while retaining target points [14]. It primarily includes the statistical-based [15,16], neighborhood-based [17], projection-based [18], signal-processing [19], and hybrid point-cloud-filtering [20] methods. These methods filter noise points by distinguishing feature differences between target and non-target objects and are appropriate for targets with apparent features in simple scenes [21]. However, due to the interference of other railway devices in the scene, point cloud filtering methods have difficulty directly capturing the features of the support device in the railway scene. At the same time, the differences in feature expression among different types of support devices also increase the processing difficulty of the point cloud filtering method. Therefore, it is difficult to effectively extract support devices in complex railway scenes with a point cloud filtering algorithm alone.
The point cloud segmentation method achieves target object segmentation and extraction by dividing the point cloud scene into several mutually exclusive subsets. It includes the edge-based [22,23], region-growing [24], model-fitting [25], and clustering-based [26] methods. The edge-based segmentation method obtains edge information by judging the changes in point vectors and then realizes the target object extraction in the scene. The algorithm is simple in structure and has a good segmentation effect on targets with apparent edge features. The clustering-based method assigns points with similar feature distributions to corresponding categories, so that a specific target in the point cloud can be clustered [27,28]. However, these two segmentation methods are unsupervised clustering methods, and it is challenging to selectively segment the support device in the railway scene with them. The model-fitting method achieves target segmentation and extraction by matching and recognizing the point cloud data against target geometry models [29]. Although this method has a good segmentation effect on objects with regular geometry, fitting the multi-type support device models is time-consuming. The region-growing segmentation method combines the seed points [30] with points of the same features in the neighborhood to achieve point cloud segmentation for various objects. Although this method can obtain good edge information and target segmentation results, its clustering effect is closely related to the selection of seed points [31]. In short, the uncertainty of the distribution of the support device in the scene [32,33], as well as the weak feature expression of its small data scale in the scene [34], make it difficult to directly apply existing point cloud segmentation methods to the extraction of railway scene support devices.
To address the above problems, an automatic extraction method for the support device based on the MLS point cloud is proposed, built on the characteristics of the support device. First, to effectively handle the shadowing effect of the railway background point cloud, this study filters out the noise points by hierarchical chunking and divides the scene into multi-batch point cloud units. Then, to further amplify the expressive features of the support device, the support device is located and extracted based on the relatively stable spatial relationship between the support device and the pillar and on the pillar center information. Finally, the fine extraction of the support device is completed by post-processing the initial extraction results with an optimization that integrates the pillar filter and the voxelized projection filter. To summarize, the main contributions of this paper are as follows:
(1)
A new method is proposed for locating support devices based on the relatively stable spatial relationships between railway devices. Because each support device corresponds to a pillar center point, combining the two retrievals can reduce the occurrence of missed support devices and repeated extraction.
(2)
To achieve the high-precision extraction of the support device, other railway devices in the initial extraction results are filtered out by integrating two filters, the pillar and the voxel projection, which significantly improves the extraction accuracy of the support device. Among them, the voxel scale of the voxel projection filter is re-analyzed and designed based on the characteristics of the contact wires in the scene.
(3)
To assess the extraction effect and robustness of the proposed algorithm, six types of support devices and three types of support device distribution scenes are tested. Furthermore, two groups of railway unit scenes are tested to detect the performance of the algorithm in the actual application process.
The manuscript is organized as follows: The proposed approach is explained in Section 2. Section 3 analyzes and tests the relevant parameters and algorithm performance and demonstrates the effectiveness and robustness of the algorithm. In addition, the ablation experiment also discusses each integrated filter component’s characteristics. The last section provides a synthesis of conclusions and our main contributions.

2. Method

Using the MLS point cloud, the proposed method can extract the support devices automatically and accurately through a gradual processing scheme, including scene hierarchical chunking, positioning and initial extraction of the support device, and result optimization. Each of these steps contributes to the method’s high extraction accuracy. The algorithm flow chart is shown in Figure 2. Firstly, the original railway scene is divided into hierarchical chunks based on key trajectory points. An affine transformation is performed on the divided region blocks to facilitate batch processing of the railway scenes. Then, the support device is located through the pillar center point obtained by a neighborhood search and the relatively stable spatial relationship between the pillar and the support device, and the initial extraction of the support device is realized. Finally, the pillar and voxelized projection filters are integrated to filter out the pillars, suspension insulators, and contact wires in the initial extraction results, and the high-precision automatic extraction of support devices in complex railway scenes is gradually realized.

2.1. Trajectory Rarefying

It is difficult to directly process railway scenes because of their large data scales, long spans, and numerous curve changes. Therefore, we have divided them into a series of railway scene units with the same span and with partial overlap based on the trajectory points $P^L$ to enable batch processing of the original railway scene $P^C$, defined as $P^L = \{ p_j^L \mid j = 1, 2, \ldots, n_{P^L} \}$ and $P^C = \{ p_i^C \mid i = 1, 2, \ldots, n_{P^C} \}$, where $n_{P^L}$ and $n_{P^C}$ are the total numbers of points in $P^L$ and $P^C$, respectively, and $p_j^L$ and $p_i^C$ represent points in $P^L$ and $P^C$, respectively.
Although the close trajectory spacing in the original trajectory data makes it easier to process railway scene units with small spans, it also results in more support devices being split across multiple railway scene units. Assuming the trajectory thinning threshold is set to $\psi$, the trajectory points can be divided into $\lfloor N/\psi \rfloor + 1$ unit point sets ($\lfloor x \rfloor$ means $x$ rounded down), and we can define each unit point set as $P^T = \{ p_k^T \mid k = 1, 2, \ldots, n_{P^T} \}$, where $n_{P^T}$ is the total number of points in $P^T$. The center point of each unit is designated as a key trajectory point. The last $P^T$ contains $N \,\%\, \psi$ points, and the relationship between $\psi/2$ and $N \,\%\, \psi$ determines whether this unit set is kept for batch-processing the railway scene. Figure 3 depicts a schematic diagram of trajectory rarefying, where the set $P^{KTCP}$ of key trajectory center points $p^{KTCP}$ is calculated using Formula (1):
$$
p^{KTCP} =
\begin{cases}
\dfrac{1}{\psi} \displaystyle\sum_{k=1}^{\psi} p_k^T, & n_{P^T} = \psi \\[3mm]
\dfrac{1}{N \,\%\, \psi} \displaystyle\sum_{k=N - N \,\%\, \psi + 1}^{N} p_k^T, & \psi/2 \le n_{P^T} < \psi \\[3mm]
\text{discarded}, & n_{P^T} < \psi/2
\end{cases}
\tag{1}
$$
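As a concrete illustration, the unit averaging of Formula (1) can be sketched in a few lines of NumPy. This is a minimal sketch under our own assumptions (function name, array layout); it is not the authors' implementation.

```python
import numpy as np

def rarefy_trajectory(traj_pts: np.ndarray, psi: int) -> np.ndarray:
    """Thin an ordered trajectory into key trajectory center points.

    traj_pts : (N, 3) array of trajectory points P^L, ordered along the track.
    psi      : thinning threshold (unit size) from Formula (1).

    Each full unit of psi points is averaged into one key point; the final
    partial unit is kept only if it holds at least psi/2 points, otherwise
    it is discarded.
    """
    key_pts = []
    for start in range(0, len(traj_pts), psi):
        unit = traj_pts[start:start + psi]
        if len(unit) >= psi / 2:             # full unit or large-enough remainder
            key_pts.append(unit.mean(axis=0))  # unit center = key trajectory point
        # else: discarded -- too few points to be a useful spatial reference
    return np.asarray(key_pts)
```

With 25 trajectory points and psi = 10, the final unit of 5 points is exactly psi/2 and is therefore kept, yielding three key trajectory points.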

2.2. Hierarchical Chunking

There are many noise point clouds in a railway scene, which significantly impact the extraction accuracy and efficiency of support devices. As a result, to achieve the hierarchical processing of point cloud scenes, this paper constructs corresponding data corridors of pillars and support devices based on the significant differences in the spatial locations of the pillars and support devices. Furthermore, to realize the batch processing of the scene point cloud, the dataset of key trajectory points is used as the data reference to perform the chunking process on the data corridors of the pillar and support device. In particular, to amplify the feature expression of the support device during subsequent processing, we first defined and built a pillar region block ($RBox^P = \{l, w, h_1\}$) and a support device region block ($RBox^{SD} = \{l, w, h_2\}$) around each key trajectory point. Furthermore, to avoid additional noise points caused by the inconsistency between the region block’s attitude and the orbit’s direction, we constructed the rotation matrices $RMat_\beta$ and $RMat_\alpha$ according to the offset angles $\beta$ and $\alpha$ between the direction of the orbit and the X-axis and Z-axis, respectively. Then, the attitude of $RBox^P$ and $RBox^{SD}$ was adjusted by Formulas (2)–(4) to reduce the adverse effects of the terrain relief and curved orbit on the extraction process.
$$
\alpha = \arccos\!\left( \frac{x_{p^{KTCP}} - x_{p_i^L}}{\left| p^{KTCP} p_i^L \right|} \right)
\tag{2}
$$
$$
\beta = \arccos\!\left( \frac{z_{p^{KTCP}} - z_{p_i^L}}{\left| p^{KTCP} p_i^L \right|} \right)
\tag{3}
$$
$$
RBox'(x, y, z) = RBox(x, y, z) \cdot RMat_\alpha \cdot RMat_\beta
\tag{4}
$$
Among them:
$$
RMat_\alpha =
\begin{bmatrix}
\cos\alpha & -\sin\alpha & 0 \\
\sin\alpha & \cos\alpha & 0 \\
0 & 0 & 1
\end{bmatrix},
\qquad
RMat_\beta =
\begin{bmatrix}
\cos\beta & 0 & \sin\beta \\
0 & 1 & 0 \\
-\sin\beta & 0 & \cos\beta
\end{bmatrix}.
$$
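The attitude correction of Formulas (2)–(4) can be sketched as follows. This is an illustrative sketch only: the function name, the corner-array representation of a region block, and the assumption that coordinates are already translated to the key trajectory point are ours, not the paper's.

```python
import numpy as np

def attitude_correct(box_corners: np.ndarray, alpha: float, beta: float) -> np.ndarray:
    """Rotate region-block corners by RMat_alpha (about Z) and RMat_beta (about Y).

    box_corners : (M, 3) array of region-block corner coordinates, assumed to be
                  translated so the key trajectory point is at the origin.
    alpha, beta : offset angles (radians) of the orbit direction against the
                  axes, as defined in Formulas (2) and (3).
    """
    rmat_alpha = np.array([[np.cos(alpha), -np.sin(alpha), 0.0],
                           [np.sin(alpha),  np.cos(alpha), 0.0],
                           [0.0,            0.0,           1.0]])
    rmat_beta = np.array([[ np.cos(beta), 0.0, np.sin(beta)],
                          [ 0.0,          1.0, 0.0         ],
                          [-np.sin(beta), 0.0, np.cos(beta)]])
    # Formula (4): RBox' = RBox · RMat_alpha · RMat_beta
    return box_corners @ rmat_alpha @ rmat_beta
```

Both matrices are orthogonal, so the correction preserves the shape of the region block while aligning it to the orbit direction.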

2.3. Positioning and Initial Extraction of the Support Device

To further amplify the feature expression of the support device in the processed point clouds, the pillar device and the support device were bound together according to the relatively stable spatial topological relationship between them, and then the positioning and initial extraction of the support device were realized. Figure 4 shows the initial extraction process of the support device. The critical idea was to construct the support device’s initial extraction region block using the pillar’s spatial location as a reference. Therefore, it was necessary to quickly and accurately extract pillar points in the pillar corridors. An algorithm for finding the pillar center point based on a neighborhood search was designed to extract the pillar point cloud in the pillar corridor effectively. Specifically, due to the significant distances between the pillars, we chose a random point in the pillar data corridor as a seed point to complete the construction of the neighborhood region blocks ($RBox^N$). Then, by analyzing the number of pillar points in $RBox^N$, a series of initial pillar center points ($p^{IPC}$) was obtained. Furthermore, because the compensation device on the pillar has characteristics similar to the pillar in terms of the number of point clouds in the pillar corridor, a pillar detection region block ($RBox^{PI}$) was built on each $p^{IPC}$. The pseudo pillar center points could then be filtered out by analyzing the characteristics of the point clouds in $RBox^{PI}$. The detailed procedure is shown in Algorithm 1.
Algorithm 1. Algorithm for extracting pillar center points
Input:  pillar region box $RBox^P = \{ x \in (x_1, x_2),\, y \in (y_1, y_2),\, z \in (z_1, z_2 + h_1) \}$;
        OCS-SD region box $RBox^{SD} = \{ x \in (x_1, x_2),\, y \in (y_1, y_2),\, z \in (z_2, z_2 + h_2) \}$;
        neighborhood region box $RBox^N = \{ x \in (x_3, x_4),\, y \in (y_3, y_4),\, z \in (z_1, z_1 + h_1) \}$;
        pillar inspection region box $RBox^{PI} = \{ x \in (x_5, x_6),\, y \in (y_5, y_6),\, z \in (z_2, z_2 + h_2) \}$;
        key trajectory center point set $P^{KTCP} = \{ p_i^{KTCP} \mid i = 1, 2, \ldots, n_{P^{KTCP}} \}$, where $n_{P^{KTCP}}$ is the total number of key trajectory center points.
Output: set of pillar center points $P^{RPC}$.
1:  for i = 1 to $n_{P^{KTCP}}$ do
2:      $RBox^P_{Merge} \leftarrow RBox^P_i \cup RBox^P_{i-1} \cup RBox^P_{i+1}$
3:      $RBox^{SD}_{Merge} \leftarrow RBox^{SD}_i \cup RBox^{SD}_{i-1} \cup RBox^{SD}_{i+1}$
4:      k ← 0
5:      for j = 1 to $n_{P^C}$ do
6:          if $p_j^C \in RBox^N$ then
7:              k ← k + 1
8:              $P_{RBox^N}$.AddPoint($p_j^C$)        ▷ add the point to the neighborhood set
9:          end if
10:     end for
11:     if k > $\delta$ then                          ▷ $\delta$: initial extraction threshold in $RBox^N$
12:         $p_i^{IPC}$ ← avg($P_{RBox^N}$)           ▷ obtain an initial pillar center point
13:     end if
14: end for
15: for i = 1 to $n_{P^{IPC}}$ do
16:     k ← 0
17:     for j = 1 to $n_{P^C}$ do
18:         if $p_j^C \in RBox^{PI}$ then
19:             k ← k + 1
20:         end if
21:     end for
22:     if k > $\lambda$ then                         ▷ $\lambda$: pillar center check threshold in $RBox^{PI}$
23:         $p_i^{RPC}$ ← $p_i^{IPC}$
24:     end if
25: end for
26: return $P^{RPC}$
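A simplified executable sketch of Algorithm 1 follows, using axis-aligned counting boxes. The function name, parameter names, and the choice to pass seed points explicitly are assumptions made for illustration; the original operates on merged region blocks inside the pillar corridor.

```python
import numpy as np

def extract_pillar_centers(cloud, seeds, half_xy, height, delta, check_half_xy, lam):
    """Seed-based neighborhood counting, in the spirit of Algorithm 1.

    cloud : (N, 3) scene points inside the pillar corridor.
    seeds : (S, 3) candidate seed points.

    A seed whose neighborhood box holds more than delta points yields an
    initial pillar center (mean of those points); that center is kept only if
    its inspection box also holds more than lam points, which filters pseudo
    centers caused by compensation devices.
    """
    centers = []
    for seed in seeds:
        # neighborhood box RBox^N around the seed
        in_box = np.all(np.abs(cloud[:, :2] - seed[:2]) < half_xy, axis=1)
        in_box &= (cloud[:, 2] > seed[2]) & (cloud[:, 2] < seed[2] + height)
        if in_box.sum() > delta:                    # threshold delta
            ipc = cloud[in_box].mean(axis=0)        # initial pillar center
            # inspection box RBox^PI around the initial center
            chk = np.all(np.abs(cloud[:, :2] - ipc[:2]) < check_half_xy, axis=1)
            if chk.sum() > lam:                     # check threshold lambda
                centers.append(ipc)
    return np.asarray(centers)
```

On a synthetic vertical column of points, a single seed near its base recovers one pillar center at the column's centroid.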

2.4. Result Optimization

To achieve the accurate extraction of the support device, the pillars, suspension insulators, and contact wires were gradually removed by integrating the pillar and voxel projection filters. Figure 5 shows the stepwise optimization process. The pillar exhibits a vertical distribution along a single direction in the railway scene. Therefore, the center point from the neighborhood search was utilized as the data reference, and the vertical distribution characteristics of the pillar were used to build the pillar filter, which was applied to filter the pillar device point cloud. The pillar filter was a region block with a length, width, and height of 0.8 m, 2 m, and 4 m, respectively (Figure 5a). Furthermore, after the pillar was filtered, the suspension insulator and the support device with contact wire in the initial extraction region block were located in two separate spaces. Therefore, the suspension insulator in the initial extraction results was filtered using the difference in point density between the two regions. Specifically, the scale of the point cloud of the suspension insulator is far lower than that of the support device with contact wire, and the rough extraction of the support device could be achieved by selecting the region block with the larger point density (Figure 5b). The difficulty of filtering the contact wire was proportional to its linear distance from the support device. To filter the contact wire point clouds, the roughly extracted point cloud was first voxel-projected onto a grid along the Z-axis. Since most of the support device point clouds were located in the same XOZ plane (Figure 5c), the difference in point density between the support device and the contact wire in the projection results was further widened. Then, the original area was divided into grids whose length and width were w0 and d, respectively.
The contact wires were filtered in the rough extraction results by judging the point cloud density in the grid to achieve the accurate extraction of the support devices.
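The grid-density thresholding described above can be sketched as follows. This is a minimal sketch under our own assumptions (function and parameter names, and the simplification that projection along Z reduces to binning on the XY coordinates); the actual filter in the paper operates on the voxelized coarse extraction result.

```python
import numpy as np
from collections import Counter

def voxel_projection_filter(points: np.ndarray, l0: float, d: float, eps: int) -> np.ndarray:
    """Project points along the Z-axis onto an (l0 x d) grid and keep only
    points falling in cells whose projected count exceeds eps.

    Dense cells correspond to the support device (points stacked in the same
    plane), while sparse cells correspond to contact wires spread along the
    track, so thresholding the per-cell count filters the wires out.
    """
    # 2D cell index of each point after projecting along Z
    cell_ids = [tuple(c) for c in np.floor(points[:, :2] / np.array([l0, d])).astype(int)]
    counts = Counter(cell_ids)                      # points per occupied cell
    keep = np.array([counts[c] > eps for c in cell_ids])
    return points[keep]
```

A stack of 50 points over one cell survives a threshold of eps = 10, while an isolated far-away point is removed.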

3. Experiments

3.1. Study Area and Dataset

The 3D LiDAR dataset was collected by a lightweight mobile scanning measurement system along the Yancheng–Nantong railway, as shown in Figure 6. The system comprises an on-orbit surveying and scanning vehicle and a high-precision laser scanning device, the Z+F Profile 9012. Using 2 km as a measurement period, the data of two groups of railway units from Yancheng–Nantong were measured, with an average number of points of approximately 200 million. The trajectory data are collected while the MLS system scans the railway scene and are a critical spatial reference for support device positioning. A GNSS receiver is integrated with the Z+F Profile 9012 device, and the trajectory positioning accuracy is further improved through differential GPS technology.
Figure 7 depicts the numerous types of support devices in a railway scene. Distinct types of support devices have different accessories and expression characteristics. Because of this, it was critical to validate the algorithm’s robustness through an extensive assessment of the extraction results for various types of devices. Among them, the structures of the single support device (SSD) and double support device (DSD) are relatively straightforward; the main differences between them are the number of support devices on the pillar and the pillar type. The single ratchet support device (SRSD) and double ratchet support device (DRSD) build on the original support device with a compensating device to automatically adjust the tension of the contact wire and the bearing cable. Similar to the DSD structure, steel frame support devices (SFSD) are commonly encountered at railway platforms. The loop support device (LSD) is structurally similar to the SSD, but the critical distinction is that the LSD includes a suspension insulator. The supply voltage of the LSD’s suspension insulator is 2 × 25 kV.
The randomness of the support device distribution in the scene is another critical aspect influencing the extraction performance. As illustrated in Figure 8, the distribution of support devices in all scenes falls into three categories: symmetric distribution (SD), asymmetric distribution (AD), and neighboring distribution (ND). The SD scene includes support devices such as the DSD and SRSD. AD scenes mainly consist of DSDs. In an ND scene, support devices such as the DSD and LSD are close together.

3.2. Implemental Details

The experimental parameters of the proposed approach are shown in Table 1. Specifically, we analyzed the influence of the length of $RBox^{SD}$ and $RBox^P$ on the batch processing performance in the discussion, and we set $RBox^{SD}$ and $RBox^P$ at 2 m and 3.5 m above the key trajectory point, respectively, based on prior knowledge. Furthermore, to ensure that $RBox^N$ contained the whole pillar, its length and width should be twice the pillar length and width, and its height should be twice that of $RBox^P$. While $RBox^{PF}$ and $RBox^{PI}$ were built from the pillar’s center point, their length and width only needed to be slightly larger than the pillar’s length and width. In Section 2.4, to realize the effective separation of the support device and suspension insulator, we constructed $RBox^{P_{Left}}$ and $RBox^{P_{Right}}$ based on the pillar’s center point, which were slightly longer and wider than the support device.

3.3. Evaluation Indexes

The algorithm’s effectiveness, robustness, and practical applicability are validated through quantitative and qualitative analysis and through the application evaluation of the six support devices and three support device distribution scenes. The precision (P), recall (R), F1-score (F1), and intersection over union (IoU) were used to evaluate the extracted results. Here, F1 and IoU represent the overall extraction effect; P is the proportion of correctly predicted points ($P_{TPre}$) among the predicted result ($P_{Pre}$) and is used to assess the filtering effect of the contact wire; and R represents the proportion of $P_{TPre}$ in the actual results ($P_{Real}$), which is used to evaluate the extraction effect of the support device. The relevant formulas are as follows:
$$
P = \frac{\left| P_{TPre} \right|}{\left| P_{Pre} \right|}
$$
$$
R = \frac{\left| P_{TPre} \right|}{\left| P_{Real} \right|}
$$
$$
F1 = \frac{2PR}{P + R}
$$
$$
IoU = \frac{\left| P_{Pre} \cap P_{Real} \right|}{\left| P_{Pre} \cup P_{Real} \right|}
$$
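These point-set metrics can be computed directly from index sets. The function name and the use of Python sets to represent point memberships are our own illustrative choices.

```python
def evaluate(pred_idx: set, real_idx: set) -> dict:
    """Compute P, R, F1, and IoU for extraction results.

    pred_idx : indices of points extracted as support device (P_Pre).
    real_idx : indices of ground-truth support device points (P_Real).
    """
    tp = len(pred_idx & real_idx)                  # correctly predicted points (P_TPre)
    p = tp / len(pred_idx)                         # precision
    r = tp / len(real_idx)                         # recall
    f1 = 2 * p * r / (p + r) if (p + r) else 0.0   # harmonic mean of P and R
    iou = tp / len(pred_idx | real_idx)            # intersection over union
    return {"P": p, "R": r, "F1": f1, "IoU": iou}
```

For example, with four predicted points of which three are correct out of four ground-truth points, P = R = F1 = 0.75 and IoU = 0.6.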

3.4. Experimental Results

The application tests reviewed two sets of 2 km railway datasets to further validate the suggested algorithm’s applicability in unit scenes. In units 1 and 2, there were 43 and 52 support devices, respectively; two steel frame support devices were extracted incorrectly in Figure 9b, and one support device was missed in Figure 10c. The incorrect extraction occurred because these support devices are not erected on pillars, which the algorithm relies on to locate the support device, and the omission was due to a missed pillar center point. The mean IoUs of the correctly extracted support devices in unit 1 and unit 2 were 95.85% and 96.17%, respectively. The results show that the algorithm has good applicability for a wide range of railway scenes.
Table 2 displays the evaluation results for the six different types of support devices. The mean values of F1 and IoU are 98.23% and 97.22%, respectively, and the F1 and IoU results of the double support device and the double ratchet support device are lower than the average. The reason is that a tiny number of insulators were misidentified as non-support devices and filtered out during initial extraction. The primary support device in the railway scene was the single support device, with a reasonably simple structure; therefore, the proposed method has the best extraction effect for the single support device. The extraction effect of the multi-type support device is shown in Figure 11.
Table 3 and Figure 12 show the extraction effects of the three support device distribution scenes. The average IoU and F1 of the symmetric distribution scene were 95.89% and 97.88%, respectively; those of the asymmetric distribution scene were 94.49% and 97.15%, respectively; and those of the adjacent distribution scenes were 92.49% and 96.05%, respectively. Because some of the insulators were filtered out, the IoU and F1 of SD5, SD6, and AD2 are substantially lower than the average symmetric and asymmetric distribution accuracy. In the neighboring distribution scene, the IoU and F1 of ND2 are well below the average accuracy due to the unfiltered slings between the adjacent support devices.

3.5. Ablation Experiments

In the result optimization process, other devices in the initial extraction results of the support device were filtered out by integrating the suspension insulator rough cutting (RC), the pillar filter (PF), and the voxel projection filtering (VPF). The impacts of pillars, suspension insulators, and contact wires were examined through ablation tests on the final extraction results. The test results reveal that, while RC and VPF can remove most pillar point clouds, some pillar points remain in the black circle, as seen in Figure 13. Figure 14 shows that when the VPF method is not utilized, numerous contact wire point clouds in the black circle are not filtered out. Although the contact wire point cloud is small in scale, so that filtering it only slightly improves the extraction accuracy of the support device, it significantly improves the visual effect, which is evident in the SRSD, DRSD, and DSD. Figure 15 shows the RC process for the suspension insulator device in the support device. The results reveal that the PF, RC, and VPF each uniquely refine the support device’s initial extraction results and increase the overall extraction accuracy.

4. Discussion

4.1. Analysis of Rarefying Threshold

The key trajectory points resulting from trajectory point thinning provide important spatial references for the attitude adjustment of the subsequent region blocks. The spacing of the key trajectory points (dis) is strongly related to the thinning threshold ($\psi$), which affects the scale of $RBox^{SD}$ and $RBox^P$ in the hierarchical chunking process. The influence of the value of $\psi$ on the method’s performance in symmetric distribution scenes is evaluated in this work. In an ideal railway arrangement, a single trajectory point should correspond to two support devices on either side of the track. To ensure the spatial reference value of the key trajectory points to the region block of the support device, the ratio between the key trajectory points and the support devices on both sides (RKTO) should be more than 0.5. The experimental results are shown in Table 4. The processor of the test equipment was a 12th Gen Intel(R) Core(TM) i7-12700H at 2.70 GHz, with 16.0 GB of onboard memory. As $\psi$ increases, the running time and the ratio RKTO decrease while the spacing dis increases. Therefore, given the actual requirements and the algorithm’s time consumption, we set $\psi$ and dis at 10 and 40 m, respectively. In addition, there was a 4 m overlap area between two adjacent $RBox^{SD}$ or $RBox^P$ to avoid the repeated extraction caused by hierarchical chunking. In short, the widths of $RBox^{SD}$ and $RBox^P$ were set to 22 m.

4.2. Analysis of the Thresholds of the Pillar Center Points

The extraction threshold ($\delta$) of the pillar neighborhood search and the filtering threshold ($\lambda$) of the pillar center inspection area significantly affect the extraction result of the pillar center. Therefore, we observed changes in the number of extracted pillar centers in the symmetric distribution scenes by adjusting the values of $\delta$ and $\lambda$ (Figure 16). There were six pillars in the original scene. The number of extracted pillar center points decreased as $\delta$ and $\lambda$ increased, and it matched the number of pillars when $\delta$ was in [200, 1600] and $\lambda$ was in [1250, 3250]. This suggests that when $\delta$ and $\lambda$ are too low, compensation devices are misidentified as pillars and are not removed during inspection, resulting in more extracted results than the actual number of pillars. When $\delta$ and $\lambda$ are too high, some pillars are ignored during the neighborhood search, or some correct results are filtered out during checking, resulting in fewer extracted pillars than the actual number. In addition, when $\delta$ is too low, a larger $\lambda$ can filter out the wrong results in the checking process, as shown in Figure 16: when $\delta$ is 100 and $\lambda$ is 4250 or 4000, the number of extracted pillars is correct. However, considering the applicability to railway scenes, relatively average parameters are chosen in this paper, that is, $\delta$ is 900 and $\lambda$ is 2250.

4.3. Analysis of Voxel Size and Contact Line Threshold

The filtering effect of voxel projection is strongly related to the numbers of support device points and contact wire points in the voxel area, and the widths of the two ends of the flat wrist arm are noticeably different. As a result, multiple voxel area sizes were investigated in this research to determine the best voxel filtering threshold ($\varepsilon$) that allows successful contact line filtering. At the same time, because of the vertical distribution between the contact wire and the support device, the X-axis edge ($l_0$) of the voxel area was made longer than the Y-axis edge (d) to improve the extraction accuracy of the support device. Considering the above factors, we set the $l_0$ test interval to $[0.03, 0.06] \cup [0.15, 0.18]$ and the d test interval to $[w_0/16, w_0]$, where $w_0$ is the length of the upper side of the coarse clipping area in the Y-axis direction. The test results are shown in Figure 17.
The upper end of each dumbbell in Figure 17 represents the highest test accuracy among the six types of support devices, and the lower end represents the lowest. The length of the dumbbell reflects the algorithm's stability under that parameter: the longer the dumbbell, the larger the precision fluctuation and the lower the stability. As shown in Figure 17a, as ε increases, P gradually increases and R gradually decreases, while the IoU and F1 first increase and then decrease. This shows that the contact wire and part of the support device point cloud are gradually filtered out; at first, the filtered part of the support device is negligible, so the IoU and F1 gradually increase. Then, as ε increases further, most of the contact wire is filtered out and the filtered portion of the support device begins to affect the final extraction accuracy, leading to a gradual decrease in the IoU and F1. In addition, with the gradual decrease in d and w 0 , the number of points in each voxel region is reduced and the overall stability of the algorithm gradually improves. In summary, the values of w 0 , d, and ε are closely related to the final extraction effect of the algorithm.
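The P, R, F1, and IoU values plotted in Figure 17 and reported throughout the evaluation follow the standard point-wise definitions, which can be computed from true-positive, false-positive, and false-negative point counts:

```python
def pointwise_scores(tp, fp, fn):
    """Point-wise precision (P), recall (R), F1, and IoU from
    true-positive, false-positive, and false-negative point counts
    (standard definitions; not code from the paper)."""
    p = tp / (tp + fp)            # fraction of extracted points that are correct
    r = tp / (tp + fn)            # fraction of true device points recovered
    f1 = 2 * p * r / (p + r)      # harmonic mean of P and R
    iou = tp / (tp + fp + fn)     # intersection over union
    return p, r, f1, iou
```

These definitions explain the observed trade-off: filtering more points raises P (fewer false positives) while lowering R (more false negatives), so F1 and IoU peak at an intermediate ε.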
To show the relationship between the voxel scale and the extraction effect more clearly, the optimal ε for each voxel size is shown in Figure 18. With the increase in w 0 , the optimal IoU and F1 gradually increase, but the disparity between the maximum and minimum of the IoU and F1 grows significantly. With the increase in d, this disparity first increases and then decreases. Therefore, w 0 is set to 0.06 m, d to w 0 /16, and the optimal ε is 15.
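A voxelized projection filter with these parameters can be sketched as follows. This is a minimal illustration under the assumption, consistent with the discussion above, that sparsely occupied voxels in the horizontal projection correspond to the thin contact wire and are discarded when their point count falls below ε.

```python
import numpy as np

def voxel_projection_filter(points, l0=0.06, d=0.06 / 16, eps=15):
    """Project points onto the XY plane, bin them into l0-by-d voxels,
    and discard points in voxels holding fewer than `eps` points.
    Sparse voxels are assumed to belong to the contact wire, dense
    voxels to the bulkier support device (illustrative sketch)."""
    # Integer voxel index of each point in the XY plane.
    ij = np.floor_divide(points[:, :2], np.array([l0, d])).astype(np.int64)
    # Count the points falling into each occupied voxel.
    _, inverse, counts = np.unique(ij, axis=0, return_inverse=True,
                                   return_counts=True)
    keep = counts[inverse.ravel()] >= eps
    return points[keep]
```

Increasing `eps` here mirrors the behavior in Figure 17a: more contact-wire voxels (and eventually some support-device voxels) are rejected, raising P at the cost of R.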

4.4. Discussion on Point Sparsity

The point sparsity of small-scale targets is mainly determined by their size and the scanning point density of the laser scanning equipment. The proposed method can overcome the size limitation of the support device itself and achieve the fast and efficient extraction of support devices from complex railway datasets. Moreover, because the core of the proposed algorithm is the relative spatial relationship between railway devices in the scene, a decline in the overall point density has only a limited impact on its performance. To verify this, we thinned the point cloud data of the original six support devices to one-fifth of their original density and tested the performance of the proposed algorithm on the sparse point clouds. The test results are shown in Table 5 and Figure 19. Because the points are sparse, the extraction accuracy is more sensitive and slightly lower than for the original data. However, even after removing 4/5 of the data, the mean F1 and IoU of the six support devices still reach 97.29% and 94.75%, respectively, demonstrating that the proposed method can still achieve the automatic and accurate extraction of support devices from railway data with low-density point clouds.
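The thinning used for this test can be reproduced with a simple random-subsampling sketch. The sampling scheme here is an assumption for illustration; the paper does not specify how the dilution to one-fifth density was performed.

```python
import numpy as np

def rarefy(points, keep_ratio=0.2, seed=0):
    """Randomly retain `keep_ratio` of the points (here one-fifth),
    mimicking the density-reduction test reported in Table 5."""
    rng = np.random.default_rng(seed)
    n_keep = int(len(points) * keep_ratio)
    idx = rng.choice(len(points), size=n_keep, replace=False)
    return points[np.sort(idx)]  # preserve the original point order
```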

5. Conclusions

As the core device carrying the power supply system of a railway, the operation status of a support device is directly related to the normal operation of the railway system. At present, the acquisition of information about support devices is still primarily manual. MLS technology can efficiently obtain geometric and spectral information about surrounding objects and is not affected by external factors such as lighting conditions, which is very important for routine railway inspection, often carried out at night. However, processing methods for the massive point clouds of large scenes are not yet mature. Therefore, to quickly and effectively extract the point cloud data of support devices for routine railway inspection tasks, such as defect detection and geometric parameter measurement, an automatic extraction method for support devices based on MLS point clouds is proposed in this paper. First, the large-span railway scene is batch processed by hierarchically chunking the railway scene data. Second, the positioning and initial extraction of the support device are realized based on the reasonably stable spatial relationship between the pillar and the support device in the scene. The initial extraction results are then optimized by the integrated pillar filter and voxel projection filter to realize high-precision support device extraction in complex railway scenes. Furthermore, the algorithm's parameters are verified and analyzed in the discussion section, and the automatic extraction of support devices in railway scenes can be realized using the analyzed parameters. We tested the algorithm's performance on six types of support devices and three distribution scenes. Test samples were obtained from two railway track datasets acquired with the Z + F Profile 9012 laser scanning equipment.
The quantitative and qualitative test results show that the P, R, F1, and IoU coefficients of the extracted support devices all exceed 95%, and the overall visual effect of the extraction results is good. Furthermore, the mean IoU of 96.11% and the good visual quality in the application test on the two 2 km railway units demonstrate that the method is robust.

Author Contributions

Conceptualization, S.Z. and Q.M.; methodology, S.Z.; software, S.Z.; validation, S.Z., Y.H. and Z.F.; formal analysis, L.C.; investigation, S.Z.; resources, S.Z.; data curation, S.Z.; writing—original draft preparation, S.Z.; writing—review and editing, S.Z.; visualization, S.Z.; supervision, S.Z.; project administration, S.Z.; funding acquisition, S.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Support device in a complex railway scene and its geometric parameter measurement. (a) A support in a complex railway scene. (b) The measurement method for some geometric parameters of a railway system.
Figure 2. The overall flowchart of the algorithm.
Figure 3. Schematic diagram of trajectory extraction.
Figure 4. Process flow diagram for the initial extraction of the OCS-S.
Figure 5. Flowchart of the result optimization. (a) Pillar filter. (b) Rough crop. (c) Voxel projection filter.
Figure 6. Experimental datasets. (a) Railway dataset from Yancheng to Nantong. (b) Z + F Profile 9012. (c) GNSS track acquisition device.
Figure 7. Display diagram of the six types of support devices: (a) the original image of the support device, (b) the SSD, (c) the DSD, (d) the SRSD, (e) the DRSD, (f) the SFSD, and (g) the LSD.
Figure 8. Display diagram of the three types of distribution of support device scenes: (a) the SD, (b) the AD, and (c) the ND.
Figure 9. Applied experiment test results of unit 1. Specifically, (a) shows the test results, (b) shows close-up views of region #1, (c) shows close-up views of region #2, and (d) shows close-up views of region #3.
Figure 10. Applied experiment test results of unit 2. Specifically, (a) shows the test results, (b) shows close-up views of region #1, (c) shows close-up views of region #2, and (d) shows close-up views of region #3.
Figure 11. Extraction results of the six types of support devices: (a) the extracted result of the SSD, (b) the extracted result of the DSD, (c) the extracted result of the SRSD, (d) the extracted result of the DRSD, (e) the extracted result of the SFSD, and (f) the extracted result of the LSD.
Figure 12. The test results of the support device scenes: (a) the SD extraction results, (b) the AD extraction results, and (c) the ND extraction results.
Figure 13. Differences in the extraction effects of the integrated PF, RC, and VPF filters and integrated RC and VPF filters. Specifically, the blue results are the extractions of the integrated PF, RC, and VPF, and the red results are the extractions of the integrated RC and VPF. (a) The application of the two filters on the SRSD. (b) The application of the two filters on the DRSD. (c) The application of the two filters on the SFSD. (d) The application of the two filters on the LSD.
Figure 14. Differences in the extraction effects of the integrated PF, RC, and VPF filters and integrated PF and RC filters. Specifically, the blue results are the extractions of the integrated PF, RC, and VPF, and the red results are the extractions of the integrated PF and RC. (a) The application of the two filters on the SRSD. (b) The application of the two filters on the DRSD. (c) The application of the two filters on the SFSD.
Figure 15. Different extraction effects of the integrated PF, RC, and VPF filters and integrated PF and VPF filters. Specifically, (a) is the extraction of the integrated PF, RC, and VPF and (b) is the extraction of the integrated PF and VPF.
Figure 16. The relationship between the pillar center point extraction and the elimination threshold.
Figure 17. The relationship between voxel size and voxel filter threshold.
Figure 18. The extraction accuracy corresponds to each voxel. (a) The IoU corresponding to each voxel. (b) The F1 corresponding to each voxel.
Figure 19. Comparison of the point cloud data before and after thinning. Panels (a0)–(f0) show the processed results for the original point clouds of the SSD, DSD, SRSD, DRSD, SFSD, and LSD, respectively; panels (a1)–(f1) show the corresponding results for the rarefied point clouds.
Table 1. Descriptions of the parameters.
| Parameter | Description | Value |
|---|---|---|
| RBox_SD | support device region box (affine transformation is required) | length: 30 m, width: 22 m, height: 1.0 m |
| RBox_P | pillar region box (affine transformation is required) | length: 30 m, width: 22 m, height: 3.5 m |
| RBox_N | neighborhood region box | length: 0.8 m, width: 0.8 m, height: 2.0 m |
| RBox_PI | pillar inspection region box | length: 0.4 m, width: 0.4 m, height: 3.5 m |
| RBox_PF | pillar filter region box | length: 0.4 m, width: 2.6 m, height: 3.5 m |
| RBox_PLeft | the left region box adjacent to RBox_PF | length: 6.2 m, width: 3.0 m, height: 3.5 m |
| RBox_PRight | the right region box adjacent to RBox_PF | length: 6.2 m, width: 3.0 m, height: 3.5 m |
| RBox_IE | the initial extraction region box of the support device | length: 12.8 m, width: 3.0 m, height: 3.5 m |
| ψ | the rarefying threshold of trajectory data | 10 |
| δ | pillar center point initial extraction threshold in RBox_N | 1200 |
| λ | pillar center point check threshold in RBox_PI | 2500 |
| d | voxel width | w0/16 |
| ε | contact line rejection threshold in voxel | 25 |
| w0 | voxel length | 0.06 m |
Table 2. Extraction outcome quantitative evaluation.
| Metric | SSD | DSD | SRSD | DRSD | SFSD | LSD | Average |
|---|---|---|---|---|---|---|---|
| P (%) | 99.59 | 98.02 | 99.74 | 99.74 | 98.76 | 99.83 | 99.28 |
| R (%) | 97.53 | 97.97 | 97.70 | 96.52 | 98.94 | 97.38 | 97.67 |
| F1 (%) | 97.14 | 98.00 | 98.71 | 98.10 | 98.85 | 98.59 | 98.23 |
| IoU (%) | 98.55 | 96.08 | 97.46 | 96.28 | 97.73 | 97.23 | 97.22 |
Table 3. Quantitative extraction results.
| Metric | SD1 | SD2 | SD3 | SD4 | SD5 | SD6 | SD Avg. | AD1 | AD2 | AD3 | AD Avg. | ND1 | ND2 | ND Avg. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| P (%) | 99.08 | 99.85 | 99.72 | 98.09 | 95.07 | 99.12 | 98.48 | 98.40 | 99.43 | 98.85 | 98.89 | 98.05 | 97.77 | 97.91 |
| R (%) | 99.13 | 98.18 | 96.75 | 99.78 | 95.64 | 93.84 | 97.21 | 97.87 | 93.35 | 95.31 | 95.51 | 98.56 | 90.12 | 94.34 |
| F1 (%) | 99.11 | 99.01 | 98.21 | 98.92 | 95.64 | 96.41 | 97.88 | 98.13 | 96.29 | 97.05 | 97.15 | 98.30 | 93.79 | 96.05 |
| IoU (%) | 98.24 | 98.04 | 96.49 | 97.88 | 91.64 | 93.07 | 95.89 | 96.34 | 92.85 | 94.28 | 94.49 | 96.67 | 88.31 | 92.49 |
Table 4. Trajectory rarefying threshold test results.
| n | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| dis (m) | 20 | 24 | 28 | 32 | 36 | 40 | 44 | 48 | 52 | 56 | 60 |
| RKTO | 1 | 5/6 | 5/6 | 2/3 | 2/3 | 1/2 | 1/2 | 1/2 | 1/3 | 1/3 | 1/3 |
| Running time (s) | 1030 | 1010 | 990 | 990 | 980 | 980 | 980 | 970 | 960 | 960 | 940 |
Table 5. Test results.
| Metric | SSD | DSD | SRSD | DRSD | SFSD | LSD | Average |
|---|---|---|---|---|---|---|---|
| Origin points number | 105,112 | 270,310 | 454,728 | 835,883 | 995,035 | 106,517 | 461,264 |
| Filtered points number | 21,023 | 54,062 | 90,946 | 167,177 | 199,007 | 21,304 | 92,253 |
| Origin P (%) | 99.59 | 98.02 | 99.74 | 99.74 | 98.76 | 99.83 | 99.28 |
| P (%) | 97.93 | 96.48 | 99.51 | 99.40 | 99.16 | 97.01 | 98.25 |
| Origin R (%) | 97.53 | 97.97 | 97.70 | 96.52 | 98.94 | 97.38 | 97.67 |
| R (%) | 95.43 | 98.16 | 94.61 | 96.63 | 97.98 | 94.69 | 96.25 |
| Origin F1 (%) | 97.14 | 98.00 | 98.71 | 98.10 | 98.85 | 98.59 | 98.23 |
| F1 (%) | 96.66 | 97.31 | 97.01 | 97.99 | 98.57 | 96.22 | 97.29 |
| Origin IoU (%) | 98.55 | 96.08 | 97.46 | 96.28 | 97.73 | 97.23 | 97.22 |
| IoU (%) | 93.55 | 94.77 | 94.18 | 96.07 | 97.18 | 92.72 | 94.75 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Zhang, S.; Meng, Q.; Hu, Y.; Fu, Z.; Chen, L. A Method for the Automatic Extraction of Support Devices in an Overhead Catenary System Based on MLS Point Clouds. Remote Sens. 2022, 14, 5915. https://doi.org/10.3390/rs14235915
