Article

PLI-SLAM: A Tightly-Coupled Stereo Visual-Inertial SLAM System with Point and Line Features

1 School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
2 Yangtze Delta Region Academy, Beijing Institute of Technology, Jiaxing 314003, China
3 School of Opto-Electronic Engineering, Changchun University of Science and Technology, Changchun 130022, China
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(19), 4678; https://doi.org/10.3390/rs15194678
Submission received: 24 August 2023 / Revised: 19 September 2023 / Accepted: 20 September 2023 / Published: 24 September 2023
(This article belongs to the Topic Multi-Sensor Integrated Navigation Systems)

Abstract

Point feature-based visual simultaneous localization and mapping (SLAM) systems are prone to performance degradation in low-texture environments due to insufficient extraction of point features. In this paper, we propose a tightly-coupled stereo visual-inertial SLAM system with point and line features (PLI-SLAM) to enhance the robustness and reliability of the system in low-texture environments. We improve Edge Drawing lines (EDlines) for line feature detection by introducing curvature detection and a new standard for the minimum line segment length, which improves the accuracy of the line features while reducing the line feature detection time. We also contribute an improved, experiment-based adapting factor that adjusts the error weight of line features and further improves the localization accuracy of the system. Our system has been tested on the EuRoC dataset and in real environments; the results show that PLI-SLAM achieves high accuracy and still operates robustly even in challenging environments. The processing time of our method is reduced by 28% compared to ORB-LINE-SLAM, a point-and-line system that uses the Line Segment Detector (LSD).

Graphical Abstract

1. Introduction

Visual Simultaneous Localization and Mapping (SLAM) is considered one of the core technologies for mobile robots. Its main task is to simultaneously estimate the trajectory of a mobile robot and reconstruct a map of its surroundings from successive frames. Visual SLAM has attracted much attention for drones and self-driving cars because of its low cost and the rich environmental information obtained by using cameras as sensors. Meanwhile, the trend toward multi-sensor fusion has led to the IMU (Inertial Measurement Unit) being widely combined with cameras in SLAM systems, because the two sensors complement each other [1]. An IMU provides acceleration and angular velocity measurements directly, making the SLAM system more robust even in low-texture environments where tracking by pure vision may fail.
Visual SLAM is generally divided into two main approaches: direct methods and feature-based methods. Direct methods, such as DSO [2] and SVO [3], estimate the camera motion by minimizing the pixel brightness errors between consecutive frames. However, they rely on the assumption that the local brightness of the sequence is constant, which makes them sensitive to changes in brightness. In contrast, feature-based methods detect and match key points between consecutive frames and then minimize the reprojection errors to simultaneously estimate the poses and construct a map [4]; examples include PTAM [5] and ORB-SLAM2 [6]. The feature-based method is more robust than the direct method because discriminative key points are relatively invariant to changes in viewpoint and illumination [7]. However, the continued expansion of SLAM applications and mobile robots, such as augmented reality and autonomous driving, presents new challenges, including low-texture or structured engineering environments [8,9]. In such scenes, extracting key points becomes difficult, which can degrade the performance of the SLAM system and even cause tracking failure. Fortunately, most low-texture environments, such as white walls and corridors, contain many line segment features even though few feature points can be extracted. Therefore, more and more SLAM systems with point and line features have been proposed in recent years, such as PL-VIO [10] and PL-SLAM [11].
The method based on point and line features greatly enhances the robustness of the SLAM system in low-texture environments. However, it does not remove the shortcoming of purely visual SLAM systems, which are sensitive to rotation and high speed. For this issue, multi-sensor fusion of an IMU and a camera is a good strategy: the IMU measurements provide more precise motion data, while the combination of the two can compensate for visual degradation of the camera and correct IMU drift. At present, the most successful systems combining camera and IMU are ORB-SLAM3 [12] and VINS-Mono [13]. ORB-SLAM3 integrated an IMU on the basis of ORB-SLAM2, which greatly improved the performance of the system, and is one of the most advanced visual-inertial SLAM systems based on point features. Our work is also based on ORB-SLAM3. The essence of our work is a tightly-coupled visual-inertial SLAM system based on point and line features, together with an improved Edge Drawing lines (EDlines) [14] detector that increases the processing speed of the system. The main contributions are as follows:
(1)
An improved line feature detection method based on EDlines. This method improves the accuracy of the system by detecting the curvature of line segments and improving the selection standard for the minimum line segment length, thereby eliminating line segments that would increase the error, while requiring less processing time.
(2)
A more advanced experiment-based adapting factor that further balances the error weights of line features based on a combination of the number of interior point matches and the length of the line segment.
(3)
An autonomously selectable loop detection method for combined point and line features, with a more advanced similarity score evaluation criterion. The similarity scores of point features and line features are considered in both time and space, while an adaptive weight is used to adapt to texture-varying scenes.
(4)
A tightly-coupled stereo visual-inertial SLAM system with point and line features. Experiments conducted on the EuRoC dataset [15] and in real environments demonstrate better performance than those SLAM systems based on point and line features or based on point features and IMU.

2. Related Work

In this section, we discuss point-line SLAM systems and visual-inertial SLAM systems. Most existing visual-inertial fusion methods can be divided into loosely-coupled and tightly-coupled approaches [16]. Loosely-coupled approaches estimate the IMU and visual states separately and then fuse the two results, as in [17,18]. Tightly-coupled approaches use the IMU and camera together to construct the motion and observation equations and then perform state estimation; this makes the sensors more complementary and can achieve better results through joint optimization. VINS-Mono [13] and ORB-SLAM3 [12] are two of the most well-known open-source visual-inertial SLAM systems, and many studies have built upon them. ORB-SLAM3 is an improvement on ORB-SLAM2 [6]: it improved the robustness of the system by integrating the IMU and multi-map stitching techniques, and achieved the highest accuracy on several public datasets.
Among the related work on SLAM with point and line features, the Line Segment Detector (LSD) [19] is a line segment detection algorithm that is widely used in line feature extraction and in line-based SLAM systems, because it keeps the number of false detections low without parameter tuning. Its core idea is to merge pixels with similar gradient directions, extracting local line features with sub-pixel accuracy in linear time. The most representative works on the combination of point and line features are PL-SLAM [20], which used line segment endpoints to represent line features, and the stereo PL-SLAM [11] of the same name, which uses LSD and the line band descriptor (LBD) method [21] to match lines in real time. Regarding point and line matching, PL-SLAM [11] compared the descriptors of features and accepted a match only if the candidate was the best mutual match; we adopt the same policy for lines in our work. Another work [22] used Plücker coordinates and orthonormal representations to derive the Jacobian matrix and construct line feature reprojection errors, which was a great help for bundle adjustment (BA) with points, lines, and IMU in later SLAM systems. PL-VIO [10], based on VINS-Mono, also used LSD to extract line segment features and LBD for line segment matching. At the same time, it added line features to the visual-inertial odometry to achieve a tightly-coupled optimization of point-line features and IMU measurements, performing better than visual-inertial SLAM systems based only on point features. However, due to the time-consuming line feature extraction and matching, PL-VIO cannot extract and match line features in real time. PL-VINS [23], built on PL-VIO, achieved close to real-time operation of the LSD algorithm, without affecting the accuracy, by adjusting the hidden parameters of LSD, realizing a real-time visual-inertial SLAM system based on point and line features. However, the performance of LSD line detection was still unsatisfactory for real-time applications [22], so only point features were applied in the loop closing part and the line features were not fully utilized. In addition, PEI-VINS [24] applied the EDlines method to reduce the line feature detection time and proved the effectiveness of the system. Also based on VINS-Mono is PLI-VINS [25], which tightly couples point features, line features, and the IMU, and has been experimentally validated for accurate pose estimation through multi-sensor information fusion.
The classic bundle adjustment (BA) aims to optimize the poses and landmarks, which plays an important role in SLAM optimization. Extended Kalman filtering (EKF) was used in early SLAM optimization, such as MonoSLAM [26] in 2007. Only in recent years has the nonlinear optimization method BA become popular, due to the increase in computational power. In 2007, PTAM [5] first used nonlinear optimization for the combined optimization of poses and landmarks, using the reprojection error of points as constraint edges for BA, and showed that nonlinear optimization outperformed filtering-based methods. Later, ORB-SLAM2 provided a complete realization of point constraint edges in BA based on the ORB feature [27], which had a significant influence on subsequent point-based optimization. Zuo et al. first used orthonormal representations in line-based SLAM and added the complete reprojection error of the line to the constraint edges of BA [22], which has been followed in point-line SLAM ever since. PL-VIO used LSD to incorporate line features into the tightly coupled visual-inertial optimization, using IMU residuals and the reprojection errors of points and lines as constraint edges. Then, PL-SLAM [11] took the distance from the endpoints of the observed line to the projected line as the reprojection error of the line, and fully added point and line features to all parts of the tightly coupled bundle adjustment, further improving BA based on the combination of points and lines.
Another work that inspired our system is ORB-LINE-SLAM [28], based on the framework of ORB-SLAM3. ORB-LINE-SLAM added an experimentally tuned adapting factor, which is also used and improved in our system. However, ORB-LINE-SLAM shows large biases and even tracking failure in challenging environments, for example under motion blur. Meanwhile, line feature detection with LSD takes a lot of time in ORB-LINE-SLAM, and according to our experiments its real-time performance is not ideal. Therefore, we improve EDlines to enhance accuracy and robustness with less processing time than ORB-LINE-SLAM.

3. System Overview

Our method mainly improves on the ORB-SLAM3 and also implements three different threads: tracking, local mapping, and loop closing. We incorporate line features into each module of ORB-SLAM3. Figure 1 shows the framework of our system, in which the orange-colored part is where we add line features, and the rest is the same as ORB-SLAM3.

3.1. Tracking

The tracking thread calculates the initial pose estimate by minimizing the reprojection error of the point and line matches detected between the current frame and the previous frame, and updates the IMU pre-integration from the last frame. A detailed description of pose estimation is given in Section 5. Once the IMU is initialized, the initial pose is predicted by the IMU. After that, we combine the point and line feature matches and the IMU information to jointly optimize the pose of the current frame.
For point features, we use the ORB [27] (Oriented FAST and Rotated BRIEF) method because of its good performance in key point detection. Its steered BRIEF descriptor also allows fast and efficient key point matching. Meanwhile, to accelerate matching and reduce the number of outliers, as in PL-SLAM [11], we only accept a pair in which the best match in the left image corresponds to the best match in the right image. The processing of line features is described in detail in Section 4.

3.2. Local Mapping

The core idea of this section is divided into two parts. The first is local bundle adjustment based on keyframes, and the second is IMU initialization and state vector optimization.
After inserting the keyframes from the tracking thread, the system checks the new map points and map lines, and then rejects those of bad quality according to their observations, guided by the following strategy:
(1)
If the number of frames tracked to the point and line is less than 25% and 20%, respectively, of the number of frames visible in their local map, then delete the map point or map line;
(2)
If map points and map lines are observed fewer than three times in the three consecutive frames after they are created, then delete them.
As for the initialization of the IMU, we follow the ORB-SLAM3 method [12] to quickly obtain accurate IMU parameters. After that, we filter the line segments again according to the updated IMU gravity direction: when the angle between the gravity direction and a 3D line changes by more than a given threshold between the current frame and the last keyframe, this 3D line is treated as an outlier and discarded, as sketched below.
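The following sketch illustrates this gravity-consistency check; the function name and the 5-degree threshold are illustrative assumptions rather than the exact implementation or value used in PLI-SLAM.

```python
import numpy as np

def keep_line_after_gravity_check(gravity_dir, line_dir_current, line_dir_keyframe,
                                  max_angle_change_deg=5.0):
    """Discard a 3D map line if the angle between the IMU gravity direction and
    the line direction changes too much between the last keyframe and the
    current frame (threshold value is a placeholder)."""
    g = gravity_dir / np.linalg.norm(gravity_dir)

    def angle_to_gravity(d):
        d = d / np.linalg.norm(d)
        # A line direction has no preferred sign, so use the absolute cosine.
        return np.degrees(np.arccos(np.clip(abs(np.dot(g, d)), 0.0, 1.0)))

    change = abs(angle_to_gravity(line_dir_current) - angle_to_gravity(line_dir_keyframe))
    return change <= max_angle_change_deg
```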

3.3. Loop Closing

The loop closing thread uses a bag-of-words method implemented with DBoW2 [29], based on the key points and key lines, for loop detection. It then performs loop correction or map merging. After that, the system performs a full bundle adjustment (BA) to further optimize the map.
It is worth noting that we provide a loop detection part that can autonomously select whether or not to add line features. According to our experiments, although adding line features in loop detection is beneficial to the accuracy, it will also bring a large computational overhead. This is not conducive to the original intention of real-time SLAM, so we provide a selectable model to balance the accuracy and speed.

3.4. Multi-Map

Our system also includes a multi-map (Atlas) representation, consisting of the currently active map and the inactive maps. Each map consists of the following parts:
  • Key points and keylines.
  • A set of keyframes.
  • A co-visibility graph which connects the keyframes.
  • A spanning tree that links all the keyframes together.

4. Tracking

This section introduces the processing of point and line features, a two-stage tracking model, and an experiment-based adapting weight factor that balances the reprojection error of line features. Finally, we propose a new keyframe selection strategy for line features to ensure their effectiveness.

4.1. Feature Selection and Match

4.1.1. Point Feature Detection by Gradient Threshold

For the extraction of point features, the traditional method (ORB-SLAM3) uses a fixed extraction threshold, which cannot adapt to changing texture and easily causes tracking failure. In addition, with line features added to the system, point features are no longer the only source of features, so using a traditional fixed threshold to extract point features also causes redundancy and unnecessary computation. Therefore, in order to increase the adaptability of the system to low-texture environments while avoiding redundancy, we propose adaptive gradient thresholding to extract point features. The core idea is to predict the threshold of the current frame from the number of feature points in the previous frame. For the current frame, we start with the threshold predicted from the previous frame; if the number of points extracted is less than that of the previous frame, we update the gradient threshold for the next frame so that it extracts more feature points, up to the maximum gradient threshold $G_{max}$. Conversely, the gradient threshold for the next frame is reduced, down to the minimum gradient threshold $G_{min}$, in order to reduce feature redundancy.
For the selection of the initial value, we first follow the ORB-SLAM3 setting for the number of points successfully tracked after map initialization: if it is larger than an empirical threshold, the current environment is considered rich in texture information and a smaller threshold is set to avoid redundancy; conversely, the environment is considered to have little texture information and a larger initial value is set. The empirical threshold is 400. After that, we add the difference between the number of points successfully tracked after map initialization and the empirical threshold to the original parameter, which gives the initial value of the gradient threshold $G_i$.
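As an illustration, a minimal sketch of the per-frame threshold update is given below. The function name and step size are assumptions; following the description above, moving the threshold toward $G_{max}$ corresponds to extracting more points.

```python
def next_gradient_threshold(g_current, n_points_current, n_points_previous,
                            g_min, g_max, step=1):
    """Predict the gradient threshold for the next frame from the point counts
    of the current and previous frames (sketch; step size is illustrative)."""
    if n_points_current < n_points_previous:
        # Too few points: move toward G_max so that more points are extracted.
        return min(g_current + step, g_max)
    # Enough points: move back toward G_min to avoid redundant features.
    return max(g_current - step, g_min)
```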

4.1.2. Line Feature Detection

For the processing of line features, LSD is a widely popular method. However, its high computational cost usually prevents the system from running in real time. In addition, LSD usually detects a large number of short line segments that are difficult to match and are likely to disappear in the next frame, which not only wastes computational resources but also generates outliers for which no matching line feature can be found. To improve the real-time performance of line feature extraction, we use the EDlines algorithm instead of LSD. EDlines uses an edge drawing algorithm to generate chains of edge pixels and runs about 10 times faster than LSD on an equivalent image, while maintaining essentially the same accuracy [1]. Therefore, the EDlines algorithm is more suitable for visual SLAM than LSD.
However, it is worth noting that EDlines is more likely to detect curves [24], which may degrade the performance and accuracy of the SLAM system. EDlines calculates parameters such as the length and direction of a line segment, but not its curvature. Therefore, we introduce a curvature estimation step. After line segment detection, the curvature is calculated for each segment to determine how strongly it bends: we traverse the pixels of the segment, calculate the curvature at each point, and average the curvature over all points to obtain the average curvature of the segment. We then apply a threshold to this value and keep only segments with acceptable curvature. Based on our experiments, we choose a maximum curvature threshold of 0.02 and discard segments that exceed it, because they increase the chance of false matches later on; a sketch of this filter is given below.
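A minimal sketch of this curvature filter is shown below. The finite-difference curvature estimate and the helper names are our own illustrative choices; the 0.02 threshold is the value stated above.

```python
import numpy as np

def mean_curvature(points):
    """Average discrete curvature along an ordered chain of segment pixels,
    estimated as |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2) at each point."""
    pts = np.asarray(points, dtype=float)
    if len(pts) < 3:
        return 0.0
    dx, dy = np.gradient(pts[:, 0]), np.gradient(pts[:, 1])
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    denom = (dx ** 2 + dy ** 2) ** 1.5
    denom[denom < 1e-12] = 1e-12          # guard against zero speed
    return float(np.mean(np.abs(dx * ddy - dy * ddx) / denom))

def filter_curved_segments(segments, max_curvature=0.02):
    """Keep only segments whose average curvature is below the threshold."""
    return [seg for seg in segments if mean_curvature(seg) <= max_curvature]
```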
In addition, although EDlines can run without tuning parameters, when line features are applied in SLAM, we need more long line segments that can be tracked stably. Short line segments may be obscured in the next frame, making matching difficult, and the processing of a large number of useless short segments increases the computation time. Therefore, we add a filtering criterion to the original EDlines method of selecting the minimum line segment length based on different pixels, i.e., determining the length based on the number of feature points. The core idea is that, when the number of feature points is larger than the threshold n p , our goal is to filter out the long line segments with higher quality to further improve the accuracy of matching. When the number of feature points is less than the threshold, we need to detect as many line segments as possible to improve the overall tracking robustness of the system. The minimum line segment length is calculated as follows.
$$ n \geq R_l \cdot \frac{-4 \log N}{\log p} $$
where $R_l$ is the ratio parameter for the minimum line segment length, formulated as Equation (2); the remaining parameters are the same as defined in EDlines, where $N$ is the pixel side length of the image and $p$ is the probability used in the binomial distribution calculation, which indicates the line direction accuracy and usually takes the value 0.125.
$$ R_l = \begin{cases} \dfrac{3}{10}\,\dfrac{n_{i-1}}{n_p}, & n_{i-1} > n_p \\[6pt] \dfrac{n_p - n_{i-1}}{10} + 1, & n_{i-1} \leq n_p \end{cases} $$
where $n_{i-1}$ stands for the number of feature points extracted in the preceding frame, and $n_p$ equals 20. The final result of the improved line segment detection is shown in Figure 2.
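The sketch below computes the adaptive minimum length under our reading of Equations (1) and (2); the exact piecewise form of $R_l$ should be treated as an assumption.

```python
import math

def length_ratio(n_prev_points, n_p=20):
    """Ratio parameter R_l (Equation (2), as read above)."""
    if n_prev_points > n_p:
        return 0.3 * n_prev_points / n_p
    return (n_p - n_prev_points) / 10.0 + 1.0

def min_segment_length(image_side_n, n_prev_points, p=0.125, n_p=20):
    """Minimum accepted segment length: R_l times the EDlines default
    -4*log(N)/log(p) (Equation (1), as read above)."""
    base = -4.0 * math.log(image_side_n) / math.log(p)
    return length_ratio(n_prev_points, n_p) * base
```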

4.1.3. Line Matching

For stereo matching, as in most methods, we select the 256-bit LBD [21], which contains geometric properties and an appearance description of the corresponding line feature; the KNN method with Hamming distance is then used to match lines according to their correspondence. To reduce outliers, we also follow PL-SLAM [11] in allowing for possible occlusions and perspective changes in real-world environments: line pairs are not considered matched if one line is more than twice as long as the other. In addition, if the distance between the midpoints of two lines on the image is greater than a given threshold, the line pair is considered a mismatch. Finally, we use the geometric information provided by the line segments to filter out line matches with different orientations and lengths, as well as those with large differences in endpoints; a sketch of these filters is given below.
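A minimal sketch of these filters is shown below, using OpenCV's brute-force Hamming matcher on precomputed LBD descriptors. For brevity it uses a ratio test in place of the mutual best-match check, and the distance, length-ratio, and angle thresholds are placeholder values, not the tuned parameters of PLI-SLAM.

```python
import numpy as np
import cv2

def match_lines(desc_left, desc_right, lines_left, lines_right,
                ratio=0.75, max_len_ratio=2.0, max_midpoint_dist=80.0,
                max_angle_diff_deg=10.0):
    """Match binary LBD descriptors (uint8 rows) and reject pairs whose
    lengths, midpoints, or orientations are inconsistent. lines_* are arrays
    of endpoints (x1, y1, x2, y2)."""
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    knn = matcher.knnMatch(desc_left, desc_right, k=2)

    def length(l):   return np.hypot(l[2] - l[0], l[3] - l[1])
    def midpoint(l): return np.array([(l[0] + l[2]) / 2.0, (l[1] + l[3]) / 2.0])
    def angle(l):    return np.degrees(np.arctan2(l[3] - l[1], l[2] - l[0])) % 180.0

    good = []
    for pair in knn:
        if len(pair) < 2 or pair[0].distance >= ratio * pair[1].distance:
            continue                                   # descriptor ratio test
        a, b = lines_left[pair[0].queryIdx], lines_right[pair[0].trainIdx]
        la, lb = length(a), length(b)
        if max(la, lb) > max_len_ratio * min(la, lb):
            continue                                   # lengths differ too much
        if np.linalg.norm(midpoint(a) - midpoint(b)) > max_midpoint_dist:
            continue                                   # midpoints too far apart
        d_ang = abs(angle(a) - angle(b))
        if min(d_ang, 180.0 - d_ang) > max_angle_diff_deg:
            continue                                   # orientations disagree
        good.append(pair[0])
    return good
```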

4.2. Motion Estimation

The tracking part consists of two main phases. The first stage, initial pose estimation, includes three tracking methods: tracking with the reference keyframe, tracking with the constant velocity model, and tracking with relocalization. These aim to ensure that the motion can be followed in time, although the estimated pose is not yet very accurate; the pose is then refined in the Track Local Map stage.

4.2.1. Initial Pose Estimation

When map initialization is complete, if the current velocity is null and the IMU has not completed initialization, the initial pose estimation is performed in the reference-keyframe tracking mode. First, the point and line features between the current frame and the reference keyframe are matched, and the pose of the last reference keyframe is used as the initial value of the pose of the current frame. After that, the pose is optimized in the Track Local Map stage. In addition, unlike ORB-SLAM3, we use a more stringent condition to determine whether tracking is successful: tracking succeeds only if the number of matched interior points plus the number of matched line segments exceeds 15, even when the IMU is used.
If IMU initialization is complete and its state is normal, we use the constant velocity motion model for initial pose estimation and directly use the IMU to predict the pose; otherwise, the pose difference between the last two frames is used as the velocity. If the sum of matched point and line pairs is less than 20, we expand the search radius and search again. If the number is still less than 20 but the IMU status is normal, we mark the tracking status as pending. After the pose is optimized and the outliers are removed, tracking is considered successful if the number of remaining matches is still greater than 10.
If both of the above modes fail, the relocalization mode is used. First, we detect the candidate frames that satisfy the relocalization conditions, then search for matching points and lines between the current frame and the candidate frames, and use EPnP for pose estimation if the number of point and line matches is greater than 15. Finally, the pose is optimized in the Track Local Map stage.
Afterwards, we optimize the poses again with local map tracking according to the state of IMU, as specified in Section 5.

4.2.2. Adapting Weight Factor Based on Experiment

Compared to point features, the endpoints of line segments are not stable and may be occluded when moving from the current frame to the next, resulting in larger projection errors for lines than for points. Therefore, we balance the reprojection error of the lines according to the number of matched interior points, which reduces the pose estimation error: when the number of interior point matches is small, we increase the reprojection error weight of the line features and, conversely, when there are enough interior point matches, we decrease it. In addition, the length of a line segment is also related to the size of the error; generally speaking, a longer line segment is more robust and less likely to be lost in tracking. Therefore, based on the strategy in ORB-LINE-SLAM, we improve the reprojection error adapting factor $F_l$ of line features, which is formulated as:
$$ F_l = \frac{l_i}{l_{AVG}} \cdot \frac{1}{W^{\,\alpha(P)\ \mathrm{div}\ T}} $$
where $P$ stands for the set of point matches, $\alpha(P)$ stands for the number of elements of $P$, $\mathrm{div}$ is an operator that returns the integer part of a division, $l_i$ stands for the length of segment $i$, and $l_{AVG}$ stands for the average length of all line segments of the current frame. When the number of points increases, the weight of the line features decreases, because line features are less stable than point features; the system therefore leans toward point-based SLAM when there are enough points, and toward line-based SLAM when there are not. In addition, a reasonable threshold $T$ provides a good prior for the next tracking step, while for the weight parameter $W$ we seek the best balance between the projection errors of the points and the lines. Since the reprojection error of point features is more reliable than that of line features, these two parameters need to be tuned; the values of the threshold $T$ and the weight $W$ are determined experimentally in Section 7.1.
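A one-line sketch of this adapting factor, assuming the reconstructed form of Equation (3) above, is:

```python
def line_error_weight(seg_length, avg_seg_length, n_point_matches, W=2.0, T=60):
    """Adapting factor F_l (sketch): longer-than-average segments are trusted
    more, and the line weight is scaled down by W for every T matched interior
    points. The exact formula is our reading of Equation (3)."""
    return (seg_length / avg_seg_length) / (W ** (n_point_matches // T))
```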

4.3. KeyFrame Decision

As for keyframe selection, we need to avoid redundancy while ensuring validity. The keyframe selection strategy of ORB-SLAM3 is worth following: keyframes are initially inserted under loose criteria, and redundant ones are later eliminated based on the co-visibility graph. Therefore, our system follows the keyframe selection strategy of ORB-SLAM3 while adding new keyframe conditions for line features:
(1)
If the number of line features tracked in the current frame is less than one-quarter of the number in the last reference keyframe, we insert the keyframe;
(2)
The current frame tracks at least 50 points and 25 lines.
After that, the current frame is constructed as a keyframe, the keyframe is set as the reference keyframe, and new map points are generated from the current frame. A sketch of the combined decision is given below.
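In the following sketch, how the line conditions combine with the base ORB-SLAM3 policy is our assumption.

```python
def need_new_keyframe(n_lines_current, n_lines_ref_keyframe,
                      n_points_current, orb_slam3_decision):
    """Keyframe decision with the extra line-feature conditions:
    (1) tracked lines fell below a quarter of those in the reference keyframe,
    (2) the frame still tracks at least 50 points and 25 lines."""
    few_lines_left = n_lines_current < 0.25 * n_lines_ref_keyframe
    enough_features = n_points_current >= 50 and n_lines_current >= 25
    return orb_slam3_decision or (few_lines_left and enough_features)
```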

5. Bundle Adjustment

This section first introduces the reprojection errors of the point and line measurements, and then gives the Jacobian matrices of the line residual vector. After that, we construct the tightly coupled graph optimization model with points, lines, and IMU, and establish the error function expressions for the different BA modes.

5.1. Reprojection Error of Point and Line Measurement

The conventional error function used for points is the 2D distance between the reprojected 3D point and its corresponding point in the image. However, this procedure cannot be applied directly to line segments, because the endpoints of line segments are unstable and may be occluded as they move from one frame to the next. Ref. [30] provides three ways to define the error of a straight line; we use the linear observation error based on the geometric distance definition. Figure 3 shows the fusion of point and line features with visual-inertial measurements. For point features, as in most methods, we define the reprojection error as the image distance between the projected point and the observed point. For the reprojection error of a line, we define it as the Euclidean distance from the observed endpoints to the projected line.
For a 3D spatial line $\mathbf{L}_l^W$ in the world coordinate system, with orthonormal representation $\mathbf{O}_l$, we project it onto the image plane to obtain the projected line segment $\mathbf{l}_c^i$; the two endpoints $C$ and $D$ of the spatial line project onto the image plane at endpoints $c$ and $d$, respectively. $\hat{\mathbf{l}}_c^i$ stands for the observation of the spatial line $\mathbf{L}_l^W$ in the i-th image frame, which can be represented by the two endpoints of the observed segment, $\mathbf{s}_c^i$ and $\mathbf{e}_c^i$. The reprojection error of a line is defined by the distance between the two lines $\mathbf{l}_c^i$ and $\hat{\mathbf{l}}_c^i$:
$$ \mathbf{e}_l = \begin{bmatrix} d(\mathbf{s}_c^i, \mathbf{l}_c^i) \\ d(\mathbf{e}_c^i, \mathbf{l}_c^i) \end{bmatrix} $$
where $d(\mathbf{s}, \mathbf{l})$ is the distance function from an endpoint to the projected line, which can be formulated as:
$$ d(\mathbf{s}_c^i, \mathbf{l}_c^i) = \frac{\mathbf{s}_c^{i\,T}\,\mathbf{l}_c^i}{\sqrt{l_1^2 + l_2^2}}, \qquad d(\mathbf{e}_c^i, \mathbf{l}_c^i) = \frac{\mathbf{e}_c^{i\,T}\,\mathbf{l}_c^i}{\sqrt{l_1^2 + l_2^2}} $$
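As an illustration, the following snippet evaluates this line reprojection error for homogeneous endpoints and a projected line; the function name and example values are ours.

```python
import numpy as np

def line_reprojection_error(l_proj, s_obs, e_obs):
    """Signed distances from the observed endpoints s and e (homogeneous pixel
    coordinates) to the projected infinite line l_proj = (l1, l2, l3)."""
    l1, l2, _ = l_proj
    norm = np.sqrt(l1 ** 2 + l2 ** 2)
    return np.array([np.dot(s_obs, l_proj) / norm,
                     np.dot(e_obs, l_proj) / norm])

# Example: observed endpoints against the projected line x = 100.
line = np.array([1.0, 0.0, -100.0])         # l1*x + l2*y + l3 = 0
s = np.array([103.0, 20.0, 1.0])            # homogeneous endpoint
e = np.array([98.0, 80.0, 1.0])
print(line_reprojection_error(line, s, e))  # -> [ 3. -2.]
```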
The corresponding Jacobian matrix can be derived by the chain rule:
$$ \mathbf{J}_l = \frac{\partial \mathbf{r}_l}{\partial \mathbf{l}_c^i}\,\frac{\partial \mathbf{l}_c^i}{\partial \mathbf{L}_c^i}\,\frac{\partial \mathbf{L}_c^i}{\partial \delta \mathbf{x}_i}\,\frac{\partial \mathbf{L}^i}{\partial \mathbf{L}_W}\,\frac{\partial \mathbf{L}_W}{\partial \delta \mathbf{O}} $$
where $\partial \mathbf{r}_l / \partial \mathbf{l}_c^i$ is the derivative of the line residual with respect to the line segment in the image coordinate system, expressed as:
$$ \frac{\partial \mathbf{r}_l}{\partial \mathbf{l}_c^i} = \begin{bmatrix} -\dfrac{l_1\,\mathbf{s}_c^{i\,T}\mathbf{l}}{(l_1^2+l_2^2)^{3/2}} + \dfrac{u_s}{(l_1^2+l_2^2)^{1/2}} & -\dfrac{l_2\,\mathbf{s}_c^{i\,T}\mathbf{l}}{(l_1^2+l_2^2)^{3/2}} + \dfrac{v_s}{(l_1^2+l_2^2)^{1/2}} & \dfrac{1}{(l_1^2+l_2^2)^{1/2}} \\[8pt] -\dfrac{l_1\,\mathbf{e}_c^{i\,T}\mathbf{l}}{(l_1^2+l_2^2)^{3/2}} + \dfrac{u_e}{(l_1^2+l_2^2)^{1/2}} & -\dfrac{l_2\,\mathbf{e}_c^{i\,T}\mathbf{l}}{(l_1^2+l_2^2)^{3/2}} + \dfrac{v_e}{(l_1^2+l_2^2)^{1/2}} & \dfrac{1}{(l_1^2+l_2^2)^{1/2}} \end{bmatrix}_{2 \times 3} $$
$\partial \mathbf{l}_c^i / \partial \mathbf{L}_c^i$ is the derivative of the line segment in the image coordinate system with respect to the line in the camera coordinate system:
$$ \frac{\partial \mathbf{l}_c^i}{\partial \mathbf{L}_c^i} = \begin{bmatrix} \mathcal{K} & \mathbf{0} \end{bmatrix}_{3 \times 6} $$
$\partial \mathbf{L}_c^i / \partial \delta \mathbf{x}_i$ is the derivative of the line in the camera coordinate system with respect to the optimized state variables:
$$ \frac{\partial \mathbf{L}_c^i}{\partial \delta \mathbf{x}_i} = \begin{bmatrix} \mathcal{T}_{bc}^{-1}\begin{bmatrix} [\mathbf{R}_{wb}^{T}\mathbf{d}_w]_{\times} \\ \mathbf{0}_{3 \times 3} \end{bmatrix} & \mathcal{T}_{bc}^{-1}\begin{bmatrix} [\mathbf{R}_{wb}^{T}(\mathbf{n}_w + [\mathbf{d}_w]_{\times}\mathbf{p}_{wb})]_{\times} \\ [\mathbf{R}_{wb}^{T}\mathbf{d}_w]_{\times} \end{bmatrix} & \mathbf{0} & \mathbf{0} & \mathbf{0} \end{bmatrix}_{6 \times 15} $$
where $\partial \mathbf{L}^i / \partial \mathbf{L}_W$ is the derivative of the line in the camera coordinate system with respect to the line in the world coordinate system, and $\partial \mathbf{L}_W / \partial \delta \mathbf{O}$ is the derivative of the line in the world coordinate system with respect to its orthonormal representation:
$$ \frac{\partial \mathbf{L}^i}{\partial \mathbf{L}_W}\frac{\partial \mathbf{L}_W}{\partial \delta \mathbf{O}} = \mathcal{T}_{wc}^{-1} \begin{bmatrix} \mathbf{0} & -w_1\mathbf{u}_3 & w_1\mathbf{u}_2 & -w_2\mathbf{u}_1 \\ w_2\mathbf{u}_3 & \mathbf{0} & -w_2\mathbf{u}_1 & w_1\mathbf{u}_2 \end{bmatrix}_{6 \times 4} $$
For the covariance of points and line endpoints, following the same method as PL-VIO, we define the covariance matrix $\Sigma_{e_{i,j}}$ as a 2 × 2 diagonal matrix by assuming that the points have pixel noise both vertically and horizontally in the image plane. In addition, we consider this to be related to the reprojection error obtained when calibrating the camera intrinsics, which is set to 1.5 pixels in both PL-VINS and PL-VIO, and which must be divided by the focal length $f$ when transforming to the normalized plane. Therefore, the covariance of points and endpoints is defined as:
$$ \Sigma_{e_{i,j}} = \frac{1.5}{f}\,\mathbf{I}_{2 \times 2} $$
For the covariance of lines, we follow PL-SLAM [11]: the covariance matrix of the line measurement $\Sigma_{e_{i,k}}$ is first defined as a 2 × 2 identity matrix, similar to the points, and is then approximated by the inverse of the Hessian of the cost function in the last iteration. After obtaining the Jacobian matrix, we use the Levenberg–Marquardt method for optimization.

5.2. Tightly-Coupled Visual–Inertial Fusion

For multi-sensor pose estimation, we use an optimization-based approach to merge stereo visual information and IMU information. The nonlinear optimization process can be regarded as a factor graph, as shown in Figure 4, where the vertices are the keyframe poses, velocities, and IMU biases, and the edges are the inertial residuals and the reprojection residuals of points and lines.

5.2.1. Initial Pose Estimation

When the IMU is not initialized, only the visual BA optimization is considered; $\omega$ is the vector containing the variable to be optimized, namely the pose of the i-th frame, which can be expressed as:
$$ \omega = \arg\min \sum_{j \in P} \rho\left(\mathbf{e}_{i,j}^{T}\,\Sigma_{e_{i,j}}^{-1}\,\mathbf{e}_{i,j}\right) + \sum_{k \in L} \rho\left(F_l\,\mathbf{e}_{i,k}^{T}\,\Sigma_{e_{i,k}}^{-1}\,\mathbf{e}_{i,k}\right) $$
where $P$ and $L$ are the sets of point and line matches, $\rho(\cdot)$ is the robust Huber cost function, and $F_l$ is the adapting factor. $\mathbf{e}_{i,j}$ stands for the projection error of the j-th map point in the i-th frame, defined exactly as in ORB-SLAM2.
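A minimal sketch of assembling this robustified cost is given below; the Huber parameter and the function names are illustrative assumptions, not the values used in the system.

```python
import numpy as np

def huber(s, delta=1.345):
    """Robust Huber cost applied to a squared Mahalanobis error term s."""
    return s if s <= delta ** 2 else 2.0 * delta * np.sqrt(s) - delta ** 2

def visual_only_cost(point_errors, point_infos, line_errors, line_infos, line_weights):
    """Sum of robustified Mahalanobis point terms plus adaptively weighted line
    terms, mirroring the visual-only objective above. *_errors are residual
    vectors, *_infos the corresponding inverse covariance matrices, and
    line_weights the adapting factors F_l."""
    cost = 0.0
    for e, info in zip(point_errors, point_infos):
        cost += huber(float(e @ info @ e))
    for e, info, f in zip(line_errors, line_infos, line_weights):
        cost += huber(f * float(e @ info @ e))
    return cost
```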

5.2.2. Visual-Inertial Bundle Adjustment in Tracking

After the IMU is initialized, the Track Local Map step of the tracking thread performs visual-inertial bundle adjustment with the IMU to optimize the pose, velocity, and IMU biases of the current frame. $E_{i,j}$ and $E_{i,k}$ are the error terms presented in Equation (12), and $\omega_{VI}$ stands for the vector containing the above optimized variables, which can be expressed as:
$$ \omega_{VI} = \arg\min \left( \sum_{j \in P} \rho(E_{i,j}) + \sum_{k \in L} \rho(F_l\,E_{i,k}) + \sum_{k \in B} \mathbf{r}_{I_k}^{T}\,\Sigma_{I_{k,k+1}}^{-1}\,\mathbf{r}_{I_k} \right) $$
where $B$ is the set of inertial measurements and $\mathbf{r}_{I_k}$ stands for the inertial residual, defined as in ORB-SLAM3.

5.2.3. Local Bundle Adjustment

In local bundle adjustment, visual-inertial SLAM can also be expressed as a keyframe-based minimization problem. The poses of a set of keyframes $k$, the positions of all points and lines seen in these keyframes, and the IMU parameters are contained in the vector $\omega_{VI}^{k}$, expressed in the following equation.
$$ \omega_{VI}^{k} = \arg\min \sum_{i \in k} \left( \sum_{j \in P} \rho(E_{i,j}) + \sum_{k \in L} \rho(E_{i,k}) + \sum_{k \in B} \mathbf{r}_{I_k}^{T}\,\Sigma_{I_{k,k+1}}^{-1}\,\mathbf{r}_{I_k} \right) $$

6. Loop Closing

Current visual-inertial SLAM systems basically follow the loop detection approach of ORB-SLAM2, which is based on a bag of words built from point features only. However, low texture or frequent illumination variations lead to wrong detections with the traditional point feature-based BoW. Therefore, we use a loop closing method based on the combination of point and line features, adding line features to loop detection. Meanwhile, we propose a combined time- and space-based adjudication method that weights the similarity scores of the point and line features, takes fuller advantage of the data correlation provided by line features, and reduces false detections, improving the robustness of the system in low-texture and low-light situations.
Since the LBD descriptors of the line features and the rBRIEF descriptors of the ORB features are both binary vectors, we can cluster the two kinds of descriptors to build point and line K-D trees and create a visual dictionary that combines point and line features. After that, we propose a combined weighted point-line similarity score criterion with an adaptive weight. From the spatial perspective, for the similarity of each single feature type, we weight by the proportion of that feature type among all features. In addition, since features that are too clustered cannot describe the image as a whole well, we also weight the point and line features by the dispersion of their distribution in the image. The distribution of point features is calculated as the standard deviation $\sigma_p$ based on their coordinates $(x_i, y_i)$, and the distribution of line features as the standard deviation $\sigma_l$ based on their midpoint coordinates.
$$ \sigma_p = \frac{1}{n_p}\sum_{x_i \in P}\left|x_i - \bar{x}_p\right| + \frac{1}{n_p}\sum_{y_i \in P}\left|y_i - \bar{y}_p\right| $$
$$ \sigma_l = \frac{1}{n_l}\sum_{x_i \in L}\left|x_i - \bar{x}_l\right| + \frac{1}{n_l}\sum_{y_i \in L}\left|y_i - \bar{y}_l\right| $$
where $P$ and $L$ are the sets of point features and line features in the image, respectively, and $n_p$ and $n_l$ are the numbers of points and lines extracted in the image. $(\bar{x}_p, \bar{y}_p)$ are the coordinates of the points in the candidate keyframe and $(\bar{x}_l, \bar{y}_l)$ are the coordinates of the midpoints of the lines in the candidate keyframe.
For the combined similarity weights of points and lines, traditional methods such as PL-SLAM and PEI-VINS simply set the similarity weights of points and lines both to 0.5. However, in low-texture scenes the number of points decreases and the number of lines increases, so the similarity score of the lines should be weighted more heavily. Therefore, we set the weight $w_l$ of the line similarity based on the final value of the gradient threshold $G_i$ used for point feature extraction in that frame, defined as:
$$ w_l = \frac{G_i - G_{min}}{G_{max} - G_{min}} $$
Finally, the total combined point-line similarity score is defined as follows:
$$ s = (1 - w_l)\,\frac{n_p}{n_p + n_l}\,\frac{\sigma_p}{\sigma_p + \sigma_l}\,s_p + w_l\,\frac{n_l}{n_p + n_l}\,\frac{\sigma_l}{\sigma_p + \sigma_l}\,s_l $$
where $s_p$ and $s_l$ are the similarity scores of the points and lines relative to the candidate keyframe, respectively. The single-feature similarity score for frame $i$ is calculated as follows:
$$ s_i = \frac{V_i \cdot \bar{V}}{\lVert V_i \rVert\,\lVert \bar{V} \rVert} $$
where $V_i$ and $\bar{V}$ are the corresponding BoW vectors of frame $i$ and the candidate keyframe, respectively.
Finally, we decide whether a loop closing has occurred by comparing the total score $s$ with a threshold. In addition, from the temporal perspective, since neighboring keyframes tend to share many of the same features, a loop closing is considered valid only when the two frames are more than 10 frames apart, in order to avoid misclassification. A sketch of the combined score is given below.
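The sketch assumes the dispersion and weighting formulas reconstructed above; function names are illustrative.

```python
import numpy as np

def dispersion(coords):
    """Spread of feature coordinates: mean absolute deviation in x plus in y,
    following the sigma_p / sigma_l definitions above."""
    coords = np.asarray(coords, dtype=float)
    dev = np.abs(coords - coords.mean(axis=0)).mean(axis=0)
    return float(dev[0] + dev[1])

def combined_similarity(s_p, s_l, n_p, n_l, sigma_p, sigma_l, g_i, g_min, g_max):
    """Adaptive point-line similarity score: the line weight w_l follows the
    point-extraction gradient threshold G_i, and each single-feature score is
    weighted by its share of features and of spatial dispersion."""
    w_l = (g_i - g_min) / (g_max - g_min)
    s_point = (1.0 - w_l) * (n_p / (n_p + n_l)) * (sigma_p / (sigma_p + sigma_l)) * s_p
    s_line = w_l * (n_l / (n_p + n_l)) * (sigma_l / (sigma_p + sigma_l)) * s_l
    return s_point + s_line
```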

7. Experimental Results

In this section, we experiment on the adapting weight factor to select the best parameters. In addition, we evaluate the performance of PLI-SLAM on EuRoC MAV [15] in comparison to popular methods, including a visual-inertial SLAM system based on point features (ORB-SLAM3), a purely visual SLAM system based on point and line features (ORB-LINE-SLAM), and a visual-inertial SLAM system based on point and line features (PEI-VINS). We also present the performance of our system using the original EDlines, to verify the effectiveness of adding curvature detection and short-line suppression to EDlines. For PEI-VINS, we report the results provided in the paper directly, as its full code is not available. For all other algorithms, we used the original authors' parameters tuned for the EuRoC dataset, and the initial parameters of PLI-SLAM are the same as those of ORB-SLAM3. The evaluation metric is the absolute trajectory error (ATE) [31]. Since real-time performance is an important indicator for SLAM, we also report the average processing time per frame on each sequence of the EuRoC dataset to verify the validity of using EDlines instead of LSD. Finally, a test in a real environment verifies the validity of our approach.
All experiments have been run on an Intel Core i7-8750 CPU, at 2.20 GHz, with 16 GB memory, using only CPU.

7.1. Adapting Weight Factor

The EuRoC MAV dataset provides three flight environments with different speeds, illumination, and textures, including two indoor rooms and an industrial scene. The first experimental sequence we selected is V101, with slow motion, bright scenes, and good texture. The second is MH03, with fast motion, good texture, and bright scenes. The last, V203, is a motion-blurred, low-texture sequence. We determine the best parameters to balance the line reprojection error based on the performance in these three different environments, again using ATE as the evaluation criterion. Referring to the threshold of 20 at which tracking is considered successful, we start by increasing the weight and threshold simultaneously until the system reaches its best accuracy. Table 1 shows the average results over the three sequences; taking into account the randomness of multithreading, each sequence was run 30 times. From the results, for a fixed threshold, as the weighting factor increases the absolute trajectory error roughly tends to first reach a minimum and then increase, until no value of the weight parameter improves the results further. The system achieves the best performance in the different environments when $T$ equals 60 and $W$ equals 2.

7.2. Quantitative Evaluation on the EuRoC MAV Dataset

Table 2 shows the performance of the above algorithms on all 11 sequences of the EuRoC dataset; the lowest absolute translation error for each test is marked in bold. From the obtained results, our proposed PLI-SLAM significantly improves the robustness and accuracy on EuRoC, especially for the MH01 sequence with rich line features and for challenging sequences like MH05 and V203. Figure 5 shows the per-frame translation error for MH01 and MH05, and the results show that our algorithm has the lowest trajectory error. Figure 6 and Figure 7 show the estimated trajectories and relative translation errors on the V203 sequence, the most challenging sequence. On this sequence, the purely visual ORB-LINE-SLAM performed poorly, with multiple severe deviations or lost tracking: pose estimation was severely affected by strong motion blur, so ORB-LINE-SLAM matched only a few points and lines in some frames. In contrast, the visual-inertial SLAM systems operate robustly thanks to the integration of the IMU, which allows the system to determine its own motion from the angular velocity and acceleration measurements even when pure vision fails.
At the same time, compared with the point feature-based visual-inertial method, the use of high-quality line features can effectively improve the accuracy of the estimated track. In particular, the medium-size factory scenes in the MH01 and MH03 sequences contain a large number of well-structured line segment features, which helps our system improve the trajectory accuracy by using high-quality line features. However, using the original EDlines did not improve the system performance significantly and even reduced the trajectory accuracy on a few sequences. This is reasonable, because the reprojection error of line segments is less reliable than that of point features, and using low-quality line segments will undoubtedly affect the system performance, which further demonstrates the value of our improvements. On the whole, PLI-SLAM clearly performs best across the 11 sequences.
In addition, we compare the performance using the bag of words based on point features only with that using the combined bag of words based on point and line features. The results show that adding line features to loop detection can significantly reduce the trajectory error. The main reason is that, when calculating the similarity score, point features reflect only local image information, which is not robust enough in similar scenes and thus more likely to produce a false loop. Line features, by contrast, have higher dimensionality, larger coverage, and better reflect global information. Meanwhile, the proposed adaptive-weight similarity score criterion makes line features more reliable for loop closing in environments with rich line textures, reducing false loops and trajectory errors compared to point features alone.

7.3. Processing Time

With regard to timing, we evaluated the average processing time per frame of the different methods on each sequence; the results are shown in Table 3. The addition of line features increases the processing time of the system, especially for line feature detection in the tracking thread and in the loop closing thread, but it still has advantages compared to LSD: our system has a 28% lower average processing time per frame than ORB-LINE-SLAM, even with the addition of the IMU. This benefits from the speed advantage of EDlines when detecting line features in the tracking thread. In addition, the IMU reduces pose drift compared to a pure vision system, so loop correction or map fusion requires a smaller computational overhead. Our system also runs faster than with the original EDlines, because filtering out curves and short segments saves considerable time in the subsequent computation of line descriptors and matching. We also report the performance of PLI-SLAM with a bag of words based on point features only, which is comparable to ORB-SLAM3 or even faster; this is because we use gradient thresholding to extract point features, which avoids redundant features and reduces computing time in environments with rich point textures. However, PLI-SLAM with the full bag of words is slower than ORB-SLAM3, even though it greatly improves accuracy. Therefore, we provide a bag of words in loop detection with autonomously selectable line features to balance accuracy and speed.

7.4. Real-World Experiment

In order to verify the performance of the proposed method in real-world scenarios with fewer textures, we conducted experiments in Building 3, Yangtze Delta Region Academy, Beijing Institute of Technology, Jiaxing, China. The real-time data were acquired by a RealSense D455 consisting of a stereo camera (30 FPS, 640 × 480) and an IMU (100 Hz). The equipment was calibrated before the experiment and performed well.
The test environment is a common indoor corridor including two scenes with different textures, as shown in Figure 8; their texture features are compared with two typical EuRoC sequences in Table 4. Corridor 1 is an open corridor with more texture, and Corridor 2 is a narrow corridor where it is difficult to extract point features but where many line features are available. The core purpose of our experiment is to demonstrate the robustness that line features add to the system in an environment where point features are hard to extract. The experimental platform is a four-wheeled Mecanum-wheel robot equipped with a ROS master (Jetson Nano) and a microcontroller (STM32F103VET6); by commanding a target straight-line distance and speed, the microcontroller program provides the ground-truth trajectory length with a systematic odometry error of less than 0.5%. The target was defined as a straight-line distance of 35 m, the speed was set between 0.2 m/s and 0.5 m/s, and the robot moved from rest in simple linear motion. The stereo and IMU topics published by the D455 camera were collected in real time by recording a rosbag; afterwards, the recorded rosbag was played back without any other processing to run the algorithms. We quantitatively evaluate the ability to estimate the true scale by the relative error $e_{scale}$ of the total trajectory length between the results and the ground truth [32]. Finally, we visualize the trajectories with the EVO evaluation tool and qualitatively compare the ability to work properly from the starting point to the end point.
$$ e_{scale} = \frac{\left|\,\mathrm{length}_{result} - \mathrm{length}_{groundtruth}\,\right|}{\mathrm{length}_{groundtruth}} $$
The $e_{scale}$ of the two algorithms is shown in Table 5. The results show that PLI-SLAM estimates the true scale better than ORB-SLAM3 in a real environment with low texture. As shown in Figure 9, from the trajectory results of the two algorithms, PLI-SLAM is robust to low texture, while ORB-SLAM3 suffers a large visual-IMU alignment error because few point features are extracted, which in turn leads to tracking failure. Figure 10 further shows the performance of both algorithms in the two corridors. In Corridor 1, ORB-SLAM3 can extract enough point features to keep working properly, but note that most of the extracted point features are concentrated on the right part of Corridor 1, so few point features can be extracted after entering Corridor 2, and tracking finally fails. Conversely, PLI-SLAM can extract a large number of structural line features in both corridors and thus remains robust. As a result, adding line features can greatly improve the robustness of SLAM in low-texture environments.

8. Conclusions

In this paper, we proposed PLI-SLAM, a tightly coupled point-and-line visual-inertial SLAM system based on ORB-SLAM3, using a faster, improved EDlines instead of the widely used LSD. Our approach achieves good robustness and accuracy in low-texture environments by adding line features and an adapting weight factor for the reprojection error. In addition, the fusion of IMU and camera enables PLI-SLAM to operate robustly in scenarios where pure vision may fail. Compared to several different types of SLAM systems, our method shows clear advantages in experiments on the EuRoC dataset, especially on the V203 sequence, which confirms the robustness and accuracy of our system. Meanwhile, PLI-SLAM also has a speed advantage over methods using LSD. Experiments in real environments also show the superiority of the proposed PLI-SLAM in low-texture environments.
In the future, we will look for more ways to reduce the computation time of the line features, or combine other structural features, to further improve the speed and accuracy of the system.

Author Contributions

Z.T. conceived the idea; Z.T. and B.H. designed the software; Z.T., X.T. and Z.L. collected the test data of real environment; J.C. and Q.H. collected the related resources and supervised the experiment; Z.T. and J.C. proposed the comment for the paper and experiment. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Beijing Natural Science Foundation (No. 4232014) and in part by the Science and Technology Entry Program under grant KJFGS-QTZCHT-2022-008.

Data Availability Statement

The EuRoC MAV dataset is obtained from https://projects.asl.ethz.ch/datasets/doku.php?id=kmavvisualinertialdatasets (accessed on 10 August 2023). Our real-world test data is available upon request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ma, X.; Ning, S. Real-Time Visual-Inertial SLAM with Point-Line Feature using Improved EDLines Algorithm. In Proceedings of the 2020 IEEE 5th Information Technology and Mechatronics Engineering Conference (ITOEC), Chongqing, China, 12–14 June 2020; pp. 1323–1327. [Google Scholar]
  2. Engel, J.; Koltun, V.; Cremers, D. Direct Sparse Odometry. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 611–625. [Google Scholar] [CrossRef] [PubMed]
  3. Forster, C.; Zhang, Z.; Gassner, M.; Werlberger, M.; Scaramuzza, D. SVO: Semidirect Visual Odometry for Monocular and Multicamera Systems. IEEE Trans. Robot. 2017, 33, 249–265. [Google Scholar] [CrossRef]
  4. Cadena, C.; Carlone, L.; Carrillo, H.; Latif, Y.; Scaramuzza, D.; Neira, J.; Reid, I.; Leonard, J.J. Past, Present, and Future of Simultaneous Localization and Mapping: Toward the Robust-Perception Age. IEEE Trans. Robot. 2016, 32, 1309–1332. [Google Scholar] [CrossRef]
  5. Klein, G.; Murray, D. Parallel Tracking and Mapping for Small AR Workspaces. In Proceedings of the 2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality, Nara, Japan, 13–16 November 2007; pp. 225–234. [Google Scholar]
  6. Mur-Artal, R.; Tardós, J.D. ORB-SLAM2: An Open-Source SLAM System for Monocular, Stereo, and RGB-D Cameras. IEEE Trans. Robot. 2017, 33, 1255–1262. [Google Scholar] [CrossRef]
  7. Fu, Q.; Yu, H.; Wang, X.; Yang, Z.; He, Y.; Zhang, H.; Mian, A. Fast ORB-SLAM without Keypoint Descriptors. IEEE Trans. Image Process. 2022, 31, 1433–1446. [Google Scholar] [CrossRef] [PubMed]
  8. Zhang, Y.; Hsiao, M.; Zhao, Y.; Dong, J.; Engel, J.J. Distributed Client-Server Optimization for SLAM with Limited On-Device Resources. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; pp. 5336–5342. [Google Scholar]
  9. Li, Y.; Brasch, N.; Wang, Y.; Navab, N.; Tombari, F. Structure-SLAM: Low-Drift Monocular SLAM in Indoor Environments. IEEE Robot. Autom. Lett. 2020, 5, 6583–6590. [Google Scholar] [CrossRef]
  10. He, Y.; Zhao, J.; Guo, Y.; He, W.; Yuan, K. PL-VIO: Tightly-Coupled Monocular Visual-Inertial Odometry Using Point and Line Features. Sensors 2018, 18, 1159. [Google Scholar] [CrossRef] [PubMed]
  11. Gomez-Ojeda, R.; Moreno, F.-A.; Zuniga-Noel, D.; Scaramuzza, D.; Gonzalez-Jimenez, J. PL-SLAM: A Stereo SLAM System Through the Combination of Points and Line Segments. IEEE Trans. Robot. 2019, 35, 734–746. [Google Scholar] [CrossRef]
  12. Campos, C.; Elvira, R.; Rodriguez, J.J.G.; Montiel, J.M.M.; Tardos, J.D. ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual–Inertial, and Multimap SLAM. IEEE Trans. Robot. 2021, 37, 1874–1890. [Google Scholar] [CrossRef]
  13. Qin, T.; Li, P.; Shen, S. VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator. IEEE Trans. Robot. 2018, 34, 1004–1020. [Google Scholar] [CrossRef]
  14. Akinlar, C.; Topal, C. EDLines: Real-time line segment detection by Edge Drawing (ED). In Proceedings of the 2011 18th IEEE International Conference on Image Processing, Brussels, Belgium, 11–14 September 2011; pp. 2837–2840. [Google Scholar]
  15. Burri, M.; Nikolic, J.; Gohl, P.; Schneider, T.; Rehder, J.; Omari, S.; Achtelik, M.W.; Siegwart, R. The EuRoC micro aerial vehicle datasets. Int. J. Robot. Res. 2016, 35, 1157–1163. [Google Scholar] [CrossRef]
  16. Liu, Y.; Yang, D.; Li, J.; Gu, Y.; Pi, J.; Zhang, X. Stereo Visual-Inertial SLAM With Points and Lines. IEEE Access 2018, 6, 69381–69392. [Google Scholar] [CrossRef]
  17. Falquez, J.M.; Kasper, M.; Sibley, G. Inertial aided dense & semi-dense methods for robust direct visual odometry. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Republic of Korea, 9–14 October 2016; pp. 3601–3607. [Google Scholar]
  18. Weiss, S. Vision Based Navigation for Micro Helicopters. Doctoral Dissertation, ETH Zürich, Zürich, Switzerland, 2012. [Google Scholar]
  19. Gioi, R.G.v.; Jakubowicz, J.; Morel, J.M.; Randall, G. LSD: A Fast Line Segment Detector with a False Detection Control. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 722–732. [Google Scholar] [CrossRef] [PubMed]
  20. Pumarola, A.; Vakhitov, A.; Agudo, A.; Sanfeliu, A.; Moreno-Noguer, F. PL-SLAM: Real-time monocular visual SLAM with points and lines. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 4503–4508. [Google Scholar]
  21. Zhang, L.; Koch, R. An efficient and robust line segment matching approach based on LBD descriptor and pairwise geometric consistency. J. Vis. Commun. Image Represent. 2013, 24, 794–805. [Google Scholar] [CrossRef]
  22. Zuo, X.; Xie, X.; Liu, Y.; Huang, G. Robust visual SLAM with point and line features. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 1775–1782. [Google Scholar]
  23. Fu, Q.; Wang, J.; Yu, H.; Ali, I.; Guo, F.; He, Y.; Zhang, H.J. PL-VINS: Real-Time Monocular Visual-Inertial SLAM with Point and Line Features. arXiv 2020, arXiv:2009.07462. [Google Scholar] [CrossRef]
  24. Rong, H.; Gao, Y.; Guan, L.; Ramirez-Serrano, A.; Xu, X.; Zhu, Y. Point-Line Visual Stereo SLAM Using EDlines and PL-BoW. Remote Sens. 2021, 13, 3591. [Google Scholar] [CrossRef]
  25. Zhao, Z.; Song, T.; Xing, B.; Lei, Y.; Wang, Z. PLI-VINS: Visual-Inertial SLAM Based on Point-Line Feature Fusion in Indoor Environment. Sensors 2022, 22, 5457. [Google Scholar] [CrossRef] [PubMed]
  26. Davison, A.J.; Reid, I.D.; Molton, N.D.; Stasse, O. MonoSLAM: Real-Time Single Camera SLAM. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 1052–1067. [Google Scholar] [CrossRef] [PubMed]
  27. Rublee, E.; Rabaud, V.; Konolige, K.; Bradski, G. ORB: An efficient alternative to SIFT or SURF. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 2564–2571. [Google Scholar]
  28. Alamanos, I.; Tzafestas, C. ORB-LINE-SLAM: An Open-Source Stereo Visual SLAM System with Point and Line Features. 2022. Available online: https://www.techrxiv.org/articles/preprint/ORB-LINE-SLAM_An_Open-Source_Stereo_Visual_SLAM_System_with_Point_and_Line_Features/21691949/1 (accessed on 5 August 2023). [CrossRef]
  29. Galvez-López, D.; Tardos, J.D. Bags of Binary Words for Fast Place Recognition in Image Sequences. IEEE Trans. Robot. 2012, 28, 1188–1197. [Google Scholar] [CrossRef]
  30. Bartoli, A.; Sturm, P. The 3D line motion matrix and alignment of line reconstructions. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. CVPR 2001, Kauai, HI, USA, 8–14 December 2001; p. I. [Google Scholar]
  31. Sturm, J.; Engelhard, N.; Endres, F.; Burgard, W.; Cremers, D. A benchmark for the evaluation of RGB-D SLAM systems. In Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura-Algarve, Portugal, 7–12 October 2012; pp. 573–580. [Google Scholar]
  32. Servières, M.; Renaudin, V.; Dupuis, A.; Antigny, N. Visual and Visual-Inertial SLAM: State of the Art, Classification, and Experimental Benchmarking. J. Sens. 2021, 2021, 2054828. [Google Scholar] [CrossRef]
Figure 1. Overview of the general structure of the system.
Figure 2. Line features extracted by EDlines on the EuRoC dataset. (a) The original images; (b) line features detected by the original EDlines; (c) line features detected by EDlines with curvature detection.
Figure 3. Demonstration of the visual–inertial sensors, point observations, and line observations.
Figure 4. Factor graph of the SLAM system.
Figure 5. The per-frame translation error for MH01 and MH05. (a) Per-frame translation error for MH01; (b) per-frame translation error for MH05.
Figure 6. Trajectories estimated by PLI-SLAM (left), ORB-SLAM3 (middle), and ORB-LINE-SLAM (right), shown as color-coded error heat maps on sequence V203. Red corresponds to higher error levels and blue to lower ones. The gray dotted lines are the ground-truth trajectories.
Figure 7. Relative translational error. (a) Comparison of the relative translation errors of PLI-SLAM and ORB-SLAM3; (b) relative translation error of ORB-LINE-SLAM.
Figure 8. The test environment and experimental platform. The first half of the test environment (Corridor1) is an open corridor with more texture; the second half (Corridor2) is a narrow corridor with less texture.
Figure 9. Trajectory comparison between ORB-SLAM3 and PLI-SLAM.
Figure 10. Extraction of point and line features during operation. (a) Comparison of feature extraction by the two algorithms in Corridor1, where a sufficient number of point features can be detected; (b) comparison of feature extraction by the two algorithms in Corridor2, where very few point features but enough line features can be detected.
Table 1. System's performance for various values of the threshold and weighting factor of the adapting factor (Unit: m).

Threshold \ Weighting Factor | 1.25 | 1.5 | 1.75 | 2 | 2.25 | 2.5
20 | 0.0256 | 0.0231 | 0.0225 | 0.0219 | 0.0235 | 0.0278
30 | 0.0236 | 0.0232 | 0.0241 | 0.0246 | 0.0242 | 0.0266
40 | 0.0299 | 0.0255 | 0.0228 | 0.0232 | 0.0281 | 0.0311
50 | 0.0275 | 0.0285 | 0.0286 | 0.0241 | 0.0247 | 0.0273
60 | 0.0286 | 0.0262 | 0.0242 | 0.0212 | 0.0223 | 0.0257
70 | 0.0262 | 0.0249 | 0.0231 | 0.0223 | 0.0267 | 0.0278
80 | 0.0289 | 0.0265 | 0.02298 | 0.0255 | 0.0264 | 0.0289
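The best operating point can be read directly off this grid. The Python snippet below is an illustrative sketch that simply locates the cell with the smallest error; all values are copied from Table 1, and the variable names are ours, not part of the released PLI-SLAM code.

```python
import numpy as np

# Error values (m) from Table 1: rows = threshold 20..80, cols = weighting factor 1.25..2.5
thresholds = [20, 30, 40, 50, 60, 70, 80]
weights = [1.25, 1.5, 1.75, 2.0, 2.25, 2.5]
errors = np.array([
    [0.0256, 0.0231, 0.0225, 0.0219, 0.0235, 0.0278],
    [0.0236, 0.0232, 0.0241, 0.0246, 0.0242, 0.0266],
    [0.0299, 0.0255, 0.0228, 0.0232, 0.0281, 0.0311],
    [0.0275, 0.0285, 0.0286, 0.0241, 0.0247, 0.0273],
    [0.0286, 0.0262, 0.0242, 0.0212, 0.0223, 0.0257],
    [0.0262, 0.0249, 0.0231, 0.0223, 0.0267, 0.0278],
    [0.0289, 0.0265, 0.02298, 0.0255, 0.0264, 0.0289],
])

# Locate the grid cell with the smallest error
i, j = np.unravel_index(np.argmin(errors), errors.shape)
print(f"best threshold = {thresholds[i]}, best weighting factor = {weights[j]}, "
      f"error = {errors[i, j]:.4f} m")  # threshold 60, factor 2.0, 0.0212 m
```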
Table 2. Performance comparison on the EuRoC dataset (Unit: m).

Sequence | ORB-LINE-SLAM (without IMU) | PEI-VINS (without IMU) | ORB-SLAM3 (with IMU) | PLI-SLAM with original EDlines (with IMU) | PLI-SLAM with bag of words based on points only (with IMU) | PLI-SLAM with full bag of words, ours (with IMU)
MH01_easy | 0.0375 | 0.0371 | 0.0518 | 0.0508 | 0.0336 | 0.0203
MH02_easy | 0.0442 | 0.0476 | 0.0232 | 0.0457 | 0.0306 | 0.0208
MH03_medium | 0.0419 | 0.0436 | 0.0320 | 0.0312 | 0.0298 | 0.0285
MH04_difficult | 0.1083 | 0.0591 | 0.0505 | 0.0551 | 0.0508 | 0.0445
MH05_difficult | 0.0552 | 0.0476 | 0.0697 | 0.0563 | 0.0512 | 0.0485
V101_easy | 0.0857 | 0.0825 | 0.0369 | 0.0423 | 0.0353 | 0.0351
V102_medium | 0.0638 | 0.0644 | 0.0182 | 0.0232 | 0.0158 | 0.0132
V103_difficult | 0.0647 | 0.0851 | 0.0233 | 0.0231 | 0.0230 | 0.0226
V201_easy | 0.0585 | 0.0635 | 0.0261 | 0.0501 | 0.0377 | 0.0167
V202_medium | 0.0555 | 0.0542 | 0.0142 | 0.0233 | 0.0114 | 0.0110
V203_difficult | 0.1497 | 0.4057 | 0.0253 | 0.0266 | 0.0208 | 0.0181
Mean | 0.0695 | 0.0900 | 0.0337 | 0.0442 | 0.0309 | 0.0253
Table 3. Average processing time per frame on EuRoC (Unit: s).

Sequence | ORB-SLAM3 (without lines) | ORB-LINE-SLAM (with LSD) | PEI-VINS | PLI-SLAM with original EDlines | PLI-SLAM with bag of words based on points only (improved EDlines) | PLI-SLAM with full bag of words, ours (improved EDlines)
MH01_easy | 0.05988 | 0.10521 | 0.07410 | 0.05904 | 0.05912 | 0.06614
MH02_easy | 0.05859 | 0.09944 | 0.07798 | 0.06736 | 0.05921 | 0.06413
MH03_medium | 0.05552 | 0.09482 | 0.07635 | 0.06402 | 0.05324 | 0.06119
MH04_difficult | 0.04467 | 0.07752 | 0.06956 | 0.06886 | 0.04643 | 0.05641
MH05_difficult | 0.04682 | 0.08109 | 0.06660 | 0.05732 | 0.04911 | 0.05608
V101_easy | 0.04335 | 0.07193 | 0.06133 | 0.04645 | 0.04536 | 0.05126
V102_medium | 0.04477 | 0.07122 | 0.06585 | 0.05001 | 0.04670 | 0.04863
V103_difficult | 0.04119 | 0.06239 | 0.05910 | 0.05198 | 0.04574 | 0.05068
V201_easy | 0.04954 | 0.07667 | 0.06172 | 0.05972 | 0.05443 | 0.06039
V202_medium | 0.05026 | 0.07331 | 0.05856 | 0.05822 | 0.04991 | 0.05485
V203_difficult | 0.05454 | 0.05834 | - | 0.05644 | 0.05415 | 0.05679
Avg | 0.04986 | 0.07926 | 0.067115 | 0.05812 | 0.05112 | 0.05695
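The Avg row makes per-frame cost comparisons straightforward. The following illustrative Python snippet computes the relative change in average processing time of PLI-SLAM with the full bag of words against each of the other configurations; it assumes the times are in seconds and uses the Avg values copied from Table 3.

```python
# Average per-frame processing time from the Avg row of Table 3 (assumed to be seconds)
avg_time = {
    "ORB-SLAM3": 0.04986,
    "ORB-LINE-SLAM": 0.07926,
    "PEI-VINS": 0.067115,
    "PLI-SLAM (original EDlines)": 0.05812,
    "PLI-SLAM (points-only BoW)": 0.05112,
    "PLI-SLAM (full BoW, ours)": 0.05695,
}

ours = avg_time["PLI-SLAM (full BoW, ours)"]
for name, t in avg_time.items():
    reduction = (t - ours) / t * 100.0  # positive means ours is faster
    print(f"{name:30s} {t:.5f} s  change vs. ours: {reduction:+.1f}%")
# Against ORB-LINE-SLAM (LSD), the per-frame time is about 28% lower.
```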
Table 4. Comparison of the texture features between the test environment and two typical sequences of the EuRoC dataset.

Sequence / Environment | Ease of extracting point features | Ease of extracting line features
MH01_easy | easy | easy
V203_difficult | middle | middle
Test environment, Corridor1 | middle | easy
Test environment, Corridor2 | difficult | easy
Table 5. The relative error over the total length of the trajectory between the results and the ground truth.

System | Length of the trajectory (m) | e_scale
PLI-SLAM | 34.360 | 0.01828
ORB-SLAM3 | 21.633 | 0.38194
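The tabulated e_scale values are consistent with a relative length error, i.e., the difference between the estimated and ground-truth trajectory lengths divided by the ground-truth length. The Python snippet below is a minimal sketch under that assumption; the ground-truth length of 35.0 m used here is an illustrative value chosen to be consistent with the table, not a number reported in the paper.

```python
# Estimated trajectory lengths from Table 5 (m)
estimated_length = {"PLI-SLAM": 34.360, "ORB-SLAM3": 21.633}

# Assumed ground-truth trajectory length (illustrative value, not taken from the paper)
ground_truth_length = 35.0

for name, length in estimated_length.items():
    # Relative error of the estimated length with respect to the ground-truth length
    e_scale = abs(length - ground_truth_length) / ground_truth_length
    print(f"{name:10s} length = {length:.3f} m, e_scale = {e_scale:.5f}")
# PLI-SLAM ~0.0183 and ORB-SLAM3 ~0.3819, in line with the tabulated 0.01828 and 0.38194.
```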
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
