A Novel Real-Time Reference Key Frame Scan Matching Method

Unmanned aerial vehicles represent an effective technology for indoor search and rescue operations. Typically, the environments of most indoor missions are unknown, unstructured, and/or dynamic. Navigation of UAVs in such environments is addressed by simultaneous localization and mapping (SLAM) using either local or global scan matching approaches. Both approaches suffer from accumulated errors and high processing time due to the iterative nature of the scan matching method. Moreover, point-to-point scan matching is prone to outlier associations. This paper proposes a low-cost novel method for 2D real-time scan matching based on a reference key frame (RKF). RKF is a hybrid scan matching technique comprising feature-to-feature and point-to-point approaches. The algorithm aims at mitigating error accumulation using the key frame technique, which is inspired by the video streaming broadcast process. The algorithm relies on the iterative closest point (ICP) algorithm during the lack of linear features, which is typical of unstructured environments, and switches back to the RKF method once linear features are detected. To validate and evaluate the algorithm, its mapping performance and time consumption are compared with various algorithms in static and dynamic environments. The algorithm exhibits promising navigation and mapping results and a very short computational time, which indicates its potential for real-time systems.


Introduction
In recent decades, unmanned aerial vehicles (UAVs) have become an active area of research and development because of their ability to extend human capability, allowing dangerous tasks to be executed safely while saving time and, more importantly, saving lives. Furthermore, their increasing relevance stems from the potential diversity of their use both outdoors and indoors. Beyond mere observation and surveillance tasks, UAVs are increasingly being used as part of search and rescue operations, providing real-time mapping of the environment and locating victims and hard-hit areas after a natural disaster. In conjunction with UAVs, recent advances in hardware and software have made it possible to conduct accurate mapping without using costly, high-end data acquisition systems. Low-cost laser scanners and navigation systems provide accurate mapping if properly integrated at the hardware and software levels. As such, UAVs have emerged as an aerial mapping platform.

From the same perspective, the proposed algorithm extracts lines from the laser range finder point cloud because line matching is more robust than point matching, and furthermore, lines are robust to the disruption of moving objects in dynamic environments [23]. Therefore, two non-parallel lines are chosen from the extracted lines to be the mapping reference lines, as described in Section 3.2. These two reference lines play the role of the initial reference key frame of the map. Thereafter, the two reference lines are successively matched in the in-between scan frames in order to compute the transformation between the current scan frame and the reference key frame. Due to the vehicle's movement, the existing reference lines of the current key frame might not be detectable all the time. Therefore, another reference key frame is created with two new reference lines, which marks the beginning of a new transformation.
The transformation parameters between the old and new reference lines are computed at every transition. Consequently, the last transformation parameters are computed relative to the first reference key frame. As a result, the proposed algorithm does not depend on the transformation history, and further, the effect of a rotation error at any epoch between two successive frames disappears in the subsequent transformation. Therefore, the proposed algorithm mitigates accumulated errors by using the reference lines method. Finally, the ICP algorithm is applied as a fine-tuning process for the global scan matching. During a line outage period, caused by navigating a corridor or an unstructured environment, the proposed algorithm alternates to the ICP algorithm alone. The ICP algorithm is then performed for local and global scan matching as well as for map consistency. Implementing the ICP algorithm during the line outage period also decreases the sensitivity of the proposed algorithm to the threshold values: accepting reference lines depends on thresholds, discussed later, and while tuning the thresholds is an important process, deploying the ICP algorithm reduces the threshold tuning effort. Figure 2 depicts the alternating structure between the RKF and ICP methods. As long as the proposed algorithm can choose two reference lines from the detected lines of the current scan frame, the RKF method is implemented. Otherwise, the ICP method is performed. If reference lines are detected once more in both the previous and current scan frames, the proposed algorithm alternates back to the RKF method.
Figure 3 shows the overall structure of the proposed algorithm. Lines are extracted from every two successive scan frames, and the availability of two non-parallel lines to serve as reference lines is checked. If reference lines are detected, the line matching process is implemented to determine the matched reference lines in the current scan frame. Thereafter, the transformation parameters are computed with respect to the last reference key frame. After transformation to the mapping frame, the ICP algorithm is applied against the previous mapping scan frame. Finally, the current map and position are computed. On the other hand, if reference lines are not detected, the ICP algorithm is executed alone, as in a line outage period.
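The alternation between the RKF and ICP branches reduces to a simple per-scan decision. Below is a minimal Python sketch of that rule; the function names and the 30° default are illustrative stand-ins (the paper's min_angle threshold is environment- and sensor-dependent), and line extraction is assumed to have already produced the line orientations.

```python
def can_pick_references(line_angles_deg, min_angle_deg=30.0):
    """True if two extracted lines form an inscribed angle >= min_angle_deg.

    line_angles_deg: orientations of the extracted lines, in degrees.
    min_angle_deg: illustrative stand-in for the paper's min_angle threshold.
    """
    for i in range(len(line_angles_deg)):
        for j in range(i + 1, len(line_angles_deg)):
            d = abs(line_angles_deg[i] - line_angles_deg[j]) % 180.0
            inscribed = min(d, 180.0 - d)  # undirected lines: fold to [0, 90]
            if inscribed >= min_angle_deg:
                return True
    return False


def choose_method(line_angles_deg):
    """Per-scan branch of Figure 2: RKF when reference lines exist, else ICP."""
    return "RKF" if can_pick_references(line_angles_deg) else "ICP"
```

Note that two nearly parallel lines (for example, the two sides of a corridor) fail the inscribed-angle gate, which is exactly the line outage case handled by ICP.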

Line Extraction
The line-tracking (LT) algorithm [24] is used for clustering the point cloud, provided by the laser scanner rangefinder, into groups per threshold (T_max), as shown in Figure 4. This algorithm is characterized by low time complexity, an important factor in real-time systems.


The threshold (T_max) is chosen depending on the precision of the laser scanner range finder and the characteristics of the environment, such as brick, tiled, and painted walls. Since all the experiments are performed using a low-cost laser scanner range finder, whose specifications are described in Section 4, a sensor calibration process is carried out to estimate the standard deviation (σ) of the sensor at different detection ranges. Table 1 lists the standard deviations of the laser scanner range finder for the corresponding detection ranges. As a result of many experiments, and for more confidence, it is preferable to use (2σ) in determining the threshold (T_max), with the environment effect added.

Figure 5 shows the detection behavior of the laser scanner range finder in static mode for the same wall, about 2 m in length, from a detection distance of about 3 m. This behavior is due to the distance resolution, which is approximately 1% of the detection distance. Therefore, the same wall is detected as successive lines with a separation distance equal to 1% of the detection distance; 3 cm in this example.

Figure 7 demonstrates an adjusted line-tracking algorithm that reduces the time complexity down to 4 ms, making it appropriate for a real-time system. The algorithm does not build a line for every newly added point; only when the orthogonal distance exceeds the threshold (T_max) is a new line built from all previous points. The algorithm then checks the threshold condition (T_max) again: if the condition is valid, the next point is successively added to the previous line. If the condition is invalid, the line is terminated and a new line is started, and this is repeated until reaching the end of the dataset.
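The clustering loop of the line-tracking step can be sketched in a few lines. This is a simplification, assuming the current segment is summarized by the chord through its first and last points (rather than a running fit), with t_max playing the role of the orthogonal-distance threshold:

```python
import math


def orth_dist(p, a, b):
    """Orthogonal distance from point p to the infinite line through a and b."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    length = math.hypot(dx, dy)
    if length == 0.0:
        return math.hypot(p[0] - a[0], p[1] - a[1])
    return abs(dy * (p[0] - a[0]) - dx * (p[1] - a[1])) / length


def track_lines(points, t_max):
    """Cluster scan points (in angular order) into line groups per t_max."""
    segments, current = [], [points[0]]
    for p in points[1:]:
        if len(current) < 2 or orth_dist(p, current[0], current[-1]) <= t_max:
            current.append(p)  # point stays within the threshold: extend line
        else:
            segments.append(current)  # terminate line, start a new one
            current = [p]
    segments.append(current)
    return segments
```
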
Line fitting is implemented using principal component analysis (PCA) [25]. This statistical procedure interprets the covariance structure of a dataset in order to identify the principal directions in which the data varies. The first principal component has the largest variance, and successive principal components possess variances in descending order. Accordingly, the eigenvalues and eigenvectors of the covariance matrix of the dataset are computed. As the dataset is 2D, it has two principal components: the eigenvector (v1) of the largest eigenvalue represents the principal component, i.e., the line fitting the dataset, and the second eigenvector (v2) represents the robustness of the line (line uncertainty), as shown in Figure 8. The line availability in indoor environments has been evaluated using various datasets offered by different research groups. The results show that the mean line availability ranges from 4.10 to 8.86 lines per scan.
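For 2-D points the covariance matrix is 2×2, so the PCA fit has a closed form. The sketch below returns the centroid, the direction of v1 (the fitted line), and the smaller eigenvalue, i.e., the variance along v2 used as the line-uncertainty measure:

```python
import math


def pca_line_fit(points):
    """PCA line fit for 2-D points: centroid, v1 direction, variance along v2."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    sxx = sum((p[0] - cx) ** 2 for p in points) / n
    syy = sum((p[1] - cy) ** 2 for p in points) / n
    sxy = sum((p[0] - cx) * (p[1] - cy) for p in points) / n
    theta = 0.5 * math.atan2(2.0 * sxy, sxx - syy)  # orientation of v1
    mean_eig = 0.5 * (sxx + syy)
    half_gap = math.hypot(0.5 * (sxx - syy), sxy)
    lam_small = mean_eig - half_gap                 # variance along v2
    return (cx, cy), (math.cos(theta), math.sin(theta)), lam_small
```

A small lam_small indicates a tight, robust line; a large one indicates a scattered group whose fitted line should be treated with caution.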
Figure 9 presents the tested datasets, which comprise MIT Killian Court, the MIT CSAIL Building, Intel Research Lab Seattle, the ACES Building at the University of Texas, and building 079 at the University of Freiburg, respectively [26]. Table 2 lists the availability of lines and the extraction time for the different datasets. All the displayed results are for extracted lines composed of more than seven points.

Reference Key Frame


Initially, two reference lines are chosen from the extracted lines of the first point cloud, which represents the first key frame, according to the following criteria: (a) they are the longest lines; (b) the number of points building each line is not less than the threshold (min_Pcount); (c) the lines are relatively robust, and; (d) the inscribed angle between the two reference lines is not less than the threshold (min_angle).
The threshold (min_Pcount) is selected depending on the value of the adjusted coefficient of determination (R̄²). The explanatory variables (x, y) are fixed because the linear model dominates in 2D environments. The coefficient of determination (R²), which is equal to the square of the correlation coefficient (r), expresses how much of the total variation in (y) is explained by the linear relationship between (x) and (y). However, the coefficient of determination does not account for the number of data points that build the line. Therefore, the adjusted coefficient of determination interprets the measure of fit considering the number of data points. The accepted percentage of the adjusted coefficient of determination is 80%:

R̄² = 1 − (1 − R²)(n − 1)/(n − k − 1), with r = σ_xy/(σ_x σ_y),

where σ_xy is the covariance between (x) and (y), (σ_x) and (σ_y) are the standard deviations of (x) and (y), respectively, (n) is the number of data points, and (k) is the number of explanatory variables. The inscribed angle (min_angle) is chosen as the angle that minimizes the distance of the center uncertainty (d_CenUncertainty) of the intersection between the two reference lines, where (v2) is the second eigenvector that represents the line uncertainty (Equation (3)). Figure 12 shows the inscribed angle between the two reference lines and its impact on the distance of the center uncertainty. The red solid line represents the first reference line, while the red dashed lines present its uncertainty. The green solid line represents the second reference line, and the green dashed lines its uncertainty. The blue line represents the uncertainty distance (second eigenvector) of the second reference line. The distance of the center uncertainty positively affects the selection of the two reference lines.
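The 80% acceptance test follows directly from these definitions. A minimal sketch (function names are illustrative; the degenerate vertical-segment case, σ_x = 0, is not handled):

```python
import math


def adjusted_r2(points, k=1):
    """Adjusted coefficient of determination for a line fit (k regressors)."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sx = math.sqrt(sum((p[0] - mx) ** 2 for p in points) / n)
    sy = math.sqrt(sum((p[1] - my) ** 2 for p in points) / n)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    r2 = (sxy / (sx * sy)) ** 2          # square of the correlation coefficient
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)


def accept_line(points, min_adj_r2=0.80):
    """Acceptance test for a candidate reference line (80% per the text)."""
    return adjusted_r2(points) >= min_adj_r2
```

The (n − 1)/(n − k − 1) factor is what penalizes short lines: a mediocre fit over few points falls below the 80% bar faster than the plain R² would suggest.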
Since the algorithm depends on two non-parallel reference lines, assigning the minimum accepted inscribed angle depends on the environment structure and the precision of the range finder sensor.

Figure 13 illustrates the two reference lines of the first key frame. The main reference line is represented in red and the second reference line in green, while the remaining lines are shown in yellow. The position of the laser range finder is represented by the red asterisk.
Table 3 lists the computed coefficient of determination, the adjusted coefficient of determination, the number of data points building each extracted line, and the length of the extracted lines in the first key frame shown in Figure 13. Line number (1) is chosen as the first reference line because it is the longest and most robust line and is built from the largest group of points. Line number (3) is selected as the second reference line: although it is not the second most robust line, it is the second longest line, is built from the second largest group of points, and exceeds the minimum accepted percentage of the adjusted coefficient of determination, which is 80%.

The first two reference lines are the kernel of the mapping frame. Afterward, all the in-between frames, until another key frame is received, share in constructing a coherent map through their direct relation with the reference key frame, as shown in Figure 14.
The vehicle's absolute position with respect to the reference lines in the mapping frame is determined by calculating the orthogonal distances from the vehicle to the reference lines at every epoch. This helps compensate for potential accumulated errors, especially given that other techniques for determining relative position [27] suffer from error accumulation over time. Orthogonal distances from the vehicle position to all the lines are calculated as well, and additionally, the intersection points between the orthogonal lines and all the lines are computed.
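The orthogonal distance and its foot point reduce to a scalar projection. A sketch, assuming each reference line is stored as a point on the line plus a unit direction (e.g., the PCA centroid and v1):

```python
import math


def foot_and_distance(pos, line_point, line_dir):
    """Project the vehicle position onto a reference line.

    Returns (foot, distance): the intersection of the orthogonal from `pos`
    with the line, and the orthogonal distance itself. `line_dir` is assumed
    to be a unit vector.
    """
    t = (pos[0] - line_point[0]) * line_dir[0] \
        + (pos[1] - line_point[1]) * line_dir[1]  # scalar projection
    foot = (line_point[0] + t * line_dir[0], line_point[1] + t * line_dir[1])
    dist = math.hypot(pos[0] - foot[0], pos[1] - foot[1])
    return foot, dist
```

Applying this against the two non-parallel reference lines yields two independent distances, which is what pins down the vehicle's absolute position in the mapping frame.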

Scan Matching
Representing each scan using a group of lines is more reliable than using a group of points because correct data association for each point between two successive scans is quite a difficult process. In contrast, matching lines is both reliable and robust. A line is considered to be matched after accepting mutual compatibility with the previous scan line. Matching criteria, described hereafter, are performed according to several computations, thereby ensuring a correct match. Thence, angles between the previous scan lines and all current lines are computed using a vector dot product.
Candidate lines whose angle is less than a certain threshold (max_ScanRotationAngle) are selected. The threshold (max_ScanRotationAngle) is chosen according to the maximum rotation angle of the vehicle, based on its dynamics. Subsequently, the candidate line satisfying the following two conditions is selected: (a) it possesses the smallest orthogonal distance compared with the previous line, and; (b) it possesses the smallest Euclidean distance between the current and previous intersection points. Finally, if the smallest orthogonal distance is bigger than the threshold (max_OrthogonalShift), there is no match; otherwise, a match is declared. The threshold (max_OrthogonalShift) is selected depending on the maximum speed of the vehicle, based on its dynamics as well.
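A simplified sketch of this matching rule, keeping only the rotation-angle gate and the orthogonal-shift criterion (the intersection-point criterion is omitted for brevity). The threshold values are hypothetical, and each line is reduced to a unit direction plus its orthogonal distance to the sensor:

```python
import math


def line_angle_deg(u, v):
    """Angle between two undirected unit directions, via the dot product."""
    dot = abs(u[0] * v[0] + u[1] * v[1])  # abs(): lines have no orientation
    return math.degrees(math.acos(min(1.0, dot)))


def match_line(prev_line, candidates, max_rot_deg=20.0, max_shift=0.3):
    """Pick the candidate line matching prev_line, or None.

    A line is (unit_direction, orthogonal_distance_to_sensor). Candidates
    within max_ScanRotationAngle are kept; the smallest orthogonal-distance
    change wins, provided it stays below max_OrthogonalShift.
    """
    best, best_shift = None, float("inf")
    for cand in candidates:
        if line_angle_deg(prev_line[0], cand[0]) > max_rot_deg:
            continue  # rotated more than the vehicle dynamics allow
        shift = abs(cand[1] - prev_line[1])
        if shift < best_shift:
            best, best_shift = cand, shift
    return best if best_shift <= max_shift else None
```
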
After the lines are matched, the new matched reference lines become the new reference lines and a new position for the vehicle is determined. Thereafter, the angles between all matched lines in each scan frame are computed. Using a specified threshold (min_angle), the lines satisfying this threshold are selected, and the intersection points of the selected lines are then calculated, regardless of whether these line segments physically intersect in the scan. These intersection points become the corners of both the previous and current environments. To account for the inherent uncertainties of the detected corners, the covariance of the corners is estimated using the extracted line variances, as illustrated in Figure 15. The red ellipses present the confidence ellipse regions for each detected corner, while the black dashed lines present the precision bounds built around the matched lines using the second eigenvalue. The intersection points between the position of the laser range finder and each detected line are represented by the black asterisks.

Scan Matching
Representing each scan using a group of lines is more reliable than using a group of points, because correct data association for each point between two successive scans is quite difficult, whereas matching lines is both reliable and robust. A line is considered matched after passing a mutual compatibility test with the previous scan line. The matching criteria, described hereafter, involve several computations that together ensure a correct match. First, the angles between the previous scan lines and all current lines are computed using the vector dot product. Candidate lines whose angle is less than a threshold (maxScanRotationAngle) are selected; this threshold is chosen according to the maximum rotation angle of the vehicle, based on its dynamics. Subsequently, the candidate line satisfying the following two conditions is selected: (a) it possesses the smallest orthogonal distance to the previous line, and (b) it possesses the smallest Euclidean distance between the current intersection points and the previous ones. Finally, if the smallest orthogonal distance is larger than the threshold (maxOrthogonalShift), there is no match; otherwise, a match is declared. The maxOrthogonalShift threshold is likewise selected depending on the maximum speed of the vehicle, based on its dynamics.
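The candidate-selection criteria above can be sketched in code. The following Python fragment is purely illustrative (the paper's implementation was in MATLAB): the function names and the numeric threshold values are assumptions, and condition (b), the intersection-point distance tie-breaker, is omitted for brevity.

```python
import numpy as np

# Illustrative thresholds (actual values derive from the vehicle's dynamics)
MAX_SCAN_ROTATION_ANGLE = np.deg2rad(15.0)  # from the max rotation angle
MAX_ORTHOGONAL_SHIFT = 0.20                 # metres, from the max speed

def angle_between(u, v):
    """Angle between two unit direction vectors via the dot product."""
    c = abs(np.dot(u, v))  # abs(): a line has no preferred direction
    return np.arccos(np.clip(c, -1.0, 1.0))

def orthogonal_distance(point, line_point, line_dir):
    """Perpendicular distance from a point to an (infinite) line."""
    d = np.asarray(point) - np.asarray(line_point)
    return abs(d[0] * line_dir[1] - d[1] * line_dir[0])  # 2D cross magnitude

def match_line(prev_line, current_lines):
    """Return the current line matching prev_line, or None if no match."""
    p_dir, p_pt = prev_line
    # (1) keep candidates whose rotation w.r.t. the previous line is small
    candidates = [(c_dir, c_pt) for c_dir, c_pt in current_lines
                  if angle_between(p_dir, c_dir) < MAX_SCAN_ROTATION_ANGLE]
    if not candidates:
        return None
    # (2) pick the candidate with the smallest orthogonal shift
    best = min(candidates, key=lambda l: orthogonal_distance(p_pt, l[1], l[0]))
    if orthogonal_distance(p_pt, best[1], best[0]) > MAX_ORTHOGONAL_SHIFT:
        return None  # shift too large: declare no match
    return best
```

A nearly parallel, slightly shifted line is accepted, while a line rotated by 90 degrees or shifted by half a metre is rejected.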
After the lines are matched, the newly matched reference lines become the new reference lines and a new position for the vehicle is determined. Thereafter, the angles between all matched lines in each scan frame are computed. Using a specified threshold (minAngle), lines satisfying this threshold are selected, and the intersection points of the selected lines are calculated regardless of whether the line segments physically intersect in the scan. These intersection points become the corners for both the previous and current environments. To account for the inherent uncertainties of the detected corners, the covariance of each corner is estimated from the extracted line variances, as illustrated in Figure 15. The red ellipses present the confidence regions for each detected corner, while the black dashed lines present the precision bands built around the matched lines using the second eigenvalue. The intersection points between the position of the laser range finder and each detected line are represented by the black asterisks. Using least squares, the detected corners are used to estimate the transformation parameters (θ, x_tr, y_tr) between successive scans. These parameters provide an adjusted initialization for the scan matching process. This method can be employed to match successive scans on its own, but it can also support other iterative methods to achieve a more effective and faster convergence.
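The corner construction and the least-squares estimation of (θ, x_tr, y_tr) might be sketched as follows. This assumes a standard closed-form least-squares rigid registration (the Kabsch solution), not necessarily the authors' exact formulation, and lines in slope-intercept form for the intersection step.

```python
import numpy as np

def line_intersection(m1, b1, m2, b2):
    """Intersection of y = m1*x + b1 and y = m2*x + b2.
    The segments need not physically overlap."""
    x = (b2 - b1) / (m1 - m2)  # undefined for parallel lines (m1 == m2)
    return np.array([x, m1 * x + b1])

def estimate_transform(prev_corners, curr_corners):
    """Least-squares rigid transform (theta, x_tr, y_tr) mapping the
    current corners onto the previous ones (closed-form Kabsch solution)."""
    P = np.asarray(prev_corners, dtype=float)
    C = np.asarray(curr_corners, dtype=float)
    Pc, Cc = P - P.mean(axis=0), C - C.mean(axis=0)
    H = Cc.T @ Pc                      # 2x2 cross-covariance of the corners
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T                     # optimal rotation
    if np.linalg.det(R) < 0:           # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = P.mean(axis=0) - R @ C.mean(axis=0)
    theta = np.arctan2(R[1, 0], R[0, 0])
    return theta, t[0], t[1]
```

With two or more non-degenerate corner correspondences, the rotation and translation are recovered exactly in the noise-free case.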
However, the number of detected corners sometimes decreases to only one, in which case corner registration fails to compute the transformation parameters. Figure 16 demonstrates the line registration algorithm using two non-parallel lines with one corner. The required rotation (θ) is the angle between the matched lines, θ = arctan((S_p − S_c)/(1 + S_p·S_c)), where S_p and S_c are the slopes of the previous and current lines, respectively, while the translation (x_tr, y_tr) is computed as the displacement that minimizes the range between the detected corners in the successive scans. Consequently, an adjusted corner registration is proposed that estimates the transformation parameters using corner registration, falling back to the line algorithm for the one-corner condition.
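A possible sketch of the one-corner case, assuming the rotation is the standard angle between two lines of slopes S_p and S_c and the translation simply aligns the single matched corner after rotation (function names are hypothetical):

```python
import math

def rotation_from_slopes(S_p, S_c):
    """Rotation between the previous and current matched lines, i.e. the
    angle between two lines with slopes S_p and S_c.
    Undefined for perpendicular lines (1 + S_p*S_c == 0)."""
    return math.atan((S_p - S_c) / (1.0 + S_p * S_c))

def translation_from_corner(prev_corner, curr_corner, theta):
    """Translation (x_tr, y_tr) that maps the rotated current corner
    onto the previous corner."""
    c, s = math.cos(theta), math.sin(theta)
    xr = c * curr_corner[0] - s * curr_corner[1]  # rotate the current corner
    yr = s * curr_corner[0] + c * curr_corner[1]
    return prev_corner[0] - xr, prev_corner[1] - yr
```

For lines pre-rotated by a known angle, the slope formula returns exactly that angle, and the corner alignment recovers the translation.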
The ICP is an algorithm employed for finding the transformation between two sets of point clouds by minimizing the difference between them. In the case of a 2D data set, the transformation possesses three degrees of freedom (i.e., a combination of translation and rotation). The primary challenge for the ICP algorithm is determining the correct data association between the two point clouds. While the ICP algorithm encounters problems in data association during sharp rotation and/or fast movement, the corner registration method helps the ICP improve data association even in such harsh situations [28]. Occasionally, no new reference lines are matched with the previous ones; in this case, swapping to new reference lines occurs, creating a new reference key frame.
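For reference, a minimal point-to-point 2D ICP can be sketched as below. This is a textbook nearest-neighbour variant for illustration only, not the implementation used in the paper; the nearest-neighbour association step is precisely the fragile part under sharp rotation or fast movement.

```python
import numpy as np

def best_rigid(P, Q):
    """Closed-form least-squares rigid transform mapping points Q onto P."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Qc.T @ Pc)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, P.mean(0) - R @ Q.mean(0)

def icp(prev_scan, curr_scan, iters=20):
    """Point-to-point ICP: associate each current point with its nearest
    previous point, solve for the rigid transform, and iterate."""
    R_tot, t_tot = np.eye(2), np.zeros(2)
    Q = curr_scan.copy()
    for _ in range(iters):
        # nearest-neighbour data association (the fragile step)
        idx = np.argmin(((Q[:, None] - prev_scan[None]) ** 2).sum(-1), axis=1)
        R, t = best_rigid(prev_scan[idx], Q)
        Q = Q @ R.T + t                       # apply the incremental transform
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot
```

For a small displacement, where the nearest-neighbour associations are already correct, the sketch converges in one iteration; large rotations break the association and stall the loop in a local minimum, which motivates the corner-based initialization.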

Successive Key Frame
After a while, the reference lines lessen due to the movement of the vehicle. In order to preserve the continuity of the reference lines, successive key frames are created when the length of the reference lines reaches a threshold. The goal here is to locate two new reference lines in the previous scan frame, and matched lines in the current scan frame, while preserving the chosen line criteria using the algorithm outlined in Section 3.2. The transformation matrix between the old reference lines and the new reference lines in the previous scan frame is then computed, and from this, the transformation matrix between the new matched reference lines in the current scan frame and the new reference lines in the previous scan frame can be determined. Additionally, the vehicle's current position relative to the new matched reference lines in the current scan frame is computed. Figure 17 illustrates the swapping process between the old reference lines of the first key frame and the new reference key frame; this process is formalized in Algorithms 1 and 2 as well. The old reference lines of the first key frame are represented by the dotted lines in both the in-between frames and the key frame, while the solid lines present the two new reference lines. Transformation 1 is the relation between the old reference lines of the first key frame and the two new reference lines in the in-between frames. Transformation 2 is the transition between the two matched reference lines in the in-between frames and the key frame.
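The two transformations can be chained as homogeneous matrices. The sketch below uses illustrative numeric values only, to show how the swap composes Transformation 1 and Transformation 2 into a single pose relative to the old key frame, so that error accumulates only at swap events rather than at every scan.

```python
import numpy as np

def rigid2d(theta, x, y):
    """3x3 homogeneous matrix for a 2D rigid transform."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

# Transformation 1: old reference lines -> new reference lines (previous frame)
T1 = rigid2d(0.10, 0.25, -0.05)      # illustrative values
# Transformation 2: new reference lines (previous frame) -> matched lines (current frame)
T2 = rigid2d(0.02, 0.40,  0.10)      # illustrative values

# Chaining yields the current frame's pose w.r.t. the old key frame
T_total = T1 @ T2
```

For planar rigid transforms the rotation angles simply add under composition (0.10 + 0.02 here), while the translations compound through the first rotation.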

Algorithm 1. Signed swap angle of the first reference line.
1: If flag is high
2:   calculate the unit vector of the old 1st reference line
3:   calculate the unit vector of the new 1st reference line
4:   compute the intersected angle between the two vectors
5:   compute the cross product of the two vectors
6:   If the 3rd component of the cross-product result is less than zero
7:     angle = −angle
8:   End if
9: End if

Algorithm 2. Selection of the second reference line.
5: For each previous rest line, from long to short
6:   If this line has a matching line in the current scan and is composed of more points than a threshold
7:     calculate the unit vector of this line (2nd reference)
8:     calculate the intersected angle between the two vectors
9:     If angle > 90
10:      angle = 180 − angle
11:    End if
12:    If …

Figure 18 demonstrates the swapping result between the old reference lines of the first key frame and the new reference key frame. It is obvious that the first old reference line, line number 8 in the in-between frame, does not exist in the next frame. Therefore, a new key frame must be created in order to select new reference lines for the next frames. The new first and second reference lines are numbered 5 and 4, respectively, in the key frame.
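Algorithm 1's sign test can be expressed compactly: the third (z) component of the cross product of the two unit direction vectors decides whether the swap angle is positive or negative. A sketch (the function name is hypothetical):

```python
import numpy as np

def signed_swap_angle(old_ref_dir, new_ref_dir):
    """Signed angle from the old 1st reference line to the new one,
    following the sign test of Algorithm 1."""
    u = np.asarray(old_ref_dir, float)
    v = np.asarray(new_ref_dir, float)
    u, v = u / np.linalg.norm(u), v / np.linalg.norm(v)
    angle = np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))
    # embed the 2D vectors in 3D so the cross product has a z (3rd) component
    cross_z = np.cross(np.append(u, 0.0), np.append(v, 0.0))[2]
    if cross_z < 0:
        angle = -angle
    return angle
```

A counter-clockwise swap yields a positive angle and a clockwise swap a negative one, resolving the ambiguity that the dot product alone leaves open.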

Iterative Closest Point (ICP)
Finally, the ICP algorithm is used to approximate the current adjusted scan's point cloud to the previous one in the mapping frame in order to fine tune the mapping, and accurately determine the vehicle's new position. Using this algorithm by itself is problematic for two reasons: (a) the vehicle can get lost in the case of rapid movement or sharp rotation, and; (b) the ICP algorithm uses an iterative technique, and so requires significant amounts of time for scan matching convergence.
On the other hand, the ICP algorithm is performed solely during the line outage period. During this period, the transformation parameters in the laser scanner coordinate frame are cumulatively computed alongside the transformation parameters of the global map. When lines are detected again, the last reference lines are transformed into the current scan frame using the computed laser scanner transformation. Thereafter, the relation between the new reference lines and the transformed reference lines is calculated to overcome the reference line discontinuity during the line outage period, as shown in Figure 19.
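The outage-period bookkeeping can be sketched as accumulating the per-scan ICP transforms and then pushing the last reference line endpoints through the accumulated transform once lines reappear. Whether the forward or the inverse transform applies depends on the frame convention, which is an assumption in this sketch, as are the numeric values.

```python
import numpy as np

def rigid2d(theta, x, y):
    """3x3 homogeneous matrix for a 2D rigid transform."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x], [s, c, y], [0, 0, 1]])

# Accumulate the per-scan ICP estimates over the outage period
T_outage = np.eye(3)
for theta, x, y in [(0.01, 0.12, 0.0), (0.02, 0.10, 0.01), (-0.01, 0.11, -0.02)]:
    T_outage = T_outage @ rigid2d(theta, x, y)  # cumulative laser-frame transform

# When lines are detected again, carry the last reference line endpoints
# into the current scan frame (inverse transform under this convention)
last_ref_endpoints = np.array([[1.0, 0.0, 1.0],
                               [1.0, 2.0, 1.0]])          # homogeneous coords
transformed = (np.linalg.inv(T_outage) @ last_ref_endpoints.T).T
```

The relation between the transformed endpoints and the freshly detected reference lines then restores the key frame chain across the outage.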

Experimental Results
All the experiments are performed using a low-cost laser range finder (RPLIDAR 360°, SlamTec, Zhangjiang, Shanghai, China) to reduce the cost of the system. This laser range finder is characterized by a relatively short detection range (6 m), a maximum scan rate (rotation speed) of 7 Hz, and an angular resolution at the maximum rotation speed of 1.5°. Figure 20 presents the aerial platform equipped with the laser range finder. All the datasets are collected in manual mode. For the sake of comparison between different algorithms, all algorithms have been implemented on the same computing platform (MATLAB) on a Lenovo ThinkPad with an Intel Core i7-4702MQ 2.2 GHz CPU, 4 GB RAM, and a 64-bit operating system.

In order to validate and evaluate the proposed algorithm, its mapping performance and time consumption are compared with Hector SLAM with different grid cell dimensions [29], the iterative closest point (ICP) algorithm, and feature-to-feature registration such as corners, in static and dynamic environments. The environments include, but are not limited to, glass objects, bricks, corridors longer than the maximum detection range of the laser scanner, and aluminum curtains, to create harsh scenarios as described in Table 4.

Dataset I

Figure 21 represents the environmental structure of dataset I, the ENF building at the University of Calgary; the performed trajectory is represented by the red line. The red circle presents the start point of the trajectory and the red star represents its final destination. All the corridors are longer than the maximum detection range of the laser range finder.

Figure 22 illustrates the Hector SLAM three-level multi-resolution map representation result using grid cell dimensions of 20, 10 and 5 cm. Due to the existence of the long corridors, Hector SLAM fails to estimate the longitudinal movement. Thus, it accumulates the point clouds of the successive scans at approximately the same position. As a result, it fails to converge and build a representation of the environment.

Figure 23 shows the mapping and position results using the ICP algorithm. The red points present the implemented trajectory. The ICP algorithm also fails to correctly represent the environment because of the corridors. Furthermore, it fails to reproduce the orthogonality of the corners, because the aggressive maneuvers cause incorrect estimation of the rotation parameter.

Figure 24 demonstrates the mapping and position results using the proposed algorithm alone, without loop closure and/or external sensors. The blue points represent the mapping result during the RKF method, while the green points represent the mapping result using the ICP algorithm during the line outage period. The red points present the implemented trajectory. Despite the corridors and the aggressive maneuvers, the proposed algorithm succeeds in converging and building a map of the environment. It also manages to represent the orthogonality of the corners. The shrinkage in some corridors is due to the dependence on the ICP algorithm alone during the line outage period. The mean execution time is 7 ms.

Dataset II
Dataset II is collected at the ENE building, University of Calgary. Hector SLAM completely fails to converge and build a map when using a single level with a grid cell dimension of 5 or 10 cm, because the algorithm gets stuck in local minima, as it is based on gradient ascent. However, it succeeds with larger cell dimensions, such as 20 cm and more, as shown in Figure 25.

Using Hector SLAM with a single-level grid cell dimension is potentially apt to get stuck in local minima; therefore, a multi-resolution map representation is used to mitigate this problem [28]. However, the multiple map levels are memory- and time-consuming, because the different map levels are kept in memory and updated simultaneously, and furthermore, each level takes many iterations to converge. Figure 27 shows the mapping result of the three-level multi-resolution map representation using grid cell dimensions of 20, 10 and 5 cm. Although the high grid level aids the low grid level to converge and build the entire map, compared with Figure 25a, it still fails to converge in different parts, as indicated by the red arrows in Figure 27.
Table 5 shows the mean execution time and number of iterations for the multi-resolution map representation with different levels and grid cell dimensions.

Figures 29 and 30 show the mapping and position results using the ICP algorithm and the adjusted corner registration, respectively. The red asterisks represent the trajectory of the vehicle. The ICP algorithm succeeds in assigning good correspondences because there are no aggressive maneuvers. The corner registration fails because the algorithm sometimes does not find two corners, while the adjusted corner registration succeeds. The mean execution time and number of iterations for the ICP algorithm are 13 ms and 20, respectively, while the corresponding values for the corner registration are 9 ms and 17. The performance of the generated map is sensitive to the maximum detection range of the sensor [30]. It is clear that both algorithms were prone to bending in the corridor. However, they are still capable of supporting the vehicle's navigation, because the generated map maintains a correct topology of the environment by reflecting the spatial structure of the corridor.

Figure 35 shows the scan matching result between two successive scan frames during an aggressive maneuver, where the ICP algorithm has been trapped in a local minimum. The matching RMSE between the matched points is 23.33 cm. Figure 36 shows the scan matching one-sigma (σ) error ellipse. However, this misalignment has been corrected in the final solution using the proposed RKF approach, as shown in Figure 37.

Figures 43 and 44 show the mapping and position results using the ICP algorithm and the adjusted corner registration, respectively. Although both algorithms succeed in converging, they exhibit a scale problem, which is clear in the second corridor. The bad data association due to the moving objects affects the solution of both algorithms.

Figure 45 presents the mapping and position results using the proposed algorithm. The proposed algorithm does not suffer from the moving objects, because the lines extracted from the moving objects do not satisfy the reference line selection criteria. Thus, the lines extracted from the moving objects do not contribute to the estimation of the transformation parameters. The mean execution time and number of iterations are 9 ms and 13.5, respectively.

Threshold Dependence
The proposed algorithm depends on a group of thresholds. The values of the thresholds are computed according to the sensor precision and/or the dynamics of the vehicle, as described above. However, the proposed algorithm can still achieve the exploration mission even if the threshold values are altered around the correctly computed values. Nevertheless, how far the thresholds can be altered depends on the environment of the exploration mission, and the alteration affects the sharing percentage between the two methods (RKF and ICP) over the entire mission. Figure 46 shows the mapping and position results of the proposed algorithm on dataset V after changing some of the threshold values. It is clear that the reconstructed environmental structure was not affected by changing the threshold values. The sharing percentage of the ICP algorithm for the entire dataset is 5.4%, while the sharing percentage before changing the threshold values is 1.5%. The mean execution time is 8.1 ms.

Conclusions
In this paper, a low-cost novel real-time scan matching algorithm, inspired by the video streaming broadcast key frame technique, is proposed. The proposed algorithm depends on a sole laser scanner range finder and does not need external aided sensors. The proposed algorithm endeavors to mitigate the accumulated errors that exist in the local and global scan matching, as the

Threshold Dependence
The proposed algorithm depends on a group of thresholds. The values of the thresholds are computed according to sensor precision and/or dynamics of the vehicle as described above. However, the proposed algorithm can achieve the exploration mission, even if the values of the thresholds are altered around the right computed values of the thresholds. Nevertheless, the altered threshold values depend on the environment of the exploration mission, and they will affect the sharing percentage between the two methods (RKF and ICP) in the entire mission. Figure 46 shows the mapping and position results for the proposed algorithm of the dataset V after changing some values of the thresholds. It is clear that the environmental structure was not affected by changing the values of the thresholds. The sharing percentage of the ICP algorithm for the entire dataset is 5.4% while the sharing percentage before changing the values of the thresholds is 1.5%. The mean execution time is 8.1 ms.  Figure 45 presents the mapping and position results using the proposed algorithm. The proposed algorithm does not suffer from the moving objects because the extracted lines from the moving objects does not accept the reference lines selection criteria. Thus, the extracted lines from the moving objects would not share the estimation of the transformation parameters. The mean execution time and number of iterations for 9 ms and 13.5, respectively.

Threshold Dependence
The proposed algorithm depends on a group of thresholds. The values of the thresholds are computed according to sensor precision and/or dynamics of the vehicle as described above. However, the proposed algorithm can achieve the exploration mission, even if the values of the thresholds are altered around the right computed values of the thresholds. Nevertheless, the altered threshold values depend on the environment of the exploration mission, and they will affect the sharing percentage between the two methods (RKF and ICP) in the entire mission. Figure 46 shows the mapping and position results for the proposed algorithm of the dataset V after changing some values of the thresholds. It is clear that the environmental structure was not affected by changing the values of the thresholds. The sharing percentage of the ICP algorithm for the entire dataset is 5.4% while the sharing percentage before changing the values of the thresholds is 1.5%. The mean execution time is 8.1 ms.

Conclusions
In this paper, a low-cost novel real-time scan matching algorithm, inspired by the video streaming broadcast key frame technique, is proposed. The proposed algorithm depends on a sole laser scanner range finder and does not need external aiding sensors. The proposed algorithm endeavors to mitigate the accumulated errors that exist in local and global scan matching, as its transformation matrix is computed with respect to the previous key frame and not with respect to the previous scan. Initially, the proposed algorithm selects two reference lines from the extracted lines, which compose the first reference key frame. The transformation parameters of all consecutive frames are computed with respect to this first reference key frame until a new reference key frame is chosen. At that point, two new reference lines are swapped in, and the transformation parameters between the old and new reference lines are estimated. The transformation parameters of subsequent frames are then computed with respect to the first reference key frame while taking into account the transformation parameters of the swapping process. To validate and evaluate the proposed algorithm, the mapping performance and time consumption are compared with those of other algorithms, such as Hector SLAM, ICP, and feature-to-feature registration (e.g., corners), in static and dynamic environments. It was found that the time consumption of the proposed algorithm is reduced by approximately 99%, 35.7%, and 10% compared with multi-level Hector SLAM, ICP, and feature registration, respectively. The computational time for estimating the transformation parameters between each two successive scans is approximately 9 ms, which qualifies the proposed algorithm for real-time system implementations.
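The key frame chaining described above can be sketched as a composition of 2D homogeneous transforms. This is a minimal illustration under standard SE(2) conventions; the function names are hypothetical and not taken from the paper.

```python
import numpy as np

def se2(x, y, theta):
    """2D homogeneous transform (translation x, y and rotation theta)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0., 0., 1.]])

# Pose of the current reference key frame in the map frame. It starts at
# identity (the first key frame defines the map origin) and is updated
# only at key frame swaps, so per-scan errors do not accumulate between
# swaps as they would in scan-to-scan matching.
T_map_kf = np.eye(3)

def scan_pose(T_kf_scan):
    """Pose of the current scan in the map frame, given the transform
    estimated against the current reference key frame (by RKF or ICP)."""
    return T_map_kf @ T_kf_scan

def swap_key_frame(T_kf_newkf):
    """At a swap, chain the old-to-new key frame transform once."""
    global T_map_kf
    T_map_kf = T_map_kf @ T_kf_newkf
```

Because `T_map_kf` is updated only at swaps, the error in a scan pose is bounded by the single match against the key frame plus the accumulated swap transforms, rather than by every intermediate scan.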
Although the proposed algorithm depends on the availability of two nonparallel lines, it succeeds in providing a solution in corridors and unstructured environments by switching to the ICP algorithm. The proposed algorithm is also appropriate for dynamic environments, as moving-object features do not satisfy the reference line selection criteria. Therefore, the lines extracted from moving objects never compose a reference key frame.
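One plausible form of such a selection criterion is a temporal stability test: a candidate reference line must reappear at a consistent pose over several consecutive scans, whereas lines extracted from moving objects drift between scans and are rejected. The specific test and tolerance values below are assumptions for illustration, not the paper's criteria.

```python
import math

# Hypothetical stability tolerances (illustrative values only).
ANGLE_TOL = math.radians(3)   # max inter-scan change in line orientation
DIST_TOL = 0.05               # [m] max inter-scan change in line offset
N_STABLE = 3                  # scans a candidate must remain stable

def is_static_line(history):
    """history: list of (angle, distance) observations of one candidate
    line, one entry per consecutive scan, expressed in the scan frame
    after compensating the predicted vehicle motion. A line from a
    moving object fails this check because its pose drifts."""
    if len(history) < N_STABLE:
        return False
    a0, d0 = history[0]
    for a, d in history[1:N_STABLE]:
        da = abs(a - a0)
        da = min(da, math.pi - da)  # undirected line-angle difference
        if da > ANGLE_TOL or abs(d - d0) > DIST_TOL:
            return False
    return True
```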