Article

A Novel Motion Blur Resistant vSLAM Framework for Micro/Nano-UAVs

by Buğra Şimşek 1,* and Hasan Şakir Bilge 2
1 ASELSAN Entrepreneurship Center, ASELSAN A.Ş., Ankara 06200, Turkey
2 Department of Electrical and Electronics Engineering, Gazi University, Ankara 06570, Turkey
* Author to whom correspondence should be addressed.
Drones 2021, 5(4), 121; https://doi.org/10.3390/drones5040121
Submission received: 16 August 2021 / Revised: 12 October 2021 / Accepted: 13 October 2021 / Published: 17 October 2021

Abstract:
Localization and mapping technologies are of great importance for all varieties of Unmanned Aerial Vehicles (UAVs) to perform their operations, and the use of micro/nano-sized UAVs is expected to increase in the near future. Such vehicles are sometimes expendable platforms, and reuse may not be possible. Compact, mounted and low-cost cameras are preferred in these UAVs due to weight, cost and size limitations. Visual simultaneous localization and mapping (vSLAM) methods are used to provide situational awareness for micro/nano-sized UAVs. Fast rotational movements that occur during flight with gimbal-free, mounted cameras cause motion blur. Above a certain level of motion blur, tracking losses occur, which prevents vSLAM algorithms from operating effectively. In this study, a novel vSLAM framework is proposed that prevents tracking losses in micro/nano-UAVs caused by motion blur. In the proposed framework, the blur level of the frames obtained from the platform camera is determined, and frames whose focus measure score is below a threshold are restored by selected motion-deblurring methods. The major causes of tracking losses have been analyzed through experimental studies, and vSLAM algorithms have been made more durable by the proposed framework. It has been observed that the framework can prevent tracking losses at 5, 10 and 20 fps processing speeds. At these processing speeds, the vSLAM algorithms continue normal operation, which was not achieved with the standard vSLAM algorithms; this can be considered a strength of our study.

1. Introduction

Nowadays, robots and unmanned systems are encountered in various areas, including health, agriculture, mining, driverless vehicles, planetary exploration and nuclear studies [1]. According to the unmanned systems integrated roadmap document [2] presented by the Department of Defense, the use of mini/micro robots and Nano-Unmanned Aerial Vehicles (UAVs) is expected to become widespread by 2035 and beyond. As platform dimensions shrink, it becomes impossible to rely on a Global Positioning System (GPS) whose position errors are expressed in meters [3]. Specifically, for a mini-platform of approximately 10 cm scale, the position error becomes roughly a hundred times larger than the platform itself, a level of error which may cause collisions. In addition, GPS signals cannot be used indoors [4] or in planetary exploration [5]. Simultaneous localization and mapping (SLAM) has been an active research area in robotics for 30 years, enabling operation even in GPS-denied environments [6]. The mentioned mini/micro-platforms weigh approximately 100 g, and their length and width are a few centimeters (Figure 1). Nano-, mini- and micro-platforms are sized to carry only compact, light, low-cost, easily calibrated monocular RGB cameras [7]. Micro/nano-UAVs must be able to perform localization and mapping simultaneously using such cameras. These cameras are mostly mounted directly on the vehicle and are used without a gimbal [8,9]. The main challenges of using monocular cameras in such a mounted configuration are image scale uncertainty [10] and motion blur [11]. In recent studies, these problems have been addressed by using stereo [12,13] and RGB-D [14,15,16] cameras and visual-inertial SLAM (viSLAM) methods. However, such cameras have disadvantages in terms of size, weight and energy consumption. In addition, since the flight time of micro/nano-UAVs is about 10 min due to battery limitations, visual SLAM (vSLAM) applications are performed by transmitting the video to the control station rather than through on-board computation, which improves battery efficiency. Furthermore, using IMU data in viSLAM algorithms can require up to three minutes of IMU calibration on the platform, which is not required for vSLAM algorithms. Because of the difficulty of synchronizing the IMU and video data at the control station, and the long calibration times, viSLAM methods may not be suitable for such vehicles, which is the main reason vSLAM is preferred in our study.
In this study, a new framework is proposed to solve the problems caused by motion blur for monocular RGB cameras. To the best of the authors' knowledge, this is the first framework to include modules for detecting and reducing motion blur; it is therefore more resistant to motion blur than previously introduced frameworks. In our framework, a focus measure operator (LAP4) is used to detect the motion blur level, and blurry images that fall below a specified threshold are directed to the deblurring module. Motion blur is then reduced by the selected algorithm in the deblurring module. After that, the process continues with the tracking and local mapping stages, as in previously studied frameworks. The proposed method has been tested with the state-of-the-art ORB-SLAM2 [17] (feature-based) and DSO [18] (direct) algorithms, and its effectiveness has been demonstrated.
By definition, the SLAM technique in which only cameras are used as sensors of the unmanned system is named vSLAM. The vSLAM method consists of three main modules [19]:
  • Initialization.
  • Tracking.
  • Mapping.
All three main modules can be negatively affected by motion blur. In general, motion blur arises from relative motion between the camera and the scene during the exposure time [20]. In our case, the fast rotational movements of robotic platforms create motion blur, causing vSLAM algorithms to lose the pose estimate, and thus tracking losses occur [21]. If re-localization of the UAV cannot be performed, the pose of the platform cannot be estimated after a tracking loss, and a kidnapped-robot problem occurs. This situation prevents the planned task from being carried out correctly. Moreover, the created map becomes unusable due to the inconsistency between the new position and the former one.
Feature-based methods map and track feature points (corners, lines and curves) by extracting features from the frame with preprocessing; a descriptor then characterizes each feature. Commonly used detectors and descriptors include ORB [22], SIFT [23], FAST [24], Harris [25] and SURF [26]. Direct methods, on the other hand, use the input image directly without any feature detector or descriptor [19]. Nevertheless, motion blur negatively impacts vSLAM performance in both feature-based and direct methods. Another crucial remark is that while motion blur prevents the detection of feature points in feature-based methods, strong rotations obstruct triangulation in direct methods. Therefore, the proposed framework should be compatible with both feature-based [17] and direct [18] methods (Figure 2). In our study, both methods are validated experimentally and the corresponding results are presented in the “Experimental Results” section, Section 3.
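To make the feature-based front end concrete, the short Python/OpenCV sketch below (our illustration, not the authors' implementation; the file name is a placeholder) extracts ORB keypoints and descriptors from a single frame. On a motion-blurred frame the detector typically returns far fewer keypoints, which is what eventually starves the tracker and leads to tracking loss.

```python
import cv2

# Load one camera frame as grayscale (placeholder file name).
img = cv2.imread("frame_0042.png", cv2.IMREAD_GRAYSCALE)

# ORB front end of the kind feature-based vSLAM systems such as ORB-SLAM2 rely on:
# FAST-style keypoint detection followed by a binary descriptor per keypoint.
orb = cv2.ORB_create(nfeatures=1000)
keypoints, descriptors = orb.detectAndCompute(img, None)

# Blurred frames yield noticeably fewer keypoints than sharp ones.
print(f"{len(keypoints)} keypoints detected")
```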

2. Proposed Framework and Experimental Setup

Hitherto, only a limited number of robotics and SLAM studies have addressed reducing motion blur-induced errors or preventing tracking loss in monocular camera-based vSLAM methods. In [21], several solutions are presented, including the prevention of data loss after a tracking loss occurs (reverse replay) and a branching thread structure (parallel tracking), although the approach is not suitable for real-time applications. Similarly, in another study [28], a feature-matching method based on point spread function estimation was presented for humanoid robots, which is robust to the motion blur caused by walking, rotation and squatting movements. In addition, various studies have established that additional features such as edges and lines can be detected in order to enrich the map and improve tracking performance [29,30]. If the direction of the motion blur is consistent with the direction of the lines/edges, such approaches can improve tracking performance, i.e., the trajectory estimate of the vehicle. However, the same performance cannot be achieved in mapping, i.e., the projection of 2D image features into 3D space: mapping performance decreases due to floating lines on the map caused by motion blur. It is therefore crucial to ensure map consistency while avoiding tracking loss.
In the image processing approaches, motion blur is described by the following equation [20]:
b = p ⊗ o + n
In this expression, o is the original image, b is the blurry image, p is the point spread function, the operator ⊗ denotes convolution and n is additive noise. Image deblurring algorithms can use a point spread function (PSF) to deconvolve the blurred image. Deconvolution is categorized into two types: blind and non-blind. Non-blind deconvolution uses the blurred image together with a known point spread function, whereas blind deconvolution uses only the blurred image. Blind deconvolution is more complicated and more time-consuming than non-blind deconvolution because it estimates the point spread function at each iteration [31].
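As a concrete illustration of this blur model and of non-blind deconvolution, the following Python sketch synthesizes a blurred image from a known PSF and restores it with a Wiener filter. The PSF, noise level and test image are illustrative assumptions, not values used in our experiments.

```python
import numpy as np
from scipy.signal import convolve2d
from skimage import data, restoration

# Blur model b = p (conv) o + n, with an assumed 9-pixel horizontal motion PSF.
o = data.camera().astype(np.float64) / 255.0            # original image o
p = np.zeros((9, 9)); p[4, :] = 1.0 / 9.0                # point spread function p
n = 0.005 * np.random.randn(*o.shape)                    # additive noise n
b = convolve2d(o, p, mode="same", boundary="symm") + n   # blurry observation b

# Non-blind deconvolution: since p is known, a Wiener filter can invert the blur.
restored = restoration.wiener(b, p, balance=0.01)
```

In the blind setting the same restoration would have to be performed without knowledge of p, which is what makes blind methods costlier.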
A great number of approaches have been developed in recent years to address the motion blur problem. Examples include a local intensity-based prior, the patch-wise minimal pixels prior (PMP) [32]; a recurrent structure across multiple scales (SRN) [33]; SIUN [34], with a more flexible network and additive super-resolution; a natural image prior named the Extreme Channels Prior (ECP) [35]; graph-based blind image deblurring [36]; and other established methods such as Lucy and Richardson [37], blind deconvolution [38] and the Wiener filter [39].
We propose a framework (Figure 3) to detect and reduce motion blur when compact, lightweight, low-cost, easily calibrated monocular cameras are mounted on micro/nano-UAVs. In the proposed framework, the Variance of Laplacian (LAP4) is selected as the focus measure operator for detecting motion blur. In the LAP4 method, a single channel of the image is convolved with the Laplacian kernel, and the focus measure score is the variance of the response. If the focus measure score is above the threshold, the vSLAM process continues as usual; otherwise, the image is restored by an image deblurring method. Deblurring is applied only to frames below the threshold, not to all frames, which keeps the processing time at a suitable level. It has been observed that tracking performance increases and the tracking loss ratio decreases when vSLAM algorithms are run on images restored by the deblurring techniques.
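A minimal sketch of this blur-detection gate is given below, assuming OpenCV and treating the deblurring step as an interchangeable callable; the helper names are ours, and the threshold of 10 is the value determined experimentally in Section 3.

```python
import cv2

FMS_THRESHOLD = 10.0  # threshold determined experimentally (Section 3)

def focus_measure(gray):
    """Variance of Laplacian (LAP4): convolve one channel with the Laplacian
    kernel and return the variance of the response."""
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def gate_frame(frame_bgr, deblur):
    """Route a frame through the proposed gate: sharp frames pass straight to the
    vSLAM front end, blurry frames are restored first. `deblur` stands for any of
    the deblurring methods discussed in the paper (passed as a callable here
    purely for illustration)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    if focus_measure(gray) >= FMS_THRESHOLD:
        return frame_bgr              # above threshold: track as-is
    return deblur(frame_bgr)          # below threshold: restore, then track
```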
In this study, the selected motion-deblurring methods (PMP, SRN, SIUN, ECP, graph-based blind image deblurring, Lucy and Richardson, blind deconvolution and the Wiener filter) are applied to prevent tracking loss in vSLAM algorithms on a dataset prepared for mini/micro robots and nano-UAVs. A low-cost, low-power, lightweight camera was mounted on the mini-UAV, and the blurred low-resolution images captured with it were merged to create the dataset. The results obtained on this dataset reveal that the framework proposed in Figure 3 can be implemented in both direct and feature-based vSLAM algorithms.
An experiment was performed at an average speed of 1.2 m/s in a corridor environment to observe tracking loss. A schematic view of the experimental area is drawn in Figure 4. The area consists of three corridors (17, 26.5 and 15.7 m) and two sharp corners. As shown in Figure 4, forward movement is plotted in green and fast rotational movements are shown in red. The start and finish points, and the frame numbers corresponding to the rotational movements, are also given in the same figure.
The experimental area was set up specifically to observe motion blur caused by fast rotational movements. Tracking loss in forward motion generally occurs in textureless environments, especially with feature-based algorithms; in our experimental area, however, the ORB-SLAM2 [17] and DSO [18] algorithms were resistant to motion blur during forward movement at an average speed of 1.2 m/s. Several observations follow from the experiment: the UAV performed fast rotational movements in the vicinity of feature-rich corners at 22 deg/s (0.384 rad/s), and motion blur-based tracking loss was observed when an RGB camera was mounted on the platform and the unmanned vehicle performed rapid rotational movements. A pinhole camera of the kind frequently used in nano/micro unmanned systems and robot platforms was selected for the experiment; the common feature of such cameras is that they are advantageous in terms of both size and cost. In the experiment, a Raspberry Pi v2.1 camera recorded at a frame rate of 20 fps at 640 × 480 resolution. Compact, gimbal-free cameras are preferred in micro/nano-sized unmanned systems because of their dimensions, and low-cost cameras are expected to be the most preferable option for future disposable, non-reusable vehicles [2]. The reason for creating our own dataset is as follows: vSLAM experiments have already been conducted on available datasets such as EuRoC [40] and KITTI [41]. However, those images were captured with high-quality cameras, and no fast rotational movements in the vicinity of corners are included. Even where fast movements are present in these datasets, sharp rotations at large angles such as 90 degrees are not. Under these circumstances, the targeted tracking losses, which are the prerequisite for using deblurring algorithms, were not observed.
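For reference, a recording configuration equivalent to the one described above (640 × 480 at 20 fps) can be sketched with OpenCV as below. The device index, codec choice and the use of cv2.VideoCapture are assumptions, since the paper does not detail the recording pipeline beyond the resolution and frame rate.

```python
import cv2

# Hypothetical capture setup matching the reported recording parameters.
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
cap.set(cv2.CAP_PROP_FPS, 20)

writer = cv2.VideoWriter("corridor_run.avi",
                         cv2.VideoWriter_fourcc(*"MJPG"), 20, (640, 480))
while True:
    ok, frame = cap.read()
    if not ok:            # stop when the stream ends or the camera fails
        break
    writer.write(frame)   # recorded frames are later replayed offline
writer.release()
cap.release()
```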
Images were obtained while the drone flew at a height of 140 cm. The platform moved in a straight line at an average speed of 1.2 m/s until it reached a corner, where a sharp and rapid yaw movement of approximately 90 degrees was performed. The dataset was created by capturing 1121 frames along the corridor trajectory with a forward-facing camera. Processing speed is crucial for extracting the targeted features from the dataset: the motion blur level does not change with processing speed, but the processing time allocated to the detector and descriptor does. For example, when the processing speed is reduced from 20 to 10 fps, the allocated processing time is doubled, so more time is available to complete processing of the relevant frame. The exposure time, however, was kept the same for every processing speed, so the blurring effect was identical. The images in the dataset were processed at 20, 10 and 5 fps with the ORB-SLAM2 [17] and DSO [18] algorithms; in both algorithms, tracking loss occurred during fast rotational movements at the selected processing speeds. Various objects with distinctive features, such as a coffee machine, cabinets and doors, were located in the corridor environment, so feature extraction and triangulation were accomplished easily during forward movement.

3. Experimental Results

A large number of focus measure operators are used for motion blur analysis [42]. The dataset in our study was analyzed with the LAP4 method, which is one of the most suitable for real-time applications. Laplacian-based operators measure the amount of edge content in an image by means of the second derivative; because of this second-derivative computation, they are very sensitive to noise [43]. The Variance of Laplacian analysis showed that the corresponding focus measure score (FMS) was relatively low during fast rotational movements.
It was determined that tracking loss occurs in regions where the focus measure score is relatively low, as indicated by the red arrows in Figure 5. Frames with a focus measure score below 10 were sorted out of the dataset and merged in Figure 6; these frames were visibly more exposed to motion blur. Accordingly, FMS = 10 was assigned as the motion blur threshold throughout the study. To reduce the motion blur, the previously selected deblurring techniques (PMP, SRN, SIUN, ECP, graph-based blind image deblurring, Lucy and Richardson, blind deconvolution and the Wiener filter) were applied to the frames with FMS < 10, and the images exposed to motion blur were thus restored by the different techniques. The focus measure score of each restored image was recalculated with the LAP4 method, and most of the resulting FMS values were above the threshold of 10, indicating the deblurring performance of the studied techniques. Finally, the framework was evaluated by observing the success of the restored images in direct and feature-based vSLAM methods.
Changes in the restored images were measured with two metrics: Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM), which quantify the correlation between the restored images and the motion blur-exposed dataset images. These metrics were computed for the restored image set, i.e., the frames whose original focus measure score was below 10.
For the Wiener filter, Lucy and Richardson and blind deconvolution algorithms, the PSF length was scanned from 1 to 10 pixels over a full 360-degree range of directions, and the most suitable point spread function was chosen. The PSNR and SSIM values of each method are given in Figure 7, and the improvement in the focus measure scores of the restored images is shown in Figure 8; an increase in focus measure score was achieved by using the related methods.
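One way to realize such a scan is sketched below. The kernel construction and the use of the LAP4 score as the criterion for the "most suitable" PSF are our own illustrative assumptions (the paper does not specify the selection criterion), and Lucy and Richardson deconvolution stands in for the three scanned methods.

```python
import numpy as np
import cv2
from skimage.restoration import richardson_lucy

def motion_psf(length, angle_deg, size=15):
    """Line-shaped motion blur kernel of a given length (pixels) and direction."""
    psf = np.zeros((size, size), dtype=np.float32)
    c = size // 2
    dx, dy = np.cos(np.deg2rad(angle_deg)), np.sin(np.deg2rad(angle_deg))
    x1, y1 = int(round(c - dx * (length - 1) / 2)), int(round(c - dy * (length - 1) / 2))
    x2, y2 = int(round(c + dx * (length - 1) / 2)), int(round(c + dy * (length - 1) / 2))
    cv2.line(psf, (x1, y1), (x2, y2), 1.0, 1)
    return psf / psf.sum()

def best_restoration(blurred_gray):
    """Scan PSF lengths 1-10 px over the full circle (10-degree steps) and keep the
    restoration with the highest Variance-of-Laplacian (LAP4) score."""
    img = blurred_gray.astype(np.float64) / 255.0
    best_score, best_img = -1.0, None
    for length in range(1, 11):
        for angle in range(0, 360, 10):
            restored = richardson_lucy(img, motion_psf(length, angle), 10)
            score = cv2.Laplacian(restored, cv2.CV_64F).var()
            if score > best_score:
                best_score, best_img = score, restored
    return best_img, best_score
```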

3.1. PSNR and SSIM Results

PSNR and SSIM, as well-known image quality metrics, are investigated in the remainder of the study. PSNR is the ratio between the maximum possible signal power and the power of the difference between the restored image and its blurred input. SSIM is an indicator of the structural similarity between the input blurred image and its restored version; an SSIM value close to one indicates that the structural information of the restored image is very similar to that of the original blurred image, which is the desired outcome. The PSNR and SSIM analyses carried out for the selected deblurring methods are presented in Figure 7. The blind deconvolution and Lucy and Richardson (L&R) algorithms were observed to provide similar performance, and according to the experimental measurements, the improvement in the restored images is the largest for these two algorithms.
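For completeness, both metrics can be computed directly with scikit-image, as in the following sketch; the file names are placeholders, and the comparison is between each blurred input frame and its restored counterpart, as described above.

```python
import cv2
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Load one blurred frame and its restored counterpart as grayscale images
# (placeholder file names).
blurred = cv2.imread("blurred_frame.png", cv2.IMREAD_GRAYSCALE)
restored = cv2.imread("restored_frame.png", cv2.IMREAD_GRAYSCALE)

psnr = peak_signal_noise_ratio(blurred, restored)   # higher is better
ssim = structural_similarity(blurred, restored)     # 1.0 means structurally identical
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}")
```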

3.2. Focus Measure Score Analysis

The change in focus measure score for each deblurring method is given in Figure 8. In terms of focus measure score, the most successful algorithms are, in order, PMP, ECP, Wiener, L&R, BD, Graph-Based, SRN and SIUN (Table 1). The gain in focus measure score is lowest for the SIUN and SRN algorithms; the other algorithms contributed an increase in FMS several times greater than that of the SIUN and SRN techniques.
The tracking performance of direct and feature-based vSLAM algorithms was observed on datasets whose blurred frames were replaced by the restored output of each motion-deblurring algorithm. The restored datasets were processed at 5, 10 and 20 fps. The performance of the state-of-the-art ORB-SLAM2 [17] and DSO [18] algorithms is given in Table 2 and Table 3, respectively. A tracking score was defined to indicate at how many of the processing speeds each vSLAM configuration was successful; the tick symbol denotes a successful run and the cross symbol a failure.
The average change in FMS, ΔFMS, for the selected motion-deblurring algorithms is presented in Table 1. The PMP method shows better performance in both rapid rotational movements and forward movement. Although the ECP method has a rather high average ΔFMS value, it performs worse, especially at sharp corners, as can be inferred from the ΔFMS scores in Figure 8. The Wiener and L&R methods also have relatively high average ΔFMS values compared to the other deblurring methods (BD, GB, SIUN, SRN). An important remark from the table is that although Wiener and L&R have lower ΔFMS than ECP, both perform better in the case of rapid rotational motion, which can be deduced from the comparison of the plots in Figure 8.
Examining the results in Figure 7 and Table 1, the SSIM curve for the GB method fluctuates and has the lowest values, which implies that the restored images are not structurally matched to the original blurry images. For this reason, the GB method may not be a suitable candidate for the deblurring step in vSLAM algorithms.
The ORB-SLAM2 algorithm was studied experimentally and its tracking loss performance was investigated for the selected PMP and L&R deblurring methods. Sample frames for the different methods at the specified frame rates are shown in Figure 9. Compared to the original ORB-SLAM2 results (without any deblurring), successful tracking was achieved with PMP at a processing speed of 20 fps and with L&R at 5 fps (see Figure 9).

4. Discussion

Today, it has become possible to use robots and unmanned vehicles in many areas, and the use of micro/nano-sized UAVs is expected to increase in the near future. Localization and mapping are essential for unmanned vehicles to perform their expected operations. GPS signals cannot be used by micro/nano-UAVs because the meter-level accuracy of GPS is hundreds of times greater than the size of the platforms; localization at this level of precision is not sufficient for micro/nano-UAVs. In addition, micro/nano-scale unmanned vehicles and robots are expected to operate indoors. Their payload can only be a compact, lightweight, low-cost and easily calibrated monocular RGB camera, and a gimbal cannot be used due to weight and size limitations. Visual SLAM (vSLAM) methods are the preferred means of localization for systems with monocular RGB cameras. The fast rotational movements of UAVs during operations may cause motion blur.
In previous studies, the tracking loss problem was mostly addressed by integrating an IMU or by using more capable cameras such as RGB-D or stereo cameras. However, such hardware-based solutions increase the dimensions and weight of the platforms, which hinders their use on nano platforms. Nano-, mini- and micro-UAVs are sized to carry only compact, light, low-cost, easily calibrated monocular RGB cameras. Moreover, such platforms do not have sufficient battery capacity for onboard computation, so image processing is applied at the control station on the transmitted video. The transmitted video can thus be used both by the operator to control the platform and by vSLAM applications for mapping and localization.
Event cameras could be an alternative to monocular RGB cameras for vSLAM applications if they can be produced cheaply at the nanoscale. Nevertheless, an additional reconstruction phase is required with event cameras to recover details of the environment, whereas detailed environmental data are provided directly by RGB cameras. Moreover, the output of RGB cameras resembles human vision, which makes it easier for operators to control the platform.
Our study makes two major contributions to the vSLAM literature. First, blur level detection is performed, for the first time, by including a focus measure operator in vSLAM algorithms; in this way, frames with high motion blur levels can be detected before they cause tracking loss. Second, the algorithm is able to continue operating in both the mapping and tracking stages because only the frames with a high blur level are corrected.
As a general evaluation of the experimental data, several important remarks can be made: (1) tracking loss can be prevented at some processing speeds when the original frames in the dataset are replaced with restored frames; (2) motion blur negatively affects feature extraction in feature-based methods, and when a blurred image is restored by the specified deblurring methods, the relevant features are detected more reliably; (3) in direct methods, motion blur reduces triangulation performance, and deblurring methods can eliminate this problem; (4) it has also been verified experimentally that the tracking performance of the vSLAM algorithms is not directly proportional to the PSNR and SSIM values, although it is directly related to the focus measure score.
A comparison of the frameworks in terms of initialization, tracking, mapping, blur detection and motion blur reduction/elimination is presented in Table 4. It can readily be seen that the proposed framework is more resistant to motion blur while remaining applicable at the initialization stage.

5. Conclusions

In this study, a framework has been proposed to increase the tracking performance of vSLAM algorithms by decreasing the rate of tracking loss caused by motion blur. A focus measure operator (Variance of Laplacian) is used to detect motion blur, and deblurring methods (PMP, SRN, SIUN, ECP, graph-based blind image deblurring, Lucy and Richardson, blind deconvolution and the Wiener filter) are applied to the frames whose focus measure score is below 10. With the proposed method, the reduction/elimination of tracking loss through blur detection and prevention has been tested, and the success of the framework has been demonstrated with feature-based and direct vSLAM algorithms. Compared to the standard feature-based and direct methods, the novel vSLAM framework has been observed experimentally to be more resistant to motion blur, with more effective mapping and tracking by means of blur detection and prevention.
As future work, our study can be extended to real-time applications. Detecting and reducing motion blur in real time using learning-based methods may be an innovative research topic for vSLAM algorithms. Our work focuses on reducing/eliminating the tracking losses due to motion blur that stop vSLAM algorithms from working, rather than on improving trajectory estimation; the effects of motion blur on trajectory estimation could be studied in the future on a dataset containing blur at a level that does not cause tracking loss. Furthermore, the proposed method may be extended with other types of focus measure operators for specific environmental conditions.

Author Contributions

Conceptualization, B.Ş. and H.Ş.B.; methodology, B.Ş. and H.Ş.B.; software, B.Ş.; validation, B.Ş. and H.Ş.B.; formal analysis, B.Ş. and H.Ş.B.; investigation, B.Ş.; resources, B.Ş.; data curation, B.Ş.; writing—original draft preparation, B.Ş.; writing—review and editing, B.Ş. and H.Ş.B.; visualization, B.Ş.; supervision, H.Ş.B.; project administration, H.Ş.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by ASELSAN A.Ş.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to security concerns.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Ben-Ari, M.; Francesco, M. Robots and their applications. In Elements of Robotics; Springer: Cham, Switzerland, 2018; pp. 1–20.
2. Unmanned Systems Integrated Roadmap FY2013–2038; Department of Defense: Washington, DC, USA, 2013. Available online: http://archive.defense.gov/pubs/DOD-USRM-2013.pdf (accessed on 29 May 2021).
3. Aqel, M.O.A. Review of Visual Odometry: Types, Approaches, Challenges, and Applications; SpringerPlus 5.1: Cham, Switzerland, 2016; pp. 1–26.
4. Krul, S.; Pantos, C.; Frangulea, M.; Valente, J. Visual SLAM for Indoor Livestock and Farming Using a Small Drone with a Monocular Camera: A Feasibility Study. Drones 2021, 5, 41.
5. Maimone, M.; Cheng, Y.; Matthies, L. Two years of visual odometry on the Mars exploration rovers. J. Field Robot. 2007, 24, 169–186.
6. Zaffar, M.; Ehsan, S.; Stolkin, R.; Maier, K.M. Sensors, SLAM and long-term autonomy: A review. In Proceedings of the 2018 NASA/ESA Conference on Adaptive Hardware and Systems (AHS), Edinburgh, UK, 6–9 August 2018; pp. 285–290.
7. Chen, Y.; Zhou, Y.; Lv, Q.; Deveerasetty, K.K. A Review of V-SLAM. In Proceedings of the 2018 IEEE International Conference on Information and Automation (ICIA), Fujian, China, 11–13 August 2018; pp. 603–608.
8. Petricca, L.; Per, O.; Christopher, G. Micro- and nano-air vehicles: State of the art. Int. J. Aerosp. Eng. 2011, 2011, 214549.
9. Keennon, M.; Klingebiel, K.; Won, H. Development of the nano hummingbird: A tailless flapping wing micro air vehicle. In Proceedings of the 50th AIAA Aerospace Sciences Meeting Including the New Horizons Forum and Aerospace Exposition, Nashville, TN, USA, 9–12 January 2012; p. 588.
10. Kitt, B.M.; Rehder, J.; Chambers, A.D. Monocular Visual Odometry Using a Planar Road Model to Solve Scale Ambiguity. In Proceedings of the European Conference on Mobile Robots, Örebro, Sweden, 7–9 September 2011; Örebro University: Örebro, Sweden, 2011; pp. 43–48.
11. Yu, Y.; Pradalier, C.; Zong, G. Appearance-based monocular visual odometry for ground vehicles. In Proceedings of the IEEE/ASME International Conference on Advanced Intelligent Mechatronics, Piscataway, NJ, USA, 3–7 July 2011; pp. 862–867.
12. Howard, A. Real-time stereo visual odometry for autonomous ground vehicles. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Nice, France, 22–26 September 2008.
13. Azartash, H.; Banai, N.; Nguyen, T.Q. An integrated stereo visual odometry for robotic navigation. Robot. Auton. Syst. 2014, 62, 414–421.
14. Fabian, J.R.; Clayton, G.M. Adaptive visual odometry using RGB-D cameras. In Proceedings of the International Conference on Advanced Intelligent Mechatronics, Besançon, France, 8–11 July 2014; pp. 1533–1538.
15. Huang, A.S.; Bachrach, A.; Henry, P. Visual odometry and mapping for autonomous flight using an RGB-D camera. In Robotics Research; Springer: Cham, Switzerland, 2017; pp. 235–252.
16. Fang, Z.; Zhang, Y. Experimental evaluation of RGB-D visual odometry methods. Int. J. Adv. Robot. Syst. 2015, 12, 26.
17. Mur-Artal, R.; Tardós, J.D. ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras. IEEE Trans. Robot. 2017, 33, 1255–1262.
18. Engel, J.; Vladlen, K.; Daniel, C. Direct sparse odometry. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 40, 611–625.
19. Taketomi, T.; Hideaki, U.; Sei, I. Visual SLAM algorithms: A survey from 2010 to 2016. IPSJ Trans. Comput. Vis. Appl. 2017, 9, 1–11.
20. Sada, M.M.; Mahesh, M.G. Image Deblurring Techniques—A Detail Review. Int. J. Sci. Res. Sci. Eng. Technol. 2018, 4, 176–188.
21. Schubert, S. Map Enhancement with Track-Loss Data in Visual SLAM. In Proceedings of the International Conference on Intelligent Robots and Systems (IROS), Daejeon, Korea, 9–14 October 2016.
22. Rublee, V.E.; Rabaud, K.K.; Bradski, G. ORB: An efficient alternative to SIFT or SURF. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 2564–2571.
23. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
24. Rosten, E.; Drummond, T. Machine learning for high speed corner detection. In Computer Vision–ECCV 2006; Lecture Notes in Computer Science; Leonardis, A., Bischof, H., Pinz, A., Eds.; Springer: Berlin/Heidelberg, Germany, 2006; Volume 3951, pp. 430–443.
25. Harris, C.; Stephens, M. A combined corner and edge detection. In Proceedings of the Fourth Alvey Vision Conference, University of Manchester, Manchester, UK, 31 August–2 September 1988; pp. 147–151.
26. Bay, H.; Tuytelaars, T.; Van Gool, L. SURF: Speeded up robust features. In Computer Vision–ECCV 2006; Leonardis, A., Bischof, H., Pinz, A., Eds.; Springer: Berlin/Heidelberg, Germany, 2006; pp. 404–417.
27. Servières, M. Visual and Visual-Inertial SLAM: State of the Art, Classification, and Experimental Benchmarking. J. Sens. 2021, 2021, 2054828.
28. Pretto, A.; Emanuele, M.; Enrico, P. Reliable features matching for humanoid robots. In Proceedings of the 2007 7th IEEE-RAS International Conference on Humanoid Robots, Pittsburgh, PA, USA, 29 November–1 December 2007.
29. Klein, G.; David, M. Improving the agility of keyframe-based SLAM. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2008; pp. 802–815.
30. Pumarola, A.V. PL-SLAM: Real-time monocular visual SLAM with points and lines. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 4503–4508.
31. Vankawala, F.; Amit, G.; Amit, P. A survey on different image deblurring techniques. Int. J. Comput. Appl. 2015, 116, 15–18.
32. Wen, F. A simple local minimal intensity prior and an improved algorithm for blind image deblurring. IEEE Trans. Circuits Syst. Video Technol. 2021, 31, 2923–2937.
33. Tao, X. Scale-recurrent network for deep image deblurring. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018.
34. Ye, M.; Dong, L.; Gengsheng, C. Scale-iterative upscaling network for image deblurring. IEEE Access 2020, 8, 18316–18325.
35. Yan, Y. Image deblurring via extreme channels prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017.
36. Bai, Y. Graph-based blind image deblurring from a single photograph. IEEE Trans. Image Process. 2018, 28, 1404–1418.
37. Fish, D.A. Blind deconvolution by means of the Richardson–Lucy algorithm. JOSA A 1995, 12, 58–65.
38. Kundur, D.; Hatzinakos, D. Blind image deconvolution. IEEE Signal Process. Mag. 1996, 13, 43–64.
39. Tukey, J.W. The Extrapolation, Interpolation and Smoothing of Stationary Time Series with Engineering Applications; JSTOR: New York, NY, USA, 1952; pp. 319–321.
40. Burri, M. The EuRoC micro aerial vehicle datasets. Int. J. Robot. Res. 2016, 35, 1157–1163.
41. Geiger, A. Vision meets robotics: The KITTI dataset. Int. J. Robot. Res. 2013, 32, 1231–1237.
42. Pertuz, S.; Domenec, P.; Miguel, A.G. Analysis of focus measure operators for shape-from-focus. Pattern Recognit. 2013, 46, 1415–1432.
43. Juneja, M.; Sandhu, P.S. Performance evaluation of edge detection techniques for images in spatial domain. Int. J. Comput. Theory Eng. 2009, 1, 614–621.
Figure 1. Some examples of micro/nano-UAVs. (a) Blackhornet [8]. Reprinted with permission [8], Luca Petricca et al. Micro-and nano-air vehicles: State of the art. (2011) ©2011, Int. Journal of aerospace eng. (b) Nano hummingbird [9] Image courtesy of AeroVironment, Inc.
Figure 2. Chronological classification of the main vSLAM methods [27].
Figure 3. Proposed Framework for motion blur robust vSLAM.
Figure 4. Experimental area.
Figure 5. Focus measure score vs. frame number.
Figure 6. Focus measure score < 10 vs. frame number.
Figure 7. PSNR (a) and SSIM (b) graphs of selected algorithms.
Figure 8. Focus measure scores of selected algorithms.
Figure 9. ORB-SLAM2 algorithm results after fast rotational movement. (a) Tracking loss in original ORB-SLAM2; (b) successful tracking (121 matches) in proposed framework (PMP—20 fps); (c) successful tracking (60 matches) in proposed framework (L&R—5 fps).
Table 1. Average ΔFMS value of selected motion-deblurring algorithms.
Deblurring Method | Average ΔFMS (Pixel Intensity)
PMP | 74.53
ECP | 51.23
Wiener | 39.96
L&R | 37.83
BD | 33.82
GB | 32.18
SRN | 2.49
SIUN | 2.18
Table 2. Tracking performance of the ORB-SLAM2 algorithm on the dataset with restored images.
ORB-SLAM2 | 5 fps | 10 fps | 20 fps | Tracking Score
PMP | 🗸 | 🗸 | 🗸 | 3
Wiener | 🗸 | 🗸 | 🗸 | 3
L&R | 🗸 | 🗸 | X | 2
ECP | 🗸 | 🗸 | X | 2
BD | 🗸 | X | X | 1
SIUN | 🗸 | X | X | 1
SRN | 🗸 | X | X | 1
GB | X | X | X | 0
Table 3. Tracking performance of the DSO algorithm on the dataset with restored images.
DSO | 5 fps | 10 fps | 20 fps | Tracking Score
PMP | 🗸 | 🗸 | 🗸 | 3
Wiener | 🗸 | 🗸 | X | 2
L&R | 🗸 | 🗸 | X | 2
ECP | 🗸 | X | X | 1
BD | X | X | X | 0
SIUN | X | X | X | 0
SRN | X | X | X | 0
GB | X | X | X | 0
Table 4. Comparison of the proposed framework with existing frameworks.
Modules | Feature-Based Method | Direct Method | Proposed Framework
Initialization | Applicable | Applicable | Applicable
Tracking | Tracking loss likely under motion blur | Tracking loss likely under motion blur | More resistant tracking under motion blur
Mapping | Mapping may stop under motion blur | Mapping may stop under motion blur | More resistant mapping under motion blur
Blur Detection | Not applicable | Not applicable | Applicable
Motion Blur Reduction/Elimination | Not applicable | Not applicable | Applicable
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
