DKB-SLAM: Dynamic RGB-D Visual SLAM with Efficient Keyframe Selection and Local Bundle Adjustment
Abstract
1. Introduction
- A Novel Hybrid Dynamic Feature Filtering Mechanism: We propose a lightweight yet robust pipeline that enables a robot to navigate reliably amidst dynamic obstacles. The system uses YOLO to quickly identify potential moving objects and then applies a combination of optical flow and statistical depth analysis. This hybrid approach efficiently removes unstable feature points caused by motion while preserving the static background, achieving a superior balance between localization accuracy and the real-time performance required for robotic platforms.
- An Adaptive, Multi-Criteria Keyframe Selection Strategy: To handle a robot’s varied motion patterns, we introduce a sophisticated keyframe selection strategy that holistically evaluates frame quality based on the robot’s motion history. It goes beyond simple parallax to incorporate map point visibility and matching quality, ensuring that selected keyframes are information-rich and geometrically stable. This prevents map degradation and improves tracking robustness, whether the robot is moving quickly, slowly, or turning.
- A Geometry-Aware Local Bundle Adjustment (BA) Scheme: We enhance the backend optimization to better leverage the structure of typical robotic operating environments. The method classifies map points as ‘planar’ or ‘edge’ categories and assigns higher weights to the more geometrically informative edge points. This geometry-aware approach improves the robot’s pose accuracy and stability, especially in structured indoor settings, by prioritizing more reliable environmental constraints.
- An Integrated and Robust RGB-D SLAM System for Robotic Applications: We integrate these modules into a coherent system, DKB-SLAM, designed for practical robotic use. The synergy between dynamic object handling, intelligent keyframe selection, and refined optimization results in superior accuracy and robustness. We validate this performance extensively on the public TUM RGB-D benchmark and, critically, on a mobile robot platform operating in challenging real-world, high-dynamic scenarios.
2. Related Works
2.1. Visual SLAM in Dynamic Environments
- (1) Methods based on geometric information
- (2) Methods based on semantic analysis
- (3) Methods based on clustering
2.2. Keyframe Selection
- (1) Motion-based methods
- (2) Appearance-based methods
2.3. Backend Optimization
- (1) Filter-based methods
- (2) Optimization-based methods
2.4. Summary
- Dynamic point recognition methods based on geometry or clustering perform well in low-dynamic environments, but they are prone to misidentification when confronted with complex dynamic objects and scenes. Methods based on semantic analysis, by contrast, perform strongly in both low- and high-dynamic scenes; however, running them in real time on devices with limited computational resources remains difficult.
- Keyframe selection methods based on fixed time or spatial intervals lack flexibility and are prone to redundancy or the loss of critical information. Motion-based, deep learning-based, and appearance-based methods have been proposed to address these issues. Among them, methods that exploit parallax and tracking quality can significantly improve selection accuracy, and deep learning-based methods excel at handling complex features but often suffer from poor real-time performance. Appearance-based methods can improve accuracy in specific situations, yet they are highly sensitive to external factors such as lighting changes.
- Filter-based methods offer high computational efficiency, but the nonlinear nature of the SLAM problem introduces linearization errors, and their recursive structure allows those errors to accumulate locally. Optimization-based methods provide more accurate state estimates; however, under limited resources, current optimization techniques often sacrifice some accuracy and discard landmarks to preserve real-time performance. Moreover, effectively exploiting environmental information to further strengthen the optimization remains an open challenge.
3. System Overview
3.1. Dynamic Point Recognition and Elimination
- (1) Optical flow, which rests on three standard assumptions:
- Constant brightness: The brightness of a given pixel remains unchanged between two consecutive frames.
- Small motion: Pixels move a small distance between two consecutive frames.
- Spatial consistency: Pixels within a local region exhibit similar motion patterns.
- (2) Gaussian-based depth distribution analysis (a combined sketch of both checks follows this list)
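To make the hybrid check concrete, the following Python/OpenCV sketch combines both steps for the features that fall inside YOLO detection boxes. It is an illustrative reconstruction rather than the paper's implementation: the `flow_ratio` and `z_thresh` thresholds, the median-background-flow heuristic, and the assumption that the Gaussian fitted to a box's depths captures the foreground body are our own choices.

```python
import cv2
import numpy as np

def filter_dynamic_points(prev_gray, cur_gray, depth, pts, boxes,
                          flow_ratio=2.0, z_thresh=2.0):
    """Return a boolean mask over pts marking points judged static.

    pts   : (N, 2) float32 pixel coordinates in the current frame
    boxes : list of (x1, y1, x2, y2) YOLO detections of movable classes
    depth : current depth image in meters
    """
    n = len(pts)
    # Step 1 -- Lucas-Kanade optical flow from the previous frame.
    p0 = pts.reshape(-1, 1, 2).astype(np.float32)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, p0, None)
    flow = np.linalg.norm((p1 - p0).reshape(-1, 2), axis=1)
    tracked = status.ravel() == 1

    # Which points fall inside a detected (potentially dynamic) box?
    in_box = np.zeros(n, dtype=bool)
    for x1, y1, x2, y2 in boxes:
        in_box |= ((pts[:, 0] >= x1) & (pts[:, 0] <= x2) &
                   (pts[:, 1] >= y1) & (pts[:, 1] <= y2))

    # Step 2 -- flow consistency: boxed points whose flow magnitude deviates
    # strongly from the median background flow are treated as moving.
    bg = flow[tracked & ~in_box]
    bg_med = np.median(bg) if bg.size else 0.0
    flow_dyn = in_box & tracked & (flow > flow_ratio * (bg_med + 1.0))

    # Step 3 -- Gaussian depth check: fit a Gaussian to the valid depths in
    # each box; points near its mean (the foreground body) are dynamic,
    # points far behind it belong to the static background and are kept.
    depth_dyn = np.zeros(n, dtype=bool)
    for x1, y1, x2, y2 in boxes:
        roi = depth[int(y1):int(y2), int(x1):int(x2)].ravel()
        roi = roi[roi > 0]
        if roi.size < 20:
            continue
        mu, sigma = roi.mean(), roi.std() + 1e-6
        sel = ((pts[:, 0] >= x1) & (pts[:, 0] <= x2) &
               (pts[:, 1] >= y1) & (pts[:, 1] <= y2))
        d = depth[pts[sel, 1].astype(int), pts[sel, 0].astype(int)]
        depth_dyn[np.flatnonzero(sel)] = np.abs(d - mu) / sigma < z_thresh

    return tracked & ~(flow_dyn | depth_dyn)
```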
3.2. Keyframe Selection Based on Parallax, Visibility, and Match Quality
- (1) Parallax-based current frame preprocessing
- (2) Keyframe judgment based on visibility and match quality (see the sketch below)
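A condensed sketch of the resulting decision rule is given below. The section outline fixes only the three criteria (parallax, visibility, match quality); the thresholds and the exact combination logic here are illustrative assumptions.

```python
import numpy as np

def is_keyframe(disp_px, n_map_pts, n_visible, n_inliers,
                min_parallax=15.0, min_visible_ratio=0.3, min_inliers=50):
    """Multi-criteria keyframe test.

    disp_px   : per-match pixel displacements w.r.t. the last keyframe
    n_map_pts : number of local map points
    n_visible : how many of them are re-observed in the current frame
    n_inliers : inlier feature matches after geometric verification
    """
    # (1) Parallax preprocessing: skip frames with negligible motion,
    # which would only add redundancy to the map.
    parallax = float(np.mean(disp_px)) if len(disp_px) else 0.0
    if parallax < min_parallax:
        return False

    # (2) Visibility and match quality: insert a keyframe once the view
    # has drifted away from the local map (few map points re-observed),
    # provided tracking is still reliable (enough verified matches).
    visible_ratio = n_visible / max(n_map_pts, 1)
    return visible_ratio < min_visible_ratio and n_inliers >= min_inliers

# Example: strong motion, only 25% of local map points still visible,
# 120 verified matches -> insert a keyframe.
print(is_keyframe(np.full(200, 22.0), 400, 100, 120))  # True
```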
3.3. Local Bundle Adjustment Optimization with Heterogeneous Weighting of Map Point Geometry
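One way to realize this weighting is sketched below. The planar/edge classification via local PCA surface variation and the specific thresholds are our illustrative assumptions; what the code expresses is the section's essential idea of scaling each point's reprojection residual by a weight w, larger for edge points.

```python
import numpy as np

def classify_and_weight(points, k=10, curv_thresh=0.05, w_edge=1.5):
    """Label map points 'planar' or 'edge' from local geometry and return
    per-point weights for the local BA reprojection residuals.

    points : (N, 3) map point positions
    w_edge : weight for edge points (the sweep in the table of Section 4
             is minimized near 1.5); planar points keep weight 1.0
    """
    weights = np.ones(len(points))
    for i, p in enumerate(points):
        # k nearest neighbours of p (brute force for clarity).
        d = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(d)[1:k + 1]]
        # Surface variation: smallest eigenvalue of the local covariance
        # relative to the total variance. Low values -> locally planar.
        cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
        ev = np.sort(np.linalg.eigvalsh(cov))
        curvature = ev[0] / max(ev.sum(), 1e-12)
        if curvature > curv_thresh:  # not well explained by a plane -> edge
            weights[i] = w_edge
    return weights

# In the local BA, each reprojection residual r_ij of point i is then
# scaled so that the cost becomes sum_ij w_i * ||r_ij||^2, i.e. edge
# points contribute stronger constraints to the pose estimate.
```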
4. Experimental Validation
4.1. Evaluation on the TUM RGB-D Dataset
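The tables in this section report errors as RMSE and Mean. For reference, the standard TUM absolute trajectory error (ATE) computation, rigid Horn/Umeyama alignment of the estimated trajectory to the ground truth followed by per-pose position errors, can be sketched in a few lines of NumPy (scale fixed to 1, as appropriate for RGB-D):

```python
import numpy as np

def ate(gt, est):
    """Absolute trajectory error after rigid alignment.

    gt, est : (N, 3) timestamp-associated ground-truth and estimated
              camera positions. Returns (RMSE, Mean) in meters, the two
              statistics reported in the tables below.
    """
    mu_g, mu_e = gt.mean(axis=0), est.mean(axis=0)
    H = (est - mu_e).T @ (gt - mu_g)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                        # best-fit rotation est -> gt
    t = mu_g - R @ mu_e
    err = np.linalg.norm(est @ R.T + t - gt, axis=1)
    return np.sqrt(np.mean(err ** 2)), err.mean()
```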
4.2. Validation on Real-World Scenarios
4.3. Timing Analysis
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Mur-Artal, R.; Tardós, J.D. ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras. IEEE Trans. Robot. 2017, 33, 1255–1262.
- Sahoo, B.; Biglarbegian, M.; Melek, W. Monocular Visual Inertial Direct SLAM with Robust Scale Estimation for Ground Robots/Vehicles. Robotics 2021, 10, 23.
- Klein, G.; Murray, D. Parallel tracking and mapping for small AR workspaces. In Proceedings of the 6th IEEE/ACM International Symposium on Mixed and Augmented Reality, Nara, Japan, 13–16 November 2007; pp. 225–234.
- Qin, T.; Li, P.; Shen, S. VINS-Mono: A robust and versatile monocular visual-inertial state estimator. IEEE Trans. Robot. 2018, 34, 1004–1020.
- Engel, J.; Schöps, T.; Cremers, D. LSD-SLAM: Large-scale direct monocular SLAM. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; pp. 834–849.
- Mahmoud, A.; Atia, M. Improved Visual SLAM Using Semantic Segmentation and Layout Estimation. Robotics 2022, 11, 91.
- Forster, C.; Pizzoli, M.; Scaramuzza, D. SVO: Fast semi-direct monocular visual odometry. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–7 June 2014; pp. 15–22.
- Tourani, A.; Bavle, H.; Avşar, D.I.; Sanchez-Lopez, J.L.; Munoz-Salinas, R.; Voos, H. Vision-Based Situational Graphs Exploiting Fiducial Markers for the Integration of Semantic Entities. Robotics 2024, 13, 106.
- Geneva, P.; Eckenhoff, K.; Lee, W.; Yang, Y.; Huang, G. OpenVINS: A research platform for visual-inertial estimation. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 4666–4672.
- Yang, S.; Fan, G.H.; Bai, L.L.; Zhao, C.; Li, D. Geometric constraint-based visual SLAM under dynamic indoor environment. Comput. Eng. Appl. 2021, 57, 203–212.
- Zou, D.; Tan, P. CoSLAM: Collaborative visual SLAM in dynamic environments. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 35, 354–366.
- Kim, D.H.; Kim, J.H. Effective background model-based RGB-D dense visual odometry in a dynamic environment. IEEE Trans. Robot. 2016, 32, 1565–1573.
- Du, Z.J.; Huang, S.S.; Mu, T.J.; Zhao, Q.; Martin, R.R.; Xu, K. Accurate dynamic SLAM using CRF-based long-term consistency. IEEE Trans. Vis. Comput. Graph. 2020, 28, 1745–1757.
- Bescos, B.; Fácil, J.M.; Civera, J.; Neira, J. DynaSLAM: Tracking, mapping, and inpainting in dynamic scenes. IEEE Robot. Autom. Lett. 2018, 3, 4076–4083.
- Cheng, S.; Sun, C.; Zhang, S.; Zhang, D. SG-SLAM: A real-time RGB-D visual SLAM toward dynamic scenes with semantic and geometric information. IEEE Trans. Instrum. Meas. 2022, 72, 1–12.
- Pirker, K.; Rüther, M.; Bischof, H. CD SLAM: Continuous localization and mapping in a dynamic world. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, San Francisco, CA, USA, 25–30 September 2011; pp. 3990–3997.
- Wang, Y.; Tian, Y.; Chen, J.; Chen, C.; Xu, K.; Ding, X. MSSD-SLAM: Multi-feature semantic RGB-D inertial SLAM with structural regularity for dynamic environments. IEEE Trans. Instrum. Meas. 2024, 74, 5003517.
- Fan, J.; Ning, Y.; Wang, J.; Jia, X.; Chai, D.; Wang, X.; Xu, Y. EMS-SLAM: Dynamic RGB-D SLAM with semantic-geometric constraints for GNSS-denied environments. Remote Sens. 2025, 17, 1691.
- Campos, C.; Elvira, R.; Rodríguez, J.J.G.; Montiel, J.M.M.; Tardós, J.D. ORB-SLAM3: An accurate open-source library for visual, visual-inertial, and multimap SLAM. IEEE Trans. Robot. 2021, 37, 1874–1890.
- Leutenegger, S.; Furgale, P.; Rabaud, V.; Chli, M.; Konolige, K.; Siegwart, R. Keyframe-based visual-inertial SLAM using nonlinear optimization. In Proceedings of the Robotics: Science and Systems (RSS), Berlin, Germany, 24–28 June 2013.
- Li, P.; Qin, T.; Hu, B.; Zhu, F.; Shen, S. Monocular visual-inertial state estimation for mobile augmented reality. In Proceedings of the IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Nantes, France, 9–13 October 2017; pp. 11–21.
- Liu, H.; Chen, M.; Zhang, G.; Bao, H.; Bao, Y. ICE-BA: Incremental, consistent and efficient bundle adjustment for visual-inertial SLAM. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 1974–1982.
- Hu, Z.; Zhao, J.; Luo, Y.; Ou, J. Semantic SLAM based on improved DeepLabv3+ in dynamic scenarios. IEEE Access 2022, 10, 21160–21168.
- He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2961–2969.
- Yu, C.; Liu, Z.; Liu, X.; Xie, F.; Yang, Y.; Wei, Q.; Fei, Q. DS-SLAM: A semantic visual SLAM towards dynamic environments. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 1168–1174.
- Zhang, L.; Wei, L.; Shen, P.; Wei, W.; Zhu, G.; Song, J. Semantic SLAM based on object detection and improved octomap. IEEE Access 2018, 6, 75545–75559.
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788.
- Yang, D.; Bi, S.; Wang, W.; Yuan, C.; Qi, X.; Cai, Y. DRE-SLAM: Dynamic RGB-D encoder SLAM for a differential-drive robot. Remote Sens. 2019, 11, 380.
- Huang, J.; Yang, S.; Zhao, Z.; Lai, Y.; Hu, S.M. ClusterSLAM: A SLAM backend for simultaneous rigid body clustering and motion estimation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 5875–5884.
- Huang, J.; Yang, S.; Mu, T.J.; Hu, S.M. ClusterVO: Clustering moving instances and estimating visual odometry for self and surroundings. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 2168–2177.
- Yuan, X.; Chen, S. SAD-SLAM: A visual SLAM based on semantic and depth information. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 24 October 2020–24 January 2021; pp. 4930–4935.
- Dong, Z.; Zhang, G.; Jia, J.; Bao, H. Keyframe-based real-time camera tracking. In Proceedings of the IEEE 12th International Conference on Computer Vision, Kyoto, Japan, 29 September–2 October 2009; pp. 1538–1545.
- Hsiao, M.; Westman, E.; Zhang, G.; Kaess, M. Keyframe-based dense planar SLAM. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 5110–5117.
- Endres, F.; Hess, J.; Sturm, J.; Cremers, D.; Burgard, W. 3-D mapping with an RGB-D camera. IEEE Trans. Robot. 2013, 30, 177–187.
- Engel, J.; Koltun, V.; Cremers, D. Direct sparse odometry. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 40, 611–625.
- Scaramuzza, D.; Siegwart, R. Appearance-guided monocular omnidirectional visual odometry for outdoor ground vehicles. IEEE Trans. Robot. 2008, 24, 1015–1026.
- Zhang, A.M.; Kleeman, L. Robust appearance based visual route following for navigation in large-scale outdoor environments. Int. J. Rob. Res. 2009, 28, 331–356.
- Nourani-Vatani, N.; Borges, P.V.K. Correlation-based visual odometry for ground vehicles. J. Field Robot. 2011, 28, 742–768.
- Huang, G.P.; Mourikis, A.I.; Roumeliotis, S.I. A first-estimates Jacobian EKF for improving SLAM consistency. In Proceedings of the Experimental Robotics: The 11th International Symposium, Athens, Greece, 14–18 July 2008; pp. 373–382.
- Montemerlo, M.; Thrun, S.; Koller, D.; Wegbreit, B. FastSLAM: A factored solution to the simultaneous localization and mapping problem. In Proceedings of the AAAI National Conference on Artificial Intelligence, Edmonton, AB, Canada, 28 July–1 August 2002.
- Montemerlo, M.; Thrun, S.; Koller, D.; Wegbreit, B. FastSLAM 2.0: An improved particle filtering algorithm for simultaneous localization and mapping that provably converges. In Proceedings of the International Joint Conference on Artificial Intelligence, Acapulco, Mexico, 9–15 August 2003; pp. 1151–1156.
- Eraghi, H.E.; Taban, M.R.; Bahreinian, S.F. Improved unscented Kalman filter algorithm to increase the SLAM accuracy. In Proceedings of the 9th International Conference on Control, Instrumentation and Automation (ICCIA), Kavar, Iran, 27–28 December 2023; pp. 1–5.
- Chandra, K.P.B.; Gu, D.W.; Postlethwaite, I. Cubature Kalman filter based localization and mapping. IFAC Proc. Vol. 2011, 44, 2121–2125.
- Servières, M.; Renaudin, V.; Dupuis, A.; Antigny, N. Visual and visual-inertial SLAM: State of the art, classification, and experimental benchmarking. J. Sens. 2021, 2021, 2054828.
- Leutenegger, S. OKVIS2: Realtime scalable visual-inertial SLAM with loop closure. arXiv 2022, arXiv:2202.09199.
- Qian, S.; Xu, Z.; Liu, W.; Zou, J.Z.; Chen, H. Visual simultaneous localization and mapping algorithm for dim dynamic scenes. Exp. Technol. Manag. 2024, 41, 16–25.
- Azimi, A.; Ahmadabadian, A.H.; Remondino, F. PKS: A photogrammetric key-frame selection method for visual-inertial systems built on ORB-SLAM3. ISPRS J. Photogramm. Remote Sens. 2022, 191, 18–32.
- Szeliski, R. Computer Vision: Algorithms and Applications, 2nd ed.; Springer: London, UK, 2022.
Localization error (ATE RMSE, m) for different edge point weights w:

| Sequence | w = 1.1 | w = 1.2 | w = 1.3 | w = 1.4 | w = 1.5 | w = 1.6 | w = 1.7 | w = 1.8 | w = 1.9 |
|---|---|---|---|---|---|---|---|---|---|
| walk-rpy | 0.0360 | 0.0342 | 0.0315 | 0.0301 | 0.0293 | 0.0302 | 0.0321 | 0.0348 | 0.0365 |
| walk-static | 0.0072 | 0.0069 | 0.0065 | 0.0058 | 0.0059 | 0.0063 | 0.0066 | 0.0070 | 0.0073 |
| walk-xyz | 0.0138 | 0.0135 | 0.0128 | 0.0123 | 0.0120 | 0.0124 | 0.0129 | 0.0136 | 0.0140 |
| sitting-static | 0.0065 | 0.0062 | 0.0058 | 0.0056 | 0.0053 | 0.0057 | 0.0060 | 0.0063 | 0.0066 |
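Across all four sequences, the error reaches its minimum at an edge point weight of roughly w = 1.4–1.5 and rises toward both ends of the sweep, supporting a moderately higher weight for edge points rather than an extreme one.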
| Sequence | ORB-SLAM3 RMSE (m) | ORB-SLAM3 Mean (m) | DKB-SLAM-BA RMSE (m) | DKB-SLAM-BA Mean (m) | DKB-SLAM-KF RMSE (m) | DKB-SLAM-KF Mean (m) | DKB-SLAM-BA-KF RMSE (m) | DKB-SLAM-BA-KF Mean (m) |
|---|---|---|---|---|---|---|---|---|
| fr1/360 | 0.1315 | 0.1185 | 0.0947 | 0.0880 | 0.1172 | 0.1046 | 0.0865 | 0.0772 |
| fr1/room | 0.0823 | 0.0728 | 0.0664 | 0.0580 | 0.0568 | 0.0492 | 0.0522 | 0.0462 |
| fr2/large_no_loop | 0.2928 | 0.2638 | 0.1695 | 0.1588 | 0.2238 | 0.2047 | 0.1064 | 0.0940 |
| fr2/large_with_loop | 0.1841 | 0.1724 | 0.0884 | 0.0796 | 0.0929 | 0.0834 | 0.0833 | 0.0738 |
| fr2/pioneer_360 | 0.0968 | 0.0831 | 0.0872 | 0.0666 | 0.0867 | 0.0744 | 0.0832 | 0.0719 |
| Sequence | ORB-SLAM3 RMSE (°) | ORB-SLAM3 Mean (°) | DKB-SLAM-BA RMSE (°) | DKB-SLAM-BA Mean (°) | DKB-SLAM-KF RMSE (°) | DKB-SLAM-KF Mean (°) | DKB-SLAM-BA-KF RMSE (°) | DKB-SLAM-BA-KF Mean (°) |
|---|---|---|---|---|---|---|---|---|
| fr1/360 | 0.2494 | 0.2401 | 0.2051 | 0.1922 | 0.2921 | 0.2858 | 0.1940 | 0.1893 |
| fr1/room | 0.0997 | 0.0898 | 0.0839 | 0.0766 | 0.0681 | 0.0638 | 0.0686 | 0.0588 |
| fr2/large_no_loop | 0.2533 | 0.2399 | 0.2443 | 0.2424 | 0.2037 | 0.1905 | 0.1276 | 0.1236 |
| fr2/large_with_loop | 0.1346 | 0.1310 | 0.0852 | 0.0839 | 0.0851 | 0.0836 | 0.0711 | 0.0690 |
| fr2/pioneer_360 | 0.1079 | 0.0961 | 0.0868 | 0.0799 | 0.0878 | 0.0832 | 0.0887 | 0.0846 |
All entries are RMSE (m); "–" denotes no reported result.

| Sequence | ORB-SLAM3 | MSSD-SLAM | DRG-SLAM | Crowd-SLAM | SG-SLAM | DS-SLAM | DKB-SLAM-DR | DKB-SLAM-KF-DR | DKB-SLAM-BA-DR | DKB-SLAM |
|---|---|---|---|---|---|---|---|---|---|---|
| walk-rpy | 0.1533 | 0.0328 | 0.0424 | – | 0.0367 | 0.4462 | 0.0386 | 0.0322 | 0.0386 | 0.0293 |
| walk-half | 0.3265 | 0.0173 | 0.0258 | 0.0539 | 0.0203 | 0.0315 | 0.0196 | 0.0176 | 0.0236 | 0.0155 |
| walk-static | 0.0237 | 0.0179 | 0.0111 | 0.0071 | 0.0078 | 0.0070 | 0.0109 | 0.0061 | 0.0064 | 0.0061 |
| walk-xyz | 0.2801 | 0.0136 | 0.0217 | 0.0165 | 0.0154 | 0.0323 | 0.0144 | 0.0144 | 0.0139 | 0.0125 |
| sitting-half | 0.0221 | 0.0354 | 0.0734 | 0.0247 | 0.0209 | 0.0151 | 0.0240 | 0.0216 | 0.0185 | 0.0176 |
| sitting-static | 0.0074 | 0.0066 | 0.0072 | 0.0132 | 0.0077 | 0.0116 | 0.0065 | 0.0062 | 0.0057 | 0.0055 |
| Sequence | ORB-SLAM3 RMSE (°) | ORB-SLAM3 Mean (°) | Crowd-SLAM RMSE (°) | Crowd-SLAM Mean (°) | SG-SLAM RMSE (°) | SG-SLAM Mean (°) | DKB-SLAM-DR RMSE (°) | DKB-SLAM-DR Mean (°) | DKB-SLAM-KF-DR RMSE (°) | DKB-SLAM-KF-DR Mean (°) | DKB-SLAM-BA-DR RMSE (°) | DKB-SLAM-BA-DR Mean (°) | DKB-SLAM RMSE (°) | DKB-SLAM Mean (°) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| walk-rpy | 2.5574 | 2.5574 | – | – | 0.0982 | 0.0972 | 0.1007 | 0.0998 | 0.0931 | 0.0922 | 0.1144 | 0.1138 | 0.0934 | 0.0929 |
| walk-half | 1.1078 | 1.0991 | 0.0345 | 0.0334 | 0.0235 | 0.0216 | 0.0226 | 0.0212 | 0.0260 | 0.0235 | 0.0337 | 0.0314 | 0.0152 | 0.0137 |
| walk-static | 0.4913 | 0.4908 | 0.1612 | 0.1612 | 0.2682 | 0.2682 | 0.2087 | 0.2086 | 0.1505 | 0.1505 | 0.1246 | 0.1246 | 0.1167 | 0.1166 |
| walk-xyz | 1.3309 | 1.3198 | 0.0202 | 0.0191 | 0.0146 | 0.0120 | 0.0180 | 0.0617 | 0.0184 | 0.0170 | 0.0152 | 0.0130 | 0.0144 | 0.0127 |
| sitting-half | 0.0269 | 0.0254 | 0.0671 | 0.0666 | 0.0282 | 0.0276 | 0.0287 | 0.0269 | 0.0225 | 0.0204 | 0.0187 | 0.0171 | 0.0152 | 0.0135 |
| sitting-static | 0.1055 | 0.1054 | 0.1019 | 0.1019 | 0.1366 | 0.1366 | 0.0796 | 0.0795 | 0.0893 | 0.0893 | 0.0912 | 0.0911 | 0.0918 | 0.0917 |
| System | Average Processing Time per Frame (ms) |
|---|---|
| ORB-SLAM3 | 18.18 |
| DS-SLAM | 87.84 |
| DynaSLAM | 240.91 |
| Crowd-SLAM | 27.68 |
| SG-SLAM | 35.72 |
| DKB-SLAM (Proposed) | 30.97 |
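At an average of 30.97 ms per frame (roughly 32 FPS), DKB-SLAM remains real-time: it is several times faster than DS-SLAM (87.84 ms) and DynaSLAM (240.91 ms), comparable to Crowd-SLAM (27.68 ms) and SG-SLAM (35.72 ms), and adds only a moderate overhead over ORB-SLAM3 (18.18 ms), which performs no dynamic-point handling.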