A Motion Segmentation Dynamic SLAM for Indoor GNSS-Denied Environments
Abstract
1. Introduction
- (1) This study introduces OS-SLAM, a dynamic-environment SLAM system built on optical-flow motion segmentation. The system combines motion segmentation with instance segmentation, enabling accurate segmentation of non-rigid dynamic objects and dense reconstruction of the static scene.
- (2) To address the difficulty of extracting reliable features from dynamic objects and the imprecision of long-range motion estimation, we introduce a multi-scale optical flow network consisting of a multi-scale feature extraction module and a multi-scale adaptive update module. The design improves the accuracy of long-range motion estimation for moving objects while keeping computation efficient (an illustrative sketch of such an extractor follows this list).
- (3) To reduce the impact of non-rigid motion on segmentation accuracy, we present a segmentation framework that incorporates motion semantics. It comprises a feature pyramid aggregator and a separable dynamic decoder for panoramic kernel generation, and applies multi-head cross attention implemented with separable dynamic convolution to separate non-rigid moving objects from the static background, improving the robustness of the SLAM system in dynamic scenes.
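Since the network itself is only summarized above, the following is a minimal, illustrative PyTorch sketch of what a U-Net-style multi-scale feature extractor of this kind could look like; the channel widths, scales, and layer choices are assumptions made for illustration, not the authors' published architecture.

```python
# Hypothetical sketch of a U-Net-style multi-scale feature extractor.
# Channel widths, scales, and layer choices are assumptions, not the
# authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch: int, out_ch: int, stride: int = 1) -> nn.Sequential:
    """3x3 convolution + instance norm + ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class MultiScaleFeatureExtractor(nn.Module):
    """Encodes an image into feature maps at 1/4, 1/8, and 1/16 resolution,
    then fuses them top-down so each scale also sees coarser context."""

    def __init__(self, in_ch: int = 3, dim: int = 128):
        super().__init__()
        self.enc4 = nn.Sequential(conv_block(in_ch, 64, 2), conv_block(64, 64, 2))  # 1/4
        self.enc8 = conv_block(64, 96, 2)                                            # 1/8
        self.enc16 = conv_block(96, 128, 2)                                          # 1/16
        self.out4 = nn.Conv2d(64 + 96, dim, 1)
        self.out8 = nn.Conv2d(96 + 128, dim, 1)
        self.out16 = nn.Conv2d(128, dim, 1)

    def forward(self, x):
        f4 = self.enc4(x)
        f8 = self.enc8(f4)
        f16 = self.enc16(f8)
        # Top-down fusion: upsample coarse features and concatenate.
        up8 = F.interpolate(f16, size=f8.shape[-2:], mode="bilinear", align_corners=False)
        up4 = F.interpolate(f8, size=f4.shape[-2:], mode="bilinear", align_corners=False)
        return (
            self.out4(torch.cat([f4, up4], dim=1)),   # 1/4-scale features
            self.out8(torch.cat([f8, up8], dim=1)),   # 1/8-scale features
            self.out16(f16),                          # 1/16-scale features
        )


if __name__ == "__main__":
    feats = MultiScaleFeatureExtractor()(torch.randn(1, 3, 384, 512))
    print([f.shape for f in feats])  # feature maps at 1/4, 1/8, 1/16 resolution
```

The point of the top-down fusion is that the finer-resolution features used for correlation look-up also carry long-range context from the coarser scales, which is what a multi-scale update scheme relies on.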
2. Related Work
2.1. Dynamic SLAM System Based on Deep Learning
2.2. Dynamic SLAM System Based on Optical Flow
2.3. Dynamic SLAM System Based on Semantic Segmentation
3. Method
3.1. OS-SLAM Framework
3.2. Optical Flow Network Structure
3.2.1. Multi-Scale Feature Extractor
3.2.2. Multi-Scale Update Module
3.2.3. Multi-Scale Loss Function
3.3. Fusion Mechanism
Algorithm 1: Joint Segmentation Fusion

Input: Rigidmask segmentation mask; YOLO-Fastest segmentation mask; image sequence
Output: moving-object mask
1: Initialize
2: while frames remain in the image sequence do
3:     // Rigidmask masks and predicted objects
4:     // YOLO-Fastest masks
5:     for each predicted object do
6:         for each YOLO-Fastest mask do
7:             // object prediction center point
8:             while unvisited points remain do
9:                 // visit the mask point closest to the center point
10:                if the point is marked by the mask then return
11:                else
12:                    for each adjacent point that has not been visited do
13:                        …
14:                    end
15:                end
16:            if … then // inclusion rate
17:                for each adjacent point that has not been visited do
18:                    …
19:                    …
20:                    …
21:                end
22:            end
23:        end
24:    end
25: return moving-object mask
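Algorithm 1 is reproduced above with its variable symbols lost in extraction. To make the fusion idea concrete, here is a minimal NumPy sketch of one plausible reading: grow a region by breadth-first search from each detected object's center point and accept it as a moving object when its overlap with the Rigidmask motion mask exceeds an inclusion-rate threshold. Function names, the growth criterion, and the threshold value are assumptions, not the authors' code.

```python
# Hypothetical sketch of the joint segmentation fusion in Algorithm 1:
# BFS region growing from each detected object's center point, with the
# region accepted as a moving object when its overlap with the motion
# (Rigidmask) mask exceeds an inclusion-rate threshold.
from collections import deque
from typing import List, Tuple

import numpy as np


def fuse_masks(motion_mask: np.ndarray,
               instance_masks: List[np.ndarray],
               centers: List[Tuple[int, int]],
               inclusion_thresh: float = 0.5) -> np.ndarray:
    """motion_mask and each instance mask are boolean HxW arrays;
    centers holds one (row, col) prediction center per instance."""
    h, w = motion_mask.shape
    moving = np.zeros((h, w), dtype=bool)

    for inst, (cy, cx) in zip(instance_masks, centers):
        # BFS over the instance mask, starting at the object's center point.
        visited = np.zeros((h, w), dtype=bool)
        queue, region = deque([(cy, cx)]), []
        visited[cy, cx] = True
        while queue:
            y, x = queue.popleft()
            if not inst[y, x]:
                continue
            region.append((y, x))
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and not visited[ny, nx]:
                    visited[ny, nx] = True
                    queue.append((ny, nx))

        if not region:
            continue
        ys, xs = zip(*region)
        # Inclusion rate: fraction of the grown region also flagged as moving.
        rate = motion_mask[ys, xs].mean()
        if rate > inclusion_thresh:
            moving[ys, xs] = True
    return moving
```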
3.4. Mapping
4. Experiment and Results
4.1. Optical Flow Dataset Description and Training Strategy
4.2. Optical Flow Experiment Evaluation Criteria
4.3. Optical Flow Comparison Experiment
4.4. Ablation Experiment
4.5. SLAM Dataset Description
4.6. Error Evaluation
4.7. Experimental Comparison and Analysis
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
Abbreviation | Definition
---|---
SLAM | Simultaneous Localization and Mapping
GNSS | Global Navigation Satellite System
OS-SLAM | Optical flow motion segmentation-based SLAM
VO | Visual Odometry
BA | Bundle Adjustment
AR | Augmented Reality
UAV | Unmanned Aerial Vehicle
IMU | Inertial Measurement Unit
ANN | Asymmetric Non-local Neural Network
YOLO | You Only Look Once
DCN | Deformable Convolutional Network
ConvGRU | Convolutional Gated Recurrent Unit
BFS | Breadth-First Search
SOR | Statistical Outlier Removal
KD-Tree | K-Dimensional Tree
TUM | Technical University of Munich
KITTI | Karlsruhe Institute of Technology and Toyota Technological Institute
MPI-Sintel | Max Planck Institute Sintel
APE | Absolute Position Error
RPE | Relative Position Error
EPE | Endpoint Error
RMSE | Root Mean Square Error
SSE | Sum of Squared Errors
STD | Standard Deviation
ORB | Oriented FAST and Rotated BRIEF
References
- Filipenko, M.; Afanasyev, I. Comparison of various SLAM systems for mobile robot in an indoor environment. In Proceedings of the International Conference on Intelligent Systems (IS), Funchal, Portugal, 25–27 September 2018; pp. 400–407.
- von Stumberg, L.; Usenko, V.; Engel, J.; Stückler, J.; Cremers, D. From monocular SLAM to autonomous drone exploration. In Proceedings of the European Conference on Mobile Robots (ECMR), Paris, France, 6–8 September 2017; pp. 1–8.
- Cheng, J.; Zhang, L.; Chen, Q.; Hu, X.; Cai, J. A review of visual SLAM methods for autonomous driving vehicles. Eng. Appl. Artif. Intell. 2022, 114, 104992.
- Taheri, H.; Xia, Z.C. SLAM; definition and evolution. Eng. Appl. Artif. Intell. 2021, 97, 104032.
- Zou, Q.; Sun, Q.; Chen, L.; Nie, B.; Li, Q. A comparative analysis of LiDAR SLAM-based indoor navigation for autonomous vehicles. IEEE Trans. Intell. Transp. Syst. 2021, 23, 6907–6921.
- Kazerouni, I.A.; Fitzgerald, L.; Dooly, G.; Toal, D.J.F. A survey of state-of-the-art on visual SLAM. Expert Syst. Appl. 2022, 205, 117734.
- Xu, K.; Hao, Y.; Yuan, S.; Wang, C.; Xie, L. AirSLAM: An efficient and illumination-robust point-line visual SLAM system. IEEE Trans. Robot. 2025, 41, 1673–1692.
- Klein, G.; Murray, D. Parallel tracking and mapping for small AR workspaces. In Proceedings of the 6th IEEE and ACM International Symposium on Mixed and Augmented Reality, Nara, Japan, 13–16 November 2007; pp. 225–234.
- Campos, C.; Elvira, R.; Rodríguez, J.J.G.; Montiel, J.M.M.; Tardos, J.D. ORB-SLAM3: An accurate open-source library for visual, visual–inertial, and multimap SLAM. IEEE Trans. Robot. 2021, 37, 1874–1890.
- Pumarola, A.; Vakhitov, A.; Agudo, A.; Sanfeliu, A.; Moreno-Noguer, F. PL-SLAM: Real-time monocular visual SLAM with points and lines. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 4503–4508.
- Forster, C.; Pizzoli, M.; Scaramuzza, D. SVO: Fast semi-direct monocular visual odometry. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–7 June 2014; pp. 15–22.
- Wang, R.; Schworer, M.; Cremers, D. Stereo DSO: Large-scale direct sparse visual odometry with stereo cameras. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 3903–3911.
- Engel, J.; Schöps, T.; Cremers, D. LSD-SLAM: Large-scale direct monocular SLAM. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; Springer International Publishing: Cham, Switzerland, 2014; pp. 834–849.
- Henein, M.; Zhang, J.; Mahony, R.; Ila, V. Dynamic SLAM: The need for speed. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 2123–2129.
- Wang, C.; Luo, B.; Zhang, Y.; Zhao, Q.; Yin, L.; Wang, W.; Su, X.; Wang, Y.; Li, C. DymSLAM: 4D dynamic scene reconstruction based on geometrical motion segmentation. IEEE Robot. Autom. Lett. 2020, 6, 550–557.
- Ai, Y.; Rui, T.; Yang, X.; He, J.-L.; Fu, L.; Li, J.-B.; Lu, M. Visual SLAM in dynamic environments based on object detection. Def. Technol. 2021, 17, 1712–1721.
- Yu, C.; Liu, Z.; Liu, X.J.; Xie, F.; Yang, Y.; Wei, Q. DS-SLAM: A semantic visual SLAM towards dynamic environments. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 1168–1174.
- Bescos, B.; Fácil, J.M.; Civera, J.; Neira, J. DynaSLAM: Tracking, mapping, and inpainting in dynamic scenes. IEEE Robot. Autom. Lett. 2018, 3, 4076–4083.
- Wang, X.; Zhuang, Y.; Cao, X.; Huai, J.; Zhang, Z.; Zheng, Z.; El-Sheimy, N. GAT-LSTM: A feature point management network with graph attention for feature-based visual SLAM in dynamic environments. ISPRS J. Photogramm. Remote Sens. 2025, 224, 75–93.
- Fan, Y.; Zhang, Q.; Tang, Y.; Liu, S.; Han, H. Blitz-SLAM: A semantic SLAM in dynamic environments. Pattern Recognit. 2022, 121, 108225.
- Esparza, D.; Flores, G. The STDyn-SLAM: A stereo vision and semantic segmentation approach for VSLAM in dynamic outdoor environments. IEEE Access 2022, 10, 18201–18209.
- Qin, Z.; Yin, M.; Li, G.; Yang, F. SP-Flow: Self-supervised optical flow correspondence point prediction for real-time SLAM. Comput. Aided Geom. Design 2020, 82, 101928.
- Wang, W.; Hu, Y.; Scherer, S. TartanVO: A generalizable learning-based VO. In Proceedings of the Conference on Robot Learning, Urumqi, China, 18–20 October 2021; pp. 1761–1772.
- Sun, D.; Yang, X.; Liu, M.Y.; Kautz, J. PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 8934–8943.
- Shen, S.; Cai, Y.; Wang, W.; Scherer, S. DytanVO: Joint refinement of visual odometry and motion segmentation in dynamic environments. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), London, UK, 29 May–2 June 2023; pp. 4048–4055.
- Liu, Y.; Miura, J. RDMO-SLAM: Real-time visual SLAM for dynamic environments using semantic label prediction with optical flow. IEEE Access 2021, 9, 106981–106997.
- Liu, Y.; Miura, J. RDS-SLAM: Real-time dynamic SLAM using semantic segmentation methods. IEEE Access 2021, 9, 23772–23785.
- Wang, H.; Ko, J.Y.; Xie, L. Multi-modal semantic SLAM for complex dynamic environments. arXiv 2022, arXiv:2205.04300.
- Zheng, Z.; Lin, S.; Yang, C. RLD-SLAM: A robust lightweight VI-SLAM for dynamic environments leveraging semantics and motion information. IEEE Trans. Ind. Electron. 2024, 71, 14328–14338.
- Yuan, C.; Xu, Y.; Zhou, Q. PLDS-SLAM: Point and line features SLAM in dynamic environment. Remote Sens. 2023, 15, 1893.
- Peng, Y.; Xv, R.; Lu, W.; Wu, X.; Xv, Y.; Wu, Y.; Chen, Q. A high-precision dynamic RGB-D SLAM algorithm for environments with potential semantic segmentation network failures. Measurement 2025, 256, 118090.
- Li, F.; Fu, C.; Wang, J.; Sun, D. Dynamic Semantic SLAM Based on Panoramic Camera and LiDAR Fusion for Autonomous Driving. IEEE Trans. Intell. Transp. Syst. 2025, 99, 1–14.
- Yang, G.; Ramanan, D. Learning to segment rigid motions from two frames. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 1266–1275.
- Dog-qiuqiu/Yolo-FastestV2: Based on YOLO's low-power, ultra-lightweight universal target detection algorithm; the parameter count is only 250k and inference on smartphone mobile terminals can reach ~300 fps. 2022. Available online: https://github.com/dog-qiuqiu/Yolo-FastestV2 (accessed on 14 July 2023).
- Teed, Z.; Deng, J. RAFT: Recurrent all-pairs field transforms for optical flow. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; Part II 16. Springer International Publishing: Berlin/Heidelberg, Germany, 2020; pp. 402–419.
- Geiger, A.; Lenz, P.; Stiller, C.; Urtasun, R. Vision meets robotics: The KITTI dataset. Int. J. Robot. Res. 2013, 32, 1231–1237.
- Butler, D.; Wulff, J.; Stanley, G.; Black, M. MPI-Sintel optical flow benchmark: Supplemental material. In MPI-IS-TR-006, MPI for Intelligent Systems (2012); Citeseer: State College, PA, USA, 2012.
- Mayer, N.; Ilg, E.; Hausser, P.; Fischer, P.; Cremers, D.; Dosovitskiy, A. A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 4040–4048.
- Menze, M.; Geiger, A. Object scene flow for autonomous vehicles. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3061–3070.
- Zhao, S.; Sheng, Y.; Dong, Y.; Chang, E.I.; Xu, Y. MaskFlownet: Asymmetric feature matching with learnable occlusion mask. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 6278–6287.
- Kondermann, D.; Nair, R.; Honauer, K.; Krispin, K.; Andrulis, J.; Brock, A.; Güssefeld, B.; Rahimimoghaddam, M.; Hofmann, S.; Brenner, C.; et al. The HCI benchmark suite: Stereo and flow ground truth with uncertainties for urban autonomous driving. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 19–28.
- Jiang, S.; Lu, Y.; Li, H.; Hartley, R. Learning optical flow from a few matches. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 16592–16600.
- Butler, D.J.; Wulff, J.; Stanley, G.B.; Black, M.J. A naturalistic open source movie for optical flow evaluation. In Proceedings of the Computer Vision–ECCV 2012: 12th European Conference on Computer Vision, Florence, Italy, 7–13 October 2012; Part VI 12. Springer: Berlin/Heidelberg, Germany, 2012; pp. 611–625.
- Jiang, S.; Campbell, D.; Lu, Y.; Li, H.; Hartley, R. Learning to estimate hidden motions with global motion aggregation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 9772–9781.
- Long, L.; Lang, J. Detail preserving residual feature pyramid modules for optical flow. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2022; pp. 2100–2108.
Method | Sintel Clean (Test, EPE): All | Sintel Clean: Mat | Sintel Clean: Unmat | Sintel Final (Test, EPE): All | Sintel Final: Mat | Sintel Final: Unmat | KITTI (Test): Fl-All (%) | KITTI (Test): Fl-Noc (%)
---|---|---|---|---|---|---|---|---
FlowNet2.0 | 1.92 | 0.91 | 12.59 | 3.84 | 1.81 | 24.00 | 11.48 | -
PWC-Net+ | 1.71 | 0.55 | 11.07 | 3.45 | 1.65 | 17.40 | 7.72 | -
SCV [42] | 1.72 | 0.57 | 11.08 | 3.60 | 1.70 | 19.14 | 6.17 | -
RAFT [43] | 1.61 | 0.62 | 9.65 | 2.86 | 1.41 | 14.68 | 5.10 | 3.07
GMA [44] | 1.39 | 0.58 | 7.96 | 2.47 | 1.24 | 12.50 | 4.93 | 2.90
RFPM [45] | 1.41 | 0.49 | 8.88 | 2.90 | 1.33 | 15.69 | 4.79 | 2.85
OURS | 1.40 | 0.52 | 8.75 | 2.66 | 1.23 | 14.70 | 4.92 | 2.82
Improvement over RAFT | +13.1% | +16.1% | +9.3% | +6.9% | +12.7% | −0.1% | +3.5% | +8.1%
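For reference, the Sintel columns above report average end-point error (EPE, in pixels) and the KITTI columns report outlier percentages (Fl). A generic sketch of how these standard metrics are computed (not the authors' evaluation code):

```python
# Standard optical-flow metrics: average end-point error (EPE) and the
# KITTI Fl outlier rate (error >= 3 px and >= 5% of the ground-truth flow
# magnitude).  Generic sketch, not the paper's evaluation code.
import numpy as np


def epe(flow_pred: np.ndarray, flow_gt: np.ndarray, valid: np.ndarray) -> float:
    """Mean per-pixel end-point error over valid pixels; flows are HxWx2."""
    err = np.linalg.norm(flow_pred - flow_gt, axis=-1)
    return float(err[valid].mean())


def fl_outlier_rate(flow_pred: np.ndarray, flow_gt: np.ndarray, valid: np.ndarray) -> float:
    """KITTI Fl metric: percentage of valid pixels classified as outliers."""
    err = np.linalg.norm(flow_pred - flow_gt, axis=-1)
    mag = np.linalg.norm(flow_gt, axis=-1)
    outlier = (err >= 3.0) & (err >= 0.05 * mag)
    return float(100.0 * outlier[valid].mean())
```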
Pre-Trained on Chairs and Things | Sintel Clean (Train) | Sintel Final (Train)
---|---|---
1. Single-Scale RAFT: Finest Scale/Recurrent Iterations | |
1/8×(h,w), 12 iter (RAFT) | 1.40 | 2.67
1/8×(h,w), 18 iter | 1.45 | 2.70
1/4×(h,w), 12 iter | 1.58 | 3.10
1/4×(h,w), 18 iter | 1.52 | 3.08
2. Multi-Scale RAFT: Resolution Scales/Look-Up Levels | |
2 scales/3 levels | 1.16 | 3.07
2 scales/4 levels | 1.14 | 2.64
2 scales/5 levels | 1.11 | 2.66
3 scales/2 levels (ours) | 1.13 | 2.60
3 scales/3 levels | 1.15 | 2.66
3 scales/4 levels | 1.14 | 2.70
3. Update Module: Multi-Scale Update vs. Standard | |
Multi-scale update (ours) | 1.12 | 2.61
Standard | 1.22 | 2.67
4. Multi-Scale Features: U-Net-Style vs. Standard | |
U-Net-style (ours) | 1.13 | 2.60
Standard | 1.11 | 2.68
5. Multi-Scale Loss: Single-Scale vs. Multi-Scale | |
Multi-scale loss (ours) | 1.13 | 2.60
Single-scale loss | 2.26 | 4.09
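The last ablation block contrasts a single-scale loss with the multi-scale loss. As a hedged sketch of what a multi-scale, iteration-weighted L1 flow loss can look like (the exponential weighting and the upsampling of coarse predictions are assumptions borrowed from RAFT-style sequence supervision, not the paper's exact formulation):

```python
# Hypothetical multi-scale sequence loss: an exponentially weighted L1 loss
# summed over flow predictions produced at several resolutions.  Weights and
# upsampling choices are assumptions.
import torch
import torch.nn.functional as F


def multi_scale_loss(flow_preds_per_scale, flow_gt, gamma: float = 0.8):
    """flow_preds_per_scale: list (one entry per scale) of lists of Bx2xHxW
    flow predictions from the recurrent updates; flow_gt: Bx2xHxW ground truth."""
    total = 0.0
    for preds in flow_preds_per_scale:
        n = len(preds)
        for i, pred in enumerate(preds):
            # Upsample (and rescale) the prediction to ground-truth resolution.
            scale = flow_gt.shape[-1] / pred.shape[-1]
            up = F.interpolate(pred, size=flow_gt.shape[-2:],
                               mode="bilinear", align_corners=False) * scale
            weight = gamma ** (n - 1 - i)  # later iterations weigh more
            total = total + weight * (up - flow_gt).abs().mean()
    return total
```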
APE

Error | ORB-SLAM3: fr3_sitting_xyz | ORB-SLAM3: fr3_walking_xyz | OS-SLAM: fr3_sitting_xyz | OS-SLAM: fr3_walking_xyz
---|---|---|---|---
Max | 0.0467 | 0.9316 | 0.0833 | 0.0527
Mean | 0.0141 | 0.3498 | 0.0125 | 0.0133
Median | 0.0122 | 0.2235 | 0.0119 | 0.0117
Min | 0.0014 | 0.0564 | 0.0013 | 0.0007
RMSE | 0.0156 | 0.4089 | 0.0137 | 0.0151
SSE | 0.3073 | 142.8564 | 0.2129 | 0.1863
STD | 0.0064 | 0.2044 | 0.0055 | 0.0069
RPE

Error | ORB-SLAM3 fr3_sitting_xyz: T-P | ORB-SLAM3 fr3_sitting_xyz: R-P | ORB-SLAM3 fr3_walking_xyz: T-P | ORB-SLAM3 fr3_walking_xyz: R-P | OS-SLAM fr3_sitting_xyz: T-P | OS-SLAM fr3_sitting_xyz: R-P | OS-SLAM fr3_walking_xyz: T-P | OS-SLAM fr3_walking_xyz: R-P
---|---|---|---|---|---|---|---|---
Max | 0.0377 | 0.0278 | 0.1894 | 0.1295 | 0.0637 | 0.0288 | 0.0551 | 0.1223
Mean | 0.0068 | 0.0061 | 0.0169 | 0.0107 | 0.0077 | 0.0064 | 0.0095 | 0.0063
Median | 0.0061 | 0.0053 | 0.0121 | 0.0072 | 0.0067 | 0.0054 | 0.0079 | 0.0052
Min | 0.0006 | 0.0004 | 0.0008 | 0.0007 | 0.0002 | 0.0003 | 0.0007 | 0.0004
RMSE | 0.0081 | 0.0074 | 0.0232 | 0.0139 | 0.0093 | 0.0077 | 0.0113 | 0.0092
SSE | 0.0890 | 0.0735 | 0.4603 | 0.1660 | 0.1153 | 0.0787 | 0.1080 | 0.0798
STD | 0.0043 | 0.0041 | 0.0155 | 0.0093 | 0.0049 | 0.0043 | 0.0059 | 0.0061
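Both error tables report the usual trajectory-error statistics. A generic NumPy sketch of how these are derived from per-frame error magnitudes (not the evaluation scripts used in the paper):

```python
# Generic computation of the trajectory-error statistics reported in the
# APE/RPE tables, given per-frame error magnitudes (e.g., distances between
# estimated and ground-truth positions after alignment).
import numpy as np


def error_statistics(errors: np.ndarray) -> dict:
    """errors: 1-D array of per-frame error magnitudes."""
    return {
        "max": float(errors.max()),
        "mean": float(errors.mean()),
        "median": float(np.median(errors)),
        "min": float(errors.min()),
        "rmse": float(np.sqrt(np.mean(errors ** 2))),
        "sse": float(np.sum(errors ** 2)),
        "std": float(errors.std()),
    }
```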
Sequences | ORB-SLAM2 | ORB-SLAM3 | DynaSLAM | DS-SLAM | DM-SLAM | OS-SLAM |
---|---|---|---|---|---|---|
fr3_walking_xyz | 0.7830 | 0.4019 | 0.0154 | 0.2460 | 0.0125 | 0.0133 |
fr3_walking_static | 0.3851 | 0.0671 | 0.0063 | 0.0079 | 0.0139 | 0.0138 |
fr3_walking_halfsphere | 0.4652 | 0.4228 | 0.0284 | 0.0297 | 0.0266 | 0.0272 |
fr3_walking_rpy | 0.7831 | 0.6525 | 0.0341 | 0.4372 | 0.0376 | 0.0365 |
Sequences | ORB-SLAM2 | ORB-SLAM3 | DynaSLAM | DS-SLAM | DM-SLAM | OS-SLAM |
---|---|---|---|---|---|---|
fr3_walking_xyz | 0.0423 | 0.0224 | 0.0205 | 0.0321 | 0.0233 | 0.0116 |
fr3_walking_static | 0.0297 | 0.0207 | 0.0086 | 0.0105 | 0.0077 | 0.0042 |
fr3_walking_halfsphere | 0.0483 | 0.0202 | 0.0361 | 0.0241 | 0.0319 | 0.0139 |
fr3_walking_rpy | 0.1695 | 0.0273 | 0.0436 | 0.0461 | 0.0233 | 0.0217 |
Method | Average Tracking Time (ms) | Average Segmentation Time (ms) | CPU | GPU
---|---|---|---|---|
Normal state | / | / | 3% | 2% |
ORB-SLAM3 | >100 | / | 30% | 3% |
DynaSLAM | >100 | 192.00 | 61% | 35% |
DS-SLAM | 67.30 | 55.15 | 53% | 32% |
YOLOv5-SLAM | 79.52 | 47.63 | 40% | 23% |
YOLOv8-SLAM | 51.66 | 22.32 | 44% | 27% |
OS-SLAM | 35.32 | 18.10 | 37% | 23% |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).