Monocular Visual/IMU/GNSS Integration System Using Deep Learning-Based Optical Flow for Intelligent Vehicle Localization
Abstract
1. Introduction
- An integrated framework is proposed that combines a hybrid VIO approach built on deep optical-flow networks with GNSS measurements. An enhanced consistency constraint is applied to the predicted optical flow to selectively extract high-confidence measurements, which are then incorporated into the VIO framework (a generic sketch of such a consistency check is given after this list). Applied to datasets from real sensors, the framework improves scalability through sensor fusion while retaining general applicability.
- A filter-based multi-sensor fusion strategy is proposed to improve vehicle localization accuracy. The strategy scales to large outdoor environments and prevents error accumulation during long-term driving.
- The proposed method is evaluated on real-world data and compared with leading approaches; the results demonstrate its superior localization accuracy.
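The enhanced consistency constraint referenced in the first contribution corresponds to Section 3.2.2; as general background, a standard way to select high-confidence correspondences from a dense deep optical-flow prediction is a forward-backward consistency check. The sketch below illustrates that generic idea in NumPy; the function name, the nearest-neighbour sampling, and the fixed threshold are illustrative assumptions and not the paper's exact formulation.

```python
import numpy as np

def forward_backward_consistency(flow_fw, flow_bw, threshold=1.0):
    """Mask of pixels whose forward flow agrees with the backward flow.

    flow_fw, flow_bw: (H, W, 2) arrays of forward (t -> t+1) and backward
    (t+1 -> t) optical flow, with channel 0 = x and channel 1 = y.
    Returns a boolean (H, W) mask of high-confidence correspondences.
    """
    h, w = flow_fw.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]

    # Where each pixel lands in the next frame under the forward flow.
    x_fw = xs + flow_fw[..., 0]
    y_fw = ys + flow_fw[..., 1]

    # Sample the backward flow at the forward-warped location
    # (nearest-neighbour lookup keeps the sketch simple).
    xi = np.clip(np.round(x_fw).astype(int), 0, w - 1)
    yi = np.clip(np.round(y_fw).astype(int), 0, h - 1)
    bw_at_fw = flow_bw[yi, xi]

    # A consistent correspondence maps back close to its starting pixel,
    # so the forward and resampled backward flows should nearly cancel.
    error = np.linalg.norm(flow_fw + bw_at_fw, axis=-1)
    return error < threshold
```

In practice the threshold is often made adaptive (e.g., scaled by the local flow magnitude) and bilinear sampling replaces the nearest-neighbour lookup; only measurements passing such a check would be handed to the filter.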
2. Related Work
2.1. Representative VO/VIO Methods
2.2. Global-Aware Multi-Sensor Fusion Methods
3. Methodology
3.1. Overview of Proposed System
3.2. Deep Optical Flow-Based VIO
3.2.1. IMU Measurement Propagation
3.2.2. Enhanced Corresponding Optical Flow from Deep Network
3.2.3. Visual Measurement Model
3.3. GNSS Measurement Model
4. Experimental Results
4.1. Experimental Setup
4.2. Results
5. Discussion and Conclusions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
Abbreviation | Definition |
---|---|
GNSS | Global navigation satellite system |
VIO | Visual-inertial odometry |
IMU | Inertial measurement unit |
VO | Visual odometry |
EKF | Extended Kalman filter |
SLAM | Simultaneous localization and mapping |
CNNs | Convolutional neural networks |
RNN | Recurrent neural network |
LSTM | Long short-term memory |
ECEF | Earth-centered, Earth-fixed |
RMSE | Root mean square error |
ATE | Absolute trajectory error |
RPE | Relative pose error |
References
- Van Brummelen, J.; O’Brien, M.; Gruyer, D.; Najjaran, H. Autonomous vehicle perception: The technology of today and tomorrow. Transp. Res. Part C Emerg. Technol. 2018, 89, 384–406. [Google Scholar] [CrossRef]
- Badue, C.; Guidolini, R.; Carneiro, R.V.; Azevedo, P.; Cardoso, V.B.; Forechi, A.; Jesus, L.; Berriel, R.; Paixao, T.M.; Mutz, F.; et al. Self-driving cars: A survey. Expert Syst. Appl. 2021, 165, 113816. [Google Scholar] [CrossRef]
- Liu, Y.; Luo, Q.; Zhou, Y. Deep learning-enabled fusion to bridge GPS outages for INS/GPS integrated navigation. IEEE Sens. J. 2022, 22, 8974–8985. [Google Scholar] [CrossRef]
- Zhang, T.; Yuan, M.; Wang, L.; Tang, H.; Niu, X. A robust and efficient IMU array/GNSS data fusion algorithm. IEEE Sens. J. 2024, 24, 26278–26289. [Google Scholar] [CrossRef]
- Zhang, Y. A fusion methodology to bridge GPS outages for INS/GPS integrated navigation system. IEEE Access 2019, 7, 61296–61306. [Google Scholar] [CrossRef]
- Meng, X.; Tan, H.; Yan, P.; Zheng, Q.; Chen, G.; Jiang, J. A GNSS/INS integrated navigation compensation method based on CNN-GRU+ IRAKF hybrid model during GNSS outages. IEEE Trans. Instrum. Meas. 2024, 73, 2510015. [Google Scholar] [CrossRef]
- Zhang, H.; Xiong, H.; Hao, S.; Yang, G.; Wang, M.; Chen, Q. A novel multidimensional hybrid position compensation method for INS/GPS integrated navigation systems during GPS outages. IEEE Sens. J. 2023, 24, 962–974. [Google Scholar] [CrossRef]
- Liu, T.; Liu, J.; Wang, J.; Zhang, H.; Zhang, B.; Ma, Y.; Sun, M.; Lv, Z.; Xu, G. Pseudolites to support location services in smart cities: Review and prospects. Smart Cities 2023, 6, 2081–2105. [Google Scholar] [CrossRef]
- Klein, G.; Murray, D. Parallel tracking and mapping for small AR workspaces. In Proceedings of the 2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality, Nara, Japan, 13–16 November 2007; IEEE: New York, NY, USA, 2007; pp. 225–234. [Google Scholar]
- Mur-Artal, R.; Montiel, J.M.M.; Tardos, J.D. ORB-SLAM: A versatile and accurate monocular SLAM system. IEEE Trans. Robot. 2015, 31, 1147–1163. [Google Scholar] [CrossRef]
- Zhan, H.; Weerasekera, C.S.; Bian, J.-W.; Garg, R.; Reid, I. DF-VO: What should be learnt for visual odometry? arXiv 2021, arXiv:2103.00933. [Google Scholar] [CrossRef]
- Mur-Artal, R.; Tardos, J.D. ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras. IEEE Trans. Robot. 2017, 33, 1255–1262. [Google Scholar] [CrossRef]
- Qin, T.; Li, P.; Shen, S. VINS-Mono: A robust and versatile monocular visual-inertial state estimator. IEEE Trans. Robot. 2018, 34, 1004–1020. [Google Scholar] [CrossRef]
- Yang, N.; Wang, R.; Gao, X.; Cremers, D. Challenges in monocular visual odometry: Photometric calibration, motion bias, and rolling shutter effect. IEEE Robot. Autom. Lett. 2018, 3, 2878–2885. [Google Scholar] [CrossRef]
- Geiger, A.; Lenz, P.; Urtasun, R. Are we ready for autonomous driving? the kitti vision benchmark suite. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; IEEE: New York, NY, USA, 2012; pp. 3354–3361. [Google Scholar]
- Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
- Bay, H.; Tuytelaars, T.; Van Gool, L. SURF: Speeded up robust features. In Computer Vision–ECCV 2006: 9th European Conference on Computer Vision, Graz, Austria, 7–13 May 2006; Proceedings, Part I; Springer: Berlin/Heidelberg, Germany, 2006; pp. 404–417. [Google Scholar]
- Rublee, E.; Rabaud, V.; Konolige, K.; Bradski, G. ORB: An efficient alternative to SIFT or SURF. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; IEEE: New York, NY, USA, 2011; pp. 2564–2571. [Google Scholar]
- Tomasi, C.; Kanade, T. Detection and tracking of point features. Int. J. Comput. Vis. 1991, 9, 3. [Google Scholar]
- Lucas, B.D.; Kanade, T. An iterative image registration technique with an application to stereo vision. In Proceedings of the IJCAI’81: 7th International Joint Conference on Artificial Intelligence, Vancouver, BC, Canada, 24–28 August 1981; Volume 2, pp. 674–679. [Google Scholar]
- Rosten, E.; Drummond, T. Fusing points and lines for high performance tracking. In Proceedings of the Tenth IEEE International Conference on Computer Vision (ICCV’05) Volume 1, Beijing, China, 17–21 October 2005; IEEE: New York, NY, USA, 2005; Volume 2, pp. 1508–1515. [Google Scholar]
- Mourikis, A.I.; Roumeliotis, S.I. A multi-state constraint Kalman filter for vision-aided inertial navigation. In Proceedings of the 2007 IEEE International Conference on Robotics and Automation, Roma, Italy, 10–14 April 2007; IEEE: New York, NY, USA, 2007; pp. 3565–3572. [Google Scholar]
- Bloesch, M.; Omari, S.; Hutter, M.; Siegwart, R. Robust visual inertial odometry using a direct EKF-based approach. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–2 October 2015; IEEE: New York, NY, USA, 2015; pp. 298–304. [Google Scholar]
- Forster, C.; Carlone, L.; Dellaert, F.; Scaramuzza, D. On-manifold preintegration for real-time visual–inertial odometry. IEEE Trans. Robot. 2016, 33, 1–21. [Google Scholar] [CrossRef]
- Wu, Z.; Zhu, Y. Swformer-VO: A Monocular Visual Odometry Model Based on Swin Transformer. IEEE Robot. Autom. Lett. 2024, 9, 4766–4773. [Google Scholar] [CrossRef]
- Wang, S.; Clark, R.; Wen, H.; Trigoni, N. DeepVO: Towards end-to-end visual odometry with deep recurrent convolutional neural networks. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; IEEE: New York, NY, USA, 2017; pp. 2043–2050. [Google Scholar]
- Françani, A.O.; Maximo, M.R.O. Transformer-based model for monocular visual odometry: A video understanding approach. IEEE Access 2025, 13, 13959–13971. [Google Scholar] [CrossRef]
- Cimarelli, C.; Bavle, H.; Sánchez-López, J.L.; Voos, H. RAUM-VO: Rotational adjusted unsupervised monocular visual odometry. Sensors 2022, 22, 2651. [Google Scholar] [CrossRef]
- Han, S.; Li, M.; Tang, H.; Song, Y.; Tong, G. UVMO: Deep unsupervised visual reconstruction-based multimodal-assisted odometry. Pattern Recognit. 2024, 153, 110573. [Google Scholar] [CrossRef]
- Li, R.; Wang, S.; Long, Z.; Gu, D. UnDeepVO: Monocular visual odometry through unsupervised deep learning. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 May 2018; IEEE: New York, NY, USA, 2018; pp. 7286–7291. [Google Scholar]
- Clark, R.; Wang, S.; Wen, H.; Markham, A.; Trigoni, N. VINet: Visual-inertial odometry as a sequence-to-sequence learning problem. In Proceedings of the AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; Volume 31, Number 1. [Google Scholar]
- Zhan, H.; Weerasekera, C.S.; Bian, J.-W.; Reid, I. Visual odometry revisited: What should be learnt? In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; IEEE: New York, NY, USA, 2020; pp. 4203–4210. [Google Scholar]
- Cho, H.M.; Kim, E. Dynamic object-aware visual odometry (VO) estimation based on optical flow matching. IEEE Access 2023, 11, 11642–11651. [Google Scholar] [CrossRef]
- Kang, J. Deep depth-flow odometry with inertial sensor fusion. Electron. Lett. 2025, 61, e70409. [Google Scholar] [CrossRef]
- Teed, Z.; Deng, J. Recurrent all-pairs field transforms for optical flow. In Proceedings of the European Conference on Computer Vision (ECCV), Glasgow, UK, 23–28 August 2020; Springer International Publishing: Cham, Switzerland, 2020; pp. 402–419. [Google Scholar]
- Zhang, C.; Huang, T.; Zhang, R.; Yi, X. PLD-SLAM: A new RGB-D SLAM method with point and line features for indoor dynamic scene. ISPRS Int. J. Geo-Inf. 2021, 10, 163. [Google Scholar] [CrossRef]
- Wu, X.; Huang, F.; Wang, Y.; Jiang, H. A vins combined with dynamic object detection for autonomous driving vehicles. IEEE Access 2022, 10, 91127–91136. [Google Scholar] [CrossRef]
- Surber, J.; Teixeira, L.; Chli, M. Robust visual-inertial localization with weak GPS priors for repetitive UAV flights. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; IEEE: New York, NY, USA, 2017; pp. 6300–6306. [Google Scholar]
- D’Ippolito, F.; Garraffa, G.; Sferlazza, A.; Zaccarian, L. A hybrid observer for localization from noisy inertial data and sporadic position measurements. Nonlinear Anal. Hybrid Syst. 2023, 49, 101360. [Google Scholar] [CrossRef]
- Niu, X.; Tang, H.; Zhang, T.; Fan, J.; Liu, J. IC-GVINS: A robust, real-time, INS-centric GNSS-visual-inertial navigation system. IEEE Robot. Autom. Lett. 2022, 8, 216–223. [Google Scholar] [CrossRef]
- Cao, S.; Lu, X.; Shen, S. GVINS: Tightly coupled GNSS-visual-inertial fusion for smooth and consistent state estimation. IEEE Trans. Robot. 2022, 38, 2004–2021. [Google Scholar] [CrossRef]
- Yu, Y.; Gao, W.; Liu, C.; Shen, S.; Liu, M. A GPS-aided omnidirectional visual-inertial state estimator in ubiquitous environments. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; pp. 7750–7755. [Google Scholar]
- Dai, H.-F.; Bian, H.-W.; Wang, R.-Y.; Ma, H. An INS/GNSS integrated navigation in GNSS denied environment using recurrent neural network. Def. Technol. 2020, 16, 334–340. [Google Scholar] [CrossRef]
- Yusefi, A.; Durdu, A.; Aslan, M.F.; Sungur, C. LSTM and filter based comparison analysis for indoor global localization in UAVs. IEEE Access 2021, 9, 10054–10069. [Google Scholar] [CrossRef]
- Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision; Cambridge University Press: Cambridge, UK, 2003. [Google Scholar]
- Jiang, S.; Campbell, D.; Lu, Y.; Li, H.; Hartley, R. Learning to estimate hidden motions with global motion aggregation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 10–17 October 2021; pp. 9772–9781. [Google Scholar]
- Kang, J.M.; Sjanic, Z.; Hendeby, G. Optical Flow Revisited: How good is dense deep learning based optical flow? In Proceedings of the 2023 IEEE Symposium Sensor Data Fusion and International Conference on Multisensor Fusion and Integration (SDF-MFI), Bonn, Germany, 27–29 November 2023; IEEE: New York, NY, USA, 2023; pp. 1–6. [Google Scholar]
- Sun, D.; Yang, X.; Liu, M.-Y.; Kautz, J. PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 8934–8943. [Google Scholar]
- Ilg, E.; Mayer, N.; Saikia, T.; Keuper, M.; Dosovitskiy, A.; Brox, T. FlowNet 2.0: Evolution of optical flow estimation with deep networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2462–2470. [Google Scholar]
- Bleser, G.; Hendeby, G. Using optical flow for filling the gaps in visual-inertial tracking. In Proceedings of the 2010 18th European Signal Processing Conference, Aalborg, Denmark, 23–27 August 2010; IEEE: New York, NY, USA, 2010; pp. 1836–1840. [Google Scholar]
- Kang, J.M.; Sjanic, Z.; Hendeby, G. Visual-Inertial Odometry Using Optical Flow from Deep Learning. In Proceedings of the 2024 27th International Conference on Information Fusion (FUSION), Venice, Italy, 8–11 July 2024; pp. 1–8. [Google Scholar]
- Grewal, M.S.; Weill, L.R.; Andrews, A.P. Global Positioning Systems, Inertial Navigation, and Integration; John Wiley & Sons: Hoboken, NJ, USA, 2007. [Google Scholar]
- Paszke, A.; Gross, S.; Chintala, S.; Chanan, G.; Yang, E.; DeVito, Z.; Lin, Z.; Desmaison, A.; Antiga, L.; Lerer, A. Automatic Differentiation in Pytorch. 2017. Available online: https://openreview.net/forum?id=BJJsrmfCZ (accessed on 28 September 2025).
- Dosovitskiy, A.; Fischer, P.; Ilg, E.; Hausser, P.; Hazirbas, C.; Golkov, V.; Van Der Smagt, P.; Cremers, D.; Brox, T. FlowNet: Learning optical flow with convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 2758–2766. [Google Scholar]
- Mayer, N.; Ilg, E.; Hausser, P.; Fischer, P.; Cremers, D.; Dosovitskiy, A.; Brox, T. A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 4040–4048. [Google Scholar]
- Butler, D.J.; Wulff, J.; Stanley, G.B.; Black, M.J. A naturalistic open source movie for optical flow evaluation. In Proceedings of the Computer Vision–ECCV 2012: 12th European Conference on Computer Vision, Florence, Italy, 7–13 October 2012; Proceedings, Part VI 12; Springer: Berlin/Heidelberg, Germany, 2012; pp. 611–625. [Google Scholar]
- Zhan, H.; Garg, R.; Weerasekera, C.S.; Li, K.; Agarwal, H.; Reid, I. Unsupervised learning of monocular depth estimation and visual odometry with deep feature reconstruction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 340–349. [Google Scholar]
- Bian, J.; Li, Z.; Wang, N.; Zhan, H.; Shen, C.; Cheng, M.-M.; Reid, I. Unsupervised scale-consistent depth and ego-motion learning from monocular video. In Advances in Neural Information Processing Systems 32 (NeurIPS 2019); NeurIPS: San Diego, CA, USA, 2019. [Google Scholar]
- Zheng, Z.; Lin, S.; Yang, C. RLD-SLAM: A robust lightweight VI-SLAM for dynamic environments leveraging semantics and motion information. IEEE Trans. Ind. Electron. 2024, 71, 14328–14338. [Google Scholar] [CrossRef]
- Adham, M.; Chen, W.; Li, Y.; Liu, T. Towards Robust Global VINS: Innovative Semantic-Aware and Multi-Level Geometric Constraints Approach for Dynamic Feature Filtering in Urban Environments. IEEE Trans. Intell. Veh. 2024, early access. [Google Scholar] [CrossRef]
Method | Category | 00 | 01 | 02 | 03 | 04 | 05 | 06 | 07 | 08 | 09 | 10 |
---|---|---|---|---|---|---|---|---|---|---|---|---|
Depth-VO-Feat [57] | Deep Learning-based VO | 64.45 | 203.44 | 85.13 | 21.34 | 3.12 | 22.15 | 14.31 | 15.35 | 29.53 | 52.12 | 24.70 |
SC-SFMLearner [58] | Deep Learning-based VO | 93.04 | 85.90 | 70.37 | 10.21 | 2.98 | 40.56 | 12.56 | 21.01 | 56.15 | 15.02 | 20.19 |
ORB-SLAM2 (w/o LC) [12] | Geometry-based V-SLAM | 40.65 | 502.20 | 47.82 | 0.94 | 1.30 | 29.95 | 40.82 | 16.04 | 43.09 | 38.77 | 5.42 |
ORB-SLAM2 (w LC) [12] | Geometry-based V-SLAM | 6.03 | 508.34 | 14.76 | 1.02 | 1.57 | 4.04 | 11.16 | 2.19 | 38.85 | 8.39 | 6.63 |
DF-VO [11] | Hybrid VO | 12.17 | 342.71 | 17.59 | 1.96 | 0.70 | 4.94 | 3.73 | 1.06 | 6.96 | 7.59 | 4.21 |
RLD-SLAM [59] | VI-SLAM | 1.16 | 0.90 | 0.70 | 0.46 | 1.28 | 1.50 | 0.17 | 2.34 | 1.01 | 1.49 | 0.74 |
GNSS-VINS [60] | GNSS-VIO | 0.97 | 0.86 | 0.58 | 0.82 | 1.12 | 1.20 | 0.17 | 2.29 | 1.11 | - | 1.66 |
Proposed (w/o EFC) | GNSS-VIO | 0.76 | 0.63 | 0.58 | 0.62 | 0.66 | 0.89 | 0.11 | 1.50 | 0.78 | 0.48 | 0.58 |
Proposed | GNSS-VIO | 0.65 | 0.54 | 0.50 | 0.55 | 0.53 | 0.80 | 0.10 | 1.31 | 0.67 | 0.41 | 0.51 |
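For reference, ATE (listed in the abbreviations) is conventionally reported as the root mean square of the translational differences between the estimated and ground-truth trajectories after a rigid alignment. The sketch below shows that standard computation in NumPy; the array names and the Kabsch-style alignment without scale are assumptions of this illustration rather than the paper's exact evaluation script.

```python
import numpy as np

def ate_rmse(est_xyz, gt_xyz):
    """Absolute trajectory error (RMSE, in the units of the input positions).

    est_xyz, gt_xyz: (N, 3) arrays of time-synchronized estimated and
    ground-truth positions. A rigid (rotation + translation) alignment is
    applied to the estimate before the error is computed.
    """
    mu_e, mu_g = est_xyz.mean(axis=0), gt_xyz.mean(axis=0)
    E, G = est_xyz - mu_e, gt_xyz - mu_g

    # Kabsch/Umeyama rotation that best maps the estimate onto ground truth.
    U, _, Vt = np.linalg.svd(E.T @ G)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = (U @ D @ Vt).T

    aligned = (R @ E.T).T + mu_g
    err = np.linalg.norm(aligned - gt_xyz, axis=1)
    return np.sqrt(np.mean(err ** 2))
```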
Method | 00 | 01 | 02 | 03 | 04 | 05 | 06 | 07 | 08 | 09 | 10 |
---|---|---|---|---|---|---|---|---|---|---|---|
SC-SFMLearner [58] | 0.14 / 0.13 | 0.89 / 0.08 | 0.09 / 0.09 | 0.06 / 0.07 | 0.07 / 0.06 | 0.07 / 0.07 | 0.07 / 0.07 | 0.08 / 0.07 | 0.09 / 0.07 | 0.10 / 0.10 | 0.11 / 0.11 |
ORB-SLAM2 (w/o LC) [12] | 0.17 / 0.08 | 2.97 / 0.10 | 0.17 / 0.07 | 0.03 / 0.06 | 0.08 / 0.08 | 0.14 / 0.06 | 0.24 / 0.06 | 0.11 / 0.05 | 0.19 / 0.06 | 0.13 / 0.06 | 0.05 / 0.07 |
ORB-SLAM2 (w LC) [12] | 0.21 / 0.09 | 3.04 / 0.09 | 0.22 / 0.08 | 0.04 / 0.06 | 0.08 / 0.08 | 0.29 / 0.06 | 0.73 / 0.05 | 0.51 / 0.05 | 0.16 / 0.07 | 0.34 / 0.06 | 0.05 / 0.07 |
DF-VO [11] | 0.04 / 0.06 | 1.55 / 0.05 | 0.06 / 0.05 | 0.03 / 0.04 | 0.05 / 0.03 | 0.02 / 0.04 | 0.03 / 0.03 | 0.02 / 0.03 | 0.04 / 0.04 | 0.05 / 0.04 | 0.04 / 0.04 |
Dynamic VO [33] | 0.03 / 0.06 | 1.71 / 0.67 | 0.04 / 0.06 | 0.02 / 0.04 | 0.04 / 0.04 | 0.02 / 0.05 | 0.03 / 0.05 | 0.02 / 0.04 | 0.03 / 0.05 | 0.06 / 0.05 | 0.05 / 0.06 |
Proposed (w/o EFC) | 0.03 / 0.05 | 0.86 / 0.05 | 0.04 / 0.04 | 0.03 / 0.04 | 0.02 / 0.03 | 0.02 / 0.04 | 0.01 / 0.03 | 0.02 / 0.03 | 0.04 / 0.04 | 0.04 / 0.04 | 0.03 / 0.03 |
Proposed | 0.03 / 0.04 | 0.75 / 0.05 | 0.03 / 0.03 | 0.03 / 0.04 | 0.02 / 0.03 | 0.02 / 0.03 | 0.01 / 0.02 | 0.01 / 0.03 | 0.03 / 0.04 | 0.03 / 0.04 | 0.02 / 0.03 |
Each cell lists RPE (m) / RPE (°).
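RPE, by contrast, measures drift over a fixed step: the relative ground-truth motion between two frames is compared against the corresponding relative estimated motion, and the translation norm and rotation angle of the discrepancy give the two values per cell above. A minimal sketch, assuming 4×4 homogeneous world-from-body poses and a step of one frame, is shown below; per-sequence figures would then be an aggregate (e.g., mean or RMSE) of these per-step errors.

```python
import numpy as np

def rpe(est_poses, gt_poses, delta=1):
    """Per-step relative pose error.

    est_poses, gt_poses: sequences of 4x4 homogeneous pose matrices.
    Returns (translation errors, rotation errors in degrees), one entry per
    step of size `delta`.
    """
    t_err, r_err = [], []
    for i in range(len(gt_poses) - delta):
        d_gt = np.linalg.inv(gt_poses[i]) @ gt_poses[i + delta]
        d_est = np.linalg.inv(est_poses[i]) @ est_poses[i + delta]
        e = np.linalg.inv(d_gt) @ d_est

        t_err.append(np.linalg.norm(e[:3, 3]))
        # Rotation angle recovered from the trace of the rotation block.
        cos_a = np.clip((np.trace(e[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
        r_err.append(np.degrees(np.arccos(cos_a)))
    return np.array(t_err), np.array(r_err)
```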
Method | RMSE | Mean | Std | Max | Min |
---|---|---|---|---|---|
GNSS | 3.2766 | 2.9888 | 1.3429 | 5.6606 | 0.0164 |
Proposed | 0.5703 | 0.5019 | 0.4846 | 3.6810 | 0.0059 |
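The statistics in this table (RMSE, mean, standard deviation, maximum, minimum) all follow from the per-epoch position error norms; a minimal sketch, assuming such an error array is already available, is given below.

```python
import numpy as np

def error_statistics(err):
    """Summary statistics of per-epoch position errors (assumed in metres)."""
    err = np.asarray(err, dtype=float)
    return {
        "RMSE": float(np.sqrt(np.mean(err ** 2))),
        "Mean": float(err.mean()),
        "Std": float(err.std()),
        "Max": float(err.max()),
        "Min": float(err.min()),
    }
```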