Coarse-to-Fine Contrast Maximization for Energy-Efficient Motion Estimation in Edge-Deployed Event-Based SLAM
Abstract
1. Introduction
2. Contrast Maximization: Principles and Edge-Oriented Analysis
2.1. Rotational Ego-Motion Estimation via Contrast Maximization
2.2. Edge-Oriented Cost Decomposition of CMAX
3. Coarse-to-Fine Contrast Maximization (CCMAX)
- Coarse-to-Fine IWE Construction progressively increases the IWE grid resolution across the optimization stages, directly reducing the image-domain cost in the early stages (a minimal sketch of this schedule follows the list).
- Coarse-Grid Event Subsampling selects representative events within coarse spatial bins, removing redundant event contributions and cutting the event-domain cost during the coarse stages.
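To make the coarse-to-fine schedule concrete, here is a minimal, self-contained sketch of the idea. It is not the paper's implementation: the pure in-plane rotational warp, the 240 × 180 sensor size, and the grid search that stands in for the paper's gradient-based optimizer are all simplifying assumptions for illustration.

```python
import numpy as np

def warp(xs, ys, ts, omega_z, cx=120.0, cy=90.0):
    """Toy depth-independent warp: undo an in-plane rotation of omega_z rad/s
    about the optical axis (the paper's model covers all three rotational axes)."""
    a = -omega_z * ts
    dx, dy = xs - cx, ys - cy
    return (cx + np.cos(a) * dx - np.sin(a) * dy,
            cy + np.sin(a) * dx + np.cos(a) * dy)

def contrast(xs, ys, ts, omega_z, scale, h=180, w=240):
    """CMAX objective: variance of an IWE accumulated at 1/scale resolution
    per axis, so image-domain work drops by roughly scale^2."""
    xw, yw = warp(xs, ys, ts, omega_z)
    xi = np.clip((xw / scale).astype(int), 0, w // scale - 1)
    yi = np.clip((yw / scale).astype(int), 0, h // scale - 1)
    iwe = np.zeros((h // scale, w // scale))
    np.add.at(iwe, (yi, xi), 1.0)   # accumulate events into the coarse IWE
    return iwe.var()

def ccmax_search(xs, ys, ts, schedule=(4, 2, 1), span=2.0):
    """Coarse-to-fine schedule: each stage refines the estimate from the
    previous, coarser stage over a shrinking search window."""
    best = 0.0
    for scale in schedule:
        cands = best + np.linspace(-span, span, 21)
        best = max(cands, key=lambda o: contrast(xs, ys, ts, o, scale))
        span /= 4.0   # narrow the window as the IWE resolution grows
    return best
```

Note that the final stage runs at full resolution (scale 1), mirroring the full-resolution refinement stage that CCMAX explicitly retains.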
3.1. Coarse-to-Fine IWE Construction for Reducing Image-Domain Cost
3.2. Coarse-Grid Event Subsampling for Reducing Event-Domain Cost
Algorithm 1: Coarse-grid event subsampling.
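Algorithm 1 itself appears only as a figure in the published version; the following Python sketch reconstructs its idea from the surrounding text, keeping one representative event per coarse spatial cell and time slice. The `n_slices` parameter and the first-event-wins selection policy are assumptions for illustration, not necessarily the paper's exact choices.

```python
import numpy as np

def coarse_grid_subsample(xs, ys, ts, cell, n_slices=8):
    """Keep one representative event per (time slice, coarse cell).

    cell: coarse-cell side length in pixels (matched to the coarse IWE bin).
    n_slices: number of temporal slices, so fast motion is not collapsed away.
    """
    t0, t1 = ts.min(), ts.max()
    s = np.minimum(((ts - t0) / max(t1 - t0, 1e-9) * n_slices).astype(int),
                   n_slices - 1)
    # Composite key: temporal slice plus coarse spatial cell indices.
    keys = np.stack([s, ys // cell, xs // cell], axis=1)
    # np.unique returns the index of the first event seen in each cell.
    _, keep = np.unique(keys, axis=0, return_index=True)
    keep.sort()   # restore temporal ordering of the surviving events
    return xs[keep], ys[keep], ts[keep]
```

With `cell = 4`, at most one event per 4 × 4 pixel cell survives in each time slice, which matches the coarse IWE bins and bounds the event-domain cost of the coarse stages.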
3.3. Evaluation of CCMAX Configurations
- Evaluation metric and setup.
- Iteration budget and configuration notation.
- Effect of coarse-to-fine IWE construction.
- Effect of coarse-grid event subsampling.
4. Experimental Evaluation
4.1. Compute Analysis for Edge Deployment
4.2. Energy Validation on a Prototype Edge SoC
5. Discussion
6. Conclusions
1. We analyzed the computational inefficiency of contrast maximization (CMAX) for iterative rotational ego-motion estimation and identified two dominant cost components: event-domain processing, which scales with the number of events, and image-domain IWE processing, which scales with the IWE resolution (a numeric sketch of this cost model follows the list).
2. We proposed coarse-to-fine contrast maximization (CCMAX), which aligns computational fidelity with the coarse-to-fine convergence behavior of CMAX via (i) coarse-to-fine IWE construction and (ii) coarse-grid event subsampling, while explicitly retaining a final full-resolution refinement stage.
3. Experiments on standard event-camera benchmarks with IMU ground truth show that properly designed schedules achieve accuracy comparable to the full-resolution baseline under a fixed iteration budget.
4. CCMAX reduces floating-point operations by up to 42% and lowers the energy consumption of the iterative CMAX pipeline by up to 87% on a custom RISC-V-based edge SoC prototype, demonstrating its suitability for real-time edge-SLAM front-end deployment under tight compute and power budgets.
5. Limitations and future work: the proposed approach targets rotational ego-motion under a depth-independent warping model. Extending it to motions that include translation, which introduce depth dependence and parallax, and improving robustness to non-ideal real-world conditions (e.g., jitter, shocks, and dynamic objects) remain important directions for future research.
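As a back-of-the-envelope check on contribution 1, the following sketch evaluates the two-term per-iteration cost model. The unit cost constants, sensor size, and event counts are illustrative assumptions; the paper's reported FLOP reductions come from its actual kernel analysis in Section 4.1.

```python
def iteration_cost(n_events, n_pixels, c_event=1.0, c_pixel=1.0):
    """Two-term CMAX cost model: event-domain work scales with the number
    of processed events, image-domain work with the IWE pixel count."""
    return c_event * n_events + c_pixel * n_pixels

# Example with a 240x180 sensor and 30k events per window: a 4x-coarser
# stage shrinks the IWE by 16x and, with subsampling, the events by ~4x.
fine = iteration_cost(30_000, 240 * 180)
coarse = iteration_cost(30_000 // 4, (240 // 4) * (180 // 4))
print(f"coarse-stage cost: {100 * coarse / fine:.1f}% of a fine iteration")
```

Under these toy constants a coarse stage costs about 14% of a fine iteration, which is the mechanism behind the FLOP and energy savings summarized above.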
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
| IPs | LUTs | FFs |  |
|---|---|---|---|
| Developed Processor | 32,092 | 27,563 | 57.01 |
| ↳ RISC-V Rocket Core | 14,862 | 10,042 | 6.77 |
| ↳ Rocket Core Interface | 165 | 201 | 0.62 |
| ↳ External I/O | 3,108 | 2,479 | 5.62 |
| ↳ Main Memory | 183 | 328 | 1.24 |
| ↳ System Interconnect | 4,754 | 6,571 | 6.37 |
| ↳ DDR Controller | 8,043 | 7,434 | 34.50 |
| ↳ DMA | 977 | 508 | 1.89 |
| Config. | IWE Pixels (rel.) | Events (rel.) | Norm. Energy |
|---|---|---|---|
| Fine (baseline) | 1 | 1 | 100.0% |
| C1 |  |  | 35.47% |
| C2 |  |  | 12.97% |