A Novel Shallow Neural Network-Augmented Pose Estimator Based on Magneto-Inertial Sensors for Reference-Denied Environments
Abstract
1. Introduction
1. A complete learning framework is developed that produces a computationally efficient pose estimation model suitable for implementation on microcontroller-based embedded systems. The core element of this framework is an SNN that achieves high estimation performance with low computational cost.
2. A robot–sensor model driven by real-world measurements is constructed to generate the training dataset. This model provides reference data for network training together with realistic MARG sensor readings, forming a comprehensive database for MARG-only pose estimation approaches.
3. A novel CFNN architecture with two hidden layers is proposed. The results demonstrate that this topology represents the most suitable configuration among the SNNs, effectively integrating useful information from various input sensor signals (e.g., raw MARG data and orientation estimates) to learn the complex nonlinear mapping required for estimating the true acceleration.
4. A comprehensive performance evaluation is conducted, including different MARG data configurations within NNs and a comparison with established attitude estimation filters such as EKF-based and gradient-descent algorithms, with the results used to validate the proposed approach.
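To make the cascade-forward topology of contribution 3 concrete, the following minimal numpy sketch shows a two-hidden-layer CFNN forward pass, in which the input and every earlier layer also connect directly to all later layers; the weight names and dimensions are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def cfnn2_forward(x, p):
    """Forward pass of a two-hidden-layer cascade-forward NN (CFNN2):
    besides the usual layer-to-layer path, the input and each earlier
    layer connect directly to every later layer."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))   # logistic sigmoid hidden activations
    h1 = sigmoid(p['W1x'] @ x + p['b1'])
    h2 = sigmoid(p['W2x'] @ x + p['W21'] @ h1 + p['b2'])
    # linear output layer fed by the input and both hidden layers
    return p['Wox'] @ x + p['Wo1'] @ h1 + p['Wo2'] @ h2 + p['bo']
```

With 16 input channels, 15 neurons per hidden layer, and 3 outputs, the parameter dictionary would contain W1x (15×16), W2x (15×16), W21 (15×15), Wox (3×16), Wo1 (3×15), Wo2 (3×15), and the bias vectors.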
2. Proposed Method
1. First, a physical low-cost quadcopter was used to generate various motions. Multiple trajectories were executed by the robot and the raw sensor data were simultaneously recorded on an SD card. This process comprised more than 42 min of quadcopter flight with various maneuvers (i.e., random flights with different accelerations, loop trajectories, flip maneuvers, and dedicated rotations).
2. Next, multiple filtration steps were executed to roughly estimate the pose of the quadcopter during the executed flights. The filtration steps included simple low- and high-pass filters and their combination, with the cut-off frequency adapted to the dynamics of the real system. The applied filters were used both to combine the raw sensor data and to remove bias, thereby providing approximated trajectories. The aim of this rough estimation of position and orientation is to enable the generation of real-world scenarios with the proposed simulation model. This method ensured that the NN training was based on realistic maneuvers and that the generated signals had the same spectral characteristics as the real-world motions. The measurements included the accelerometer, gyroscope, magnetometer, barometer, and estimated attitude results, from which rough trajectory curves were obtained. These rough trajectories were only approximations (as the real trajectories executed by the quadcopter were unknown); however, the dynamics, shapes, and frequencies included in the obtained trajectories were the same as those in the real flight maneuvers.
3. These roughly estimated pose data were employed within the simulation environment introduced in [49] to generate realistic sensor data along with the ground truth poses; this was a key step, since the ground truth pose was unknown during the real quadcopter flights. Based on the raw sensor data, attitude estimation was executed in two state-of-the-art filter frameworks to obtain the orientation of the quadcopter.
4. After the database was generated, the NN architecture was set up and the NN training process was executed. Finally, the obtained NN was evaluated for acceleration, velocity, and pose estimation. The advantage of shallow networks over deep networks lies in their simplicity, reduced need for training data, lower susceptibility to overfitting, and easier applicability in embedded systems (with computational requirements similar to those of a traditional EKF). The proposed environment executes orientation estimation with EKF and gradient descent (GRD) algorithms, which were used as input signals in the NN training process. This partly regularizes the network via physics (similarly to physics-enhanced NNs); moreover, it reduces the required network complexity, since the NN is supplied with compressed and semi-processed data. Additionally, statistical signals (e.g., magnitude) were utilized for similar reasons.
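The low-/high-pass filter combination described in step 2 is, in essence, a complementary filter: the gyro-integrated angle is high-passed and the accelerometer-derived tilt angle is low-passed with a common cut-off frequency. A minimal sketch of this idea, assuming hypothetical 1-D input signals (the paper does not give its exact filter implementation):

```python
import numpy as np

def complementary_filter(gyro_rate, acc_angle, fs, fc=0.5):
    """Fuse a gyro rate (high-pass path, via integration) with an
    accelerometer tilt angle (low-pass path) using a first-order
    complementary filter.  gyro_rate [rad/s], acc_angle [rad],
    fs: sample rate [Hz], fc: cut-off frequency [Hz] matched to the
    system dynamics (illustrative value)."""
    dt = 1.0 / fs
    tau = 1.0 / (2.0 * np.pi * fc)
    alpha = tau / (tau + dt)               # blending coefficient from fc
    angle = np.empty_like(acc_angle)
    angle[0] = acc_angle[0]
    for k in range(1, len(acc_angle)):
        # gyro integration dominates at high frequency,
        # accelerometer reference dominates at low frequency
        angle[k] = alpha * (angle[k - 1] + gyro_rate[k] * dt) \
                   + (1 - alpha) * acc_angle[k]
    return angle
```

The same structure (low-pass on one source, the complementary high-pass on the other) generalizes to the bias-removal and data-combination steps mentioned above.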
2.1. Accessibility Experiments with a Real Quadcopter System
2.2. Rough Estimation of Pose Data
2.3. Simulation Environment
2.4. Model Dynamics
2.5. Error Analysis
- (A) The graph shows the estimated position components based on the original sensor data (Xb, Yb, Zb) and the modeled sensor data (Xc, Yc, Zc) using the same rough estimation algorithm.
- (B) The graph shows the regression analysis of the obtained position signals, where the title contains important metrics such as the Pearson correlation (r = 0.894), the root mean square error (rms = 18.5 m), and the number of used samples (N = 598125).
- (C) The graph shows the estimated velocities based on the original sensor data (VXb, VYb, VZb) and the modeled sensor data (VXc, VYc, VZc) using the same rough estimation algorithm. As stated before, these signals represent the maneuver characteristics rather than the exact trajectories.
- (D) The graph shows the spectrogram of the velocity signals, with the rows listed in the following order: VXb, VXc, VYb, VYc, VZb, VZc. This graph provides a three-dimensional report in which the horizontal axis represents time, the vertical axis represents frequency, and the amplitude is color-coded. The first two rows are the X channels, the third and fourth rows the Y channels, and the last two rows the Z channels. The similarity of these color strips confirms the similar spectral characteristics of the pairwise signals.
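A pairwise spectrogram comparison of this kind can be reproduced with a short numpy-only STFT; the sketch below assumes (n, 3) velocity arrays for the measured (b) and modeled (c) signals and stacks the six spectrogram rows in the same order as in the figure (window and step sizes are illustrative):

```python
import numpy as np

def stft_mag(sig, fs, nperseg=256, step=128):
    """Simple magnitude spectrogram (windowed STFT) using only numpy."""
    win = np.hanning(nperseg)
    n = (len(sig) - nperseg) // step + 1
    frames = np.stack([sig[i * step : i * step + nperseg] * win
                       for i in range(n)])
    freqs = np.fft.rfftfreq(nperseg, d=1.0 / fs)
    return freqs, np.abs(np.fft.rfft(frames, axis=1)).T   # (n_freq, n_frames)

def velocity_spectrograms(vel_b, vel_c, fs=100.0):
    """Stack per-axis spectrograms of measured (vel_b) and modeled
    (vel_c) velocities in the pairwise order VXb, VXc, VYb, VYc,
    VZb, VZc; similar strips in adjacent rows indicate matching
    spectral characteristics."""
    rows, freqs = [], None
    for axis in range(3):
        for sig in (vel_b[:, axis], vel_c[:, axis]):
            freqs, spec = stft_mag(sig, fs)
            rows.append(spec)
    return freqs, np.stack(rows)   # (6, n_freq, n_frames)
```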
2.6. Attitude and External Acceleration Estimation
2.6.1. Extended Kalman Filter
2.6.2. Gradient Descent-Based Orientation Filter
2.6.3. External Acceleration Estimation
2.7. Neural Network Model
1. CFNN1—Cascade-forward NN with one hidden layer, which provides lower nonlinear capabilities. No time delay.
2. CFNN2—Cascade-forward NN with two hidden layers, with the aim of providing higher nonlinear capabilities. No time delay.
3. FTDNN—Focused time-delay NN with three time delays (1:3) and one hidden layer; for example, this network artificially adds an extra 48 input neurons for the 16 real input channels, resulting in 64 input neurons altogether.
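The input expansion of the FTDNN can be expressed as a tapped delay line: each of the 16 channels is duplicated at delays 1, 2, and 3, adding 48 columns. A minimal sketch (function name and edge-padding choice are assumptions):

```python
import numpy as np

def add_tapped_delays(X, delays=(1, 2, 3)):
    """Augment an (n_samples, n_channels) input matrix with delayed
    copies of every channel, as a focused time-delay network does.
    With 16 channels and delays 1:3 this yields 16 + 48 = 64 inputs.
    The first rows are padded by repeating the initial sample."""
    cols = [X]
    for d in delays:
        # shift the whole matrix down by d samples
        Xd = np.vstack([np.repeat(X[:1], d, axis=0), X[:-d]])
        cols.append(Xd)
    return np.hstack(cols)
```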
2.8. Dataset Splitting and Performance Evaluation
2.9. Network Initialization and Training Method
1. Dataset size: Our dataset was sufficiently large, which often mitigates the need for extensive hyperparameter tuning. With a large dataset, NNs tend to exhibit robustness to variations in hyperparameters.
2. Convergence: We observed satisfactory convergence of the NN training process with the default parameters. This convergence indicates that the default parameters were suitable for our dataset and task.
3. Low variability: The variability in the performance of the NN outputs was minimal, indicating that the default parameters provided both stable and consistent results across different runs.
- Learning rate:
- Activation function of hidden layers: logistic sigmoid
- Activation function of output layer: linear activation function
- Training algorithm: Levenberg–Marquardt backpropagation
- Mini-batch size: entire training dataset
- Performance function: mean squared error (MSE)
- Regularization: no regularization
- Normalization: no normalization
- Stop criteria:
  - Maximum number of epochs: 1000
  - Maximum time: infinity (no limit)
  - Performance goal: 0 (perfect case)
  - Maximum gradient:
  - Damping parameter ():
  - Training stops: if
  - Maximum validation checks: 6
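The stop criteria above can be mirrored in a generic training loop. The sketch below assumes hypothetical `train_step` and `val_loss_fn` callables (one epoch of Levenberg–Marquardt training and a validation MSE evaluation, respectively) and implements only the epoch limit, the performance goal, and the six-consecutive-validation-check early stopping; the gradient and damping-parameter criteria of `trainlm` are omitted:

```python
import numpy as np

def train_with_early_stopping(train_step, val_loss_fn, max_epochs=1000,
                              goal=0.0, max_val_checks=6):
    """Run training epochs until one of the stop criteria fires:
    the MSE performance goal, `max_val_checks` consecutive epochs
    without validation improvement, or the epoch limit."""
    best_val, checks = np.inf, 0
    for epoch in range(max_epochs):
        train_mse = train_step()          # one epoch; returns training MSE
        val_mse = val_loss_fn()           # current validation MSE
        if val_mse < best_val:
            best_val, checks = val_mse, 0
        else:
            checks += 1
            if checks >= max_val_checks:  # 6 failed checks -> stop
                return epoch + 1, 'validation stop'
        if train_mse <= goal:             # performance goal reached
            return epoch + 1, 'goal reached'
    return max_epochs, 'max epochs'
```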
3. Results
3.1. First Performance Comparison—Raw Sensor Data and Orientation Angles
1. The reference baseline group shows the estimation results of the baseline methods without NN elaboration. In the evaluated cases, the orientation is estimated with the EKF or GRD method based on the accelerometer, gyroscope, and magnetometer measurements (Acc + Gyr + Mag).
2. The applicable NN group represents the elaborated solutions that can be applied in real mobile robot systems. The input fully relies on the accelerometer, gyroscope, and magnetometer measurements (Acc + Gyr + Mag).
3. The reference NN group uses the ground truth orientation (ANG) instead of the estimated orientation, which demonstrates the theoretical maximum performance in the case of perfect attitude estimation.
3.2. Second Performance Comparison—Additional Features
3.3. Third Performance Comparison—Number of Neurons
4. Discussion
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Abbreviations
| ACC (Acc) | Accelerometer signal |
| ANGe | Orientation estimation in Euler format |
| ANGq | Orientation estimation in Quaternion format |
| CFNN | Cascade-Forward Neural Network (a specific type of shallow network) |
| CFNN1H18 | CFNN with 1 hidden layer and 18 neurons |
| CFNN2H34 | CFNN with 2 hidden layers and 34 neurons |
| CNN | Convolutional Neural Network (not shallow network) |
| DNN | Deep Neural Network (i.e., not a shallow network) |
| DOF | Degrees of Freedom |
| EKF | Extended Kalman Filter |
| EKFe | EKF-based orientation signal in Euler format |
| EKFq | EKF-based orientation signal in Quaternion format |
| FTDNN | Focused Time-Delay Neural Network |
| GPS | Global Positioning System |
| GRD | Gradient Descent algorithm |
| GRDe | GRD-based orientation signal in Euler format |
| GRDq | GRD-based orientation signal in Quaternion format |
| GYR (Gyr) | Gyroscope signal |
| IMU | Inertial Measurement Unit |
| MAG (Mag) | Magnetometer signal |
| MARG | Magnetic, Angular Rate, and Gravity |
| NN | Neural Network |
| RPY | Roll–Pitch–Yaw signal |
| SNN | Shallow Neural Network |
Appendix A. Effect of Sampling Rate

Appendix B. Channel Importance Test
1. The X output channel highly relies on the AZ, AX, and MX input channels.
2. The Y output channel highly relies on the AY, AZ, and MY input channels.
3. The Z output channel highly relies on the AZ channel.
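Channel-importance rankings of this kind are commonly obtained with a permutation test: one input channel at a time is shuffled and the resulting increase in output error is recorded. The sketch below assumes the trained model is available as a callable predictor (the paper does not specify its exact test procedure):

```python
import numpy as np

def permutation_importance(model, X, y, seed=None):
    """Permute one input channel at a time and record the increase in
    output MSE relative to the unperturbed baseline; a larger increase
    means the channel carries more information for the output."""
    rng = np.random.default_rng(seed)
    base_mse = np.mean((model(X) - y) ** 2)
    importance = np.empty(X.shape[1])
    for ch in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, ch] = rng.permutation(Xp[:, ch])   # destroy channel info
        importance[ch] = np.mean((model(Xp) - y) ** 2) - base_mse
    return importance
```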

References
- Wang, J.; Zhang, C.; Wu, J.; Liu, M. An Improved Invariant Kalman Filter for Lie Groups Attitude Dynamics with Heavy-Tailed Process Noise. Machines 2021, 9, 182. [Google Scholar] [CrossRef]
- Wu, J. MARG Attitude Estimation Using Gradient-Descent Linear Kalman Filter. IEEE Trans. Autom. Sci. Eng. 2020, 17, 1777–1790. [Google Scholar] [CrossRef]
- Wu, J.; Zhou, Z.; Fourati, H.; Cheng, Y. A super fast attitude determination algorithm for consumer-level accelerometer and magnetometer. IEEE Trans. Consum. Electron. 2018, 64, 375–381. [Google Scholar] [CrossRef]
- Feng, K.; Li, J.; Zhang, X.; Shen, C.; Bi, Y.; Zheng, T.; Liu, J. A new quaternion-based Kalman filter for real-time attitude estimation using the two-step geometrically-intuitive correction algorithm. Sensors 2017, 17, 2146. [Google Scholar] [CrossRef] [PubMed]
- Hashim, H.A.; Eltoukhy, A.E. Nonlinear filter for simultaneous localization and mapping on a matrix lie group using imu and feature measurements. IEEE Trans. Syst. Man Cybern. Syst. 2021, 52, 2098–2109. [Google Scholar] [CrossRef]
- Hashim, H.A. A geometric nonlinear stochastic filter for simultaneous localization and mapping. Aerosp. Sci. Technol. 2021, 111, 106569. [Google Scholar] [CrossRef]
- Hashim, H.A.; Abouheaf, M.; Abido, M.A. Geometric stochastic filter with guaranteed performance for autonomous navigation based on IMU and feature sensor fusion. Control Eng. Pract. 2021, 116, 104926. [Google Scholar] [CrossRef]
- Liu, H.; Zhou, Z.; Yu, L. Maneuvering Acceleration Estimation Algorithm Using Doppler Radar Measurement. Math. Probl. Eng. 2018, 2018, 4984186. [Google Scholar] [CrossRef]
- Zhang, X.; Zheng, K.; Lu, C.; Wan, J.; Liu, Z.; Ren, X. Acceleration estimation using a single GPS receiver for airborne scalar gravimetry. Adv. Space Res. 2017, 60, 2277–2288. [Google Scholar] [CrossRef]
- Hosseinyalamdary, S. Deep Kalman filter: Simultaneous multi-sensor integration and modelling; A GNSS/IMU case study. Sensors 2018, 18, 1316. [Google Scholar] [CrossRef]
- Caruso, M.; Sabatini, A.M.; Laidig, D.; Seel, T.; Knaflitz, M.; Croce, U.D.; Cereatti, A. Analysis of the Accuracy of Ten Algorithms for Orientation Estimation Using Inertial and Magnetic Sensing under Optimal Conditions: One Size Does Not Fit All. Sensors 2021, 21, 2543. [Google Scholar] [CrossRef]
- Hashim, H.A. GPS-denied Navigation: Attitude, Position, Linear Velocity, and Gravity Estimation with Nonlinear Stochastic Observer. In Proceedings of the 2021 American Control Conference (ACC), New Orleans, LA, USA, 25–28 May 2021; pp. 1146–1151. [Google Scholar]
- Poulose, A.; Eyobu, O.S.; Han, D.S. An indoor position-estimation algorithm using smartphone IMU sensor data. IEEE Access 2019, 7, 11165–11177. [Google Scholar] [CrossRef]
- Candan, B.; Soken, H.E. Robust Attitude Estimation Using IMU-Only Measurements. IEEE Trans. Instrum. Meas. 2021, 70, 1–9. [Google Scholar] [CrossRef]
- Candan, B.; Soken, H.E. Estimation of attitude using robust adaptive Kalman filter. In Proceedings of the 2021 IEEE 8th International Workshop on Metrology for AeroSpace (MetroAeroSpace), Naples, Italy, 23–25 June 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 159–163. [Google Scholar]
- Kecskés, I.; Odry, P.; Odry, P. Uncertainties in the movement and measurement of a hexapod robot. In Perspectives in Dynamical Systems I: Mechatronics and Life Sciences; Springer: Cham, Switzerland, 2021. [Google Scholar]
- Ok, M.; Ok, S.; Park, J.H. Estimation of Vehicle Attitude, Acceleration and Angular Velocity Using Convolutional Neural Network and Dual Extended Kalman Filter. Sensors 2021, 21, 1282. [Google Scholar] [CrossRef] [PubMed]
- De Marina, H.G.; Pereda, F.J.; Giron-Sierra, J.M.; Espinosa, F. UAV attitude estimation using unscented Kalman filter and TRIAD. IEEE Trans. Ind. Electron. 2011, 59, 4465–4474. [Google Scholar] [CrossRef]
- Han, X.; Li, H.; Hui, N.; Zhang, J.; Yue, G. A Multi-Sensor Fusion-Based Localization Method for a Magnetic Adhesion Wall-Climbing Robot. Sensors 2025, 25, 5051. [Google Scholar] [CrossRef]
- Yang, M.; Han, K.; Sun, T.; Tian, K.; Lian, C.; Zhao, Y.; Wang, Z.; Huang, Q.; Chen, M.; Li, W.J. A multi-sensor fusion approach for centimeter-level indoor 3D localization of wheeled robots. Meas. Sci. Technol. 2025, 36, 046304. [Google Scholar] [CrossRef]
- Teng, X.; Shen, Z.; Huang, L.; Li, H.; Li, W. Multi-sensor fusion based wheeled robot research on indoor positioning method. Results Eng. 2024, 22, 102268. [Google Scholar] [CrossRef]
- Tran, Q.K.; Ryoo, Y.J. Multi-Sensor Fusion Framework for Reliable Localization and Trajectory Tracking of Mobile Robot by Integrating UWB, Odometry, and AHRS. Biomimetics 2025, 10, 478. [Google Scholar] [CrossRef]
- Huang, G.; Huang, H.; Zhai, Y.; Tang, G.; Zhang, L.; Gao, X.; Huang, Y.; Ge, G. Multi-Sensor Fusion for Wheel-Inertial-Visual Systems Using a Fuzzification-Assisted Iterated Error State Kalman Filter. Sensors 2024, 24, 7619. [Google Scholar] [CrossRef]
- Zhang, L.; Wu, X.; Gao, R.; Pan, L.; Zhang, Q. A multi-sensor fusion positioning approach for indoor mobile robot using factor graph. Measurement 2023, 216, 112926. [Google Scholar] [CrossRef]
- Cheng, B.; He, X.; Li, X.; Zhang, N.; Song, W.; Wu, H. Research on positioning and navigation system of greenhouse mobile robot based on multi-sensor fusion. Sensors 2024, 24, 4998. [Google Scholar] [CrossRef] [PubMed]
- Rodríguez-Abreo, O.; Castillo Velásquez, F.A.; Zavala de Paz, J.P.; Martínez Godoy, J.L.; Garcia Guendulain, C. Sensorless Estimation Based on Neural Networks Trained with the Dynamic Response Points. Sensors 2021, 21, 6719. [Google Scholar] [CrossRef] [PubMed]
- Rodríguez-Abreo, O.; Garcia-Guendulain, J.M.; Hernández-Alvarado, R.; Flores Rangel, A.; Fuentes-Silva, C. Genetic Algorithm-Based Tuning of Backstepping Controller for a Quadrotor-Type Unmanned Aerial Vehicle. Electronics 2020, 9, 1735. [Google Scholar] [CrossRef]
- Svacha, J.; Paulos, J.; Loianno, G.; Kumar, V. Imu-based inertia estimation for a quadrotor using newton-euler dynamics. IEEE Robot. Autom. Lett. 2020, 5, 3861–3867. [Google Scholar] [CrossRef]
- Martínez-León, A.S.; Jatsun, S.; Emelyanova, O. Investigation of Wind Effects on UAV Adaptive PID Based MPC Control System. Enfoque UTE 2024, 15, 36–47. [Google Scholar] [CrossRef]
- Jeong, H.; Suk, J.; Kim, S. Control of quadrotor UAV using variable disturbance observer-based strategy. Control Eng. Pract. 2024, 150, 105990. [Google Scholar] [CrossRef]
- Izadi, M.; Faieghi, R. High-gain disturbance observer for robust trajectory tracking of quadrotors. Control Eng. Pract. 2024, 145, 105854. [Google Scholar] [CrossRef]
- Bianchi, D.; Di Gennaro, S.; Di Ferdinando, M.; Acosta Lua, C. Robust control of uav with disturbances and uncertainty estimation. Machines 2023, 11, 352. [Google Scholar] [CrossRef]
- Gu, W.; Zhao, J.; Rizzo, A. Learning uncertainties online for quadrotor flight control: A comparative study. J. Intell. Robot. Syst. 2025, 111, 98. [Google Scholar] [CrossRef]
- Macdonald, J.; Leishman, R.; Beard, R.; McLain, T. Analysis of an improved IMU-based observer for multirotor helicopters. J. Intell. Robot. Syst. 2014, 74, 1049–1061. [Google Scholar] [CrossRef]
- Kuang, J.; Niu, X.; Chen, X. Robust pedestrian dead reckoning based on MEMS-IMU for smartphones. Sensors 2018, 18, 1391. [Google Scholar] [CrossRef] [PubMed]
- Seel, T.; Raisch, J.; Schauer, T. IMU-based joint angle measurement for gait analysis. Sensors 2014, 14, 6891–6909. [Google Scholar] [CrossRef] [PubMed]
- Bai, N.; Tian, Y.; Liu, Y.; Yuan, Z.; Xiao, Z.; Zhou, J. A high-precision and low-cost IMU-based indoor pedestrian positioning technique. IEEE Sens. J. 2020, 20, 6716–6726. [Google Scholar] [CrossRef]
- Liu, W.; Caruso, D.; Ilg, E.; Dong, J.; Mourikis, A.I.; Daniilidis, K.; Kumar, V.; Engel, J. TLIO: Tight learned inertial odometry. IEEE Robot. Autom. Lett. 2020, 5, 5653–5660. [Google Scholar] [CrossRef]
- Brossard, M.; Barrau, A.; Bonnabel, S. AI-IMU dead-reckoning. IEEE Trans. Intell. Veh. 2020, 5, 585–595. [Google Scholar] [CrossRef]
- Esfahani, M.A.; Wang, H.; Wu, K.; Yuan, S. AbolDeepIO: A novel deep inertial odometry network for autonomous vehicles. IEEE Trans. Intell. Transp. Syst. 2019, 21, 1941–1950. [Google Scholar] [CrossRef]
- Silva do Monte Lima, J.P.; Uchiyama, H.; Taniguchi, R.i. End-to-end learning framework for imu-based 6-dof odometry. Sensors 2019, 19, 3777. [Google Scholar] [CrossRef]
- Kim, W.Y.; Seo, H.I.; Seo, D.H. Nine-Axis IMU-based Extended inertial odometry neural network. Expert Syst. Appl. 2021, 178, 115075. [Google Scholar] [CrossRef]
- Hernandez Sanchez, S.; Fernández Pozo, R.; Hernandez Gomez, L.A. Estimating vehicle movement direction from smartphone accelerometers using deep neural networks. Sensors 2018, 18, 2624. [Google Scholar] [CrossRef]
- Eren, H.; Makinist, S.; Akin, E.; Yilmaz, A. Estimating driving behavior by a smartphone. In Proceedings of the 2012 IEEE Intelligent Vehicles Symposium, Madrid, Spain, 3–7 June 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 234–239. [Google Scholar]
- Mhaskar, H.; Liao, Q.; Poggio, T. When and why are deep networks better than shallow ones? In Proceedings of the AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; Volume 31.
- Nuswantoro, F.M.; Sudarsono, A.; Santoso, T.B. Abnormal Driving Detection Based on Accelerometer and Gyroscope Sensor on Smartphone using Artificial Neural Network (ANN) Algorithm. In Proceedings of the 2020 International Electronics Symposium (IES), Surabaya, Indonesia, 29–30 September 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 356–363. [Google Scholar]
- Warsito, B.; Santoso, R.; Suparti; Yasin, H. Cascade forward neural network for time series prediction. In Journal of Physics: Conference Series; IOP Publishing: Bristol, UK, 2018; Volume 1025, p. 012097. [Google Scholar]
- Kecskés, I.; Odry, Á.; Tadić, V.; Odry, P. Simultaneous calibration of a hexapod robot and an IMU sensor model based on raw measurements. IEEE Sens. J. 2021, 21, 14887–14898. [Google Scholar] [CrossRef]
- Odry, Á. An Open-Source Test Environment for Effective Development of MARG-Based Algorithms. Sensors 2021, 21, 1183. [Google Scholar] [CrossRef]
- Odry, Á. Flight Maneuvers Data Sets. 2025. Available online: https://github.com/akosodry/quadcopter_meas (accessed on 6 November 2025).
- Betaflight. A Popular Open-Source Flight Controller Software Used in FPV (First Person View) Drones. 2025. Available online: https://github.com/betaflight/betaflight.com (accessed on 6 November 2025).
- Mahony, R.; Hamel, T.; Pflimlin, J.M. Nonlinear complementary filters on the special orthogonal group. IEEE Trans. Autom. Control 2008, 53, 1203–1218. [Google Scholar] [CrossRef]
- Odry, A.; Kecskes, I.; Csik, D.; Hashim, H.A.; Sarcevic, P. Adaptive gradient-descent extended Kalman filter for pose estimation of mobile robots with sparse reference signals. In Proceedings of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan, 23–27 October 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 4010–4017. [Google Scholar]
- Odry, Á.; Kecskes, I.; Sarcevic, P.; Vizvari, Z.; Toth, A.; Odry, P. A novel fuzzy-adaptive extended kalman filter for real-time attitude estimation of mobile robots. Sensors 2020, 20, 803. [Google Scholar] [CrossRef]
- Madgwick, S.O.; Harrison, A.J.; Vaidyanathan, R. Estimation of IMU and MARG orientation using a gradient descent algorithm. In Proceedings of the 2011 IEEE International Conference on Rehabilitation Robotics, Zurich, Switzerland, 29 June–1 July 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 1–7. [Google Scholar]
- Jeyaprabha, T.J.; Kavya, E.; Maurya, V.; Nitish, K. Robot Automation Algorithms. IOSR J. Electron. Commun. Eng. 2021, 16, 28–32. [Google Scholar]
- Peddinti, V.; Povey, D.; Khudanpur, S. A time delay neural network architecture for efficient modeling of long temporal contexts. In Proceedings of the Sixteenth Annual Conference of the International Speech Communication Association, Dresden, Germany, 6–10 September 2015. [Google Scholar]
- Simonetti, E.; Bergamini, E.; Vannozzi, G.; Bascou, J.; Pillet, H. Estimation of 3D Body Center of Mass Acceleration and Instantaneous Velocity from a Wearable Inertial Sensor Network in Transfemoral Amputee Gait: A Case Study. Sensors 2021, 21, 3129. [Google Scholar] [CrossRef]
- The MathWorks, Inc. Trainlm — Levenberg-Marquardt Backpropagation. 2025. Available online: https://www.mathworks.com/help/deeplearning/ref/trainlm.html (accessed on 6 November 2025).
- Brownlee, J. Why Initialize a Neural Network with Random Weights. 2021. Available online: https://machinelearningmastery.com/why-initialize-a-neural-network-with-random-weights (accessed on 6 November 2025).
| Sensor | Type | Key Parameter | Range | Resolution/Noise |
|---|---|---|---|---|
| MPU6050 | 3-Axis Gyroscope | Angular Velocity | User-programmable: ±250 deg/s to ±2000 deg/s | 16-bit ADC Noise: ∼0.005 deg/s/ |
| | 3-Axis Accelerometer | Acceleration | User-programmable: ±2 g to ±16 g | 16-bit ADC |
| HMC5883L | 3-Axis Magnetometer | Magnetic Field | ±8 Gauss (full-scale) | 12-bit ADC Field resolution: 2 milli-Gauss Noise Floor: 2 milli-Gauss |
| BMP180 | Barometric pressure, temperature, and altitude sensor | Pressure | 300 to 1100 hPa (+9000 m to −500 m relative to sea level) | 16 to 19-bit output Noise (High-Res Mode): 0.02 hPa (∼0.17 m) |
| Architecture | Input Dimension (Real Channels) | No. of Hidden Layers | Neurons per Hidden Layer | Total Hidden Neurons | Time Delay |
|---|---|---|---|---|---|
| CFNN1 | 16 | 1 | H (variable, 10–50 tested) | H | No |
| CFNN2 | 16 | 2 | H (variable, 10–50 tested) | 2 × H | No |
| FTDNN | 16 | 1 | H (variable, 10–50 tested) | H | Yes (1:3 delays) |
| Source | Channel | Symbol | Unit | Fs (Hz) | Role |
|---|---|---|---|---|---|
| Sensors on quadcopter and Betaflight calculations | 3D accelerometer sensor signal | ACCb | m/s2 | 1000 | Input measurements for database generation based on the 6-DOF test environment |
| | 3D gyroscope sensor signal | GYRb | deg/s | 1000 | |
| | 3D magnetometer sensor signal | MAGb | uT | 1000 | |
| | Barometer (altitude) sensor signal | BARb | m | 1000 | |
| | 3D attitude (orientation) | RPYb | rad | 1000 | |
| Rough estimation of quadcopter pose | Rough 3D position | XYZ∼ | m | 1000 | |
| | Rough 3D attitude (orientation) | RPY∼ | rad | 1000 | |
| Output of 6-DOF test environment | Reference 3D acceleration | True Acc (ground truth) | m/s2 | 1000 | Target for NN training |
| | Reference 3D position | True XYZ (ground truth) | m | 1000 | 3D position validation |
| | Noisy 3D accelerometer sensor signal | Acc | m/s2 | 1000 | NN inputs |
| | Noisy 3D gyroscope sensor signal | Gyr | deg/s | 1000 | |
| | Noisy 3D magnetometer sensor signal | Mag | uT | 1000 | |
| Filter algorithms | Orientation estimation by EKF | EKFe, EKFq | rad | 100 | NN inputs (RPY^ in Figure 2) |
| | Orientation estimation by GRD | GRDe, GRDq | rad | 100 | |
| | 3D acceleration estimation based on EKF orientation results | EKFa | m/s2 | 100 | Baseline methods (Acc^ in Figure 2) |
| | 3D acceleration estimation based on GRD orientation results | GRDa | m/s2 | 100 | |
| Additional features (IMUf) | Magnitude of 3D accelerometer signal | ACCmag | m/s2 | 100 | NN inputs |
| | Magnitude of 3D gyroscope signal | GYRmag | deg/s | 100 | |
| | Moving average of accelerometer signal applied on the last 20 samples | ACCma20 | m/s2 | 100 | |
| | Moving average of accelerometer derivatives applied on the last 20 samples | ACCdma20 | m/s2 | 100 | |
| NN output and derived signals | Estimated 3D acceleration | Acc* | m/s2 | 100 | NN output |
| | Estimated 3D velocity | Vel* | m/s | 100 | Derived output |
| | Estimated 3D position | XYZ* | m | 100 | Validation output |
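The derived signals Vel* and XYZ* in the last rows are obtained by integrating the NN-estimated acceleration Acc*. A minimal sketch of this derivation using cumulative trapezoidal integration (zero initial velocity and position assumed; without external references, integration drift accumulates over time, as the appendix error tables show):

```python
import numpy as np

def integrate_acceleration(acc_est, fs=100.0):
    """Derive velocity (Vel*) and position (XYZ*) from an (n, 3)
    estimated-acceleration array (Acc*) by cumulative trapezoidal
    integration at sample rate fs, starting from zero."""
    dt = 1.0 / fs
    vel = np.zeros_like(acc_est)
    vel[1:] = np.cumsum(0.5 * (acc_est[1:] + acc_est[:-1]) * dt, axis=0)
    pos = np.zeros_like(acc_est)
    pos[1:] = np.cumsum(0.5 * (vel[1:] + vel[:-1]) * dt, axis=0)
    return vel, pos
```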
| Group | Model and Channels | Input Dim | Corr X | Corr Y | Corr Z | Corr All |
|---|---|---|---|---|---|---|
| Reference—baseline | Sensor acceleration (Acc) | 3 | 0.422 | 0.205 | 0.891 | 0.265 |
| | GRD on AccGyrMag (GRDa) | 9 | 0.178 | 0.354 | 0.782 | 0.491 |
| | EKF on AccGyrMag (EKFa) | 9 | 0.548 | 0.479 | 0.933 | 0.725 |
| Applicable NN (CFNN1H18) | NN on Acc | 3 | 0.538 | 0.210 | 0.933 | 0.674 |
| | NN on AccGyr | 6 | 0.570 | 0.565 | 0.940 | 0.758 |
| | NN on AccMag | 6 | 0.709 | 0.611 | 0.946 | 0.803 |
| | NN on AccEKFe | 6 | 0.676 | 0.562 | 0.943 | 0.783 |
| | NN on AccEKFq | 7 | 0.650 | 0.575 | 0.941 | 0.779 |
| | NN on AccGRDe | 6 | 0.573 | 0.412 | 0.935 | 0.721 |
| | NN on AccGRDq | 7 | 0.567 | 0.414 | 0.933 | 0.717 |
| | NN on AccGyrEKFe | 9 | 0.697 | 0.639 | 0.947 | 0.808 |
| | NN on AccGyrGRDe | 9 | 0.593 | 0.602 | 0.941 | 0.772 |
| | NN on AccGyrMag | 9 | 0.728 | 0.703 | 0.946 | 0.830 |
| | NN on AccGyrMagEKFe | 12 | 0.751 | 0.702 | 0.948 | 0.837 |
| | NN on AccGyrMagGRDe | 12 | 0.779 | 0.728 | 0.945 | 0.848 |
| | NN on AccGyrMagEKFeEKFa | 15 | 0.745 | 0.722 | 0.949 | 0.841 |
| | NN on AccGyrMagEKFeGRDa | 15 | 0.766 | 0.730 | 0.949 | 0.848 |
| | NN on AccGyrMagGRDeGRDa | 15 | 0.762 | 0.729 | 0.946 | 0.845 |
| | NN on AccGyrMagGRDeEKFa | 15 | 0.778 | 0.722 | 0.948 | 0.848 |
| Reference NN (CFNN1H18) with ground truth angles | NN on AccANGe | 6 | 0.969 | 0.955 | 0.959 | 0.957 |
| | NN on AccGyrANGe | 9 | 0.977 | 0.973 | 0.960 | 0.964 |
| | NN on AccGyrANGq | 9 | 0.959 | 0.956 | 0.960 | 0.956 |
| | NN on AccGyrMagANGe | 12 | 0.977 | 0.973 | 0.960 | 0.964 |
| Name [Unit] | Description | Equation |
|---|---|---|
| ACCmag [m/s2] | Magnitude of 3D accelerometer signal | |
| GYRmag [deg/s] | Magnitude of 3D gyroscope signal | |
| ACCma20 [m/s2] | Moving average on accelerometer applied on the last 20 samples | |
| ACCdma20 [m/s2] | Moving average on accelerometer derivatives applied on the last 20 samples |
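The additional features listed above can be computed from raw (n, 3) accelerometer and gyroscope arrays as follows; this is a minimal sketch in which the causal moving-average implementation (edge handling for the first samples) is an assumption:

```python
import numpy as np

def additional_features(acc, gyr, window=20):
    """Compute ACCmag, GYRmag, ACCma20, and ACCdma20 from (n, 3)
    accelerometer and gyroscope arrays: per-sample vector magnitudes
    and causal 20-sample moving averages of the accelerometer signal
    and of its first-difference derivative."""
    acc_mag = np.linalg.norm(acc, axis=1)                     # ACCmag
    gyr_mag = np.linalg.norm(gyr, axis=1)                     # GYRmag
    kernel = np.ones(window) / window
    # causal moving average over the last `window` samples, per axis
    acc_ma = np.stack([np.convolve(acc[:, i], kernel)[:len(acc)]
                       for i in range(3)], axis=1)            # ACCma20
    d_acc = np.vstack([np.zeros((1, 3)), np.diff(acc, axis=0)])
    acc_dma = np.stack([np.convolve(d_acc[:, i], kernel)[:len(acc)]
                        for i in range(3)], axis=1)           # ACCdma20
    return acc_mag, gyr_mag, acc_ma, acc_dma
```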
| Group | Model and Channels | Input Dim | Corr X | Corr Y | Corr Z | Corr All |
|---|---|---|---|---|---|---|
| Best from Table 1 | NN on AccGyrMagGRDe | 12 | 0.779 | 0.728 | 0.945 | 0.848 |
| Applicable NN (CFNN2H15) | NN on AccGyrMag | 9 | 0.764 | 0.771 | 0.952 | 0.860 |
| | NN on AccGyrMagIMUf | 13 | 0.776 | 0.767 | 0.953 | 0.862 |
| | NN on AccGyrMagEKFe | 12 | 0.771 | 0.769 | 0.952 | 0.861 |
| | NN on AccGyrMagGRDe | 12 | 0.801 | 0.783 | 0.951 | 0.871 |
| | NN on AccGyrMagEKFeIMUf | 16 | 0.786 | 0.774 | 0.954 | 0.867 |
| | NN on AccGyrMagGRDeIMUf | 16 | 0.805 | 0.805 | 0.952 | 0.878 |
| | NN on AccGyrMagEKFeEKFa | 15 | 0.777 | 0.768 | 0.952 | 0.862 |
| | NN on AccGyrMagGRDeGRDa | 15 | 0.809 | 0.798 | 0.952 | 0.877 |
| CFNN2H15 with ground truth angles | NN on AccGyrMagANGe | 12 | 0.977 | 0.973 | 0.960 | 0.964 |
| | NN on AccGyrMagANGeIMUf | 16 | 0.977 | 0.972 | 0.962 | 0.965 |
| Time | X [m] | Y [m] | Z [m] | X [m] | Y [m] | Z [m] | X [m] | Y [m] | Z [m] |
|---|---|---|---|---|---|---|---|---|---|
| 5 s | 7.23 | 4.97 | 1.59 | 4.17 | 3.75 | 1.45 | 1.51 | 1.15 | 0.61 |
| 10 s | 54.69 | 13.42 | 6.37 | 11.22 | 11.78 | 5.23 | 4.26 | 3.55 | 1.89 |
| 15 s | 142.11 | 35.20 | 13.73 | 19.16 | 18.33 | 9.44 | 10.04 | 7.37 | 4.59 |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Odry, A.; Sarcevic, P.; Carbone, G.; Odry, P.; Kecskes, I. A Novel Shallow Neural Network-Augmented Pose Estimator Based on Magneto-Inertial Sensors for Reference-Denied Environments. Sensors 2025, 25, 6864. https://doi.org/10.3390/s25226864

