# Sensor Data Fusion for a Mobile Robot Using Neural Networks


## Abstract


## 1. Introduction

## 2. Materials and Methods

#### 2.1. Ultrasonic Sensor

#### Kalman Filter

#### 2.2. Stereo Camera

#### 2.3. LiDAR

#### 2.4. Homogeneous Transformation Matrices

#### 2.5. Data Fusion

#### 2.6. Deep Feedforward Neural Networks

#### 2.6.1. Stochastic Gradient Descent Optimizer

#### 2.6.2. Neural Network Configuration

#### 2.7. Occupancy Grid Map

## 3. Experimental Setup

#### 3.1. Hardware

The Turtlebot3® robot runs Ubuntu Server 20.04.1 on a Raspberry Pi 4 (8 GB), with a Toshiba Satellite L845 laptop running Ubuntu 20.04 Desktop as the main processing unit. The LiDAR is a Robotis LDS-01, a 2D $360^{\circ}$ unit that connects to the Raspberry Pi through an LDS2USB adapter module. The ultrasonic sensors are HC-SR04 units, each connected through a voltage divider so that the 5 V at which the sensor operates does not damage the 3.3 V-tolerant pins of the Raspberry Pi. The stereo camera connects to the Raspberry Pi via a USB-C to USB 3.0 interface. Given the power consumption of the devices involved, a fully charged 1800 mAh LiPo battery is enough for about 20 min of usage. The stereo camera and ultrasonic sensors were added to the Turtlebot3 in strategic positions so that each sensor has an unobstructed FoV, as shown in Figure 6. The LiDAR occupies the central position on top of the robot, the ultrasonic sensors sit at the front right and left corners, and the RealSense camera, in its aluminum enclosure, is placed in the middle front section of the robot.
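As a sanity check on the 5 V to 3.3 V level shift for the HC-SR04 echo pin, the divider output can be computed directly. The resistor values below are illustrative assumptions, not taken from the paper:

```python
def divider_vout(vin, r1, r2):
    """Output of a resistive divider: Vout = Vin * R2 / (R1 + R2)."""
    return vin * r2 / (r1 + r2)

# A common (hypothetical) choice: R1 = 1 kOhm, R2 = 2 kOhm drops the
# 5 V echo signal to within the Raspberry Pi's 3.3 V pin tolerance.
vout = divider_vout(5.0, 1000, 2000)
print(round(vout, 2))  # 3.33
```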


#### 3.2. Proving Ground

#### 3.3. Software

#### 3.3.1. Remote Server (Raspberry Pi 4)

#### 3.3.2. Main Processing Unit (Toshiba Satellite L845 Laptop)

#### 3.4. Training and Running the Network

## 4. Results

#### 4.1. Scenario 1

#### 4.2. Scenario 2

#### 4.3. Scenario 3

## 5. Conclusions

## Author Contributions

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Acknowledgments

## Conflicts of Interest


**Figure 3.** Basic structure of the DFFNN, where each input corresponds to the distance reading of each sensor per degree.

**Figure 12.** Working principle of the neural network prediction: sensor data are acquired, fed to the NN, and the resulting distance output is published into the ROS ecosystem for other services to use.

**Figure 13.** Occupancy grid maps showing: (**a**) Ground truth (reference map); (**b**) LiDAR-generated map.

**Figure 14.** Occupancy grid maps showing: (**a**) Ground truth (reference map); (**b**) LiDAR-Sonar fusion-generated map.

**Figure 16.** Occupancy grid maps showing: (**a**) Ground truth (reference map); (**b**) LiDAR-Camera-Sonar fusion-generated map.

**Figure 19.** Occupancy grid maps showing: (**a**) Ground truth (reference map); (**b**) LiDAR-generated map.

**Figure 20.** Occupancy grid maps showing: (**a**) Ground truth (reference map); (**b**) LiDAR-Sonar fusion-generated map.

**Figure 22.** Occupancy grid maps showing: (**a**) Ground truth (reference map); (**b**) LiDAR-Camera-Sonar fusion-generated map.

**Figure 26.** Occupancy grid maps showing: (**a**) Ground truth (reference map); (**b**) LiDAR-generated map.

**Figure 27.** Occupancy grid maps showing: (**a**) Ground truth (reference map); (**b**) LiDAR-Sonar fusion-generated map with glass panel.

**Figure 28.** Occupancy grid maps showing: (**a**) Ground truth (reference map); (**b**) LiDAR-Sonar fusion-generated map with insect door mesh in front of the glass.

**Figure 29.** Occupancy grid maps showing: (**a**) Ground truth (reference map); (**b**) LiDAR-Sonar fusion-generated map with dense curtain mesh in front of the glass.

**Figure 31.** Occupancy grid maps showing: (**a**) Ground truth (reference map); (**b**) LiDAR-Camera-Sonar fusion-generated map with glass as main obstacle.

**Figure 32.** Occupancy grid maps showing: (**a**) Ground truth (reference map); (**b**) LiDAR-Camera-Sonar fusion-generated map with insect/bug mesh as main obstacle.

**Figure 33.** Occupancy grid maps showing: (**a**) Ground truth (reference map); (**b**) LiDAR-Camera-Sonar fusion-generated map with mesh curtain as main obstacle.

**Figure 34.** Pre- and post-fusion readings for LiDAR-Sonar-Camera fusion using bug mesh as main obstacle.

**Figure 36.** Analytical performance evaluation of the neural network, comparing ground-truth distance and fused distance: (**a**) LiDAR-Sonar model; (**b**) LiDAR-Sonar-Camera model.

**Figure 37.** Validation of the neural network with randomized data: (**a**) LiDAR-Sonar model; (**b**) LiDAR-Sonar-Camera model.

| Variable Name | Representation | Initial Value |
|---|---|---|
| State vector | ${x}_{k}\in {\mathbb{R}}^{n}$ | - |
| Feedback | ${u}_{k}\in {\mathbb{R}}^{l}$ | - |
| Measurement | ${z}_{k}\in {\mathbb{R}}^{m}$ | - |
| State constant matrix | $F\in {\mathbb{R}}^{n\times n}$ | [1] |
| Feedback constant matrix | $B\in {\mathbb{R}}^{n\times l}$ | [0] |
| Measurement constant matrix | $H\in {\mathbb{R}}^{m\times n}$ | [1] |
| Process noise covariance | $Q$ | 0.1 |
| Measurement noise covariance | $R$ | 0.4 |
| A priori state estimate | ${\widehat{x}}_{k}^{(-)}$ | [1] |
| A priori covariance matrix | ${P}_{k}^{(-)}$ | [1] |
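With the scalar settings in the table ($F = H = [1]$, $B = [0]$, $Q = 0.1$, $R = 0.4$), the filter reduces to a one-dimensional predict/update loop per ultrasonic reading. A minimal sketch, with hypothetical sample measurements:

```python
def kalman_1d(measurements, q=0.1, r=0.4, x0=1.0, p0=1.0):
    """Scalar Kalman filter with F = H = 1 and B = 0 (no control input)."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # Predict: the state model is constant, so only the covariance grows.
        p = p + q
        # Update: blend prediction and measurement via the Kalman gain.
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1 - k) * p
        estimates.append(x)
    return estimates

noisy = [20.4, 19.7, 20.1, 20.6, 19.9]  # cm, hypothetical sonar readings
smoothed = kalman_1d(noisy)
print(round(smoothed[-1], 2))
```

With $Q < R$ the filter trusts its prediction slightly more than any single reading, which damps the shot-to-shot jitter typical of the HC-SR04.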

| No. | Criteria | Topology |
|---|---|---|
| 1. | Relation between the input data sources | Complementary / Redundant / Cooperative |
| 2. | Type of employed data | Raw measurements / Signals / Characteristics or decisions |
| 3. | Architecture type | Centralized / Decentralized / Distributed |

| Angular Position of Distance Measurement | True Distance: LiDAR (Using Opaque Masking Tape) | Detected: LiDAR (No Tape) | Detected: Sonar (No Tape) | Detected: Camera (No Tape) |
|---|---|---|---|---|
| $65^{\circ}$ | 20 cm | 200 cm | 22 cm | 190 cm |
| $66^{\circ}$ | 23 cm | 190 cm | 22 cm | 180 cm |
| $67^{\circ}$ | 25 cm | 24 cm | 22 cm | 21 cm |
| $68^{\circ}$ | 30 cm | 31 cm | 22 cm | 23 cm |
| … | … | … | … | … |
| $117^{\circ}$ | 39 cm | 38 cm | 40 cm | 38 cm |
| $118^{\circ}$ | 40 cm | 39 cm | 40 cm | 40 cm |
| $119^{\circ}$ | 41 cm | 120 cm | 40 cm | 42 cm |
| $120^{\circ}$ | 43 cm | 170 cm | 40 cm | 43 cm |

| ADAM Parameter | Description | LiDAR-Camera-Sonar | LiDAR-Sonar |
|---|---|---|---|
| $\alpha$ | Step size | −0.008 | −0.0015 |
| ${\beta}_{1}$ | Exponential decay rate for 1st moment estimate | $\frac{2}{\text{epochs}}=0.013$ | $\frac{2}{\text{epochs}}=0.057$ |
| ${\beta}_{2}$ | Exponential decay rate for 2nd moment estimate | $\frac{2}{\text{epochs}}=0.013$ | $\frac{2}{\text{epochs}}=0.057$ |
| $f\left(\theta \right)$ | Stochastic objective function with parameters $\theta$ | MSE | MSE |
| ${\theta}_{0}$ | Initial parameter vector | Zeros | Zeros |
| ${m}_{0}$ | Initial 1st moment vector | Zeros | Zeros |
| ${v}_{0}$ | Initial 2nd moment vector | Zeros | Zeros |
| $t$ | Initial timestep | 0 | 0 |
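These hyperparameters plug into the standard Adam update of Kingma and Ba. The sketch below minimizes a stand-in quadratic objective rather than the network's actual MSE loss, and assumes a positive step size of 0.008 (reading the table's step-size entries as magnitudes):

```python
import math

def adam_step(theta, grad, m, v, t, alpha=0.008, beta1=0.013, beta2=0.013, eps=1e-8):
    """One Adam update: moment estimates, bias correction, parameter step."""
    m = beta1 * m + (1 - beta1) * grad          # 1st moment (mean of gradients)
    v = beta2 * v + (1 - beta2) * grad ** 2     # 2nd moment (uncentered variance)
    m_hat = m / (1 - beta1 ** t)                # bias-corrected 1st moment
    v_hat = v / (1 - beta2 ** t)                # bias-corrected 2nd moment
    theta = theta - alpha * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# Minimize f(theta) = theta**2 starting at theta = 1; gradient is 2 * theta.
theta, m, v = 1.0, 0.0, 0.0
for t in range(1, 501):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t)
print(round(theta, 3))
```

With such small decay rates, the moment estimates track the current gradient closely, so each update moves $\theta$ by roughly $\alpha$ in the direction opposing the gradient sign.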

| ANN Parameter | LiDAR-Sonar-Camera | LiDAR-Sonar |
|---|---|---|
| Inputs | 3 | 2 |
| Outputs | 1 | 1 |
| Hidden layers | 50 | 6 |
| Neurons per hidden layer | 80 | 60 |
| Optimizer | Adam | Adam |
| Activation function | ReLU | ReLU |
| Epochs | 150 | 35 |
| Loss function | MSE | MSE |
| Metric of loss function | MSE | MSE |
| Batch size | 4 | 2 |
| kernel_initializer | he_uniform | he_uniform |
| bias_initializer | zeros | zeros |
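The LiDAR-Sonar column describes a plain feedforward pass: 2 inputs, 6 hidden layers of 60 ReLU neurons each, and 1 linear output. A NumPy sketch of that forward pass with he_uniform weights and zero biases, matching the initializers in the table (the network here is untrained, so the sample output is arbitrary):

```python
import numpy as np

def he_uniform(fan_in, fan_out, rng):
    """Keras-style he_uniform: U(-limit, limit) with limit = sqrt(6 / fan_in)."""
    limit = np.sqrt(6.0 / fan_in)
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

def build_mlp(sizes, rng):
    """Weight/bias pairs for consecutive layer sizes; biases start at zero."""
    return [(he_uniform(a, b, rng), np.zeros(b)) for a, b in zip(sizes, sizes[1:])]

def forward(x, layers):
    """ReLU on hidden layers, linear output (a distance regression)."""
    for w, b in layers[:-1]:
        x = np.maximum(0.0, x @ w + b)
    w, b = layers[-1]
    return x @ w + b

rng = np.random.default_rng(0)
# 2 inputs (LiDAR, sonar distances) -> 6 hidden layers of 60 -> 1 fused distance
layers = build_mlp([2] + [60] * 6 + [1], rng)
pred = forward(np.array([[0.25, 0.22]]), layers)
print(pred.shape)  # (1, 1)
```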

| Color | Legend |
|---|---|
| Green | Live sensory data |
| Blue | Glass panels |
| Black | Known obstacles |
| White | Empty area |
| Gray | Unknown area |

| Data | LiDAR-Sonar | LiDAR-Sonar-Camera |
|---|---|---|
| Train dataset | 0.03289 m | 0.026143 m |
| Test dataset | 0.03567 m | 0.029696 m |
| LiDAR dataset | 2.12175 m | 2.121759 m |

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Barreto-Cubero, A.J.; Gómez-Espinosa, A.; Escobedo Cabello, J.A.; Cuan-Urquizo, E.; Cruz-Ramírez, S.R.
Sensor Data Fusion for a Mobile Robot Using Neural Networks. *Sensors* **2022**, *22*, 305.
https://doi.org/10.3390/s22010305
