# An Indoor Positioning Approach Based on Fusion of Cameras and Infrared Sensors


## Abstract


## 1. Introduction

## 2. Method Description

**Errors in $x$, $y$ axes:** for every position in the test grid, the uncertainty along the x and y axes is assessed by the standard deviation in x or y, respectively. This information is contained in the covariance matrix, and applies both to Monte Carlo simulations and to real measurements.

**Maximum and minimum elliptical errors:** the eigenvalues of the covariance matrix are the squares of the lengths of the axes of the 1-sigma confidence ellipsoid, and can be easily computed by a singular-value decomposition (SVD) of this covariance matrix. Hence, given a generic covariance matrix ${\mathbf{\Sigma}}_{\theta}$ of a set of observations in ${\mathbb{R}}^{2}$ with coordinates x, y, after this SVD the two eigenvalues ${\lambda}_{1},{\lambda}_{2}$ are obtained in a matrix ${\mathbf{\Sigma}}_{\theta}^{\prime}$. These matrices have the form:

$${\mathbf{\Sigma}}_{\theta}=\left[\begin{array}{cc}{\sigma}_{x}^{2}& {\sigma}_{xy}\\ {\sigma}_{yx}& {\sigma}_{y}^{2}\end{array}\right];\qquad {\mathbf{\Sigma}}_{\theta}^{\prime}=\left[\begin{array}{cc}{\lambda}_{1}& 0\\ 0& {\lambda}_{2}\end{array}\right]=\left[\begin{array}{cc}{\sigma}_{1}^{2}& 0\\ 0& {\sigma}_{2}^{2}\end{array}\right]$$

The deviations with respect to these axes provide more spatial information about the uncertainty in each position; we will refer to them as elliptical deviations, or elliptical errors. The complete spatial description of the confidence ellipsoid would also include the rotation angle of its axes with respect to, for instance, the x axis. This is also easily computed, if needed, through the same SVD. In our case, the axis lengths are enough to assess the dimensions of the uncertainty ellipsoid and its level of circularity.
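The elliptical deviations described above can be sketched in a few lines of NumPy. The synthetic point cloud and all numeric values below are our own illustration, not the paper's measurement data; the function simply performs the SVD of the 2-D covariance matrix and takes square roots of the eigenvalues:

```python
import numpy as np

def ellipse_from_covariance(xy):
    """Elliptical (1-sigma) deviations of 2-D observations.

    xy : (N, 2) array of x, y observations.
    Returns (sigma_1, sigma_2, angle): semi-axis lengths of the
    1-sigma confidence ellipse and its rotation w.r.t. the x axis.
    """
    cov = np.cov(xy, rowvar=False)        # Sigma_theta (2x2 covariance)
    u, s, _ = np.linalg.svd(cov)          # s holds lambda_1 >= lambda_2
    sigma_1, sigma_2 = np.sqrt(s)         # axis lengths = sqrt(eigenvalues)
    angle = np.arctan2(u[1, 0], u[0, 0])  # orientation of the major axis
    angle = angle % np.pi                 # fold to [0, pi): axis is undirected
    return sigma_1, sigma_2, angle

# Illustrative anisotropic Gaussian cloud, rotated 30 degrees
rng = np.random.default_rng(0)
theta = np.deg2rad(30)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
pts = rng.normal(size=(5000, 2)) * [3.0, 1.0] @ R.T
s1, s2, ang = ellipse_from_covariance(pts)  # s1 ~ 3, s2 ~ 1, ang ~ 30 deg
```

The same routine yields the maximum ($\sigma_1$) and minimum ($\sigma_2$) elliptical deviations for each grid position, with the rotation angle available at no extra cost.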

## 3. Infrared Estimation Model

## 4. Camera Estimation Model

**Calibration:** it is carried out, customarily, by means of a calibration pattern, in order to obtain the intrinsic and extrinsic camera parameters. Once these parameters are known, the image coordinates of the target detected in the scene can be obtained. Calibration has a big impact on the final positioning errors. In any case, this is a standard stage in any camera-based landmark-recognition application.

**Homography parameters computation:** a homography is established between the image and scene planes (as already mentioned, the scene is also a plane of known height). It is a reciprocal transformation between image and scene. The coefficients of the matrix $\mathit{H}$ for this transformation are obtained by means of specific encoded markers (named H-markers hereafter). This process, with low computational and time cost, provides a bijective relation between the camera and navigation planes, so positioning can be easily carried out with one single camera. As explained in Section 2, the uncertainty in the image capture is propagated through $\mathit{H}$ to the position uncertainty in the scene. For setup-characterization purposes, we have carried out this process offline but, in real performance, it can normally be implemented online.

**Target detection (projection):** a landmark placed on the target is detected by image processing, so that the position of the target is obtained in the image plane in pixel coordinates. For simplicity, we will refer hereafter to projection when considering the path from image to scene (with matrix $\mathit{H}$) and to back projection in the opposite sense (scene to image, with ${\mathit{H}}^{-\mathbf{1}}$), as represented in Figure 4.

**Back projection:** the back-projection stage projects the position of $\mathit{T}$ in the scene onto the camera image by means of the inverse homography matrix ${\mathit{H}}^{-\mathbf{1}}$. Back projection is used for simulation of the camera estimation model.
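The projection and back-projection steps amount to applying $H$ (or $H^{-1}$) in homogeneous coordinates. The following minimal sketch illustrates the round trip; the matrix values are illustrative placeholders of plausible magnitude, not the calibrated homography of this setup:

```python
import numpy as np

def project(H, pts):
    """Apply a plane-to-plane homography to (N, 2) points.

    Image -> scene with H (projection); scene -> image with
    inv(H) (back projection)."""
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coords
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]             # perspective divide

# Illustrative homography matrix (placeholder values, not calibrated)
H = np.array([[0.0015, 2e-5, -1.0],
              [1e-5, 0.0015, -0.9],
              [1e-6, 2e-6, 1.0]])

pixel = np.array([[1640.0, 1232.0]])     # a detected landmark centroid (px)
scene = project(H, pixel)                # projection: image plane -> scene
back = project(np.linalg.inv(H), scene)  # back projection: scene -> image
```

Because the transformation is bijective, back projecting the scene position recovers the original pixel coordinates up to numerical error, which is what makes the simulation of the camera estimation model straightforward.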

## 5. Fusion of Camera and IR Sensors

## 6. Results

#### 6.1. Setup

#### 6.2. Infrared Measurements

#### 6.3. Camera Measurements

#### 6.4. Fusion Results

## 7. Conclusions

## Author Contributions

## Funding

## Conflicts of Interest

## References

- Lee, J.H.; Morioka, K.; Ando, N.; Hashimoto, H. Cooperation of Distributed Intelligent Sensors in Intelligent Environment. IEEE ASME Trans. Mechatron. **2004**, 9, 535–543.
- Brscic, D.; Sasaki, T.; Hashimoto, H. Acting in intelligent space—Mobile robot control based on sensors distributed in space. In Proceedings of the 2007 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, Zurich, Switzerland, 4–7 September 2007; pp. 1–6.
- Guan, K.; Ma, L.; Tan, X.; Guo, S. Vision-based indoor localization approach based on SURF and landmark. In Proceedings of the 2016 International Wireless Communications and Mobile Computing Conference (IWCMC), Paphos, Cyprus, 5–9 September 2016; pp. 655–659.
- Schloemann, J.; Dhillon, H.S.; Buehrer, R.M. Toward a Tractable Analysis of Localization Fundamentals in Cellular Networks. IEEE Trans. Wirel. Commun. **2016**, 15, 1768–1782.
- He, S.; Chan, S.H.G. Wi-Fi Fingerprint-Based Indoor Positioning: Recent Advances and Comparisons. IEEE Commun. Surv. Tutor. **2016**, 18, 466–490.
- Luo, R.C.; Chen, O. Indoor robot/human localization using dynamic triangulation and wireless Pyroelectric Infrared sensory fusion approaches. In Proceedings of the 2012 IEEE International Conference on Robotics and Automation, Saint Paul, MN, USA, 14–18 May 2012; pp. 1359–1364.
- Qi, J.; Liu, G.P. A Robust High-Accuracy Ultrasound Indoor Positioning System Based on a Wireless Sensor Network. Sensors **2017**, 17, 2554.
- Kim, S.J.; Kim, B.K. Dynamic Ultrasonic Hybrid Localization System for Indoor Mobile Robots. IEEE Trans. Ind. Electron. **2013**, 60, 4562–4573.
- Elloumi, W.; Latoui, A.; Canals, R.; Chetouani, A.; Treuillet, S. Indoor Pedestrian Localization with a Smartphone: A Comparison of Inertial and Vision-Based Methods. IEEE Sens. J. **2016**, 16, 5376–5388.
- Alarifi, A.; Al-Salman, A.; Alsaleh, M.; Alnafessah, A.; Al-Hadhrami, S.; Al-Ammar, M.; Al-Khalifa, H. Ultra Wideband Indoor Positioning Technologies: Analysis and Recent Advances. Sensors **2016**, 16, 707.
- Raharijaona, T.; Mawonou, R.; Nguyen, T.V.; Colonnier, F.; Boyron, M.; Diperi, J.; Viollet, S. Local Positioning System Using Flickering Infrared LEDs. Sensors **2017**, 17, 2518.
- Garcia, E.; Poudereux, P.; Hernandez, A.; Urenya, J.; Gualda, D. A robust UWB indoor positioning system for highly complex environments. In Proceedings of the 2015 IEEE International Conference on Industrial Technology (ICIT), Seville, Spain, 17–19 March 2015; pp. 3386–3391.
- Tiemann, J.; Schweikowski, F.; Wietfeld, C. Design of an UWB indoor-positioning system for UAV navigation in GNSS-denied environments. In Proceedings of the 2015 International Conference on Indoor Positioning and Indoor Navigation (IPIN), Banff, AB, Canada, 13–16 October 2015; pp. 1–7.
- Paredes, J.A.; Álvarez, F.J.; Aguilera, T.; Villadangos, J.M. 3D Indoor Positioning of UAVs with Spread Spectrum Ultrasound and Time-of-Flight Cameras. Sensors **2018**, 18, 89.
- Zhang, W.; Kavehrad, M. A 2-D indoor localization system based on visible light LED. In Proceedings of the 2012 IEEE Photonics Society Summer Topical Meeting Series, Seattle, WA, USA, 9–11 July 2012; pp. 80–81.
- Pizarro, D.; Mazo, M.; Santiso, E.; Marron, M.; Jimenez, D.; Cobreces, S.; Losada, C. Localization of Mobile Robots Using Odometry and an External Vision Sensor. Sensors **2010**, 10, 3655–3680.
- Martín-Gorostiza, E.; Lázaro-Galilea, J.L.; Meca-Meca, F.J.; Salido-Monzú, D.; Espinosa-Zapata, F.; Pallarés-Puerto, L. Infrared sensor system for mobile-robot positioning in intelligent spaces. Sensors **2011**, 11, 5416–5438.
- Erogluy, Y.S.; Guvency, I.; Palay, N.; Yukselz, M. AOA-based localization and tracking in multi-element VLC systems. In Proceedings of the 2015 IEEE 16th Annual Wireless and Microwave Technology Conference (WAMICON), Cocoa Beach, FL, USA, 13–15 April 2015; pp. 1–5.
- Mitchell, H. Multi-Sensor Data Fusion—An Introduction; Springer: Berlin, Germany, 2007.
- Xu, S.; Chou, W.; Dong, H. A Robust Indoor Localization System Integrating Visual Localization Aided by CNN-Based Image Retrieval with Monte Carlo Localization. Sensors **2019**, 19, 249.
- Chen, Y.; Chen, R.; Liu, M.; Xiao, A.; Wu, D.; Zhao, S. Indoor Visual Positioning Aided by CNN-Based Image Retrieval: Training-Free, 3D Modeling-Free. Sensors **2018**, 18, 2692.
- Montero, A.S.; Sekkati, H.; Lang, J.; Laganière, R.; James, J. Framework for Natural Landmark-based Robot Localization. In Proceedings of the 2012 Ninth Conference on Computer and Robot Vision, Toronto, ON, Canada, 28–30 May 2012; pp. 131–138.
- Yang, G.; Saniie, J. Indoor navigation for visually impaired using AR markers. In Proceedings of the 2017 IEEE International Conference on Electro Information Technology (EIT), Lincoln, NE, USA, 14–17 May 2017; pp. 1–5.
- Sani, M.F.; Karimian, G. Automatic navigation and landing of an indoor AR drone quadrotor using ArUco marker and inertial sensors. In Proceedings of the International Conference on Computer and Drone Applications (IConDA), Kuching, Malaysia, 9–11 November 2017; pp. 102–107.
- Losada, C.; Mazo, M.; Palazuelos, S.; Pizarro, D.; Marrón, M. Multi-Camera Sensor System for 3D Segmentation and Localization of Multiple Mobile Robots. Sensors **2010**, 10, 3261–3279.
- Mautz, R.; Tilch, S. Survey of optical indoor positioning systems. In Proceedings of the 2011 International Conference on Indoor Positioning and Indoor Navigation, Guimaraes, Portugal, 21–23 September 2011; pp. 1–7.
- Xu, D.; Han, L.; Tan, M.; Li, Y.F. Ceiling-Based Visual Positioning for an Indoor Mobile Robot With Monocular Vision. IEEE Trans. Ind. Electron. **2009**, 56, 1617–1628.
- Garrido-Jurado, S.; Muñoz-Salinas, R.; Madrid-Cuevas, F.; Medina-Carnicer, R. Generation of fiducial marker dictionaries using Mixed Integer Linear Programming. Pattern Recognit. **2015**, 51.
- Alatise, M.B.; Hancke, G.P. Pose Estimation of a Mobile Robot Based on Fusion of IMU Data and Vision Data Using an Extended Kalman Filter. Sensors **2017**, 17, 2164.
- Duraisamy, B.; Gabb, M.; Vijayamohnan Nair, A.; Schwarz, T.; Yuan, T. Track level fusion of extended objects from heterogeneous sensors. In Proceedings of the 2016 19th International Conference on Information Fusion (FUSION), Heidelberg, Germany, 5–8 July 2016; pp. 876–885.
- Mohebbi, P.; Stroulia, E.; Nikolaidis, I. Sensor-Data Fusion for Multi-Person Indoor Location Estimation. Sensors **2017**, 17, 2377.
- Wang, Y.T.; Peng, C.C.; Ravankar, A.; Ravankar, A. A Single LiDAR-Based Feature Fusion Indoor Localization Algorithm. Sensors **2018**, 18, 1294.
- Jung, S.Y.; Hann, S.; Park, C.S. TDOA-based optical wireless indoor localization using LED ceiling lamps. IEEE Trans. Consum. Electron. **2011**, 57, 1592–1597.
- Wang, K.; Nirmalathas, A.; Lim, C.; Alameh, K.; Li, H.; Skafidas, E. Indoor infrared optical wireless localization system with background light power estimation capability. Opt. Express **2017**, 25, 22923–22931.
- Kumar, G.A.; Kumar, A.; Patil, R.; Sill Park, S.; Ho Chai, Y. A LiDAR and IMU Integrated Indoor Navigation System for UAVs and Its Application in Real-Time Pipeline Classification. Sensors **2017**, 17, 1268.
- Lee, Y.J.; Yim, B.D.; Song, J.B. Mobile robot localization based on effective combination of vision and range sensors. Int. J. Control. Autom. Syst. **2009**, 7, 97–104.
- Lee, S. Use of infrared light reflecting landmarks for localization. Ind. Robot. Int. J. **2009**, 36, 138–145.
- Nakazawa, Y.; Makino, H.; Nishimori, K.; Wakatsuki, D.; Komagata, H. Indoor positioning using a high-speed, fish-eye lens-equipped camera in Visible Light Communication. In Proceedings of the International Conference on Indoor Positioning and Indoor Navigation, Montbeliard-Belfort, France, 28–31 October 2013; pp. 1–8.
- Kuo, Y.S.; Pannuto, P.; Hsiao, K.J.; Dutta, P. Luxapose: Indoor positioning with mobile phones and visible light. In Proceedings of the 20th Annual International Conference on Mobile Computing and Networking—MobiCom '14, Maui, HI, USA, 7–11 September 2014; ACM Press: New York, NY, USA, 2014; pp. 447–458.
- Rüeger, J.M. Electronic Distance Measurement: An Introduction; Springer: Berlin, Germany, 2012.
- Salido-Monzú, D.; Martín-Gorostiza, E.; Lázaro-Galilea, J.L.; Martos-Naya, E.; Wieser, A. Delay tracking of spread-spectrum signals for indoor optical ranging. Sensors **2014**, 14, 23176–23204.
- Del Castillo Vazquez, M.; Puerta-Notario, A. Self-orienting receiver for indoor wireless infrared links at high bit rates. In Proceedings of the 57th IEEE Semiannual Vehicular Technology Conference, 2003—VTC 2003, Jeju, Korea, 22–25 April 2003; Volume 3, pp. 1600–1604.
- Zhu, R.; Gan, X.; Li, Y.; Zhang, H.; Li, S.; Huang, L. An Indoor Location Method Based on Optimal DOP of Displacement Vector Components and Weighting Factor Adjustment with Multiple Array Pseudolites. In Proceedings of the 2018 Ubiquitous Positioning, Indoor Navigation and Location-Based Services (UPINLBS), Wuhan, China, 22–23 March 2018; pp. 1–7.
- Li, X.; Zhang, P.; Guo, J.; Wang, J.; Qiu, W. A New Method for Single-Epoch Ambiguity Resolution with Indoor Pseudolite Positioning. Sensors **2017**, 17, 921.
- Martin-Gorostiza, E.; Meca-Meca, F.J.; Lazaro-Galilea, J.L.; Salido-Monzu, D.; Martos-Naya, E.; Wieser, A. Infrared local positioning system using phase differences. In Proceedings of the 2014 Ubiquitous Positioning Indoor Navigation and Location Based Service (UPINLBS), Corpus Christi, TX, USA, 20–21 November 2014; pp. 238–247.
- Martín-Gorostiza, E.; Meca-Meca, F.J.; Lázaro-Galilea, J.L.; Martos-Naya, E.; Naranjo-Vega, F.; Esteban-Martínez, O. Coverage-mapping method based on a hardware model for mobile-robot positioning in intelligent spaces. IEEE Trans. Instrum. Meas. **2010**, 59, 266–282.
- Salido-Monzú, D.; Martín-Gorostiza, E.; Lázaro-Galilea, J.L.; Domingo-Pérez, F.; Wieser, A. Multipath mitigation for a phase-based infrared ranging system applied to indoor positioning. In Proceedings of the 2013 International Conference on Indoor Positioning and Indoor Navigation, Montbeliard-Belfort, France, 28–31 October 2013; pp. 1–10.
- Salido-Monzú, D.; Meca-Meca, F.; Martín-Gorostiza, E.; Lázaro-Galilea, J. SNR Degradation in Undersampled Phase Measurement Systems. Sensors **2016**, 16, 1772.
- Aster, R.C.; Borchers, B.; Thurber, C. Parameter Estimation and Inverse Problems; Elsevier Academic Press: Amsterdam, The Netherlands, 2004.
- Gelman, A.; Carlin, J.B.; Stern, H.S.; Rubin, D.B. Bayesian Data Analysis, 2nd ed.; Chapman & Hall/CRC: Boca Raton, FL, USA, 2003.
- Szeliski, R. Computer Vision: Algorithms and Applications; Springer: Berlin, Germany, 2010.

**Figure 2.** Infrared link representing a generic anchor ${\mathit{A}}_{\mathit{i}}$ (receiver) and the target $\mathit{T}$ (emitter) in the IR-BLC. ${\mathit{A}}_{\mathit{r}}$ is the common reference in the basic locating cell (BLC).

**Figure 3.** IR errors. Red markers indicate anchor projections on the grid plane (red square is the reference).

**Figure 5.** Camera errors (x, y and elliptical deviations); the green square represents the camera projection on the grid plane.

**Figure 11.** Elliptic deviations of IR, camera and fusion estimations from real measurements at locations A (top) and B (bottom). Left: maximum elliptical deviation; right: minimum elliptical deviation.

**Figure 13.** Errors for IR, camera (includes bias) and fusion estimations from real measurements at locations A (top) and B (bottom). Left: maximum elliptical deviation; right: minimum elliptical deviation.

**Figure 14.** Simulation errors for IR, camera and fusion estimations, emulating locations A (top) and B (bottom). Left: maximum elliptical deviation; right: minimum elliptical deviation.

**Table 1.** Indoor positioning review. Cost: Low (L), Low-Medium (L-M), Medium (M), Medium-High (M-H) and High (H).

Ref. | Application | Acc. | Technology | Cost | Pros | Cons
---|---|---|---|---|---|---
Jun [7] | Autonomous robots | 1 cm | US TOF + RF link for sync | M | Accurate | Small (validated) coverage
Jung [33] | Not mentioned | 1 cm | Optical, TDOA | M | Accurate | Only simulation
Raharijaona [11] | Autonomous robots | 2 cm | Optical, AOA, CDMA | M | Accurate | Needs dense lighting infrastructure
Wang [34] | High-speed indoor communications | $2.5$ cm | Optical, AOA + RSS | M | More accurate than standard VLC based | Ad hoc receiver
Zhang [15] | Not mentioned | 5 cm | Optical, RSS | L | Accurate | Only simulation
Lee [36] | Augmented reality | 1–5 cm | Vision, ad hoc IR-reflecting landmarks | L-M | Independent of illumination | Poor validation
Sani [24] | Nav. and landing of drone | 6 cm | Cooperation Cam + IMU. ArUco markers + PnP, IMU + Kalman filter | L | Works without cam | Controlled from ground station
Zhu [43] | Not specified | <10 cm | Pseudolites. GNSS-like ranging (code/phase tracking) + complex correction algorithm | M-H | Accurate | Only simulation results
Yun-Ting [32] | Autonomous robots | <10 cm | 2D laser scanning + feature matching with map | M | No infrastructure needed | Requires previous acquisition and classification of map features
Kumar [35] | UAVs | <10 cm | 2 × 2D laser scanners + IMU (for heading estimation). SLAM | M | No infrastructure needed | Cost
Paredes [14] | UAVs | <10 cm | US TOF (CDMA) + TOF camera for initialization | M-H | Accurate, fast | Expensive
Kuo [39] | LBS | 10 cm | Vision + optical (AOA from LEDs on camera, ID from LEDs with CDMA) | L | Position + orientation. Low cost, good validation, smart use of camera for demodulating | Not clear how dense the infrastructure should be
Nakazawa [38] | LBS, human navigation | 10 cm | Vision + optical (AOA from LEDs on camera, ID from LEDs with CDMA) | M | Good trade-off accuracy vs. range | Ad hoc receiver
Garcia [12] | Loc. in complex environments | 10 cm | UWB TOF + SW multipath mitigation | M | Good scalability | Nodes are expensive
Tiemann [13] | UAVs | 10 cm | UWB TOF + SW multipath mitigation | M | Good scalability | Nodes are expensive
Montero [22] | Robot localization | 5–13 cm | Phone cam, landmark + Fern descriptor | L | Non-invasive | Synthetic data
Alatise [29] | Autonomous robots | 5–14 cm | Cam (SURF + RANSAC) + IMU. Fusion (EKF) | M | Accurate | Field of view is limited
Pizarro [16] | Autonomous robots | <20 cm | Reconstruction (structure-from-motion) + odometry | M | Non-supervised method | No multiple robots tested
Xin [44] | Not specified | <20 cm | Pseudolites. GNSS-like ranging (code/phase tracking) + ambiguity resolution of carrier phase for enhanced accuracy | M-H | Good trade-off accuracy vs. range | Requires independent initialization
Xu [27] | Autonomous robots | <25 cm | Cooperation. Edges of regular ceiling + Hough + LMS + RANSAC + odometry | M | Non-invasive | Cumulative errors
Losada [25] | Autonomous robots | <30 cm | Multi-camera sensor. Background model + generalized principal component analysis | M-H | Localization of multiple mobile robots | No real-time performance
Lee [37] | Autonomous robots | <35 cm | Vision (natural landmarks) + IR ranging | L | Robustness | Only simulation
Xu [20] | Autonomous robots | <40 cm | Cooperation. Cam + CNN (coarse loc.), LIDAR (fine loc.) | H | Recovery from localization failures | Pre-trained network
Duraisamy [30] | Autonomous driving | <0.6 m | Fusion stereo cam + radar + lidar. Weighted sum of the covariances | H | Tested in real traffic conditions | Fusion accuracy dependent on the sensor inputs
Luo [6] | Robot/human localization | $0.6$ m | Fusion multi WiFi PIR + Cramér–Rao bound + triangulation (RSSI) | L | Integrated wireless and PIR sensor (WPIR) | Requires at least three sensor nodes
Chen [21] | Robot/human localization | 0.25–1 m | RGB-D cam + CNN (coarse loc.), ORB features (fine loc.) | M | Indoor and outdoor | Needs geotagged images
Guan [3] | Human localization | <1 m | Phone cam, landmark + SURF (offline), SURF + match + homography (online) | L | Reduces latency | Needs offline image database
Mohebbi [31] | Loc. of multiple occupants | $1.8$ m | Motion sensors + BLE beacon, fusion using a weighted sum | L | Recognizes activities | Low accuracy

IR System | | Camera | | BLC and Test Conditions |
---|---|---|---|---|---
Emitter | IRED: SFH 4231 (OSRAM) | Sensor | IMX 219 PQ CMOS (Sony) | Dimensions | $4\times 3$ m
Emitted power ($P_e$) | 100 mW/sr | Resolution | $3280\times 2464$ | Test grid (${X}_{1}$ to ${X}_{63}$) | $9\times 7$ target positions in 50 cm grid steps
Emitted signal | IMDD IR signal modulated at 4 MHz | Transfer rate | 10 fps | IR anchors | 5 anchors in $3\times 3$ IR deployment (4 at corners, reference in the center)
Detector | Photodiode: PIN100-11-31-221 (API) | Lens | $1/4''$, $24\times 25\times 9$ mm | Camera locations | 4 locations ($CL_{1}$ to $CL_{4}$)
Sensitive area ($A_s$) | 5 mm${}^{2}$ | HW | Raspberry Pi 3 Model B | H-marker positions | 16 ArUco markers
Responsivity ($R$) | $0.65$ A/W | Algorithm | Corner detection (Shi–Tomasi) plus centroid search | Observations per position | 200
i–v conversion gain (${G}_{A}$) | $33\times {10}^{3}$ V/A | Landmark ${X}_{i}$ | $13\times 13$ cm H-markers (ArUco) | Illumination conditions for camera | 4 illumination levels
BP filter gain (${K}_{F}$) and I/Q gain (${K}_{I/Q}$) | 1 (V/V) | | | |
Noise power | $6\times {10}^{-14}$ V${}^{2}$/Hz | | | |
Noise eq. BW | $30\times \pi/2$ Hz | | | |

Shape Indicators | | | Dispersion Indicators (95% Confidence Ellipsoid) | | |
---|---|---|---|---|---|---
${\mathit{DI}}_{\mathit{x}}$ | ${\mathit{DI}}_{\mathit{y}}$ | $\overline{\mathit{CI}}$ | $\underset{\mathit{BLC}}{\mathrm{max}}(\mathbf{2}{\mathit{\sigma}}_{{\mathit{ellip}}_{\mathit{max}}})$ | ${\overline{\mathbf{2}\mathit{\sigma}}}_{{\mathit{ellip}}_{\mathit{max}}}$ | $\underset{\mathit{BLC}}{\mathrm{max}}(\mathbf{2}{\mathit{\sigma}}_{{\mathit{ellip}}_{\mathit{min}}})$ | ${\overline{\mathbf{2}\mathit{\sigma}}}_{{\mathit{ellip}}_{\mathit{min}}}$
$7.2\%$ | $5.7\%$ | $86.2\%$ | $3.5$ cm | $2.8$ cm | $3.4$ cm | $2.5$ cm

 | Shape Indicators | | | Dispersion Indicators (95% Confidence Ellipsoid) | | |
---|---|---|---|---|---|---|---
Cam. Locations | ${\mathit{DI}}_{\mathit{x}}$ | ${\mathit{DI}}_{\mathit{y}}$ | $\overline{\mathit{CI}}$ | $\underset{\mathit{BLC}}{\mathrm{max}}(\mathbf{2}{\mathit{\sigma}}_{{\mathit{ellip}}_{\mathit{max}}})$ | ${\overline{\mathbf{2}\mathit{\sigma}}}_{{\mathit{ellip}}_{\mathit{max}}}$ | $\underset{\mathit{BLC}}{\mathrm{max}}(\mathbf{2}{\mathit{\sigma}}_{{\mathit{ellip}}_{\mathit{min}}})$ | ${\overline{\mathbf{2}\mathit{\sigma}}}_{{\mathit{ellip}}_{\mathit{min}}}$
1 | $47.5\%$ | $30.6\%$ | $50.8\%$ | $1.3$ cm | $0.7$ cm | $0.7$ cm | $0.5$ cm
2 (A) | $38.6\%$ | $26.8\%$ | $60.1\%$ | $2.3$ cm | $0.9$ cm | $1.5$ cm | $0.7$ cm
3 | $49.9\%$ | $18.9\%$ | $45.3\%$ | $2.7$ cm | $1.4$ cm | $1.0$ cm | $0.8$ cm
4 (B) | $31.8\%$ | $7.1\%$ | $43.9\%$ | $3.4$ cm | $1.6$ cm | $1.4$ cm | $0.9$ cm

Cam. Locations | $min({\mathit{\sigma}}_{{\mathit{x}}_{\mathit{p}}})$ | $\overline{{\mathit{\sigma}}_{{\mathit{x}}_{\mathit{p}}}}$ | $max({\mathit{\sigma}}_{{\mathit{x}}_{\mathit{p}}})$ | $min({\mathit{\sigma}}_{{\mathit{y}}_{\mathit{p}}})$ | $\overline{{\mathit{\sigma}}_{{\mathit{y}}_{\mathit{p}}}}$ | $max({\mathit{\sigma}}_{{\mathit{y}}_{\mathit{p}}})$
---|---|---|---|---|---|---
1 | $1.859$ | $3.013$ | $4.023$ | $1.854$ | $2.375$ | $4.788$
2 (A) | $0.738$ | $3.021$ | $4.066$ | $1.472$ | $3.104$ | $4.491$
3 | $1.055$ | $2.628$ | $3.636$ | $1.162$ | $2.004$ | $5.560$
4 (B) | $0.625$ | $2.550$ | $4.146$ | $1.272$ | $2.573$ | $3.870$

 | IR | | Camera | | Fusion |
---|---|---|---|---|---|---
Cam. Locations | $\underset{\mathit{BLC}}{\mathrm{max}}(\mathbf{2}{\mathit{\sigma}}_{{\mathit{ellip}}_{\mathit{max}}})$ | ${\overline{\mathbf{2}\mathit{\sigma}}}_{{\mathit{ellip}}_{\mathit{max}}}$ | $\underset{\mathit{BLC}}{\mathrm{max}}(\mathbf{2}{\mathit{\sigma}}_{{\mathit{ellip}}_{\mathit{max}}})$ | ${\overline{\mathbf{2}\mathit{\sigma}}}_{{\mathit{ellip}}_{\mathit{max}}}$ | $\underset{\mathit{BLC}}{\mathrm{max}}(\mathbf{2}{\mathit{\sigma}}_{{\mathit{ellip}}_{\mathit{max}}})$ | ${\overline{\mathbf{2}\mathit{\sigma}}}_{{\mathit{ellip}}_{\mathit{max}}}$
2 (A) | $4.7$ cm | $2.4$ cm | $4.9$ cm | $1.7$ cm | $1.9$ cm | $1.0$ cm
4 (B) | $4.7$ cm | $2.4$ cm | $6.7$ cm | $2.7$ cm | $2.5$ cm | $1.4$ cm

 | IR | | Camera | | Fusion |
---|---|---|---|---|---|---
Cam. Locations | $\underset{\mathit{BLC}}{\mathrm{max}}(\mathbf{2}{\mathit{\sigma}}_{{\mathit{ellip}}_{\mathit{max}}})$ | ${\overline{\mathbf{2}\mathit{\sigma}}}_{{\mathit{ellip}}_{\mathit{max}}}$ | $\underset{\mathit{BLC}}{\mathrm{max}}(\mathbf{2}{\mathit{\sigma}}_{{\mathit{ellip}}_{\mathit{max}}})$ | ${\overline{\mathbf{2}\mathit{\sigma}}}_{{\mathit{ellip}}_{\mathit{max}}}$ | $\underset{\mathit{BLC}}{\mathrm{max}}(\mathbf{2}{\mathit{\sigma}}_{{\mathit{ellip}}_{\mathit{max}}})$ | ${\overline{\mathbf{2}\mathit{\sigma}}}_{{\mathit{ellip}}_{\mathit{max}}}$
2 (A) | $3.7$ cm | $2.7$ cm | $3.2$ cm | $1.8$ cm | $2.3$ cm | $1.4$ cm
4 (B) | $3.7$ cm | $2.7$ cm | $4.2$ cm | $2.4$ cm | $2.5$ cm | $1.6$ cm

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Martín-Gorostiza, E.; García-Garrido, M.A.; Pizarro, D.; Salido-Monzú, D.; Torres, P.
An Indoor Positioning Approach Based on Fusion of Cameras and Infrared Sensors. *Sensors* **2019**, *19*, 2519.
https://doi.org/10.3390/s19112519
