# Hand Tracking and Gesture Recognition Using Lensless Smart Sensors


## Abstract


## 1. Introduction

## 2. Rambus Lensless Smart Sensor

## 3. Methodology

#### 3.1. Physical Setup

- (i) the longitudinal distance along the Z-axis between the LED and the sensor plane goes from 40 cm to 100 cm;
- (ii) the distance between the right and left sensors (Sen_R and Sen_L respectively, as shown in Figure 1), measured between the central points of the sensors and called the baseline (b), is 30 cm;
- (iii) the combined FoV is 80°.
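To illustrate how these parameters constrain the stereo geometry, the sketch below triangulates depth from left/right disparity. It assumes an idealized pinhole model of the reconstructed frames (the LSS itself uses diffractive optics, so this is a simplification), and the 320-pixel frame width, function names, and focal-length derivation are illustrative assumptions, not part of the system described here.

```python
import math

# Working-volume parameters from Section 3.1: baseline b = 30 cm,
# combined FoV = 80 deg, Z range 40-100 cm.
BASELINE_CM = 30.0
IMAGE_WIDTH_PX = 320      # assumed width of a reconstructed frame
FOV_DEG = 80.0

# Focal length in pixels derived from the horizontal field of view.
F_PX = (IMAGE_WIDTH_PX / 2) / math.tan(math.radians(FOV_DEG) / 2)

def depth_from_disparity(disparity_px: float) -> float:
    """Triangulated depth (cm) for a given left/right disparity (pixels)."""
    return F_PX * BASELINE_CM / disparity_px

def disparity_from_depth(z_cm: float) -> float:
    """Expected disparity (pixels) for an LED at depth z_cm."""
    return F_PX * BASELINE_CM / z_cm
```

Under this model, an LED at the near limit (40 cm) produces a substantially larger disparity than one at the far limit (100 cm), which is why depth resolution degrades toward the back of the working volume.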

#### 3.2. Constraints for Multiple Points Tracking

- (i) Discrimination—When the distance between two light sources along the X- (or Y-) axis is less than 2 cm, the two sources merge into a single point in the image frame, which makes them indistinguishable.
- (ii) Occlusion—Occlusions occur when the light source is moved away from the sensor focal point along the X-axis. In Figure 3, at extreme lateral positions when the FoV ≥ 40°, one LED is occluded by the other even when the LEDs are 3 cm apart.

#### 3.3. Placement of Light Points

#### 3.4. Hardware Setup

#### 3.5. Multiple Points Tracking

#### 3.5.1. Calibration Phase

During calibration, the matrix of relative distances between the palm points is computed and saved as a reference for the tracking phase. When the five detected points are correctly identified during the process, every point is labeled in order to assign each to the right part of the hand. The reconstructed frames have their origin in the top-left corner, with a resolution of 320 × 480, which is half the size along the X-direction and the full size along the Y-direction compared to the original image frames, as shown in Figure 6b.

- The middle finger point (M) has the lowest row coordinate in both images.
- The lower palm point (LP) has the highest row coordinate in both images.
- The first upper palm point (UP1) has the lowest column coordinate in both images.
- The second upper palm point (UP2) has the second-lowest column coordinate in both images.
- The thumb point (T) has the highest column coordinate in both images.
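The labeling rules above can be made concrete with a short sketch. The function name, the (row, column) tuple representation, and the order in which the rules are applied (extremal rows first, then columns among the remaining points) are illustrative assumptions, not the authors' implementation.

```python
def label_hand_points(points):
    """Label five detected (row, col) points as M, LP, UP1, UP2, T.

    Rules from the calibration phase:
      - M  : lowest row coordinate (topmost point),
      - LP : highest row coordinate (bottommost point),
      - UP1: lowest column, UP2: second-lowest column,
      - T  : highest column.
    Assumes M and LP are assigned first, then the remaining three points
    are ordered by column (an interpretation of the rule ordering).
    """
    assert len(points) == 5, "calibration requires all five LEDs visible"
    pts = list(points)
    labels = {}
    labels["M"] = min(pts, key=lambda p: p[0])   # lowest row
    pts.remove(labels["M"])
    labels["LP"] = max(pts, key=lambda p: p[0])  # highest row
    pts.remove(labels["LP"])
    by_col = sorted(pts, key=lambda p: p[1])     # remaining, by column
    labels["UP1"], labels["UP2"], labels["T"] = by_col
    return labels
```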

#### 3.5.2. Tracking Phase

During tracking, the detected points are matched against the reference matrix of relative distances, which was saved during the calibration phase. The tracking phase is designed so that tracking cannot start unless at least three LEDs are visible for the first 10 frames and the number of points detected in the left and right frames is equal, in order to always maintain a consistent accuracy. If mL and mR represent the detected maxima for Sen_L and Sen_R respectively, the following decisions are taken according to the explored image:

- |mL| ≠ |mR|, with |mL|, |mR| ≥ 3: a matrix of zeros is saved and flag = 0 is set, encoding “Unsuccessful Detection”.
- |mL|, |mR| < 3: a matrix of zeros is saved and flag = 0 is set, encoding “Unsuccessful Detection”.
- |mL| = |mR|, with |mL|, |mR| ≥ 3: the points are correctly identified, flag = 1 is set, encoding “Successful Detection”, and the 2D coordinates of the LED positions are saved.
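The decision logic above can be sketched as follows; the function name, the returned zero-matrix shape, and the use of numpy arrays are assumptions for illustration, not the authors' code.

```python
import numpy as np

def detection_flag(m_left, m_right):
    """Return (flag, coords) following the tracking-phase decision rules.

    m_left / m_right: lists of 2D maxima detected by Sen_L and Sen_R.
    flag = 1 ("Successful Detection") only when both sensors report the
    same count of at least three maxima; otherwise a matrix of zeros is
    saved with flag = 0 ("Unsuccessful Detection").
    """
    n_l, n_r = len(m_left), len(m_right)
    if n_l == n_r and n_l >= 3:
        return 1, (np.asarray(m_left), np.asarray(m_right))
    # Placeholder shape for up to five LEDs with 2D coordinates (assumed).
    return 0, (np.zeros((5, 2)), np.zeros((5, 2)))
```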

The current 3D position of each LED, $[x_{0,i}, y_{0,i}, z_{0,i}]$, $i$ = M, LP, T, UP1, UP2, is predicted from the positions tracked over the previous 10 time instants $[t_{-10}, \ldots, t_{-1}]$, i.e., from the coordinate histories $\{[x_{-10,i}, \ldots, x_{-1,i}], [y_{-10,i}, \ldots, y_{-1,i}], [z_{-10,i}, \ldots, z_{-1,i}]\}$. A second-order polynomial is fitted to each coordinate:

$p_{poly,i} = a_{p,i}t^{2} + b_{p,i}t + c_{p,i}$, $p = [x, y, z]$

$X_{poly}$ estimates the coordinates of all five LED positions related to the current iteration, based on the last 10 iterations, as shown in Equation (7):
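The per-coordinate polynomial prediction can be sketched with a least-squares fit over the 10-sample history; the array layout and function name are assumptions for illustration.

```python
import numpy as np

def predict_positions(history):
    """Predict current 3D LED positions from the last 10 tracked samples.

    history: array of shape (10, n_leds, 3) holding [x, y, z] for each
    LED at t = -10 ... -1. A second-order polynomial
    p(t) = a*t^2 + b*t + c is fitted per LED and per coordinate, then
    evaluated at the current instant t = 0 (where it reduces to c).
    """
    history = np.asarray(history, dtype=float)
    t = np.arange(-10, 0)                      # t_-10 ... t_-1
    n_samples, n_leds, _ = history.shape
    pred = np.empty((n_leds, 3))
    for i in range(n_leds):
        for p in range(3):                     # p = x, y, z
            a, b, c = np.polyfit(t, history[:, i, p], deg=2)
            pred[i, p] = c                     # polynomial value at t = 0
    return pred
```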

Given $m_{i}$, where $i$ = L, R, as the number of maxima detected by both sensors, the combinations without repetition of three previously ranged points are computed. If five points are detected, there are 10 combinations; for four detected points, four combinations; and for three detected points, one combination. Thus, triple combinations C of points are generated. The matrix of relative distances for each candidate combination of points, k ∈ [1, …, |C|], is calculated using Equation (1). The sum of squares of relative distances for each matrix is then determined by Equations (8) and (9) for both the calibration and tracking phases, with k ∈ [1, …, |C|]:

The candidate whose sum Sum_k is closest to the calibration reference Sum_ref can be associated with the palm coordinates. To avoid inaccurate results caused by environmental noise and possible failures in the local maxima detection, this difference is compared to a threshold to make sure that the estimate is sufficiently accurate. Several trials and experiments were carried out to arrive at a suitable threshold value, th_distance = 30 cm. The closest candidate $\widehat{k}$ is then found using Equation (12):
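The candidate search over triple combinations can be sketched as below. The function names are illustrative, and the threshold is treated here as a bound on the difference of sums (the units in the source suggest cm, which is an assumption about how Equation (12) applies it).

```python
import itertools
import numpy as np

def distance_matrix(points):
    """Matrix of pairwise Euclidean distances (analogue of Equation (1))."""
    pts = np.asarray(points, dtype=float)
    diff = pts[:, None, :] - pts[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def find_palm_candidate(points_3d, sum_ref, th_distance=30.0):
    """Pick the 3-point combination whose sum of squared relative
    distances is closest to the calibration reference Sum_ref
    (sketch of Equations (8)-(12), not the authors' code).

    Returns the winning index triple, or None when no candidate passes
    the th_distance threshold.
    """
    best, best_diff = None, th_distance
    for combo in itertools.combinations(range(len(points_3d)), 3):
        d = distance_matrix([points_3d[i] for i in combo])
        sum_k = (d ** 2).sum()
        diff = abs(sum_k - sum_ref)
        if diff < best_diff:
            best, best_diff = combo, diff
    return best
```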

X_plane is the normal vector to the palm plane. It is calculated as the normalized cross-product of the 3D palm coordinates of the points using the right-hand rule, as in Equation (14):

The projections of the unlabeled LED positions UL_1 and UL_2 onto the palm plane are calculated by finding the inner product of X_plane and (${X}_{U{L}_{i}}$ − ${X}_{UP2}$). This scalar is then multiplied by X_plane and the result is subtracted from the unlabeled LED positions, as shown in Equation (15):
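The normal and projection steps correspond to standard point-to-plane projection; the sketch below assumes the normal is built from the edge vectors (UP1 − LP) and (UP2 − LP), which is one reasonable reading of Equation (14).

```python
import numpy as np

def palm_normal(x_lp, x_up1, x_up2):
    """Unit normal to the palm plane via the right-hand rule
    (interpretation of Equation (14), using edge vectors from LP)."""
    n = np.cross(np.asarray(x_up1, float) - np.asarray(x_lp, float),
                 np.asarray(x_up2, float) - np.asarray(x_lp, float))
    return n / np.linalg.norm(n)

def project_onto_palm(x_ul, x_up2, x_plane):
    """Project an unlabeled LED position onto the palm plane
    (analogue of Equation (15)): subtract the out-of-plane component
    <X_plane, X_UL - X_UP2> * X_plane from the point."""
    x_ul = np.asarray(x_ul, dtype=float)
    offset = np.dot(x_plane, x_ul - np.asarray(x_up2, float))
    return x_ul - offset * x_plane
```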

The middle finger is identified as the projected point UL_i that exposes the minimum angle, while the thumb is selected as the one with the maximum angle, as shown in Equations (21) and (22):

If only one unlabeled point is detected, the angle between the segment X_UL and the reference segment X_ref is computed in the same way, as shown in Equation (19). The decision is made according to an empirically designed threshold, th_angle = 30°, as shown in Equation (23):

If the candidates fail the th_distance constraint given in Equation (12), the algorithm proceeds in the same way as in Section 3.5.2(a). Thus, the proposed multiple points live tracking algorithm labels and tracks all of the LEDs placed on the hand.
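The angle tests behind these labeling decisions reduce to the angle between two 3D segments compared against th_angle. The sketch below assumes both segments are anchored at UP2 and that the label strings are "M"/"T"; both are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def angle_between(u, v):
    """Angle in degrees between two 3D vectors."""
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def label_single_point(x_ul, x_up2, x_ref, th_angle=30.0):
    """Classify a single unlabeled LED as middle finger ("M") or
    thumb ("T") by comparing the angle between (X_UL - X_UP2) and the
    reference segment against th_angle = 30 deg (sketch of Eq. (23))."""
    ang = angle_between(np.asarray(x_ul, float) - np.asarray(x_up2, float),
                        np.asarray(x_ref, float) - np.asarray(x_up2, float))
    return "M" if ang < th_angle else "T"
```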

#### 3.5.3. Orientation Estimation

The palm orientation is given by X_plane in Equation (14) and by the direction of the middle finger. If the palm orientation, the distance d between the initial and final segments (S_0 and S_3) of the middle finger, and the segment lengths (${S}_{il}$, i = 0, 1, 2, 3) are assigned as known variables, the orientation of all of the segments can be estimated using a pentagon approximation model, as shown in Figure 7.

The angle between X_plane, as previously calculated in Equation (14), and the middle finger segment (${X}_{UP2}$ − X_M) is calculated as shown in Equation (24). The angle between S_0 and d (=α) is its complementary angle, as shown in Equation (25). By symmetry, the angle between S_0 and d is equal to the angle between S_3 and d. Thus, knowing that the sum of the internal angles of a pentagon is 540°, the value of the other angles (=β) is computed assuming that they are equal to each other, as shown in Equation (26).

A rotation axis orthogonal to X_plane is then computed as shown in Equation (27), and the rotation matrix R is built from angle α around the derived vector according to Euler’s rotation theorem [49]. This rotation matrix is used to calculate the orientation of segment S_0 in Equation (28) and its corresponding 2D location in Equation (29). Repeating the procedure, the orientation of the remaining segments (S_1, S_2, and S_3), as well as their 2D locations, is calculated. The same method is applied to find the orientation of all of the thumb segments, using the segment lengths and the 2D location of the lower palm LED (X_LP), but with a trapezoid approximation for estimating the angles.
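Two pieces of this derivation lend themselves to a short sketch: the axis-angle rotation matrix implied by Euler's rotation theorem (here via Rodrigues' formula, a standard construction) and the pentagon angle bookkeeping, where 2α + 3β = 540° under the equal-angles assumption. Function names are illustrative.

```python
import numpy as np

def rotation_matrix(axis, angle_deg):
    """Rotation matrix about a unit axis by angle_deg
    (Euler's rotation theorem, realized with Rodrigues' formula)."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    a = np.radians(angle_deg)
    # Skew-symmetric cross-product matrix of the axis.
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(a) * K + (1.0 - np.cos(a)) * (K @ K)

def pentagon_beta(alpha_deg):
    """Remaining interior angles of the pentagon model (Eq. (26)
    analogue): the two base angles equal alpha, the other three are
    assumed equal, and the interior angles sum to 540 deg."""
    return (540.0 - 2.0 * alpha_deg) / 3.0
```

For a regular pentagon (α = 108°), `pentagon_beta` returns 108° as well, which is a quick sanity check on the bookkeeping.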

#### 3.6. 3D Rendering

#### 3.7. Gesture Recognition

## 4. Results and Discussion

#### 4.1. Validation of Rambus LSS

The measured distances (d_i) are plotted; the corresponding plots are shown in Figure 9a,b, respectively, and the results are provided in Table 3.

#### 4.2. Validation of Multiple Points Tracking

The angle between S_3 and d (=α) is estimated and plotted against the actual angles in Figure 13d. Here also, the three possible readings are plotted, based on the visibility of the LED fitted on the middle finger. The other joint angles are derived from this angle, as explained in Section 3.5.3.

#### 4.3. Validation of Latency Improvements

#### 4.4. Validation of Gesture Recognition

## 5. Conclusions

## Author Contributions

## Funding

## Acknowledgments

## Conflicts of Interest

## References

- Taylor, C.L.; Schwarz, R.J. The Anatomy and Mechanics of the Human Hand. Artif. Limbs **1995**, 2, 22–35.
- Rautaray, S.S.; Agrawal, A. Vision Based Hand Gesture Recognition for Human Computer Interaction: A Survey. Springer Trans. Artif. Intell. Rev. **2012**, 43, 1–54.
- Garg, P.; Aggarwal, N.; Sofat, S. Vision Based Hand Gesture Recognition. World Acad. Sci. Eng. Technol. **2009**, 3, 972–977.
- Yang, D.D.; Jin, L.W.; Yin, J.X. An effective robust fingertip detection method for finger writing character recognition system. In Proceedings of the 4th International Conference on Machine Learning and Cybernetics, Guangzhou, China, 18–21 August 2005; pp. 4191–4196.
- Oka, K.; Sato, Y.; Koike, H. Real time Tracking of Multiple Fingertips and Gesture Recognition for Augmented Desk Interface Systems. In Proceedings of the 5th IEEE International Conference on Automatic Face and Gesture Recognition (FGR.02), Washington, DC, USA, 21 May 2002; pp. 411–416.
- Quek, F.K.H. Finger Mouse: A Free hand Pointing Computer Interface. In Proceedings of the International Workshop on Automatic Face and Gesture Recognition, Zurich, Switzerland, 26–28 June 1995; pp. 372–377.
- Crowley, J.; Berard, F.; Coutaz, J. Finger Tracking as an Input Device for Augmented Reality. In Proceedings of the International Workshop on Automatic Face and Gesture Recognition, Zurich, Switzerland, 26–28 June 1995; pp. 195–200.
- Hung, Y.P.; Yang, Y.S.; Chen, Y.S.; Hsieh, B.; Fuh, C.S. Free-Hand Pointer by Use of an Active Stereo Vision System. In Proceedings of the 14th International Conference on Pattern Recognition (ICPR), Brisbane, Australia, 20 August 1998; pp. 1244–1246.
- Stenger, B.; Thayananthan, A.; Torr, P.H.; Cipolla, R. Model-based hand tracking using a hierarchical Bayesian filter. IEEE Trans. Pattern Anal. Mach. Intell. **2006**, 28, 1372–1384.
- Elmezain, M.; Al-Hamadi, A.; Niese, R.; Michaelis, B. A robust method for hand tracking using mean-shift algorithm and Kalman filter in stereo color image sequences. Int. J. Inf. Technol. **2010**, 6, 24–28.
- Erol, A.; Bebis, G.; Nicolescu, M.; Boyle, R.D.; Twombly, X. Vision based hand pose estimation: A review. Comput. Vis. Image Underst. **2007**, 108, 52–73.
- Dipietro, L.; Sabatini, A.M.; Dario, P. A Survey of Glove-Based Systems and their Applications. IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. **2008**, 38, 461–482.
- Dorfmüller-Ulhaas, K.; Schmalstieg, D. Finger Tracking for Interaction in Augmented Environments; Technical Report TR186-2-01-03; Institute of Computer Graphics and Algorithms, Vienna University of Technology: Vienna, Austria, 2001.
- Kim, J.H.; Thang, N.D.; Kim, T.S. 3-D hand Motion Tracking and Gesture Recognition Using a Data Glove. In Proceedings of the 2009 IEEE International Symposium on Industrial Electronics, Seoul, Korea, 5–8 July 2009; pp. 1013–1018.
- Chen, Y.P.; Yang, J.Y.; Liou, S.N.; Lee, G.Y.; Wang, J.S. Online Classifier Construction Algorithm for Human Activity Detection using a Triaxial Accelerometer. Appl. Math. Comput. **2008**, 205, 849–860.
- O’Flynn, B.; Sachez, J.; Tedesco, S.; Downes, B.; Connolly, J.; Condell, J.; Curran, K. Novel Smart Glove Technology as a Biomechanical Monitoring Tool. Sens. Transducers J. **2015**, 193, 23–32.
- Hsiao, P.C.; Yang, S.Y.; Lin, B.S.; Lee, I.J.; Chou, W. Data glove embedded with 9-axis IMU and force sensing sensors for evaluation of hand function. In Proceedings of the 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25–29 August 2015; pp. 4631–4634.
- Gloveone-Neurodigital. Available online: https://www.neurodigital.es/gloveone/ (accessed on 6 November 2017).
- Manus VR. The Pinnacle of Virtual Reality Controllers. Available online: https://manus-vr.com/ (accessed on 6 November 2017).
- Hi5 VR Glove. Available online: https://hi5vrglove.com/ (accessed on 6 November 2017).
- Tzemanaki, A.; Burton, T.M.; Gillatt, D.; Melhuish, C.; Persad, R.; Pipe, A.G.; Dogramadzi, S. μAngelo: A Novel Minimally Invasive Surgical System based on an Anthropomorphic Design. In Proceedings of the 5th IEEE RAS EMBS International Conference on Biomedical Robotics and Biomechatronics, Sao Paulo, Brazil, 12–15 August 2014; pp. 369–374.
- Sturman, D.; Zeltzer, D. A survey of glove-based input. IEEE Comput. Graph. Appl. **1994**, 14, 30–39.
- Wang, R.Y.; Popović, J. Real-time hand-tracking with a color glove. ACM Trans. Graph. **2009**, 28, 63.
- Weichert, F.; Bachmann, D.; Rudak, B.; Fisseler, D. Analysis of the accuracy and robustness of the leap motion controller. Sensors **2013**, 13, 6380–6393.
- Nair, R.; Ruhl, K.; Lenzen, F.; Meister, S.; Schäfer, H.; Garbe, C.S.; Eisemann, M.; Magnor, M.; Kondermann, D. A survey on time-of-flight stereo fusion. In Time-of-Flight and Depth Imaging. Sensors, Algorithms, and Applications, LNCS; Springer: Berlin/Heidelberg, Germany, 2013; Volume 8200, pp. 105–127.
- Zhang, Z. Microsoft Kinect sensor and its effect. IEEE Multimed. **2012**, 19, 4–10.
- Suarez, J.; Murphy, R.R. Hand Gesture Recognition with Depth Images: A Review. In Proceedings of the IEEE RO-MAN, Paris, France, 9–13 September 2012; pp. 411–417.
- Cheng, H.; Yang, L.; Liu, Z. Survey on 3D hand gesture recognition. IEEE Trans. Circuits Syst. Video Technol. **2016**, 26, 1659–1673.
- Tzionas, D.; Srikantha, A.; Aponte, P.; Gall, J. Capturing hand motion with an RGB-D sensor, fusing a generative model with salient points. In GCPR 2014 36th German Conference on Pattern Recognition; Springer: Cham, Switzerland, 2014; pp. 1–13.
- Li, Y. Hand gesture recognition using Kinect. In Proceedings of the 2012 IEEE International Conference on Computer Science and Automation Engineering, Beijing, China, 22–24 June 2012; pp. 196–199.
- Wang, R.; Paris, S.; Popović, J. 6D hands: Markerless hand-tracking for computer aided design. In 24th Annual ACM Symposium on User Interface Software and Technology; ACM: New York, NY, USA, 2011; pp. 549–558.
- Sridhar, S.; Bailly, G.; Heydrich, E.; Oulasvirta, A.; Theobalt, C. Full Hand: Markerless Skeleton-based Tracking for Free-Hand Interaction; Technical Report MPI-I-2016-4-002; Max Planck Institute for Informatics: Saarbrücken, Germany, 2016; pp. 1–11.
- Ballan, L.; Taneja, A.; Gall, J.; Van Gool, L.; Pollefeys, M. Motion capture of hands in action using discriminative salient points. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2012; Volume 7577, pp. 640–653.
- Du, G.; Zhang, P.; Mai, J.; Li, Z. Markerless Kinect-based hand tracking for robot teleoperation. Int. J. Adv. Robot. Syst. **2012**, 9, 36.
- Sridhar, S.; Mueller, F.; Oulasvirta, A.; Theobalt, C. Fast and robust hand tracking using detection-guided optimization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3213–3221.
- Tkach, A.; Pauly, M.; Tagliasacchi, A. Sphere-Meshes for Real-Time Hand Modeling and Tracking. ACM Trans. Graph. **2016**, 35, 222.
- Chen, C.; Jafari, R.; Kehtarnavaz, N. A survey of depth and inertial sensor fusion for human action recognition. Multimed. Tools Appl. **2015**, 76, 4405–4425.
- Liu, K.; Chen, C.; Jafari, R.; Kehtarnavaz, N. Fusion of inertial and depth sensor data for robust hand gesture recognition. IEEE Sens. J. **2014**, 14, 1898–1903.
- Lensless Smart Sensors-Rambus. Available online: https://www.rambus.com/emerging-solutions/lensless-smart-sensors/ (accessed on 6 November 2017).
- Stork, D.; Gill, P. Lensless Ultra-Miniature CMOS Computational Imagers and Sensors. In Proceedings of the International Conference Sensor Technology, Wellington, New Zealand, 3–5 December 2013; pp. 186–190.
- Abraham, L.; Urru, A.; Wilk, M.P.; Tedesco, S.; O’Flynn, B. 3D Ranging and Tracking Using Lensless Smart Sensors. Available online: https://www.researchgate.net/publication/321027909_Target_Tracking?showFulltext=1&linkId=5a097fa00f7e9b68229d000e (accessed on 13 November 2017).
- Abraham, L.; Urru, A.; Wilk, M.P.; Tedesco, S.; Walsh, M.; O’Flynn, B. Point Tracking with lensless smart sensors. In Proceedings of the IEEE Sensors, Glasgow, UK, 29 October–1 November 2017; pp. 1–3.
- Stork, D.G.; Gill, P.R. Optical, mathematical, and computational foundations of lensless ultra-miniature diffractive imagers and sensors. Int. J. Adv. Syst. Meas. **2014**, 7, 201–208.
- Gill, P.; Vogelsang, T. Lensless smart sensors: Optical and thermal sensing for the Internet of Things. In Proceedings of the 2016 IEEE Symposium on VLSI Circuits (VLSI-Circuits), Honolulu, HI, USA, 15–17 June 2016.
- Zhao, Y.; Liebgott, H.; Cachard, C. Tracking micro tool in a dynamic 3D ultrasound situation using Kalman filter and RANSAC algorithm. In Proceedings of the 9th IEEE International Symposium on Biomedical Imaging (ISBI), Barcelona, Spain, 2–5 May 2012; pp. 1076–1079.
- Hoehmann, L.; Kummert, A. Laser range finder based obstacle tracking by means of a two-dimensional Kalman filter. In Proceedings of the 2007 International Workshop on Multidimensional (nD) Systems, Aveiro, Portugal, 27–29 June 2007; pp. 9–14.
- Domuta, I.; Palade, T.P. Adaptive Kalman Filter for Target Tracking in the UWB Networks. In Proceedings of the 13th Workshop on Positioning, Navigation and Communications, Bremen, Germany, 19–20 October 2016; pp. 1–6.
- Wilk, M.P.; Urru, A.; Tedesco, S.; O’Flynn, B. Sub-pixel point detection algorithm for point tracking with low-power wearable camera systems. In Proceedings of the Irish Signals and Systems Conference, ISSC, Killarney, Ireland, 20–21 June 2017; pp. 1–6.
- Palais, B.; Palais, R. Euler’s fixed point theorem: The axis of a rotation. J. Fixed Point Theory Appl. **2007**, 2, 215–220.
- Belgioioso, G.; Cenedese, A.; Cirillo, G.I.; Fraccaroli, F.; Susto, G.A. A machine learning based approach for gesture recognition from inertial measurements. In Proceedings of the IEEE Conference on Decision Control, Los Angeles, CA, USA, 15–17 December 2014; pp. 4899–4904.
- Normani, N.; Urru, A.; Abraham, L.; Walsh, M.; Tedesco, S.; Cenedese, A.; Susto, G.A.; O’Flynn, B. A Machine Learning Approach for Gesture Recognition with a Lensless Smart Sensor System. In Proceedings of the IEEE 15th International Conference on Wearable and Implantable Body Sensor Networks (BSN), Las Vegas, NV, USA, 4–7 March 2018; pp. 1–4.
- Tian, S.; Zhang, X.; Tian, J.; Sun, Q. Random Forest Classification of Wetland Landcovers from Multi-Sensor Data in the Arid Region of Xinjiang, China. Remote Sens. **2016**, 8, 954.
- Hand Tracking Using Rambus LSS. Available online: https://vimeo.com/240882492/ (accessed on 6 November 2017).
- Gesture Recognition Using Rambus LSS. Available online: https://vimeo.com/240882649/ (accessed on 2 November 2017).

**Figure 8.** 3D rendering of hand: (**a**) calculated LED positions; (**b**) palm plane reconstructed using three LEDs on palm (LP, UP1, and UP2); (**c**) middle finger reconstructed with M LED; (**d**) index, ring, and little fingers reconstructed using the properties of M LED; (**e**) thumb reconstructed with T LED.

**Figure 10.** Tracked plane with respect to the reference plane for all LED positions: (**a**) LP; (**b**) UP1; (**c**) UP2; (**d**) M; (**e**) T.

**Figure 12.** Different orientations of the hand in front of the LSSs: (**a**) hand held straight; (**b**) hand held upside down at an inclination; (**c**) fingers bent.

**Figure 13.** Calculated vs. actual orientation: (**a**) along X-axis; (**b**) along Y-axis; (**c**) along Z-axis; (**d**) along the middle finger, between S3 and d (α).

**Figure 14.** Image frames and reconstructed frames: (**a**) before reducing the region of interest (ROI); (**b**) after reducing the ROI.

**Figure 15.** (**a**) Classification accuracy as a function of LED positions; (**b**) confusion matrix for the dataset.

|     | LP         | UP1         | UP2         |
|-----|------------|-------------|-------------|
| LP  | d_{LP,LP}  | d_{LP,UP1}  | d_{LP,UP2}  |
| UP1 | d_{UP1,LP} | d_{UP1,UP1} | d_{UP1,UP2} |
| UP2 | d_{UP2,LP} | d_{UP2,UP1} | d_{UP2,UP2} |

| Gesture Label | Gesture Description |
|---|---|
| 0—Forward | Forward movement along the Z axis |
| 1—Backward | Backward movement along the Z axis |
| 2—Triangle | Triangle performed on the X-Y plane, basis parallel to the X axis |
| 3—Circle | Circle performed on the X-Y plane |
| 4—Line Up → Down | Line in the up/down direction on the X-Y plane |
| 5—Blank | None of the previous gestures |

| Precision | Centre | +40 deg | −40 deg | +60 deg | −60 deg |
|---|---|---|---|---|---|
| RMSE (cm) | 0.2059 | 0.2511 | 0.2740 | 0.3572 | 0.4128 |
| Repeatability (cm) | 0.0016 | 0.0058 | 0.0054 | 0.0210 | 0.0313 |
| Temporal Noise (cm) | 0.0027 | 0.0082 | 0.0078 | 0.0435 | 0.0108 |

| RMSE (cm) | LP | UP1 | UP2 | M | T |
|---|---|---|---|---|---|
| X | 0.5054 | 0.6344 | 0.5325 | 0.7556 | 0.7450 |
| Y | 0.3622 | 0.3467 | 0.5541 | 0.9934 | 0.5222 |
| Z | 0.8510 | 1.0789 | 0.9498 | 1.2081 | 0.7903 |
| Total | 1.0540 | 1.2987 | 1.1457 | 1.7370 | 1.2051 |

| RMSE (cm) | X-Axis, w.r.t. LP | X-Axis, w.r.t. Mean | Y-Axis, w.r.t. LP | Y-Axis, w.r.t. Mean |
|---|---|---|---|---|
| UP1 | 0.5054 | 0.6344 | 0.5325 | 0.7556 |
| UP2 | 0.3622 | 0.3467 | 0.5541 | 0.9934 |
| M | 0.8510 | 1.0789 | 0.9498 | 1.2081 |
| T | 1.0540 | 1.2987 | 1.1457 | 1.7370 |

| No. of Frames Averaged | Full Frames (480 × 320): Dt (s) | Full Frames: EFPs | Reduced ROI (200 × 320): Dt (s) | Reduced ROI: EFPs |
|---|---|---|---|---|
| 5 | 0.0553 ± 0.0086 | ≈18 | 0.0482 ± 0.0134 | ≈21 |
| 2 | 0.0439 ± 0.0182 | ≈23 | 0.0296 ± 0.0085 | ≈34 |
| 1 | 0.0368 ± 0.0036 | ≈27 | 0.0248 ± 0.0047 | ≈40 |

| No. of Frames Averaged | Full Frames (480 × 320): RMSE (cm) | Full Frames: Repeatability (cm) | Full Frames: Temporal Noise (cm) | Reduced ROI (200 × 320): RMSE (cm) | Reduced ROI: Repeatability (cm) | Reduced ROI: Temporal Noise (cm) |
|---|---|---|---|---|---|---|
| 5 | 0.4814 | 0.0025 | 0.0031 | 0.5071 | 0.0018 | 0.0036 |
| 2 | 0.5333 | 0.0047 | 0.0052 | 0.5524 | 0.0041 | 0.0049 |
| 1 | 0.5861 | 0.0086 | 0.0074 | 0.6143 | 0.0088 | 0.0091 |

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Abraham, L.; Urru, A.; Normani, N.; Wilk, M.P.; Walsh, M.; O’Flynn, B. Hand Tracking and Gesture Recognition Using Lensless Smart Sensors. *Sensors* **2018**, *18*, 2834.
https://doi.org/10.3390/s18092834
