Article

Dynamic Gesture Recognition Using a Smart Glove in Hand-Assisted Laparoscopic Surgery

by Lidia Santos 1,*, Nicola Carbonaro 2,3, Alessandro Tognetti 2,3, José Luis González 1, Eusebio De la Fuente 1, Juan Carlos Fraile 1 and Javier Pérez-Turiel 1

1 Instituto de las Tecnologías Avanzadas de la Producción (ITAP), University of Valladolid, 47011 Valladolid, Spain
2 Research Centre E. Piaggio, University of Pisa, 56122 Pisa, Italy
3 Department of Information Engineering, University of Pisa, 56122 Pisa, Italy
* Author to whom correspondence should be addressed.
Technologies 2018, 6(1), 8; https://doi.org/10.3390/technologies6010008
Submission received: 14 November 2017 / Revised: 10 January 2018 / Accepted: 10 January 2018 / Published: 13 January 2018
(This article belongs to the Special Issue Wearable Technologies)

Abstract

This paper presents a methodology for movement recognition in hand-assisted laparoscopic surgery using a textile-based sensing glove. The aim is to recognize the commands given by the surgeon’s hand inside the patient’s abdominal cavity in order to guide a collaborative robot. The glove, which incorporates piezoresistive sensors, continuously captures the degree of flexion of the surgeon’s fingers. These data are analyzed throughout the surgical operation using an algorithm that detects and recognizes a set of predefined movements as commands for the collaborative robot. However, hand movement recognition is not an easy task because of the high variability in the motion patterns of different people and situations. The data captured by the sensing glove are analyzed using the following methodology. First, the patterns of the selected movements are defined. Then, the parameters of the movements are extracted for each person. The parameters concerning bending speed and execution time are modeled in a preliminary phase, in which all of the information needed for subsequent detection during the execution of the motion is extracted. The results obtained with 10 different volunteers show a high degree of precision and recall.

1. Introduction

One of the most important innovations in surgery over the past three decades has been the advent of minimally invasive surgery (MIS). This technique has revolutionized surgical practice thanks to its ability to avoid the trauma of traditional open surgery and to reduce the likelihood of incision-related complications. These benefits also have economic consequences, because they shorten hospital stays. However, MIS is technically challenging, because it must be conducted in a very restricted space using micro instruments and endoscopes that dramatically limit the surgeon’s perception. In order to regain tactile and force feedback, new technologies and techniques have been introduced over the last few years. One of these novel techniques is hand-assisted laparoscopic surgery (HALS). In HALS, the surgeon inserts a hand into the patient’s abdomen through a small incision via a pressurized sleeve while operating a surgical tool with the other hand. Although this approach is slightly more invasive for the patient, it is still an MIS intervention, and it has proved especially advantageous in some types of operations, such as colon and colorectal cancer surgery [1].
However, HALS has a major shortcoming. As the surgeon is holding the tissue with the inserted hand and a micro instrument with the other, he/she needs the close cooperation of an assistant to manage the endoscope and additional surgical tools when performing surgical maneuvers such as stitching and knot tying. In this paper, we tackle the automation of the tasks performed by the human assistant using, instead, a collaborative robot (Figure 1). This robotic system requires, among other important issues, a simple communication scheme capable of recognizing the surgeon’s direct orders given by the hand inserted in the abdominal cavity.
Previous works on surgical robots can be found in the literature. The first robot systems for laparoscopic surgery were developed to provide more stability and precision to the movements of the surgical tools and endoscopes. They were teleoperated systems that integrated a simple robotic arm with a laparoscopic instrument attached to it [2]. Since then, a number of semi-autonomous robots have been developed and studied to assist the surgeon in the different phases of the operation [3,4,5].
Autonomous systems require recognition of the surgical gestures made by the surgeon. Cameras were among the earliest technologies used to sense hand gestures [6,7,8,9,10], including gloved hands [11], but image processing is always problematic when the scene is under variable illumination or has a cluttered background [12]. In HALS, the variable lighting provided by an endoscope in continuous movement, as well as the difficulty of segmenting a permanently blood-stained hand from the internal scene, prevent the use of this technology. These difficulties are aggravated by the fact that the hand is only partially visible in the images, due to the limited field of view available inside the abdominal cavity.
In order to communicate with the collaborative robot in a natural way, the use of a sensor glove is proposed. A dynamic gesture recognition algorithm has been developed to identify the commands the surgeon gives to the robot with the hand inserted in the abdominal cavity. The chosen textile-based motion sensing glove is comfortable and allows full mobility of the surgeon’s hand in the reduced space inside the patient’s abdomen. Although the glove used in this study was tailored for a different application (daily-life monitoring of the grasping activity of stroke patients, as described in [13]) and has a low number of sensors, i.e., three sensors covering the thumb, index, and middle fingers, this wearable device allows the surgeon’s hand movements to be monitored. The glove sensors follow the movement of the fingers without limiting operability. However, some disturbances may appear due to cross-talk between sensors: when the operator tries to move one finger, a noise signal can appear in another. These disturbances are filtered to avoid misclassification of the surgeon’s gestures.
To check the algorithm, 10 tests were performed by each of 10 different subjects to detect the movements designated in a previous selection phase. Each test consisted of three predefined movements and two additional gestures, included to demonstrate that the algorithm does not confuse a movement that is not predefined with a predefined one.
The aim of these tests is to determine whether the developed gesture recognition algorithm can be used to send commands to a collaborative robot through a sensor glove during a HALS procedure with a high degree of precision.
This paper is organized as follows. Section 2 introduces the materials and methods used in the experiments. The experiments and their results are presented in Section 3 and discussed in Section 4. Finally, Section 5 presents the conclusions.

2. Materials and Methods

2.1. Sensing Glove

The sensing glove adopted in this work is made of cotton–lycra and has three textile goniometers directly attached to the fabric. Figure 2a shows the position of the goniometers on the glove, while Figure 2b shows the final prototype, in which the goniometers are insulated with an additional layer of black fabric.
The textile goniometers are double-layer angular sensors, as previously described in [14,15]. The sensing layers are knitted piezoresistive fabrics (KPF) made of 75% electro-conductive yarn and 25% Lycra [16,17]. The two KPF layers are coupled through an electrically insulating stratum (Figure 3a). The sensor output is the difference in electrical resistance (ΔR) between the two sensing layers. We demonstrated earlier that the sensor output is proportional to the flexion angle (θ) [14], which is the angle delimited by the tangent planes at the sensor extremities (Figure 3b).
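As a simple illustration of this linear sensor model, the sketch below converts the measured output into a bending angle through a two-point calibration. The gain and the reference values are hypothetical and are not taken from the paper; a real device would be calibrated against a reference goniometer.

```python
# Minimal sketch of the linear KPF goniometer model: output proportional to
# the flexion angle. Calibration values are hypothetical placeholders.

def calibrate_gain(out_flat: float, out_ref: float, angle_ref_deg: float) -> float:
    """Two-point calibration: degrees per unit of sensor output (ΔR or ΔV)."""
    return angle_ref_deg / (out_ref - out_flat)

def output_to_angle(out: float, out_flat: float, gain: float) -> float:
    """Estimated flexion angle (degrees) for one goniometer."""
    return gain * (out - out_flat)

# Example: sensor reads 0.10 with the finger flat and 0.55 at a 90-degree bend.
gain = calibrate_gain(out_flat=0.10, out_ref=0.55, angle_ref_deg=90.0)
print(output_to_angle(0.32, out_flat=0.10, gain=gain))  # approx. 44 degrees
```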
The glove was developed in previous studies to monitor stroke patients’ everyday activity in order to evaluate the outcome of their rehabilitation treatment [13,18]. In [19], the reliable performance of the glove goniometers was demonstrated, with errors below five degrees compared with an optical motion capture system during natural hand opening/closing movements. The glove has two KPF goniometers on the dorsal side of the hand to detect the flexion-extension movement of the metacarpal-phalangeal joints of the index and middle fingers. The third goniometer covers the trapezium-metacarpal and metacarpal-phalangeal joints of the thumb to detect thumb opposition. We conceived this minimal sensor configuration as a tradeoff between grasping recognition and the wearability of the prototype.
An ad hoc three-channel analog front-end was designed to acquire ΔR from each of the three goniometers (Figure 3c). For each goniometer, the voltages V1 = Vp2 − Vp3 and V2 = Vp5 − Vp4 are measured while a constant, known current I is supplied through p1 and p6. A high-input-impedance stage, consisting of two instrumentation amplifiers (INS1 and INS2), measures the voltages across the KPF sensors. Through the known current I, these voltages are proportional to the resistances of the top and bottom layers (R1 and R2). A differential amplifier (DIFF) amplifies the difference between the measured voltages, producing the final output ΔV, which is proportional to ΔR and θ. Each channel was low-pass filtered in the analog domain (anti-aliasing, cut-off frequency of 10 Hz). The resulting signals were digitized at 100 Sa/s and wirelessly transmitted to a remote PC for storage and further processing.
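A minimal sketch of the PC-side conditioning of one streamed channel is shown below. Only the 100 Sa/s rate and the 10 Hz band limit are taken from the text; the radio link and packet format are not described in the paper, and applying an extra digital low-pass on the PC is an assumption, not the authors' method.

```python
# Sketch of off-line conditioning of one streamed glove channel.
# FS and CUTOFF_HZ follow the paper; the digital smoothing step is an assumption.

import numpy as np
from scipy.signal import butter, filtfilt

FS = 100.0        # sampling rate stated in the paper (Sa/s)
CUTOFF_HZ = 10.0  # matches the analog anti-aliasing cut-off

def smooth(channel: np.ndarray, fs: float = FS, cutoff: float = CUTOFF_HZ) -> np.ndarray:
    """Zero-phase 2nd-order Butterworth low-pass of one sensor channel."""
    b, a = butter(2, cutoff / (fs / 2.0), btype="low")
    return filtfilt(b, a, channel)

# Example with a synthetic one-second recording of one channel.
t = np.arange(0, 1.0, 1.0 / FS)
raw = np.sin(2 * np.pi * 1.5 * t) + 0.05 * np.random.randn(t.size)
clean = smooth(raw)
```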

2.2. Algorithm for Movement Detection

The glove will communicate with a collaborative robot that assists during a HALS procedure. The actions chosen to test the collaborative robot take into account the robotic actions covered in the literature [20], among which are guiding the laparoscopic camera for safe movement of the endoscope [21] or needle insertion [22], predicting the end point [23,24], knotting and unknotting in suturing procedures [25], and grasping and lifting in tissue retraction [26]. Ultimately, we selected three actions to be performed by the collaborative robot: centering the image from the endoscope, indicating a place to suture, and stretching the thread to suture. These actions are performed in a cholecystectomy, which is the surgical removal of the gallbladder.
Each of the three robot actions mentioned above is associated with a hand movement to be performed by the surgeon. The system must therefore recognize unambiguously the different movements defined as commands for the robot, in order to prevent it from performing undesired operations. The movements are differentiated by the detection algorithm, which is tested with a protocol.
The protocol includes the three movements that must be detected as robot commands, shown in Table 1 and numbered from 1 to 3. Movements 4 and 5 are introduced to test the developed algorithm. They were chosen for their similarity to the selected movements, in terms of both sensor values and motion patterns, which makes differentiating between them in advance difficult.
To detect these movements, the developed algorithm analyzes the following parameters: flexion pattern, velocity, execution times, and the value provided by the sensor of each finger. To evaluate these parameters, a preliminary phase is carried out in which the variables of each movement are examined for each person. This stage is required individually for each person, because the speed and timing of the finger movements are highly variable, as shown in Figure 4.
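One possible way to organize these per-person variables is sketched below; the field names mirror the symbols of Table 2, but the container itself is an implementation assumption, not part of the paper.

```python
from dataclasses import dataclass

@dataclass
class MovementModel:
    """Per-person parameters of one predefined movement (cf. Table 2)."""
    v_u: float    # velocity threshold (e.g. V1u for movement 1)
    t_u: float    # maximum execution time (e.g. t1u)
    t_d_u: float  # minimum descent time (e.g. t1Du)
    t_a_u: float  # minimum ascent time (e.g. t1Au)
    x_min: float  # lower bound of the sensor value
    x_max: float  # upper bound of the sensor value
```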
Once these variables are defined, as explained in later paragraphs, the detection algorithm can identify each of the three movements.
The motion of the index and middle fingers is sensed by the glove. The acquired data are continuously processed by the developed algorithm in order to detect the predefined dynamic patterns. Because all of the sensors are attached to a single textile substrate, cross-talk between sensors may appear. This is observed as a disturbance on one finger when the operator tries to move another finger, as shown in Figure 5. These disturbances are filtered in order to avoid misclassification.
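A minimal sketch of such a filter is given below, suppressing small, slow excursions on a channel whose finger is not intentionally moved. The amplitude and velocity thresholds are illustrative assumptions and would in practice be derived from the calibration recordings.

```python
import numpy as np

def suppress_crosstalk(signal: np.ndarray, fs: float = 100.0,
                       amp_thresh: float = 0.05, vel_thresh: float = 0.5) -> np.ndarray:
    """Zero out small, slow excursions attributed to sensor cross-talk.

    amp_thresh and vel_thresh are placeholder values, not taken from the paper.
    """
    rest = signal[0]
    excursion = signal - rest                 # deviation from the resting value
    velocity = np.gradient(excursion) * fs    # numerical derivative (units/s)
    crosstalk = (np.abs(excursion) < amp_thresh) & (np.abs(velocity) < vel_thresh)
    return np.where(crosstalk, rest, signal)
```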
Due to the nature of the sensors used, the degree of flexion applied to each sensor on the glove can be determined. However, movements 2 and 4 could be confused due to their similarity, as shown in Figure 6.
Movement 1 can be identified by analyzing the data from the index and middle fingers. Each rise and fall in the glove sensor values corresponds to the flexion and extension movements of the fingers. This movement consists of a descent (called D1) and ascent (A1), followed by another descent (D2) and ascent (A2), as shown in Figure 7. This is the flexion pattern considered for movement 1. The D time and A time are, respectively, the times taken during a descent or ascent.
The flexion velocity involved in this dynamic gesture is higher than that produced by cross-talk, as shown in Figure 7b. To establish the typical velocity for this movement, the average and the standard deviation of the velocity along D1 and D2, and along A1 and A2, are calculated. This typical velocity, V1u, is the minimum value of the average minus the standard deviation over three tests performed by the same person. The minimum times during descents (D1 and D2), t1Du, and ascents (A1 and A2), t1Au, are also calculated, and represent the characteristic descent and ascent execution times of movement 1.
To determine the execution time, t1u, the maximum time in which the whole movement is performed is considered; that is D1, A1, D2, and A2.
The last parameters to be defined are the maximum, xmax, and minimum, xmin, values of the sensor, which set the thresholds to consider if the obtained values are part of movement 1. They are obtained by analyzing three movement samples from the same person.
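A minimal sketch of this per-person parameter extraction for movement 1 is given below. It assumes the three calibration recordings have already been segmented into the D1, A1, D2, and A2 phases (the segmentation itself is not shown) and returns a dictionary keyed by the symbols used in the text; it could equally populate the MovementModel container sketched earlier.

```python
import numpy as np

FS = 100.0  # Sa/s, as stated for the acquisition front-end

def extract_movement1_params(trials):
    """Derive V1u, t1u, t1Du, t1Au, xmin, xmax from three segmented recordings.

    Each trial is assumed to be a dict of phase arrays:
    {"D1": ..., "A1": ..., "D2": ..., "A2": ...} of raw sensor values.
    """
    v_candidates, d_times, a_times, total_times, samples = [], [], [], [], []
    for trial in trials:
        phases = [np.asarray(trial[k], dtype=float) for k in ("D1", "A1", "D2", "A2")]
        speed = np.abs(np.concatenate([np.gradient(p) * FS for p in phases]))
        v_candidates.append(speed.mean() - speed.std())            # mean minus std
        d_times += [len(trial["D1"]) / FS, len(trial["D2"]) / FS]  # descent durations
        a_times += [len(trial["A1"]) / FS, len(trial["A2"]) / FS]  # ascent durations
        total_times.append(sum(len(p) for p in phases) / FS)
        samples.append(np.concatenate(phases))
    samples = np.concatenate(samples)
    return {"V1u": min(v_candidates),   # velocity threshold
            "t1u": max(total_times),    # maximum execution time
            "t1Du": min(d_times),       # minimum descent time
            "t1Au": min(a_times),       # minimum ascent time
            "xmin": float(samples.min()),
            "xmax": float(samples.max())}
```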
With these parameters, shown in Table 2, movement 1 can be defined and differentiated from the others, where the flexion velocity Ve is the instantaneous velocity measured throughout the movement and the execution time te is the time during which the velocity exceeds the velocity threshold.
From the graphs obtained during the performance of movement 2, shown in Figure 8, it follows that both the index and the middle finger must be monitored in order to define it. The flexion pattern for this movement is D1, A1, D2, and A2 for the index finger, and no movement for the middle finger. The velocity, execution time, minimum time during descents (D1 and D2) and ascents (A1 and A2), and the sensor value are defined as described for movement 1.
Movement 3, shown in Figure 9, differs from the other two in that the velocity must be zero: it is a static position maintained for a certain time. To identify it, we examine the values of the index and middle finger sensors, which are proportional to the flexion of the corresponding fingers.
The algorithm for the detection of defined movements evaluates all of the abovementioned parameters, and detects when one of these movements is executed.
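The sketch below illustrates how such a check might be applied to a candidate window of sensor data for movement 1, combining the velocity, time, and value conditions of Table 2 and using the parameter dictionary produced in the calibration sketch above. Reducing the D1-A1-D2-A2 pattern test to counting sign changes of the derivative is a simplification, not the paper's exact procedure.

```python
import numpy as np

FS = 100.0  # Sa/s

def matches_movement1(index_win, middle_win, params: dict) -> bool:
    """Illustrative test of a data window against the movement-1 model."""
    for raw in (index_win, middle_win):        # movement 1 involves both fingers
        win = np.asarray(raw, dtype=float)
        vel = np.gradient(win) * FS
        fast = np.abs(vel) > params["V1u"]     # samples above the velocity threshold
        if not fast.any():
            return False
        if fast.sum() / FS > params["t1u"]:    # execution time te must stay below t1u
            return False
        if win.min() < params["xmin"] or win.max() > params["xmax"]:
            return False                       # sensor values outside the learned range
        # D1-A1-D2-A2 implies the fast part of the derivative changes sign 3 times
        if np.count_nonzero(np.diff(np.sign(vel[fast]))) < 3:
            return False
    return True
```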

3. Experiments and Results

The test consists of carrying out the movements shown in Table 1, in the same order, with a flat hand position between them, in the experimental scenario (Figure 10). The correct order of execution is therefore: flat position, movement 1, flat position, movement 2, flat position, movement 3, flat position, movement 4, flat position, movement 5, flat position.
Movements 1, 2, and 3 were selected to be detected by the algorithm, while movements 4 and 5 were introduced to verify that they are not detected as any of the three selected ones. The two additional movements are similar to movements 1 and 2, with only small differences between them.
First, the data are collected from the glove. Then, they are analyzed by the algorithm to detect the movements that will be interpreted as commands for the robot. These orders are then sent to the collaborative robot.
The test was carried out 10 times by each of 10 right-handed volunteers (five men, five women). All participants gave informed consent for the experiments. None reported physical limitations that would affect their ability to perform the task.
The characteristic parameters of each movement are calculated from three tests performed by the same person. Since these parameters are specific to each person, 10 sets of patterns were obtained for each type of movement, one per person.
Movements 1, 2, and 3 must be detected by the algorithm, while movements 4 and 5 should not be classified as selected movements. As shown in Table 3, movement 1 was detected with a precision—the percentage of positive predictions that were correct—of 0.99, and a recall—the percentage of actual positive cases that were recognized—of 0.98. Movement 4 was identified as movement 1 only 1% of the time. Movement 2 was detected with a precision of 0.73 and a recall of 0.87, and movement 4 was recognized as movement 2 33% of the time. Movement 5 was never mistaken for another movement, and movement 3 was detected with a precision of 1.00 and a recall of 0.97.
The F1-score for movements 1 and 3 is 0.98, while for movement 2 it is considerably lower, at 0.79.
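As a check on these figures, the short sketch below recomputes the aggregate scores of Table 3 from the detection counts, given that each of the three command movements was performed 100 times in total (10 volunteers, 10 repetitions each).

```python
def prf(tp: int, fp: int, total_actual: int):
    """Precision, recall, and F1 for one movement class."""
    fn = total_actual - tp
    precision = tp / (tp + fp)
    recall = tp / total_actual
    f1 = 2 * tp / (2 * tp + fp + fn)
    return precision, recall, f1

print(prf(tp=98, fp=1,  total_actual=100))  # movement 1: ~0.99, 0.98, ~0.98
print(prf(tp=87, fp=33, total_actual=100))  # movement 2: ~0.73, 0.87, ~0.79
print(prf(tp=97, fp=0,  total_actual=100))  # movement 3: 1.00, 0.97, ~0.98
```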

4. Discussion

Movement 4 is detected as movement 2 or movement 1 because of their similarity, as explained in the previous sections. Despite the study of different patterns, times, and speeds, movement 4 is detected as movement 2 in 33% of its executions (Table 3). Whenever movement 3 was not detected, it was because the static position was not held for a sufficient time.
Reviewing the results, it can be concluded that the effectiveness of the algorithm depends largely on the person performing the test, as shown in Appendix A. Results with surgeons are expected to be better, because their specific training gives them greater motor skills [27]. The tests have shown that the newly developed algorithm can adequately identify the three defined movements within a series of different continuous movements. Movement recognition is precise because identification is based not only on the initial and final poses, but also on intermediate positions and speeds, which are continuously analyzed to determine whether their pattern matches the model. Different filters are also introduced to make the dynamic gesture recognition algorithm more reliable. The patterns obtained with the sensing glove contain sufficient information to be robustly identified, and failures are prevented in cases where the positions are similar to those of the model but the execution speed of the movement is different.
One of the purposes of this study was to test the validity of our non-specific glove, to demonstrate the possibility of using this kind of device, and to define the specifications of a HALS-dedicated textile glove for use in future studies. In future work, glove-based hand motion sensing could be fused with other sensing modalities, such as artificial vision, to make the system more robust.

5. Conclusions

Most current surgical robots are not suitable for HALS operations. Their teleoperated nature prevents their application in operations where the surgeon is in direct contact with the patient. In this scenario, a robot co-worker is needed that cooperates closely with the surgeon in order to emulate the interaction with a human assistant. A natural communication interface between surgeon and robot is crucial in this context. This paper tackles the design of a dynamic gesture recognition algorithm, based on a sensor glove, that identifies the commands given by the surgeon’s hand inside the patient’s abdominal cavity. Three different dynamic gestures have been predefined: to point the robot to where to suture, to order it to center the endoscope image, and to stretch the thread. Automated procedures for carrying out all of these tasks have been reported in the literature. The algorithm designed to recognize these gestures continuously analyzes the timing and bending speed of the index and middle fingers, and tries to match them with the patterns previously recorded for a particular operator.
The experiments conducted with 10 different volunteers show a good recognition rate and good time performance. However, considering its application in surgical operations, there is room for improvement. Although this study has considered only the sensing glove, an additional hand motion sensor would need to be added in order to make the system completely reliable. Furthermore, other important issues, such as safety and electromagnetic compatibility, should be addressed in future work.

Acknowledgments

This research has been partially funded by the Spanish State Secretariat for Research, Development and Innovation, through project DPI2013-47196-C3-3-R.

Author Contributions

Alessandro Tognetti and Nicola Carbonaro conceived and designed the sensing glove. Lidia Santos carried out the experiments. Lidia Santos processed and analyzed the data. Eusebio De la Fuente, José Luis González, Alessandro Tognetti and Nicola Carbonaro gave advice and discussion. Lidia Santos, Eusebio De la Fuente, Alessandro Tognetti and Nicola Carbonaro wrote the paper. Juan Carlos Fraile, Javier Pérez-Turiel, Alessandro Tognetti and Nicola Carbonaro supervised the entire work. All authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Results for each volunteer.
Table A1. Results Volunteer 1.

Volunteer 1 | Actual 1 | Actual 2 | Actual 3 | Actual 4 | Actual 5 | Precision | Recall | F1-Score
Predicted 1 | 10 | 0 | 0 | 0 | 0 | 1.00 | 1.00 | 1.00
Predicted 2 | 0 | 8 | 0 | 0 | 0 | 1.00 | 0.80 | 0.89
Predicted 3 | 0 | 0 | 10 | 0 | 0 | 1.00 | 1.00 | 1.00

Table A2. Results Volunteer 2.

Volunteer 2 | Actual 1 | Actual 2 | Actual 3 | Actual 4 | Actual 5 | Precision | Recall | F1-Score
Predicted 1 | 10 | 0 | 0 | 0 | 0 | 1.00 | 1.00 | 1.00
Predicted 2 | 0 | 10 | 0 | 2 | 0 | 0.83 | 1.00 | 0.91
Predicted 3 | 0 | 0 | 10 | 0 | 0 | 1.00 | 1.00 | 1.00

Table A3. Results Volunteer 3.

Volunteer 3 | Actual 1 | Actual 2 | Actual 3 | Actual 4 | Actual 5 | Precision | Recall | F1-Score
Predicted 1 | 10 | 0 | 0 | 0 | 0 | 1.00 | 1.00 | 1.00
Predicted 2 | 0 | 8 | 0 | 2 | 0 | 0.80 | 0.80 | 0.80
Predicted 3 | 0 | 0 | 10 | 0 | 0 | 1.00 | 1.00 | 1.00

Table A4. Results Volunteer 4.

Volunteer 4 | Actual 1 | Actual 2 | Actual 3 | Actual 4 | Actual 5 | Precision | Recall | F1-Score
Predicted 1 | 10 | 0 | 0 | 0 | 0 | 1.00 | 1.00 | 1.00
Predicted 2 | 0 | 9 | 0 | 5 | 0 | 0.64 | 0.90 | 0.75
Predicted 3 | 0 | 0 | 9 | 0 | 0 | 1.00 | 0.90 | 0.95

Table A5. Results Volunteer 5.

Volunteer 5 | Actual 1 | Actual 2 | Actual 3 | Actual 4 | Actual 5 | Precision | Recall | F1-Score
Predicted 1 | 10 | 0 | 0 | 0 | 0 | 1.00 | 1.00 | 1.00
Predicted 2 | 0 | 9 | 0 | 2 | 0 | 0.82 | 0.90 | 0.86
Predicted 3 | 0 | 0 | 10 | 0 | 0 | 1.00 | 1.00 | 1.00

Table A6. Results Volunteer 6.

Volunteer 6 | Actual 1 | Actual 2 | Actual 3 | Actual 4 | Actual 5 | Precision | Recall | F1-Score
Predicted 1 | 10 | 0 | 0 | 0 | 0 | 1.00 | 1.00 | 1.00
Predicted 2 | 0 | 8 | 0 | 7 | 0 | 0.53 | 0.80 | 0.64
Predicted 3 | 0 | 0 | 10 | 0 | 0 | 1.00 | 1.00 | 1.00

Table A7. Results Volunteer 7.

Volunteer 7 | Actual 1 | Actual 2 | Actual 3 | Actual 4 | Actual 5 | Precision | Recall | F1-Score
Predicted 1 | 10 | 0 | 0 | 0 | 0 | 1.00 | 1.00 | 1.00
Predicted 2 | 0 | 8 | 0 | 9 | 0 | 0.47 | 0.80 | 0.59
Predicted 3 | 0 | 0 | 10 | 0 | 0 | 1.00 | 1.00 | 1.00

Table A8. Results Volunteer 8.

Volunteer 8 | Actual 1 | Actual 2 | Actual 3 | Actual 4 | Actual 5 | Precision | Recall | F1-Score
Predicted 1 | 10 | 0 | 0 | 1 | 0 | 0.91 | 1.00 | 0.95
Predicted 2 | 0 | 10 | 0 | 0 | 0 | 1.00 | 1.00 | 1.00
Predicted 3 | 0 | 0 | 9 | 0 | 0 | 1.00 | 0.90 | 0.95

Table A9. Results Volunteer 9.

Volunteer 9 | Actual 1 | Actual 2 | Actual 3 | Actual 4 | Actual 5 | Precision | Recall | F1-Score
Predicted 1 | 8 | 0 | 0 | 0 | 0 | 1.00 | 0.80 | 0.89
Predicted 2 | 0 | 7 | 0 | 0 | 0 | 1.00 | 0.70 | 0.82
Predicted 3 | 0 | 0 | 9 | 0 | 0 | 1.00 | 0.90 | 0.95

Table A10. Results Volunteer 10.

Volunteer 10 | Actual 1 | Actual 2 | Actual 3 | Actual 4 | Actual 5 | Precision | Recall | F1-Score
Predicted 1 | 10 | 0 | 0 | 0 | 0 | 1.00 | 1.00 | 1.00
Predicted 2 | 0 | 10 | 0 | 6 | 0 | 0.63 | 1.00 | 0.77
Predicted 3 | 0 | 0 | 10 | 0 | 0 | 1.00 | 1.00 | 1.00

References

  1. Jayne, D.G.; Thorpe, H.C.; Copeland, J.; Quirke, P.; Brown, J.M.; Guillou, P.J. Five-year follow-up of the Medical Research Council CLASICC trial of laparoscopically assisted versus open surgery for colorectal cancer. Br. J. Surg. 2010, 97, 1638–1645. [Google Scholar] [CrossRef] [PubMed]
  2. LaRose, D.; Taylor, R.H.; Funda, J.; Eldridge, B.; Gomory, S.; Talamini, M.; Kavoussi, L.; Anderson, J.; Gruben, K. A Telerobotic Assistant for Laparoscopic Surgery. IEEE Eng. Med. Biol. Mag. 1995, 14, 279–288. [Google Scholar]
  3. Bauzano, E.; Garcia-Morales, I.; del Saz-Orozco, P.; Fraile, J.C.; Muñoz, V.F. A minimally invasive surgery robotic assistant for HALS-SILS techniques. Comput. Methods Programs Biomed. 2013, 112, 272–283. [Google Scholar] [CrossRef] [PubMed]
  4. Kim, K.Y.; Song, H.S.; Suh, J.W.; Lee, J.J. A novel surgical manipulator with workspace-conversion ability for telesurgery. IEEE/ASME Trans. Mechatron. 2013, 18, 200–211. [Google Scholar] [CrossRef]
  5. Estebanez, B.; del Saz-Orozco, P.; García-Morales, I.; Muñoz, V.F. Multimodal Interface for a Surgical Robotic Assistant: Surgical Maneuvers Recognition Approach. Rev. Iberoam. Autom. Inform. Ind. 2011, 8, 24–34. [Google Scholar] [CrossRef]
  6. Song, Y.; Demirdjian, D.; Davis, R. Continuous body and hand gesture recognition for natural human-computer interaction. Int. Jt. Conf. Artif. Intell. 2015, 2, 4212–4216. [Google Scholar] [CrossRef]
  7. Ganokratanaa, T.; Pumrin, S. The vision-based hand gesture recognition using blob analysis. In Proceedings of the 2017 International Conference on Digital Arts, Media and Technology (ICDAMT), Chiang Mai, Thailand, 1–4 March 2017; pp. 336–341. [Google Scholar]
  8. Asadi-Aghbolaghi, M.; Clapes, A.; Bellantonio, M.; Escalante, H.J.; Ponce-Lopez, V.; Baro, X.; Guyon, I.; Kasaei, S.; Escalera, S. A Survey on Deep Learning Based Approaches for Action and Gesture Recognition in Image Sequences. In Proceedings of the 2017 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017), Washington, DC, USA, 30 May–3 June 2017; pp. 476–483. [Google Scholar]
  9. Alon, J.; Athitsos, V.; Yuan, Q.; Sclaroff, S. Simultaneous localization and recognition of dynamic hand gestures. In Proceedings of the 2005 WACV/MOTIONS ’05 Volume 1. Seventh IEEE Workshops on Application of Computer Vision, Breckenridge, CO, USA, 5–7 January 2005; pp. 254–260. [Google Scholar]
  10. Suryanarayan, P.; Subramanian, A.; Mandalapu, D. Dynamic hand pose recognition using depth data. In Proceedings of the 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; pp. 3105–3108. [Google Scholar]
  11. Nguyen, B.P.; Tay, W.L.; Chui, C.K. Robust Biometric Recognition from Palm Depth Images for Gloved Hands. IEEE Trans. Hum. Mach. Syst. 2015, 45, 799–804. [Google Scholar] [CrossRef]
  12. Lu, Z.; Chen, X.; Li, Q.; Zhang, X.; Zhou, P. A hand gesture recognition framework and wearable gesture-based interaction prototype for mobile devices. IEEE Trans. Hum. Mach. Syst. 2014, 44, 293–299. [Google Scholar] [CrossRef]
  13. Lorussi, F.; Carbonaro, N.; De Rossi, D.; Paradiso, R.; Veltink, P.; Tognetti, A. Wearable Textile Platform for Assessing Stroke Patient Treatment in Daily Life Conditions. Front. Bioeng. Biotechnol. 2016, 4, 28. [Google Scholar] [CrossRef] [PubMed]
  14. Tognetti, A.; Lorussi, F.; Mura, G.; Carbonaro, N.; Pacelli, M.; Paradiso, R.; Rossi, D. New generation of wearable goniometers for motion capture systems. J. Neuroeng. Rehabil. 2014, 11, 56. [Google Scholar] [CrossRef] [PubMed]
  15. Tognetti, A.; Lorussi, F.; Carbonaro, N.; de Rossi, D. Wearable goniometer and accelerometer sensory fusion for knee joint angle measurement in daily life. Sensors 2015, 15, 28435–28455. [Google Scholar] [CrossRef] [PubMed]
  16. Pacelli, M.; Caldani, L.; Paradiso, R. Performances evaluation of piezoresistive fabric sensors as function of yarn structure. In Proceedings of the 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Osaka, Japan, 3–7 July 2013; pp. 6502–6505. [Google Scholar]
  17. Pacelli, M.; Caldani, L.; Paradiso, R. Textile piezoresistive sensors for biomechanical variables monitoring. In Proceedings of the 2006 International Conference of the IEEE Engineering in Medicine and Biology Society, New York, NY, USA, 30 August–3 September 2006; pp. 5358–5361. [Google Scholar]
  18. Tognetti, A.; Lorussi, F.; Carbonaro, N.; De Rossi, D.; De Toma, G.; Mancuso, C.; Paradiso, R.; Luinge, H.; Reenalda, J.; Droog, E.; et al. Daily-life monitoring of stroke survivors motor performance: The INTERACTION sensing system. In Proceedings of the 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Chicago, IL, USA, 26–30 August 2014; pp. 4099–4102. [Google Scholar]
  19. Carbonaro, N.; Mura, G.D.; Lorussi, F.; Paradiso, R.; De Rossi, D.; Tognetti, A. Exploiting wearable goniometer technology for motion sensing gloves. IEEE J. Biomed. Health Inform. 2014, 18, 1788–1795. [Google Scholar] [CrossRef] [PubMed]
  20. Kranzfelder, M.; Staub, C.; Fiolka, A.; Schneider, A.; Gillen, S.; Wilhelm, D.; Friess, H.; Knoll, A.; Feussner, H. Toward increased autonomy in the surgical OR: Needs, requests, and expectations. Surg. Endosc. 2013, 27, 1681–1688. [Google Scholar] [CrossRef] [PubMed]
  21. Moustris, G.P.; Hiridis, S.C.; Deliparaschos, K.M.; Konstantinidis, K.M. Robust feature tracking in the beating heart for a robotic-guided endoscope. Int. J. Med. Robot. 2011, 7, 375–392. [Google Scholar] [CrossRef] [PubMed]
  22. Wen, R.; Tay, W.L.; Nguyen, B.P.; Chng, C.B.; Chui, C.K. Hand gesture guided robot-assisted surgery based on a direct augmented reality interface. Comput. Methods Programs Biomed. 2014, 116, 68–80. [Google Scholar] [CrossRef] [PubMed]
  23. Weede, O.; Mönnich, H.; Müller, B.; Wörn, H. An intelligent and autonomous endoscopic guidance system for minimally invasive surgery. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011; pp. 5762–5768. [Google Scholar]
  24. Staub, C.; Osa, T.; Knoll, A.; Bauernschmitt, R. Automation of tissue piercing using circular needles and vision guidance for computer aided laparoscopic surgery. In Proceedings of the 2010 IEEE International Conference on Robotics and Automation, Anchorage, AK, USA, 3–7 May 2010; pp. 4585–4590. [Google Scholar]
  25. Shi, H.F.; Payandeh, S. Real-time knotting and unknotting. In Proceedings of the 2007 IEEE International Conference on Robotics and Automation, Roma, Italy, 10–14 April 2007; pp. 2570–2575. [Google Scholar]
  26. Patil, S.; Alterovitz, R. Toward automated tissue retraction in robot-assisted surgery. In Proceedings of the 2010 IEEE International Conference on Robotics and Automation, Anchorage, AK, USA, 3–7 May 2010; pp. 2088–2094. [Google Scholar]
  27. Reznick, R.K.; MacRae, H. Teaching surgical skills: Changes in the wind. N. Engl. J. Med. 2006, 355, 2664–2669. [Google Scholar]
Figure 1. Hand-assisted laparoscopic surgery (HALS) scenario using a robotic assistant.
Figure 2. (a) The goniometers attached to the glove fabric, (b) the sensing glove prototype and the wireless acquisition unit.
Figure 3. (a) Schematic structure of the knitted piezoresistive fabrics (KPF) goniometer. The black stripes represent the two identical piezoresistive layers, while the gray stripe is the insulating layer; (b) The output (ΔR) is proportional to the bending angle (θ); (c) KPF goniometer electrical model and block diagram of the electronics front-end. Two instrumentation amplifiers (INS1 and INS2) and a differential amplifier (DIFF) produce the output ΔV, which is proportional to ΔR and thus to Δθ.
Figure 4. Sensor values during the performance of the movements represented in Table 1 in the same order. (a) person 1, and (b) person 2.
Figure 5. Sensor values during movement 2. There should be no motion in the middle finger, because only the index finger should participate.
Figure 6. (a) Sensor values during movement 2 and (b) movement 4, performed by the same person.
Figure 7. (a) Glove sensor data, which are proportional to the flexion of the finger in movement 1; (b) Velocity of flexion involved in movement 1.
Figure 8. (a) Glove sensor data, which are proportional to the flexion of the finger in movement 2; (b) Velocity of the flexion involved in movement 2.
Figure 9. (a) Glove sensor data, which are proportional to the flexion of the finger in movement 3; (b) Velocity of the flexion involved in movement 3.
Figure 10. Scenario of experiments with a pelvitrainer, which simulates the patient’s abdomen. The collaborative robot is holding the endoscope. The sensing glove is partially viewed on the screen.
Table 1. Selected movements to be detected.

Mov. | Initial Posture | Final Posture | Description | Command
1 | (image i001) | (image i002) | From initial posture to final posture, twice. | To center the image from the endoscope.
2 | (image i003) | (image i004) | From initial posture to final posture, twice. | To indicate a place to suture.
3 | (image i005) | - | Initial posture held for a defined time. | To indicate to stretch the thread.
4 | (image i006) | (image i007) | From initial posture to final posture, twice. | -
5 | (image i008) | (image i009) | From initial posture to final posture, twice. | -
Table 2. Characterization of defined movements.

Mov. | Finger | Flexion Pattern | Flexion Velocity | Execution Time | D Time | A Time | Sensor Value
1 | Index, Middle | D1 A1 D2 A2 | |Ve| > V1u | te < t1u | tD > t1Du | tA > t1Au | xmin < x < xmax
2 | Index | D1 A1 D2 A2 | |Ve| > V2u | te < t2u | tD > t1Du | tA > t1Au | xmin < x < xmax
3 | Index, Middle | - | - | te > t3u | - | - | xmin < x < xmax
Table 3. Total results.

Total | Actual 1 | Actual 2 | Actual 3 | Actual 4 | Actual 5 | Precision | Recall | F1-Score
Predicted 1 | 98 | 0 | 0 | 1 | 0 | 0.99 | 0.98 | 0.98
Predicted 2 | 0 | 87 | 0 | 33 | 0 | 0.73 | 0.87 | 0.79
Predicted 3 | 0 | 0 | 97 | 0 | 0 | 1.00 | 0.97 | 0.98
