Search Results (46)

Search Parameters:
Keywords = Microsoft Kinect One (V2), Microsoft Kinect

21 pages, 9749 KiB  
Article
Enhanced Pose Estimation for Badminton Players via Improved YOLOv8-Pose with Efficient Local Attention
by Yijian Wu, Zewen Chen, Hongxing Zhang, Yulin Yang and Weichao Yi
Sensors 2025, 25(14), 4446; https://doi.org/10.3390/s25144446 - 17 Jul 2025
Viewed by 432
Abstract
With the rapid development of sports analytics and artificial intelligence, accurate human pose estimation in badminton is becoming increasingly important. However, challenges such as the lack of domain-specific datasets and the complexity of athletes’ movements continue to hinder progress in this area. To address these issues, we propose an enhanced pose estimation framework tailored to badminton players, built upon an improved YOLOv8-Pose architecture. In particular, we introduce an efficient local attention (ELA) mechanism that effectively captures fine-grained spatial dependencies and contextual information, thereby significantly improving the keypoint localization accuracy and overall pose estimation performance. To support this study, we construct a dedicated badminton pose dataset comprising 4000 manually annotated samples, captured using a Microsoft Kinect v2 camera. The raw data undergo careful processing and refinement through a combination of depth-assisted annotation and visual inspection to ensure high-quality ground truth keypoints. Furthermore, we conduct an in-depth comparative analysis of multiple attention modules and their integration strategies within the network, offering generalizable insights to enhance pose estimation models in other sports domains. The experimental results show that the proposed ELA-enhanced YOLOv8-Pose model consistently achieves superior accuracy across multiple evaluation metrics, including the mean squared error (MSE), object keypoint similarity (OKS), and percentage of correct keypoints (PCK), highlighting its effectiveness and potential for broader applications in sports vision tasks. Full article
(This article belongs to the Special Issue Computer Vision-Based Human Activity Recognition)
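The OKS and PCK metrics this abstract evaluates against are standard keypoint measures. As an illustration for readers, here is a minimal NumPy sketch of both, following the COCO definition of OKS; the keypoint arrays, per-keypoint sigmas, and pixel threshold below are illustrative values, not taken from the paper.

```python
import numpy as np

def oks(pred, gt, visible, area, sigmas):
    """COCO-style Object Keypoint Similarity between predicted and
    ground-truth keypoints (both (K, 2) pixel arrays)."""
    d2 = np.sum((pred - gt) ** 2, axis=1)            # squared distances
    k2 = (2.0 * sigmas) ** 2                         # per-keypoint falloff
    e = d2 / (2.0 * area * k2 + np.finfo(float).eps)
    v = visible.astype(bool)
    return float(np.mean(np.exp(-e[v])))             # average over labelled keypoints

def pck(pred, gt, visible, threshold_px):
    """Percentage of Correct Keypoints: fraction of visible keypoints
    predicted within `threshold_px` pixels of ground truth."""
    d = np.linalg.norm(pred - gt, axis=1)
    v = visible.astype(bool)
    return float(np.mean(d[v] <= threshold_px))
```

A perfect prediction scores OKS = 1.0 and PCK = 1.0; both degrade as keypoints drift from the ground truth.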

30 pages, 1362 KiB  
Article
Resilient AI in Therapeutic Rehabilitation: The Integration of Computer Vision and Deep Learning for Dynamic Therapy Adaptation
by Egidia Cirillo, Claudia Conte, Alberto Moccardi and Mattia Fonisto
Appl. Sci. 2025, 15(12), 6800; https://doi.org/10.3390/app15126800 - 17 Jun 2025
Viewed by 563
Abstract
Resilient artificial intelligence (Resilient AI) is relevant in many areas where technology needs to adapt quickly to changing and unexpected conditions, such as in the medical, environmental, security, and agrifood sectors. In the case study involving the therapeutic rehabilitation of patients with motor problems, the Resilient AI system is crucial to ensure that systems can effectively respond to changes, maintain high performance, cope with uncertainties and complex variables, and enable the dynamic monitoring and adaptation of therapy in real time. The proposed system integrates advanced technologies, such as computer vision and deep learning models, focusing on non-invasive solutions for monitoring and adapting rehabilitation therapies. The system combines the Microsoft Kinect v3 sensor with MoveNet Thunder – SinglePose, a state-of-the-art deep-learning model for human pose estimation. Kinect’s 3D skeletal tracking and MoveNet’s high-precision 2D keypoint detection together improve the accuracy and reliability of postural analysis. The main objective is to develop an intelligent system that captures and analyzes a patient’s movements in real time using Motion Capture techniques and artificial intelligence (AI) models to improve the effectiveness of therapies. Computer vision tracks human movement, identifying crucial biomechanical parameters and improving the quality of rehabilitation. Full article
(This article belongs to the Special Issue eHealth Innovative Approaches and Applications: 2nd Edition)
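Fusing MoveNet's 2D keypoints with Kinect depth, as described above, amounts to back-projecting each (u, v) detection through the pinhole camera model using the measured depth at that pixel. A minimal sketch; the intrinsics (fx, fy, cx, cy) below are purely illustrative, not the sensor's actual calibration values.

```python
import numpy as np

def backproject(u, v, depth_m, fx, fy, cx, cy):
    """Lift a 2D keypoint (u, v) with its measured depth (metres) into a
    3D point in the camera frame using the pinhole model."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])
```

A keypoint at the principal point maps to (0, 0, depth); off-centre pixels acquire lateral offsets proportional to depth.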

19 pages, 2459 KiB  
Article
Optomechanical Analysis of Gait in Patients with Ankylosing Spondylitis
by Vedran Brnić, Frane Grubišić, Simeon Grazio, Maja Mirković and Igor Gruić
Sensors 2025, 25(6), 1797; https://doi.org/10.3390/s25061797 - 14 Mar 2025
Viewed by 910
Abstract
Ankylosing spondylitis (AS) is a chronic inflammatory rheumatic disease associated with alterations in posture and gait. The aim of this study was to assess the gait of AS patients using pedobarography and a markerless motion capture system. This is the first study of this population to combine these two methods. Twelve AS patients and twelve healthy controls were enrolled in this study. An instrumented gait analysis of both groups was performed using pedobarography and Microsoft Kinect v2. The AS group was significantly older than the controls (p < 0.05). The AS group showed a significantly lower relative pressure distribution in the front-right quadrant (p = 0.01) and a significantly higher relative pressure distribution in the rear-right quadrant (p = 0.05) on the static pedobarography. The AS group also had a higher peak force in the midfoot on the dynamic pedobarography (p < 0.05). The AS group had a significantly shorter stride length (p = 0.01). No significant differences between the groups were found in their hip flexion/extension and adduction/abduction, knee flexion, or ankle dorsiflexion/plantarflexion angles. This study shows significant alterations in the pedobarographic and spatiotemporal, but not in the kinematic, gait parameters of AS patients. These alterations represent a feature of AS and not antalgic adjustments. Rehabilitation programs for AS patients could be tailored according to the results of an instrumented gait analysis and should include balance and gait exercises. Full article
(This article belongs to the Collection Sensors for Gait, Human Movement Analysis, and Health Monitoring)

29 pages, 18651 KiB  
Article
Realization of Impression Evidence with Reverse Engineering and Additive Manufacturing
by Osama Abdelaal and Saleh Ahmed Aldahash
Appl. Sci. 2024, 14(13), 5444; https://doi.org/10.3390/app14135444 - 23 Jun 2024
Cited by 1 | Viewed by 2167
Abstract
Significant advances in reverse engineering and additive manufacturing have the potential to provide a faster, more accurate, and more cost-effective process chain for preserving, analyzing, and presenting forensic impression evidence in both 3D digital and physical forms. The objective of the present research was to evaluate the capabilities and limitations of five 3D scanning technologies, including laser scanning (LS), structured-light (SL) scanning, smartphone (SP) photogrammetry, the Microsoft Kinect v2 RGB-D camera, and the iPhone’s LiDAR (iLiDAR) sensor, for 3D reconstruction of 3D impression evidence. Furthermore, methodologies for 3D reconstruction of latent impressions and visible 2D impressions based on a single 2D photo were proposed. Additionally, the FDM additive manufacturing process was employed to build impression evidence models created by each procedure. The results showed that the SL scanning system generated the highest reconstruction accuracy. Consequently, the SL system was employed as a benchmark to assess the reconstruction quality of other systems. In comparison to the SL data, LS showed the smallest absolute geometrical deviations (0.37 mm), followed by SP photogrammetry (0.78 mm). In contrast, the iLiDAR exhibited the largest absolute deviations (2.481 mm), followed by Kinect v2 (2.382 mm). Additionally, 3D-printed impression replicas demonstrated superior detail compared to Plaster of Paris (POP) casts. The feasibility of reconstructing 2D impressions into 3D models is progressively increasing. Finally, this article explores potential future research directions in this field. Full article
(This article belongs to the Special Issue Advances in 3D Sensing Techniques and Its Applications)
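The geometric deviations reported above (e.g., 0.37 mm for LS against the SL benchmark) are typically computed as point-to-nearest-neighbour distances between a test scan and the reference scan. A brute-force sketch of that comparison; real inspection pipelines use meshes or KD-trees, and the point clouds here are illustrative only.

```python
import numpy as np

def cloud_deviation(test_pts, ref_pts):
    """Mean distance from each test point to its nearest neighbour in the
    reference cloud (brute force; fine for small illustrative clouds)."""
    # Pairwise distance matrix of shape (N_test, N_ref)
    d = np.linalg.norm(test_pts[:, None, :] - ref_pts[None, :, :], axis=2)
    return float(d.min(axis=1).mean())
```

Identical clouds deviate by zero; a uniform 0.3 mm offset yields a mean deviation of 0.3 mm.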

11 pages, 1726 KiB  
Article
Comparing a Portable Motion Analysis System against the Gold Standard for Potential Anterior Cruciate Ligament Injury Prevention and Screening
by Nicolaos Karatzas, Patrik Abdelnour, Jason Philip Aaron Hiro Corban, Kevin Y. Zhao, Louis-Nicolas Veilleux, Stephane G. Bergeron, Thomas Fevens, Hassan Rivaz, Athanasios Babouras and Paul A. Martineau
Sensors 2024, 24(6), 1970; https://doi.org/10.3390/s24061970 - 20 Mar 2024
Cited by 7 | Viewed by 2095
Abstract
Knee kinematics during a drop vertical jump, measured by the Kinect V2 (Microsoft, Redmond, WA, USA), have been shown to be associated with an increased risk of non-contact anterior cruciate ligament injury. The accuracy and reliability of the Microsoft Kinect V2 have yet to be assessed specifically for tracking the coronal and sagittal knee angles of the drop vertical jump. Eleven participants performed three drop vertical jumps that were recorded using both the Kinect V2 and a gold standard motion analysis system (Vicon, Los Angeles, CA, USA). The initial coronal, peak coronal, and peak sagittal angles of the left and right knees were measured by both systems simultaneously. Analysis of the data obtained by the Kinect V2 was performed by our software. The differences in the mean knee angles measured by the Kinect V2 and the Vicon system were non-significant for all parameters except for the peak sagittal angle of the right leg, with a difference of 7.74 degrees and a p-value of 0.008. There was excellent agreement between the Kinect V2 and the Vicon system, with intraclass correlation coefficients consistently over 0.75 for all knee angles measured. Visual analysis revealed a moderate frame-to-frame variability for coronal angles measured by the Kinect V2. The Kinect V2 can be used to capture knee coronal and sagittal angles with sufficient accuracy during a drop vertical jump, suggesting that a Kinect-based portable motion analysis system is suitable to screen individuals for the risk of non-contact anterior cruciate ligament injury. Full article
(This article belongs to the Section Biomedical Sensors)
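The knee angles compared above are derived from tracked hip, knee, and ankle joint positions. A minimal sketch of the underlying vector-angle computation; the joint coordinates are hypothetical, and the authors' software may define the coronal/sagittal decomposition differently.

```python
import numpy as np

def knee_angle_deg(hip, knee, ankle):
    """Angle at the knee (degrees) between the thigh (knee->hip) and
    shank (knee->ankle) vectors; 180 deg corresponds to full extension."""
    thigh = hip - knee
    shank = ankle - knee
    cosang = np.dot(thigh, shank) / (np.linalg.norm(thigh) * np.linalg.norm(shank))
    # Clip guards against arccos domain errors from floating-point noise
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
```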

14 pages, 4823 KiB  
Article
Determining the Presence and Size of Shoulder Lesions in Sows Using Computer Vision
by Shubham Bery, Tami M. Brown-Brandl, Bradley T. Jones, Gary A. Rohrer and Sudhendu Raj Sharma
Animals 2024, 14(1), 131; https://doi.org/10.3390/ani14010131 - 29 Dec 2023
Cited by 8 | Viewed by 2340
Abstract
Shoulder sores predominantly arise in breeding sows and often result in untimely culling. Reported prevalence rates vary significantly, spanning between 5% and 50% depending upon the type of crate flooring inside a farm, the animal’s body condition, or an existing injury that causes lameness. These lesions represent not only a welfare concern but also have an economic impact due to the labor needed for treatment and medication. The objective of this study was to evaluate the use of computer vision techniques in detecting and determining the size of shoulder lesions. A Microsoft Kinect V2 camera captured the top-down depth and RGB images of sows in farrowing crates. The RGB images were collected at a resolution of 1920 × 1080. To ensure the best view of the lesions, images were selected with sows lying on their right and left sides with all legs extended. A total of 824 RGB images from 70 sows with lesions at various stages of development were identified and annotated. Three deep learning-based object detection models, YOLOv5, YOLOv8, and Faster-RCNN, pre-trained with the COCO and ImageNet datasets, were implemented to localize the lesion area. YOLOv5 was the best predictor as it was able to detect lesions with an mAP@0.5 of 0.92. To estimate the lesion area, lesion pixel segmentation was carried out on the localized region using traditional image processing techniques like Otsu’s binarization and adaptive thresholding alongside DL-based segmentation models based on U-Net architecture. In conclusion, this study demonstrates the potential of computer vision techniques in effectively detecting and assessing the size of shoulder lesions in breeding sows, providing a promising avenue for improving sow welfare and reducing economic losses. Full article
(This article belongs to the Special Issue 2nd U.S. Precision Livestock Farming Conference)
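Otsu's binarization, one of the thresholding techniques used above for lesion pixel segmentation, selects the grayscale level that maximises the between-class variance of foreground and background. A from-scratch sketch on a synthetic image; production code would normally call `cv2.threshold` with the `THRESH_OTSU` flag instead.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the Otsu threshold for a uint8 image: the level that
    maximises between-class variance of the two resulting classes."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                      # intensity probabilities
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()      # class weights
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (levels[:t] * p[:t]).sum() / w0  # class means
        mu1 = (levels[t:] * p[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

Pixels at or above the returned threshold can then be counted and, together with the depth image, converted into a physical lesion area.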

24 pages, 12290 KiB  
Article
METRIC—Multi-Eye to Robot Indoor Calibration Dataset
by Davide Allegro, Matteo Terreran and Stefano Ghidoni
Information 2023, 14(6), 314; https://doi.org/10.3390/info14060314 - 29 May 2023
Cited by 1 | Viewed by 2677
Abstract
Multi-camera systems are an effective solution for perceiving large areas or complex scenarios with many occlusions. In such a setup, an accurate camera network calibration is crucial in order to localize scene elements with respect to a single reference frame shared by all the viewpoints of the network. This is particularly important in applications such as object detection and people tracking. Multi-camera calibration is also a critical requirement in several robotics scenarios, particularly those involving a robotic workcell equipped with a manipulator surrounded by multiple sensors. Within this scenario, the robot-world hand-eye calibration is an additional crucial element for determining the exact position of each camera with respect to the robot, in order to provide information about the surrounding workspace directly to the manipulator. Despite the importance of the calibration process in the two scenarios outlined above, namely (i) a camera network, and (ii) a camera network with a robot, there is a lack of standard datasets available in the literature to evaluate and compare calibration methods. Moreover, they are usually treated separately and tested on dedicated setups. In this paper, we propose a general standard dataset acquired in a robotic workcell where calibration methods can be evaluated in two use cases: camera network calibration and robot-world hand-eye calibration. The Multi-Eye To Robot Indoor Calibration (METRIC) dataset consists of over 10,000 synthetic and real images of ChArUco and checkerboard patterns, each one rigidly attached to the robot end-effector, which was moved in front of four cameras surrounding the manipulator from different viewpoints during the image acquisition. The real images in the dataset include several multi-view image sets captured by three different types of sensor networks: Microsoft Kinect V2, Intel RealSense Depth D455 and Intel RealSense LiDAR L515, to evaluate their advantages and disadvantages for calibration. Furthermore, in order to accurately analyze the effect of camera-robot distance on calibration, we acquired a comprehensive synthetic dataset, with related ground truth, with three different camera network setups corresponding to three levels of calibration difficulty depending on the cell size. An additional contribution of this work is to provide a comprehensive evaluation of state-of-the-art calibration methods using our dataset, highlighting their strengths and weaknesses, in order to outline two benchmarks for the two aforementioned use cases. Full article
(This article belongs to the Special Issue Computer Vision, Pattern Recognition and Machine Learning in Italy)

14 pages, 1755 KiB  
Article
RGB-Depth Camera-Based Assessment of Motor Capacity: Normative Data for Six Standardized Motor Tasks
by Hanna Marie Röhling, Karen Otte, Sophia Rekers, Carsten Finke, Rebekka Rust, Eva-Maria Dorsch, Behnoush Behnia, Friedemann Paul and Tanja Schmitz-Hübsch
Int. J. Environ. Res. Public Health 2022, 19(24), 16989; https://doi.org/10.3390/ijerph192416989 - 17 Dec 2022
Cited by 5 | Viewed by 2113
Abstract
Background: Instrumental motion analysis constitutes a promising development in the assessment of motor function in clinical populations affected by movement disorders. To foster implementation and facilitate interpretation of respective outcomes, we aimed to establish normative data of healthy subjects for a markerless RGB-Depth camera-based motion analysis system and to illustrate their use. Methods: We recorded 133 healthy adults (56% female) aged 20 to 60 years with an RGB-Depth camera-based motion analysis system. Forty-three spatiotemporal parameters were extracted from six short, standardized motor tasks—including three gait tasks, stepping in place, standing-up and sitting down, and a postural control task. Associations with confounding factors, height, weight, age, and sex were modelled using a predictive linear regression approach. A z-score normalization approach was provided to improve usability of the data. Results: We reported descriptive statistics for each spatiotemporal parameter (mean, standard deviation, coefficient of variation, quartiles). Robust confounding associations emerged for step length and step width in comfortable speed gait only. Accessible normative data usage was lastly exemplified with recordings from one randomly selected individual with multiple sclerosis. Conclusion: We provided normative data for an RGB depth camera-based motion analysis system covering broad aspects of motor capacity. Full article
(This article belongs to the Special Issue Neuromuscular Control of Human Movement)
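The normative-data approach described above, predicting each spatiotemporal parameter from confounders (height, weight, age, sex) with a linear model and then z-scoring a subject's deviation from that prediction, can be sketched as follows. The confounder matrix and step-length values are synthetic, not the study's data.

```python
import numpy as np

def fit_norms(X, y):
    """Fit y ~ X (with intercept) by least squares; return coefficients
    and the residual standard deviation used for z-scoring."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return coef, float(resid.std(ddof=A.shape[1]))

def z_score(x_subject, y_subject, coef, sd):
    """Deviation of a subject's value from the confounder-predicted norm,
    in units of the normative residual standard deviation."""
    expected = coef[0] + np.dot(coef[1:], x_subject)
    return float((y_subject - expected) / sd)
```

A subject whose value sits one residual standard deviation above the norm predicted for their height scores z = 1.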

17 pages, 6000 KiB  
Article
3D Measurement of Large Deformations on a Tensile Structure during Wind Tunnel Tests Using Microsoft Kinect V2
by Daniele Marchisotti, Paolo Schito and Emanuele Zappa
Sensors 2022, 22(16), 6149; https://doi.org/10.3390/s22166149 - 17 Aug 2022
Viewed by 1912
Abstract
Wind tunnel tests often require deformation and displacement measurements to determine the behavior of structures and evaluate their response to wind excitation. However, common measurement techniques can measure these quantities only at a few specific points. Moreover, the sensors involved, such as linear variable differential transformers (LVDTs) or fiber optics, usually influence the downstream and upstream air fluxes and the structure under test. To characterize the displacement of the entire structure rather than a few points, this article presents the application of 3D cameras during a wind tunnel test. To validate this measurement technique in this application field, a wind tunnel test was executed. Three Kinect V2 depth sensors were used for 3D displacement measurement of a test structure that did not present any optical marker or feature. The results highlighted that, using a low-cost and user-friendly measurement system, it is possible to obtain 3D measurements in a volume of several cubic meters (a 4 m × 4 m × 4 m wind tunnel chamber) without significant disturbance of the wind flux, by means of a simple calibration of the sensors executed directly inside the wind tunnel. The results showed a displacement directed toward the interior of the structure for the side most exposed to wind, while the sides parallel to the wind flux were more subject to vibrations and showed an outward average displacement. These results are consistent with the expected behavior of the structure. Full article

17 pages, 4003 KiB  
Article
Reliability of 3D Depth Motion Sensors for Capturing Upper Body Motions and Assessing the Quality of Wheelchair Transfers
by Alicia Marie Koontz, Ahlad Neti, Cheng-Shiu Chung, Nithin Ayiluri, Brooke A. Slavens, Celia Genevieve Davis and Lin Wei
Sensors 2022, 22(13), 4977; https://doi.org/10.3390/s22134977 - 30 Jun 2022
Cited by 4 | Viewed by 2318
Abstract
Wheelchair users must use proper technique when performing sitting-pivot-transfers (SPTs) to prevent upper extremity pain and discomfort. Current methods to analyze the quality of SPTs include the TransKinect, a combination of machine learning (ML) models, and the Transfer Assessment Instrument (TAI), to automatically score the quality of a transfer using Microsoft Kinect V2. With the discontinuation of the V2, there is a necessity to determine the compatibility of other commercial sensors. The Intel RealSense D435 and the Microsoft Kinect Azure were compared against the V2 for inter- and intra-sensor reliability. A secondary analysis with the Azure was also performed to analyze its performance with the existing ML models used to predict transfer quality. The intra- and inter-sensor reliability was higher for the Azure and V2 (n = 7; ICC = 0.63 to 0.92) than the RealSense and V2 (n = 30; ICC = 0.13 to 0.7) for four key features. Additionally, the V2 and the Azure both showed high agreement with each other on the ML outcomes but not against a ground truth. Therefore, the ML models may need to be retrained ideally with the Azure, as it was found to be a more reliable and robust sensor for tracking wheelchair transfers in comparison to the V2. Full article
(This article belongs to the Special Issue Robotics and Sensors for Rehabilitation)
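The ICC values quoted above measure absolute agreement between sensors on repeated measurements. Here is a minimal implementation of ICC(2,1) (two-way random effects, absolute agreement, single measurement) on an n-subjects × k-sensors matrix; whether the authors used this exact ICC form is an assumption on our part.

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single
    measurement, for an (n subjects, k raters/sensors) matrix."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-sensor means
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between sensors
    sse = (np.sum((ratings - grand) ** 2)
           - k * np.sum((row_means - grand) ** 2)
           - n * np.sum((col_means - grand) ** 2))
    mse = sse / ((n - 1) * (k - 1))                        # residual
    return float((msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n))
```

Two sensors that agree exactly score ICC = 1; disagreement pulls the coefficient toward 0.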

22 pages, 15610 KiB  
Article
Evaluating the Accuracy of the Azure Kinect and Kinect v2
by Gregorij Kurillo, Evan Hemingway, Mu-Lin Cheng and Louis Cheng
Sensors 2022, 22(7), 2469; https://doi.org/10.3390/s22072469 - 23 Mar 2022
Cited by 81 | Viewed by 10692
Abstract
The Azure Kinect represents the latest generation of Microsoft Kinect depth cameras. Of interest in this article is the depth and spatial accuracy of the Azure Kinect and how it compares to its predecessor, the Kinect v2. In one experiment, the two sensors are used to capture a planar whiteboard at 15 locations in a grid pattern with laser scanner data serving as ground truth. A set of histograms reveals the temporal-based random depth error inherent in each Kinect. Additionally, a two-dimensional cone of accuracy illustrates the systematic spatial error. At distances greater than 2.5 m, we find the Azure Kinect to have improved accuracy in both spatial and temporal domains as compared to the Kinect v2, while for distances less than 2.5 m, the spatial and temporal accuracies were found to be comparable. In another experiment, we compare the distribution of random depth error between each Kinect sensor by capturing a flat wall across the field of view in horizontal and vertical directions. We find the Azure Kinect to have improved temporal accuracy over the Kinect v2 in the range of 2.5 to 3.5 m for measurements close to the optical axis. The results indicate that the Azure Kinect is a suitable substitute for Kinect v2 in 3D scanning applications. Full article
(This article belongs to the Special Issue Feature Papers in the Sensing and Imaging Section 2021)
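The depth errors above are estimated by imaging a planar target (whiteboard or wall) and examining the residuals after a plane fit: the systematic part shows up in the fitted plane, the random part in the residual spread. A least-squares sketch on synthetic points, not the paper's data.

```python
import numpy as np

def plane_residual_std(points):
    """Fit z = a*x + b*y + c to an (N, 3) cloud by least squares and
    return the standard deviation of the out-of-plane residuals."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coef, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    resid = points[:, 2] - A @ coef
    return float(resid.std())
```

A perfectly planar cloud yields near-zero residual spread; zero-mean depth noise added to the same cloud is recovered as the residual standard deviation.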

21 pages, 1148 KiB  
Article
Balance Measurement Using Microsoft Kinect v2: Towards Remote Evaluation of Patient with the Functional Reach Test
by Ines Ayed, Antoni Jaume-i-Capó, Pau Martínez-Bueso, Arnau Mir and Gabriel Moyà-Alcover
Appl. Sci. 2021, 11(13), 6073; https://doi.org/10.3390/app11136073 - 30 Jun 2021
Cited by 10 | Viewed by 2765
Abstract
To prevent falls, it is important to measure periodically the balance ability of an individual using reliable clinical tests. As Red Green Blue Depth (RGBD) devices have been increasingly used for balance rehabilitation at home, they may also be used to assess objectively the balance ability and determine the effectiveness of a therapy. For this, we developed a system based on the Microsoft Kinect v2 for measuring the Functional Reach Test (FRT); one of the most used balance clinical tools to predict falls. Two experiments were conducted to compare the FRT measures computed by our system using the Microsoft Kinect v2 with those obtained by the standard method, i.e., manually. In terms of validity, we found a very strong correlation between the two methods (r = 0.97 and r = 0.99 (p < 0.05), for experiments 1 and 2, respectively). However, we needed to correct the measurements using a linear model to fit the data obtained by the Kinect system. Consequently, a linear regression model has been applied and examining the regression assumptions showed that the model works well for the data. Applying the paired t-test to the data after correction indicated that there is no statistically significant difference between the measurements obtained by both methods. As for the reliability of the test, we obtained good to excellent within repeatability of the FRT measurements tracked by Kinect (ICC = 0.86 and ICC = 0.99, for experiments 1 and 2, respectively). These results suggested that the Microsoft Kinect v2 device is reliable and adequate to calculate the standard FRT. Full article
(This article belongs to the Special Issue Tele-Rehabilitation Robotics)
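The linear correction described above maps raw Kinect reach distances onto the manually measured ones before agreement is assessed. A sketch with `np.polyfit` on synthetic measurements (the slope and intercept below are illustrative, not the study's fitted model):

```python
import numpy as np

def fit_correction(kinect, manual):
    """Least-squares line mapping Kinect FRT measures onto manual ones."""
    slope, intercept = np.polyfit(kinect, manual, 1)
    return slope, intercept

def apply_correction(kinect, slope, intercept):
    """Apply the fitted linear correction to raw Kinect measures."""
    return slope * np.asarray(kinect) + intercept

def pearson_r(a, b):
    """Pearson correlation between two measurement series."""
    return float(np.corrcoef(a, b)[0, 1])
```

After correction, a systematic scale-and-offset bias between the two methods disappears, which is why the paired t-test in the study no longer detects a difference.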

10 pages, 1092 KiB  
Article
Facial Self-Touching and the Propagation of COVID-19: The Role of Gloves in the Dental Practice
by María Carrillo-Díaz, Laura Lacomba-Trejo, Martín Romero-Maroto and María José González-Olmo
Int. J. Environ. Res. Public Health 2021, 18(13), 6983; https://doi.org/10.3390/ijerph18136983 - 29 Jun 2021
Cited by 6 | Viewed by 2766
Abstract
Background: Although facial self-touching is a possible source of transmission of SARS-CoV-2, its role in dental practice has not been studied. Factors such as anxiety symptoms or threat perception of COVID-19 may increase the possibility of contagion. The objective was to compare the impact of control measures, such as gloves or signs, on reducing facial self-touching. Methods: An intra-subject design was undertaken with 150 adults. The patients’ movements in the waiting room were monitored with Microsoft Kinect software on three occasions: without any control measures, using plastic gloves, or using advisory signs against self-touching. Additionally, the participants completed the State-Anxiety subscale of the STAI and the BIP-Q5 (Brief Illness Perception Questionnaire); their blood pressure and heart rate were recorded. Results: The lowest incidence of facial self-touching occurred in the experimental condition in which gloves were introduced. Subjects with elevated anxiety symptoms performed more facial self-touching regardless of the control measures. However, threat perception of COVID-19 was negatively associated with facial self-touching. Conclusions: The use of gloves is a useful control measure for reducing facial touching. However, people with anxiety symptoms, regardless of whether they have greater threat perception of COVID-19, exhibit more facial touching. Full article

15 pages, 2385 KiB  
Article
Towards a Live Feedback Training System: Interchangeability of Orbbec Persee and Microsoft Kinect for Exercise Monitoring
by Verena Venek, Wolfgang Kremser and Thomas Stöggl
Designs 2021, 5(2), 30; https://doi.org/10.3390/designs5020030 - 15 Apr 2021
Cited by 9 | Viewed by 4571
Abstract
Many existing motion sensing applications in research, entertainment and exercise monitoring are based on the Microsoft Kinect and its skeleton tracking functionality. With the Kinect’s development and production halted, researchers and system designers are in need of a suitable replacement. We investigated the interchangeability of the discontinued Kinect v2 and the all-in-one, image-based motion tracking system Orbbec Persee for use in an exercise monitoring system prototype called ILSE. Nine functional training exercises were performed by six healthy subjects in front of both systems simultaneously. Comparing the systems’ internal tracking states, from ‘not tracked’ to ‘tracked’, showed that the Persee system is more confident during motion sequences, while the Kinect is more confident for hip and trunk joint positions. Assessing the skeleton tracking robustness, the Persee’s tracking of body segment lengths was more consistent. Furthermore, we used both skeleton datasets as input for the ILSE exercise monitoring, including posture recognition and repetition counting. Persee data from exercises with lateral movement and in uncovered full-body frontal view provided the same results as Kinect data. The Persee also performed best with quasi-static lower-limb motions and tight-fitting clothing. With these limitations in mind, we find that the Orbbec Persee is a suitable replacement for the Microsoft Kinect for motion sensing within the ILSE exercise monitoring system. Full article
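Repetition counting of the kind mentioned above can be implemented with simple hysteresis on a joint trajectory, e.g., the vertical hip position during squats. A sketch with illustrative thresholds; the ILSE system's actual counting logic is not described in the abstract.

```python
def count_reps(signal, low, high):
    """Count exercise repetitions with hysteresis: a rep is registered
    each time the signal dips below `low` and then rises above `high`."""
    reps, descended = 0, False
    for v in signal:
        if not descended and v < low:
            descended = True          # reached the bottom of the movement
        elif descended and v > high:
            reps += 1                 # back up: one full repetition
            descended = False
    return reps
```

The gap between `low` and `high` prevents sensor jitter near a single threshold from being counted as extra repetitions.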

14 pages, 4559 KiB  
Article
A Practical and Effective Layout for a Safe Human-Robot Collaborative Assembly Task
by Leonardo Sabatino Scimmi, Matteo Melchiorre, Mario Troise, Stefano Mauro and Stefano Pastorelli
Appl. Sci. 2021, 11(4), 1763; https://doi.org/10.3390/app11041763 - 17 Feb 2021
Cited by 48 | Viewed by 5364
Abstract
This work describes a layout to carry out a demonstrative assembly task, during which a collaborative robot performs pick-and-place tasks to supply an operator with the parts that he/she has to assemble. In this scenario, the robot and operator share the workspace, and a real-time collision avoidance algorithm is implemented to modify the planned trajectories of the robot, avoiding any collision with the human worker. The movements of the operator are tracked by two Microsoft Kinect v2 sensors to overcome problems related to occlusions and the poor perception of a single camera. The data obtained by the two Kinect sensors are combined and then given as input to the collision avoidance algorithm. The experimental results show the effectiveness of the collision avoidance algorithm and the significant gain in terms of task times that the highest level of human-robot collaboration can bring. Full article
(This article belongs to the Special Issue Smart Robots for Industrial Applications)
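Real-time collision avoidance of this kind is often built on artificial potential fields: the planned motion attracts the robot toward its goal while the tracked human adds a repulsive term inside an influence radius. The following is a generic sketch of that idea, not the authors' algorithm; all gains and distances are illustrative.

```python
import numpy as np

def avoidance_step(robot_pos, goal, human_pos, step=0.05,
                   influence=0.5, gain=0.1):
    """One trajectory step: attract toward the goal, repel from the
    tracked human whenever the human is within `influence` metres."""
    to_goal = goal - robot_pos
    v = step * to_goal / (np.linalg.norm(to_goal) + 1e-9)  # attractive term
    away = robot_pos - human_pos
    d = np.linalg.norm(away)
    if d < influence:
        # Repulsion grows as the human gets closer to the robot
        v += gain * (1.0 / d - 1.0 / influence) * away / d
    return robot_pos + v
```

With the human far away the robot steps straight toward the goal; a nearby human deflects the step away from the predicted collision.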
