Search Results (21)

Search Parameters:
Keywords = reconstruction of skeleton joints

22 pages, 9837 KB  
Article
SSR-HMR: Skeleton-Aware Sparse Node-Based Real-Time Human Motion Reconstruction
by Linhai Li, Jiayi Lin and Wenhui Zhang
Electronics 2025, 14(18), 3664; https://doi.org/10.3390/electronics14183664 - 16 Sep 2025
Viewed by 659
Abstract
The growing demand for real-time human motion reconstruction in Virtual Reality (VR), Augmented Reality (AR), and the Metaverse requires high accuracy with minimal hardware. This paper presents SSR-HMR, a skeleton-aware, sparse node-based method for full-body motion reconstruction from limited inputs. The approach incorporates a lightweight spatiotemporal graph convolutional module, a torso pose refinement design to mitigate orientation drift, and kinematic tree-based optimization to enhance end-effector positioning accuracy. Smooth motion transitions are achieved via a multi-scale velocity loss. Experiments demonstrate that SSR-HMR achieves high-accuracy reconstruction, with mean joint and end-effector position errors of 1.06 cm and 0.52 cm, respectively, while operating at 267 FPS on a CPU. Full article
(This article belongs to the Special Issue AI Models for Human-Centered Computer Vision and Signal Analysis)
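
A minimal sketch of the multi-scale velocity loss mentioned in the abstract, assuming joint positions of shape (T, J, 3) and hypothetical temporal strides of 1, 2 and 4; this is an illustration, not the authors' implementation.

```python
import numpy as np

def multi_scale_velocity_loss(pred, target, strides=(1, 2, 4)):
    """Mean absolute difference between predicted and ground-truth joint
    velocities computed at several temporal strides (illustrative sketch).

    pred, target: arrays of shape (T, J, 3) holding joint positions.
    strides: assumed temporal scales; the paper's exact scales are not given here.
    """
    loss = 0.0
    for s in strides:
        # Finite-difference velocity at stride s: x[t + s] - x[t]
        v_pred = pred[s:] - pred[:-s]
        v_true = target[s:] - target[:-s]
        loss += np.mean(np.abs(v_pred - v_true))
    return loss / len(strides)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gt = rng.normal(size=(120, 22, 3))              # 120 frames, 22 joints
    noisy = gt + 0.01 * rng.normal(size=gt.shape)
    print(multi_scale_velocity_loss(noisy, gt))
```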

20 pages, 4569 KB  
Article
Lightweight Vision Transformer for Frame-Level Ergonomic Posture Classification in Industrial Workflows
by Luca Cruciata, Salvatore Contino, Marianna Ciccarelli, Roberto Pirrone, Leonardo Mostarda, Alessandra Papetti and Marco Piangerelli
Sensors 2025, 25(15), 4750; https://doi.org/10.3390/s25154750 - 1 Aug 2025
Cited by 1 | Viewed by 1037
Abstract
Work-related musculoskeletal disorders (WMSDs) are a leading concern in industrial ergonomics, often stemming from sustained non-neutral postures and repetitive tasks. This paper presents a vision-based framework for real-time, frame-level ergonomic risk classification using a lightweight Vision Transformer (ViT). The proposed system operates directly on raw RGB images without requiring skeleton reconstruction, joint angle estimation, or image segmentation. A single ViT model simultaneously classifies eight anatomical regions, enabling efficient multi-label posture assessment. Training is supervised using a multimodal dataset acquired from synchronized RGB video and full-body inertial motion capture, with ergonomic risk labels derived from RULA scores computed on joint kinematics. The system is validated on realistic, simulated industrial tasks that include common challenges such as occlusion and posture variability. Experimental results show that the ViT model achieves state-of-the-art performance, with F1-scores exceeding 0.99 and AUC values above 0.996 across all regions. Compared to previous CNN-based systems, the proposed model improves classification accuracy and generalizability while reducing complexity and enabling real-time inference on edge devices. These findings demonstrate the model’s potential for unobtrusive, scalable ergonomic risk monitoring in real-world manufacturing environments. Full article
(This article belongs to the Special Issue Secure and Decentralised IoT Systems)
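
The single-model, multi-label design described in the abstract can be pictured as one shared backbone feeding eight per-region classification heads. The PyTorch snippet below is a schematic sketch, not the authors' network: the backbone is a stand-in module, and the four risk levels per region are an assumption.

```python
import torch
import torch.nn as nn

NUM_REGIONS = 8        # anatomical regions classified simultaneously
NUM_RISK_LEVELS = 4    # assumed number of ergonomic risk classes per region

class MultiRegionPostureClassifier(nn.Module):
    """Shared feature extractor + one classification head per body region."""

    def __init__(self, backbone: nn.Module, feat_dim: int):
        super().__init__()
        self.backbone = backbone  # e.g. a lightweight ViT producing feat_dim features
        self.heads = nn.ModuleList(
            [nn.Linear(feat_dim, NUM_RISK_LEVELS) for _ in range(NUM_REGIONS)]
        )

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(images)                    # (B, feat_dim)
        logits = [head(feats) for head in self.heads]    # 8 x (B, NUM_RISK_LEVELS)
        return torch.stack(logits, dim=1)                # (B, 8, NUM_RISK_LEVELS)

if __name__ == "__main__":
    # Stand-in backbone: global-average-pool an RGB frame and project it.
    backbone = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(3, 64))
    model = MultiRegionPostureClassifier(backbone, feat_dim=64)
    frames = torch.randn(2, 3, 224, 224)                 # a batch of raw RGB frames
    out = model(frames)
    print(out.shape)                                     # torch.Size([2, 8, 4])
```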

15 pages, 1463 KB  
Article
Spatial–Temporal Heatmap Masked Autoencoder for Skeleton-Based Action Recognition
by Cunling Bian, Yang Yang, Tao Wang and Weigang Lu
Sensors 2025, 25(10), 3146; https://doi.org/10.3390/s25103146 - 16 May 2025
Cited by 1 | Viewed by 1670
Abstract
Skeleton representation learning offers substantial advantages for action recognition by encoding intricate motion details and spatial–temporal dependencies among joints. However, fully supervised approaches necessitate large amounts of annotated data, which are often labor-intensive and costly to acquire. In this work, we propose the Spatial–Temporal Heatmap Masked Autoencoder (STH-MAE), a novel self-supervised framework tailored for skeleton-based action recognition. Unlike coordinate-based methods, STH-MAE adopts heatmap volumes as its primary representation, mitigating noise inherent in pose estimation while capitalizing on advances in Vision Transformers. The framework constructs a spatial–temporal heatmap (STH) by aggregating 2D joint heatmaps across both spatial and temporal axes. This STH is partitioned into non-overlapping patches to facilitate local feature learning, with a masking strategy applied to randomly conceal portions of the input. During pre-training, a Vision Transformer-based autoencoder equipped with a lightweight prediction head reconstructs the masked regions, fostering the extraction of robust and transferable skeletal representations. Comprehensive experiments on the NTU RGB+D 60 and NTU RGB+D 120 benchmarks demonstrate the superiority of STH-MAE, achieving state-of-the-art performance under multiple evaluation protocols. Full article
(This article belongs to the Section Intelligent Sensors)
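
The patch-and-mask step of the framework can be sketched in a few lines of NumPy. The volume shape (T, K, H, W), the 16-pixel patch size, and the 75% mask ratio are assumptions for illustration, not values taken from the paper.

```python
import numpy as np

def mask_heatmap_patches(sth, patch=16, mask_ratio=0.75, seed=0):
    """Split a spatial-temporal heatmap volume into non-overlapping spatial
    patches and randomly zero out a fraction of them (illustrative sketch)."""
    T, K, H, W = sth.shape
    assert H % patch == 0 and W % patch == 0
    gh, gw = H // patch, W // patch
    rng = np.random.default_rng(seed)
    n_patches = gh * gw
    n_masked = int(round(mask_ratio * n_patches))
    masked_idx = rng.choice(n_patches, size=n_masked, replace=False)
    mask = np.zeros(n_patches, dtype=bool)
    mask[masked_idx] = True
    out = sth.copy()
    for p in np.flatnonzero(mask):
        r, c = divmod(p, gw)
        out[:, :, r*patch:(r+1)*patch, c*patch:(c+1)*patch] = 0.0
    return out, mask  # masked volume and boolean patch mask (reconstruction target)

if __name__ == "__main__":
    volume = np.random.rand(8, 17, 64, 64)   # 8 frames, 17 joint heatmaps, 64x64
    masked, mask = mask_heatmap_patches(volume)
    print(masked.shape, mask.sum(), "of", mask.size, "patches masked")
```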

27 pages, 1938 KB  
Article
Skeleton Reconstruction Using Generative Adversarial Networks for Human Activity Recognition Under Occlusion
by Ioannis Vernikos and Evaggelos Spyrou
Sensors 2025, 25(5), 1567; https://doi.org/10.3390/s25051567 - 4 Mar 2025
Cited by 1 | Viewed by 1836
Abstract
Recognizing human activities from motion data is a complex task in computer vision, involving the recognition of human behaviors from sequences of 3D motion data. These activities encompass successive body part movements, interactions with objects, or group dynamics. Camera-based recognition methods are cost-effective and perform well under controlled conditions but face challenges in real-world scenarios due to factors such as viewpoint changes, illumination variations, and occlusion. The latter is the most significant challenge in real-world recognition; partial occlusion degrades recognition accuracy to varying degrees depending on the activity and the occluded body parts, while complete occlusion can render activity recognition impossible. In this paper, we propose a novel approach for human activity recognition in the presence of partial occlusion, applicable in cases wherein up to two body parts are occluded. The proposed approach works under the assumptions that (a) human motion is modeled using a set of 3D skeletal joints, and (b) the same body parts remain occluded throughout the whole activity. Contrary to previous research, in this work, we address this problem using a Generative Adversarial Network (GAN). Specifically, we train a Convolutional Recurrent Neural Network (CRNN) that serves as the generator of the GAN and aims to complete the parts of the skeleton that are missing due to occlusion. The input to this CRNN consists of raw 3D skeleton joint positions, upon the removal of joints corresponding to occluded parts, and its output is a reconstructed skeleton. For the discriminator of the GAN, we use a simple long short-term memory (LSTM) network. We evaluate the proposed approach using publicly available datasets in a series of occlusion scenarios. We demonstrate that in all scenarios, the occlusion of certain body parts causes a significant decline in performance, although in some cases, the reconstruction process leads to almost perfect recognition. Nonetheless, in almost every circumstance, the proposed approach outperforms previous works, with improvements varying between 2.2% and 37.5%, depending on the dataset used and the occlusion case. Full article
(This article belongs to the Special Issue Robust Motion Recognition Based on Sensor Technology)
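
A schematic of the described setup, with occluded joints removed from the input, a recurrent generator reconstructing the skeleton, and an LSTM discriminator, is sketched below in PyTorch. The joint grouping, network sizes, and training step are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

# Hypothetical grouping of 25 skeleton joints into body parts (illustration only).
BODY_PARTS = {"left_arm": [4, 5, 6, 7], "right_arm": [8, 9, 10, 11],
              "left_leg": [12, 13, 14, 15], "right_leg": [16, 17, 18, 19]}

def occlude(skeletons: torch.Tensor, parts) -> torch.Tensor:
    """Zero the joints of the occluded body parts; input shape (B, T, 25, 3)."""
    out = skeletons.clone()
    for p in parts:
        out[:, :, BODY_PARTS[p], :] = 0.0
    return out

class Generator(nn.Module):
    """Stand-in for the CRNN generator: convolution over time, GRU over frames."""
    def __init__(self, joints=25):
        super().__init__()
        self.conv = nn.Conv1d(joints * 3, 128, kernel_size=3, padding=1)
        self.rnn = nn.GRU(128, 128, batch_first=True)
        self.out = nn.Linear(128, joints * 3)
    def forward(self, x):                       # x: (B, T, 25, 3)
        b, t, j, c = x.shape
        h = self.conv(x.reshape(b, t, j * c).transpose(1, 2)).transpose(1, 2)
        h, _ = self.rnn(h)
        return self.out(h).reshape(b, t, j, c)  # reconstructed skeleton

class Discriminator(nn.Module):
    """Simple LSTM discriminator scoring skeleton sequences as real/fake."""
    def __init__(self, joints=25):
        super().__init__()
        self.rnn = nn.LSTM(joints * 3, 64, batch_first=True)
        self.out = nn.Linear(64, 1)
    def forward(self, x):
        b, t, j, c = x.shape
        h, _ = self.rnn(x.reshape(b, t, j * c))
        return self.out(h[:, -1])               # one logit per sequence

if __name__ == "__main__":
    real = torch.randn(4, 30, 25, 3)            # 4 sequences, 30 frames
    occluded = occlude(real, ["left_arm", "right_leg"])
    G, D = Generator(), Discriminator()
    fake = G(occluded)
    adv = nn.BCEWithLogitsLoss()
    d_loss = adv(D(real), torch.ones(4, 1)) + adv(D(fake.detach()), torch.zeros(4, 1))
    g_loss = adv(D(fake), torch.ones(4, 1))
    print(float(d_loss), float(g_loss))
```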

19 pages, 26378 KB  
Article
2D to 3D Human Skeleton Estimation Based on the Brown Camera Distortion Model and Constrained Optimization
by Lan Ma and Hua Huo
Electronics 2025, 14(5), 960; https://doi.org/10.3390/electronics14050960 - 27 Feb 2025
Viewed by 2408
Abstract
In the rapidly evolving field of computer vision and machine learning, 3D skeleton estimation is critical for applications such as motion analysis and human–computer interaction. While stereo cameras are commonly used to acquire 3D skeletal data, monocular RGB systems attract attention due to benefits including cost-effectiveness and simple deployment. However, persistent challenges remain in accurately inferring depth from 2D images and reconstructing 3D structures using monocular approaches. Current 2D-to-3D skeleton estimation methods rely heavily on deep training over large datasets while neglecting the intrinsic structure of the human body and the principles of camera imaging. To address this, this paper introduces an innovative 2D-to-3D gait skeleton estimation method that leverages the Brown camera distortion model and constrained optimization. Utilizing the Azure Kinect depth camera to capture gait video, the Azure Kinect Body Tracking SDK was employed to extract 2D and 3D joint positions. The camera’s distortion properties were analyzed using the Brown camera distortion model, which is suitable for this scenario, and iterative methods were applied to compensate for the distortion of the 2D skeleton joints. By integrating the geometric constraints of the human skeleton, a constrained optimization algorithm was applied to achieve precise 3D joint estimates. Finally, the framework was validated through comparisons between the estimated 3D joint coordinates and corresponding measurements captured by depth sensors. Experimental evaluations confirmed that this training-free approach achieved superior precision and stability compared to conventional methods. Full article
(This article belongs to the Special Issue 3D Computer Vision and 3D Reconstruction)
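
The Brown (Brown–Conrady) distortion model and the iterative compensation of 2D joint coordinates can be illustrated as follows. This is a generic NumPy sketch of the standard radial-plus-tangential model with placeholder intrinsics and coefficients; it does not reproduce the paper's calibration values.

```python
import numpy as np

def undistort_points(pts, K, dist, iters=10):
    """Iteratively invert the Brown distortion model for 2D joint pixels.

    pts:  (N, 2) pixel coordinates of skeleton joints.
    K:    3x3 camera intrinsic matrix.
    dist: (k1, k2, p1, p2, k3) radial and tangential coefficients.
    """
    k1, k2, p1, p2, k3 = dist
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    # Normalize to the ideal image plane.
    x = (pts[:, 0] - cx) / fx
    y = (pts[:, 1] - cy) / fy
    x_u, y_u = x.copy(), y.copy()
    for _ in range(iters):
        r2 = x_u**2 + y_u**2
        radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
        dx = 2 * p1 * x_u * y_u + p2 * (r2 + 2 * x_u**2)
        dy = p1 * (r2 + 2 * y_u**2) + 2 * p2 * x_u * y_u
        # Fixed-point update: undistorted = (distorted - tangential) / radial
        x_u = (x - dx) / radial
        y_u = (y - dy) / radial
    return np.stack([x_u * fx + cx, y_u * fy + cy], axis=1)

if __name__ == "__main__":
    K = np.array([[600.0, 0, 320], [0, 600.0, 240], [0, 0, 1]])
    dist = (-0.05, 0.01, 0.001, -0.0005, 0.0)     # placeholder coefficients
    joints_2d = np.array([[350.0, 260.0], [300.0, 400.0]])
    print(undistort_points(joints_2d, K, dist))
```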

16 pages, 8801 KB  
Article
Noise-Robust 3D Pose Estimation Using Appearance Similarity Based on the Distributed Multiple Views
by Taemin Hwang and Minjoon Kim
Sensors 2024, 24(17), 5645; https://doi.org/10.3390/s24175645 - 30 Aug 2024
Viewed by 2352
Abstract
In this paper, we present a noise-robust approach for the 3D pose estimation of multiple people using appearance similarity. Common methods identify the cross-view correspondences between the detected keypoints and determine their association with a specific person by measuring the distances between the epipolar lines and the joint locations of the 2D keypoints across all the views. Although existing methods achieve remarkable accuracy, they are still sensitive to camera calibration, making them unsuitable for noisy environments where any of the cameras slightly change angle or position. To address these limitations and correct camera calibration errors in real time, we propose a framework for 3D pose estimation based on appearance similarity. In the proposed framework, we detect the 2D keypoints, extract the appearance features, and transfer them to the central server. The central server matches the detected 2D human poses to each person using both geometrical affinity and appearance similarity, and then compares the two sets of matches to identify calibration errors. If a camera with the wrong calibration is identified, the central server corrects the calibration error, ensuring accuracy in the 3D reconstruction of skeletons. In the experimental environment, we verified that the proposed algorithm is robust against geometrical calibration errors, achieving around 11.5% and 8% improvements in the accuracy of 3D pose estimation on the Campus and Shelf datasets, respectively. Full article
(This article belongs to the Special Issue Sensors for Object Detection, Pose Estimation, and 3D Reconstruction)
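
A minimal sketch of combining geometric affinity with appearance similarity for cross-view association is given below, using the Hungarian algorithm from SciPy. The 50/50 weighting and the pre-computed geometric cost are assumptions for illustration, not the paper's procedure.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_people(feats_a, feats_b, geo_cost, w_app=0.5):
    """Associate detections between two camera views.

    feats_a, feats_b: (Na, D) and (Nb, D) appearance embeddings (L2-normalized).
    geo_cost:         (Na, Nb) geometric cost, e.g. mean point-to-epipolar-line
                      distance between corresponding 2D joints (precomputed).
    w_app:            assumed weight balancing appearance vs. geometry.
    """
    app_sim = feats_a @ feats_b.T                 # cosine similarity in [-1, 1]
    app_cost = 1.0 - app_sim                      # convert to a cost
    geo_norm = geo_cost / (geo_cost.max() + 1e-9) # scale to [0, 1]
    cost = w_app * app_cost + (1 - w_app) * geo_norm
    rows, cols = linear_sum_assignment(cost)      # optimal one-to-one assignment
    return list(zip(rows.tolist(), cols.tolist()))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    fa = rng.normal(size=(3, 128)); fa /= np.linalg.norm(fa, axis=1, keepdims=True)
    fb = fa[[2, 0, 1]] + 0.05 * rng.normal(size=(3, 128))
    fb /= np.linalg.norm(fb, axis=1, keepdims=True)
    geo = rng.uniform(0, 10, size=(3, 3))
    print(match_people(fa, fb, geo))              # mostly recovers the permutation
```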

11 pages, 1142 KB  
Article
The Presence of the Human Auditory Ossicles—Detected Postmortem by CT Scan—As a Taphonomic Indicator
by Edda E. Guareschi, Sara Poggesi, Marco Palmesino and Paola A. Magni
Forensic Sci. 2023, 3(4), 560-570; https://doi.org/10.3390/forensicsci3040039 - 2 Nov 2023
Viewed by 6552
Abstract
Introduction: Three tiny bones compose the human ossicular chain: the malleus, incus and stapes. Also known as auditory ossicles, they are united by joints in the middle ear cavity of the petrous part of the temporal bone. Completely developed two years after birth, the ossicular chain is involved in the physiological process of hearing, by which sound waves from the environment are converted into electrochemical impulses. In the last 500 years, most studies have focused on the morphogenesis, morphological variability and clinical pathology of the ossicular chain, whilst only a few studies have added relevant knowledge to anthropology and forensic science. The auditory ossicles and the enclosing petrous bone are among the hardest bones in the human skeleton. This is reflected in a relative resistance to fire and in the possibility of preservation and fossilization over millions of years. Materials and Methods: The literature and four present-day forensic cases were included in studying the postmortem loss of the auditory ossicles in skeletal or decomposing remains. Results: This loss can be ascribed to the destruction or physical displacement of the ossicles, by macro- or micro-faunal action and/or any other natural or artificial disturbance. Discussion: Physical displacement is closely connected to the depositional environment of the skeletal remains, such as burial, entombment (sarcophagus, coffin, vault…), submersion or exposure to natural elements. Auditory ossicles can be recovered in situ, or very close to their anatomical location, when the skeletal material has been involved in an archaeological excavation. In the case of accessible or disturbed remains, scavengers may remove the tiny ossicles and/or they can slip out of the middle ear cavity following skull movements. Entombment offers effective protection against the displacement of the auditory ossicles, whereas aquatic submersion and aquatic movement almost invariably displace them. Conclusion: The preservation of the human auditory ossicles should be critically considered in the comprehensive context of any forensic investigation of human remains, since it can assist the reconstruction of their taphonomic history. Taphonomic histories of remains can add crucial information to forensic investigations (e.g., the Post Mortem Interval, PMI). The aim of this study, limited by the scarce relevant literature, is to discuss the potential role of the ossicular chain, detected by postmortem imaging techniques, as a taphonomic indicator in decomposing and/or skeletonized bodies. Full article

25 pages, 5943 KB  
Article
The Study on Solving Large Pore Heat Transfer Simulation in Malan Loess Based on Volume Averaging Method Combined with CT Scan Images
by Yangchun Lu, Ting Lu, Yudong Lu, Bo Wang, Guanghao Zeng and Xu Zhang
Sustainability 2023, 15(16), 12389; https://doi.org/10.3390/su151612389 - 15 Aug 2023
Cited by 2 | Viewed by 1700
Abstract
Malan loess is a wind-formed sediment in arid and semi-arid regions and is an important constituent of the Earth’s critical zone. The study of the relationship between microstructure and heat transfer in Malan loess is therefore of great significance for an in-depth understanding of the heat transfer mechanism and the accurate prediction of the heat transfer properties of intact loess. In order to quantitatively characterize the heat transfer processes in the two-phase medium of solid particles and gas pores in intact loess, this study used modern computed tomography (CT) to scan Malan loess from Huan County, Gansu Province, in the western part of the Loess Plateau, China. The specific yield of the intact Malan loess was used as the parameter basis for the threshold segmentation of large pores in the scanned images, which in turn supported the three-dimensional reconstruction of the connected large pores. An experimental space for heat conduction in intact Malan loess was constructed, and the surface temperature of the Malan loess was measured on the surface of this space with a thermal imager. The heat conduction process was simulated with the solver in AVIZO (2019) software, using the volume averaging method combined with the 3D pores reconstructed from the CT scans. The heat conduction experiments on intact Malan loess showed that, under a given external temperature load, the temperature decreases overall along the heat flow direction. The temperature of the pores in the plane normal to the heat flow direction is higher than that of the solid skeleton. Abnormal temperature points formed at the junction of the surface and internal pores of the Malan loess, and the temperature of the connected macropores at the sample surface was about 1 °C higher than that of the surrounding solid skeleton. The simulations showed that heat transfer in Malan loess is preferentially conducted along the large pores and then transferred to the surrounding particle skeleton. The simulation results were in high agreement with the heat conduction experiments, which verifies the reliability of the computational model. Full article
(This article belongs to the Special Issue Geological Environment Monitoring and Early Warning Systems)
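
As a loose illustration of heat conduction through a segmented two-phase (pore/skeleton) volume, the sketch below runs a strongly simplified explicit finite-difference diffusion on a random binary mask. It is not the volume-averaging solver used in the paper; all diffusivities and grid parameters are placeholders, and lateral boundaries are treated as periodic for brevity.

```python
import numpy as np

def heat_step(T, alpha, dt=0.1, dx=1.0):
    """One explicit finite-difference step of the 3D heat equation with a
    spatially varying diffusivity field alpha (very simplified sketch).
    np.roll makes the lateral boundaries periodic."""
    lap = (
        np.roll(T, 1, 0) + np.roll(T, -1, 0) +
        np.roll(T, 1, 1) + np.roll(T, -1, 1) +
        np.roll(T, 1, 2) + np.roll(T, -1, 2) - 6 * T
    ) / dx**2
    return T + dt * alpha * lap

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pores = rng.random((32, 32, 32)) > 0.8        # stand-in for a segmented CT pore mask
    # Placeholder diffusivities chosen so conduction is preferential along the pores,
    # mirroring the behaviour reported in the abstract (values are not from the paper).
    alpha = np.where(pores, 0.20, 0.05)
    T = np.zeros((32, 32, 32))
    T[0, :, :] = 1.0                              # hot face driving the heat flow
    for _ in range(500):
        T = heat_step(T, alpha)
        T[0, :, :] = 1.0                          # hold the boundary temperatures
        T[-1, :, :] = 0.0
    print("mean mid-plane temperature:", float(T[16].mean()))
```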

15 pages, 3136 KB  
Article
Temporal Estimation of Non-Rigid Dynamic Human Point Cloud Sequence Using 3D Skeleton-Based Deformation for Compression
by Jin-Kyum Kim, Ye-Won Jang, Sol Lee, Eui-Seok Hwang and Young-Ho Seo
Sensors 2023, 23(16), 7163; https://doi.org/10.3390/s23167163 - 14 Aug 2023
Cited by 1 | Viewed by 1986
Abstract
This paper proposes an algorithm for transmitting and reconstructing the estimated point cloud by temporally estimating a dynamic point cloud sequence. When a non-rigid 3D point cloud sequence (PCS) is input, the sequence is divided into groups of point cloud frames (PCFs), and a key PCF is selected. The 3D skeleton is predicted through 3D pose estimation, and the motion of the skeleton is estimated by analyzing the joints and bones of the 3D skeleton. For the deformation of the non-rigid human PC, the 3D PC model is transformed into a mesh model, and the key PCF is rigged using the 3D skeleton. After deforming the key PCF into the target PCF utilizing the motion vector of the estimated skeleton, the residual PC between the motion compensation PCF and the target PCF is generated. If there is a key PCF, the motion vector of the target PCF, and a residual PC, the target PCF can be reconstructed. Just as compression is performed using pixel correlation between frames in a 2D video, this paper compresses 3D PCFs by estimating the non-rigid 3D motion of a 3D object in a 3D PC. The proposed algorithm can be regarded as an extension of the 2D motion estimation of a rigid local region in a 2D plane to the 3D motion estimation of a non-rigid object (human) in 3D space. Experimental results show that the proposed method can successfully compress 3D PC sequences. If it is used together with a PC compression technique such as MPEG PCC (point cloud compression) in the future, a system with high compression efficiency may be configured. Full article
(This article belongs to the Section Sensing and Imaging)
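
The residual-point-cloud idea, keeping only the target points that the motion-compensated key frame fails to explain, can be sketched with a nearest-neighbor test. The distance threshold and the use of a KD-tree are illustrative assumptions; the skeleton-driven deformation step itself is omitted.

```python
import numpy as np
from scipy.spatial import cKDTree

def residual_point_cloud(compensated, target, tol=0.02):
    """Return target points that are not well explained by the motion-compensated
    key frame (a simplified stand-in for the paper's residual PC generation).

    compensated: (N, 3) key-frame points deformed toward the target frame.
    target:      (M, 3) target-frame points.
    tol:         assumed distance threshold (same units as the point cloud).
    """
    tree = cKDTree(compensated)
    dist, _ = tree.query(target, k=1)        # nearest compensated point per target point
    return target[dist > tol]                # points that must be transmitted explicitly

def reconstruct(compensated, residual):
    """Target frame is approximated by the motion-compensated key frame plus the residual."""
    return np.vstack([compensated, residual])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    key = rng.uniform(size=(5000, 3))
    target = key + 0.001 * rng.normal(size=key.shape)   # mostly explained by the motion
    target[:100] += 0.1                                  # a region the deformation missed
    res = residual_point_cloud(key, target)
    print(len(res), "residual points out of", len(target))
    print(reconstruct(key, res).shape)
```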

10 pages, 2802 KB  
Case Report
Custom-Made 3D-Printed Prosthesis after Resection of a Voluminous Giant Cell Tumour Recurrence in Pelvis
by Adyb-Adrian KHAL, Dragos APOSTU, Calin SCHIAU, Nona BEJINARIU, Sebastien PESENTI and Jean-Luc JOUVE
Diagnostics 2023, 13(3), 485; https://doi.org/10.3390/diagnostics13030485 - 29 Jan 2023
Cited by 7 | Viewed by 2726
Abstract
Giant-cell tumours are benign but aggressive bone lesions that can affect any part of the skeleton. In early stages, curettage is preferred, but in cases of local recurrence or voluminous lesions in the periacetabular region, wide resection and reconstruction are recommended. The purpose of this article is to increase clinicians’ awareness of the importance of the follow-up of these patients and to describe a case of a voluminous recurrence of a giant-cell tumour in the pelvis. We present a 25-year-old female who underwent internal hemipelvectomy assisted by 3D cutting guides and reconstruction with a custom-made 3D-printed pelvic prosthesis, hip arthroplasty and ilio-sacral arthrodesis. No postoperative complications occurred and, at long-term follow-up, the patient had a stable and painless hip joint and good bone-implant osteointegration, with an excellent functional outcome. In spite of all available reconstructive techniques, in well-selected patients with voluminous pelvic resections, custom-made 3D-printed implants allow for a good mechanical outcome. Full article
(This article belongs to the Section Point-of-Care Diagnostics and Devices)

15 pages, 3232 KB  
Article
Unsafe Mining Behavior Identification Method Based on an Improved ST-GCN
by Xiangang Cao, Chiyu Zhang, Peng Wang, Hengyang Wei, Shikai Huang and Hu Li
Sustainability 2023, 15(2), 1041; https://doi.org/10.3390/su15021041 - 6 Jan 2023
Cited by 18 | Viewed by 3373
Abstract
Aiming to solve the problems of strong environmental interference and complex types of personnel behavior that are difficult to identify in current unsafe-behavior recognition in mining areas, an improved spatial-temporal graph convolutional network (ST-GCN) for identifying miners’ unsafe behavior in a transportation roadway (NP-AGCN) was proposed. First, the skeleton spatial-temporal graph constructed from multi-frame human key points was used for behavior recognition to reduce the interference caused by the complex environment of the coal mine. Second, to address the problem that the original graph structure cannot learn the association relationships between non-naturally connected nodes, which leads to a low recognition rate for behaviors such as climbing belts and fighting, the graph structure was reconstructed and the original partitioning strategy was changed to improve the model’s ability to recognize multi-joint interaction behaviors. Finally, in order to alleviate the difficulty the graph convolution network has in learning global information due to its small receptive field, multiple self-attention mechanisms were introduced into the graph convolution to improve the model’s ability to recognize unsafe behaviors. In order to verify the model’s ability to identify unsafe behaviors of personnel in a coal mine belt area, our model was tested on the public NTU RGB+D dataset and a self-built dataset of unsafe behaviors in a coal mine belt area. The recognition accuracies of the proposed model on these datasets were 94.7% and 94.1%, respectively, which were 6.4% and 7.4% higher than those of the original model, verifying that the proposed model achieves excellent recognition accuracy. Full article
(This article belongs to the Topic Smart Energy)
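
The reconstructed graph structure can be pictured as a skeleton adjacency matrix augmented with links between non-naturally connected joints. The NumPy sketch below builds such a normalized adjacency; the joint indices and the extra hand-to-hand and hand-to-foot links are hypothetical, not the paper's exact graph.

```python
import numpy as np

NUM_JOINTS = 25
# A subset of natural bone connections (indices are illustrative only).
BONES = [(0, 1), (1, 20), (20, 2), (2, 3), (20, 4), (4, 5), (5, 6), (6, 7),
         (20, 8), (8, 9), (9, 10), (10, 11), (0, 12), (12, 13), (13, 14),
         (14, 15), (0, 16), (16, 17), (17, 18), (18, 19)]
# Hypothetical extra links between non-naturally connected joints, added so the
# GCN can model multi-joint interactions (e.g. hands and feet acting together).
EXTRA_LINKS = [(7, 11), (7, 15), (11, 19)]

def normalized_adjacency(edges, n=NUM_JOINTS):
    """Symmetric adjacency with self-loops, normalized as D^-1/2 (A + I) D^-1/2."""
    A = np.eye(n)
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A @ D_inv_sqrt

if __name__ == "__main__":
    A_natural = normalized_adjacency(BONES)
    A_extended = normalized_adjacency(BONES + EXTRA_LINKS)
    # One graph-convolution step: joint features mixed over the extended graph.
    X = np.random.rand(NUM_JOINTS, 16)
    W = np.random.rand(16, 32)
    print((A_extended @ X @ W).shape)      # (25, 32)
```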

16 pages, 5327 KB  
Article
Mechanical and Biological Evaluation of Melt-Electrowritten Polycaprolactone Scaffolds for Acetabular Labrum Restoration
by Matthias X. T. Santschi, Stephanie Huber, Jan Bujalka, Nouara Imhof, Michael Leunig and Stephen J. Ferguson
Cells 2022, 11(21), 3450; https://doi.org/10.3390/cells11213450 - 31 Oct 2022
Cited by 13 | Viewed by 3150
Abstract
Repair or reconstruction of a degenerated or injured acetabular labrum is essential to the stability and health of the hip joint. Current methods for restoration fail to reproduce the structure and mechanical properties of the labrum. In this study, we characterized the structure and tensile mechanical properties of melt-electrowritten polycaprolactone scaffolds of varying architectures and assessed the labrum cell compatibility of selected graft candidates. Cell compatibility was assessed using immunofluorescence of the actin skeleton. First, labrum explants were co-cultured with scaffold specimens to investigate the scaffold compatibility with primary cells. Second, the effects of pore size on pre-cultured seeded labrum cells were studied. Third, cell compatibility under dynamic stretching was examined. Grid-like structures showed favorable tensile properties with decreasing fibre spacing. Young’s moduli ranging from 2.33 ± 0.34 to 13.36 ± 2.59 MPa were measured across all structures. Primary labrum cells were able to migrate from co-cultured labrum tissue specimens into the scaffold and grow in vitro. Incorporating small fibre diameters and interfibre spacings improved cell distribution and cell spreading, whereas mechanical properties were only marginally affected. Wave-patterned constructs reproduced the non-linear elastic behaviour of native labrum tissue and therefore allowed for physiological cyclic tensile strain, but showed decreased cell compatibility under dynamic loading. In conclusion, melt-electrowritten polycaprolactone scaffolds are promising candidates for labral grafts; however, further development is required to improve both their mechanical and biological compatibility. Full article

16 pages, 30804 KB  
Data Descriptor
Vertical Jump Data from Inertial and Optical Motion Tracking Systems
by Mateo Rico-Garcia, Juan Botero-Valencia and Ruber Hernández-García
Data 2022, 7(8), 116; https://doi.org/10.3390/data7080116 - 17 Aug 2022
Cited by 5 | Viewed by 4384
Abstract
Motion capture (MOCAP) is a widely used technique to record human, animal, and object movement for applications such as animation, biomechanical assessment, and control systems. Different systems have been proposed based on diverse technologies, such as visible light cameras, infrared cameras with passive or active markers, inertial systems, or goniometer-based systems. Each system has pros and cons that make it usable in different scenarios. This paper presents a dataset that combines optical motion and inertial systems, capturing the vertical jump, a well-known sports movement. As a reference system, the optical motion capture consists of six Flex 3 Optitrack cameras operating at 100 FPS. On the other hand, we developed an inertial system consisting of seven custom-made devices based on the MPU-9250 IMU, which includes a three-axis magnetometer, accelerometer and gyroscope and an embedded Digital Motion Processor (DMP); each device is attached to a Teensy 3.2 microcontroller with an ARM Cortex-M4 processor and operates wirelessly over Bluetooth. The purpose of acquiring IMU data with a low-cost, customized system is to enable applications that can be deployed on similar hardware and adapted to different areas. The developed measurement system is flexible, and the acquisition format and enclosure can be customized. The proposed dataset comprises eight jumps recorded from four healthy humans using both systems. Two usage examples on the dataset are shown: measuring joint angles and estimating the center-of-mass (COM) position. The proposed dataset is publicly available online and can be used in comparative algorithms, biomechanical studies, skeleton reconstruction, sensor fusion techniques, or machine learning models. Full article
(This article belongs to the Section Information Systems and Data Management)
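
The two usage examples mentioned in the abstract, joint angles and the COM position, reduce to simple vector arithmetic on marker positions. The sketch below shows one way to compute them; the marker choices and segment weights are placeholders, not values from the dataset.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle (degrees) at joint b formed by markers a-b-c, e.g. hip-knee-ankle."""
    v1, v2 = a - b, c - b
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def center_of_mass(markers, weights):
    """Weighted mean of segment marker positions (very rough COM estimate)."""
    w = np.asarray(weights, dtype=float)
    return (w[:, None] * markers).sum(axis=0) / w.sum()

if __name__ == "__main__":
    hip, knee, ankle = (np.array([0.0, 1.0, 0.9]),
                        np.array([0.0, 0.55, 0.95]),
                        np.array([0.0, 0.1, 0.9]))
    print("knee angle:", joint_angle(hip, knee, ankle))
    # Placeholder segment markers and weights, purely for illustration.
    segs = np.array([[0.0, 1.2, 0.9], [0.0, 0.9, 0.9], [0.0, 0.5, 0.9]])
    print("COM:", center_of_mass(segs, [0.5, 0.3, 0.2]))
```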

14 pages, 11526 KB  
Article
Planning Collision-Free Robot Motions in a Human–Robot Shared Workspace via Mixed Reality and Sensor-Fusion Skeleton Tracking
by Saverio Farsoni, Jacopo Rizzi, Giulia Nenna Ufondu and Marcello Bonfè
Electronics 2022, 11(15), 2407; https://doi.org/10.3390/electronics11152407 - 1 Aug 2022
Cited by 6 | Viewed by 2913
Abstract
The paper describes a method for planning collision-free motions of an industrial manipulator that shares its workspace with human operators during a human–robot collaborative application with strict safety requirements. The proposed workflow exploits the advantages of mixed reality to insert real entities into a virtual scene, wherein the robot control command is computed and validated by simulating robot motions without risks for the human. The proposed motion planner relies on a sensor-fusion algorithm that improves the 3D perception of the humans inside the robot workspace. This algorithm merges the pose estimates of the human bones reconstructed by a pointcloud-based skeleton tracking algorithm with the orientation data acquired from wearable inertial measurement units (IMUs) assumed to be fixed to the human bones. The algorithm provides a final reconstruction of the position and orientation of the human bones that can be used to include the human in the virtual simulation of the robotic workcell. A dynamic motion-planning algorithm can then be executed within this mixed-reality environment, allowing the computation of a collision-free joint velocity command for the real robot. Full article
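
One simple way to merge a skeleton-tracker bone orientation with an IMU orientation is a quaternion slerp with a confidence weight; the sketch below illustrates that idea. The blend weight and the (x, y, z, w) quaternion convention are assumptions, and this is not the paper's fusion algorithm.

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between two unit quaternions (x, y, z, w)."""
    q0, q1 = q0 / np.linalg.norm(q0), q1 / np.linalg.norm(q1)
    dot = float(np.dot(q0, q1))
    if dot < 0.0:                      # take the short path on the quaternion sphere
        q1, dot = -q1, -dot
    if dot > 0.9995:                   # nearly parallel: fall back to linear blend
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(dot)
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

def fuse_bone_orientation(q_skeleton, q_imu, imu_weight=0.7):
    """Blend the skeleton-tracker orientation with the IMU orientation.
    imu_weight is a placeholder confidence, not a value from the paper."""
    return slerp(q_skeleton, q_imu, imu_weight)

if __name__ == "__main__":
    q_track = np.array([0.0, 0.0, 0.0, 1.0])                     # identity orientation
    q_imu = np.array([0.0, 0.0, np.sin(0.2), np.cos(0.2)])        # small z-rotation
    print(fuse_bone_orientation(q_track, q_imu))
```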

11 pages, 2145 KB  
Article
A Linkage Representation of the Human Hand Skeletal System Using CT Hand Scan Images
by Ying Cao, Xiaopeng Yang, Zhichan Lim, Hayoung Jung, Dougho Park and Heecheon You
Appl. Sci. 2021, 11(13), 5857; https://doi.org/10.3390/app11135857 - 24 Jun 2021
Cited by 3 | Viewed by 5051
Abstract
The present study proposed a method for establishing a linkage representation of the human hand skeletal system. Hand skeletons of 15 male subjects were reconstructed from computed tomography (CT) scans in 10 different postures selected from a natural hand-closing motion. The wrist joint center was estimated as the intersection of the centerline of the metacarpal of the middle finger and the distal wrist crease. The remaining joint centers were kinematically estimated based on the relative motion between the distal bone segment and the proximal bone segment of a given joint. A hand linkage representation was then formed by connecting the derived joint centers. Regression models for predicting internal hand link lengths using hand length as the independent variable were established. In addition, regression models for predicting the joint center coordinates of the thumb carpometacarpal (CMC) and finger metacarpophalangeal (MCP) joints using hand length or hand breadth were established. Our models showed higher R² values and lower maximum standard errors than the existing models. The findings of the present study can be applied to hand models for ergonomic design and biomechanical modeling. Full article
(This article belongs to the Special Issue Novel Approaches and Applications in Ergonomic Design)
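
The form of the regression models, an internal link length predicted linearly from hand length, can be sketched generically as below. The fitting routine uses synthetic placeholder numbers purely for illustration; none of the paper's subject data or coefficients are reproduced.

```python
import numpy as np

def fit_link_length_model(hand_lengths, link_lengths):
    """Least-squares fit of link_length = a * hand_length + b.
    Returns (a, b) plus the coefficient of determination R^2."""
    a, b = np.polyfit(hand_lengths, link_lengths, deg=1)
    pred = a * hand_lengths + b
    ss_res = np.sum((link_lengths - pred) ** 2)
    ss_tot = np.sum((link_lengths - link_lengths.mean()) ** 2)
    return a, b, 1.0 - ss_res / ss_tot

def predict_link_length(a, b, hand_length):
    return a * hand_length + b

if __name__ == "__main__":
    # Hypothetical measurements (mm); the paper's subject data are not reproduced here.
    rng = np.random.default_rng(3)
    hand = rng.uniform(170, 210, size=15)
    link = 0.25 * hand + 2.0 + rng.normal(0, 1.0, size=15)
    a, b, r2 = fit_link_length_model(hand, link)
    print(f"a={a:.3f}, b={b:.2f}, R^2={r2:.3f}")
    print("predicted link length for a 190 mm hand:", predict_link_length(a, b, 190.0))
```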
