Search Results (242)

Search Parameters:
Keywords = 3D Human Pose Estimation

19 pages, 1787 KB  
Article
Event-Based Machine Vision for Edge AI Computing
by Paul K. J. Park, Junseok Kim, Juhyun Ko and Yeoungjin Chang
Sensors 2026, 26(3), 935; https://doi.org/10.3390/s26030935 - 1 Feb 2026
Viewed by 127
Abstract
Event-based sensors provide sparse, motion-centric measurements that can reduce data bandwidth and enable always-on perception on resource-constrained edge devices. This paper presents an event-based machine vision framework for smart-home AIoT that couples a Dynamic Vision Sensor (DVS) with compute-efficient algorithms for (i) human/object detection, (ii) 2D human pose estimation, and (iii) hand posture recognition for human–machine interfaces. The main methodological contributions are a timestamp-based, polarity-agnostic recency encoding that preserves moving-edge structure while suppressing static background, and task-specific network optimizations (architectural reduction and mixed-bit quantization) tailored to sparse event images. With a fixed downstream network, the recency encoding improves action recognition accuracy over temporal accumulation (0.908 vs. 0.896). In a 24 h indoor monitoring experiment (640 × 480), the raw DVS stream is about 30× smaller than conventional CMOS video and remains about 5× smaller after standard compression. For human detection, the optimized event processing reduces computation from 5.8 G to 81 M FLOPs and runtime from 172 ms to 15 ms (more than 11× speed-up). For pose estimation, a pruned HRNet reduces model size from 127 MB to 19 MB and inference time from 70 ms to 6 ms on an NVIDIA Titan X while maintaining comparable accuracy (mAP from 0.95 to 0.94) on MS COCO 2017 using synthetic event streams generated by an event simulator. For hand posture recognition, a compact CNN achieves 99.19% recall and 0.0926% FAR with 14.31 ms latency on a single i5-4590 CPU core using 10-frame sequence voting. These results indicate that event-based sensing combined with lightweight inference is a practical approach to privacy-friendly, real-time perception under strict edge constraints.
(This article belongs to the Special Issue Next-Generation Edge AI in Wearable Devices)
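The recency encoding described above is simple to prototype: each pixel keeps only the timestamp of its most recent event, regardless of polarity, and a decay maps that recency to intensity. A minimal numpy sketch, where the event layout, decay constant, and exponential mapping are assumptions rather than the authors' implementation:

```python
import numpy as np

def recency_image(events, shape=(480, 640), t_now=None, tau=50e3):
    """Polarity-agnostic recency encoding of DVS events (a sketch).

    events: array of (x, y, t) rows, t in microseconds; polarity ignored.
    tau: assumed decay constant; recently active pixels map near 1.0,
         stale pixels decay toward 0 (static-background suppression).
    """
    last_t = np.full(shape, -np.inf)
    x, y, t = events[:, 0].astype(int), events[:, 1].astype(int), events[:, 2]
    # Keep only each pixel's most recent timestamp.
    np.maximum.at(last_t, (y, x), t)
    if t_now is None:
        t_now = t.max()
    img = np.exp(-(t_now - last_t) / tau)
    img[~np.isfinite(last_t)] = 0.0  # pixels that never fired stay dark
    return img.astype(np.float32)
```

Pixels over static background receive no events and decay to zero, which is what suppresses them before the downstream network sees the image.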

27 pages, 11232 KB  
Article
Aerokinesis: An IoT-Based Vision-Driven Gesture Control System for Quadcopter Navigation Using Deep Learning and ROS2
by Sergei Kondratev, Yulia Dyrchenkova, Georgiy Nikitin, Leonid Voskov, Vladimir Pikalov and Victor Meshcheryakov
Technologies 2026, 14(1), 69; https://doi.org/10.3390/technologies14010069 - 16 Jan 2026
Viewed by 308
Abstract
This paper presents Aerokinesis, an IoT-based software–hardware system for intuitive gesture-driven control of quadcopter unmanned aerial vehicles (UAVs), developed within the Robot Operating System 2 (ROS2) framework. The proposed system addresses the challenge of providing an accessible human–drone interaction interface for operators in scenarios where traditional remote controllers are impractical or unavailable. The architecture comprises two hierarchical control levels: (1) high-level discrete command control utilizing a fully connected neural network classifier for static gesture recognition, and (2) low-level continuous flight control based on three-dimensional hand keypoint analysis from a depth camera. The gesture classification module achieves an accuracy exceeding 99% using a multi-layer perceptron trained on MediaPipe-extracted hand landmarks. For continuous control, we propose a novel approach that computes Euler angles (roll, pitch, yaw) and throttle from 3D hand pose estimation, enabling intuitive four-degree-of-freedom quadcopter manipulation. A hybrid signal filtering pipeline ensures robust control signal generation while maintaining real-time responsiveness. Comparative user studies demonstrate that gesture-based control reduces task completion time by 52.6% for beginners compared to conventional remote controllers. The results confirm the viability of vision-based gesture interfaces for IoT-enabled UAV applications.
(This article belongs to the Section Information and Communication Technologies)
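The abstract does not spell out the hand-to-attitude mapping, so the sketch below shows one plausible construction, not the authors' formulation: roll and pitch from the palm-plane normal, yaw from the wrist-to-knuckles direction, using three MediaPipe-style 3D landmarks (the landmark choice and axis conventions are assumptions):

```python
import numpy as np

def hand_to_attitude(wrist, index_mcp, pinky_mcp):
    """Map three 3D hand landmarks to (roll, pitch, yaw) in radians.

    Illustrative only: the palm normal gives roll/pitch, the
    wrist->knuckles axis gives yaw; axis conventions are assumed.
    """
    u = index_mcp - wrist            # spans the palm
    v = pinky_mcp - wrist
    n = np.cross(u, v)               # palm-plane normal
    n /= np.linalg.norm(n)
    fwd = (index_mcp + pinky_mcp) / 2 - wrist
    fwd /= np.linalg.norm(fwd)
    roll = np.arctan2(n[0], n[2])    # left/right tilt of the palm normal
    pitch = np.arctan2(n[1], n[2])   # up/down tilt of the palm normal
    yaw = np.arctan2(fwd[0], fwd[1]) # pointing direction in the ground plane
    return roll, pitch, yaw
```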

20 pages, 4633 KB  
Article
Teleoperation System for Service Robots Using a Virtual Reality Headset and 3D Pose Estimation
by Tiago Ribeiro, Eduardo Fernandes, António Ribeiro, Carolina Lopes, Fernando Ribeiro and Gil Lopes
Sensors 2026, 26(2), 471; https://doi.org/10.3390/s26020471 - 10 Jan 2026
Viewed by 350
Abstract
This paper presents an immersive teleoperation framework for service robots that combines real-time 3D human pose estimation with a Virtual Reality (VR) interface to support intuitive, natural robot control. The operator is tracked using MediaPipe for 2D landmark detection and an Intel RealSense D455 RGB-D (Red-Green-Blue plus Depth) camera for depth acquisition, enabling 3D reconstruction of key joints. Joint angles are computed using efficient vector operations and mapped to the kinematic constraints of an anthropomorphic arm on the CHARMIE service robot. A VR-based telepresence interface provides stereoscopic video and head-motion-based view control to improve situational awareness during manipulation tasks. Experiments in real-world object grasping demonstrate reliable arm teleoperation and effective telepresence; however, vision-only estimation remains limited for axial rotations (e.g., elbow and wrist yaw), particularly under occlusions and unfavorable viewpoints. The proposed system provides a practical pathway toward low-cost, sensor-driven, immersive human–robot interaction for service robotics in dynamic environments.
(This article belongs to the Section Intelligent Sensors)
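Computing a joint angle from three reconstructed 3D keypoints reduces to an arccos of a normalized dot product at the middle joint; a minimal sketch (the shoulder–elbow–wrist triplet is an illustrative choice, not the paper's full joint mapping):

```python
import numpy as np

def joint_angle(a, b, c):
    """Interior angle at joint b (degrees) formed by 3D points a-b-c,
    e.g. shoulder-elbow-wrist for the elbow flexion angle."""
    u, v = a - b, c - b
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Illustrative elbow angle from three reconstructed joints (meters).
print(joint_angle(np.array([0.0, 0.3, 0.0]),     # shoulder
                  np.array([0.0, 0.0, 0.0]),     # elbow
                  np.array([0.25, 0.05, 0.0])))  # wrist
```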

13 pages, 1010 KB  
Article
Chlorinated Paraffins in Chicken Eggs from Five Regions in China and Dietary Exposure Health Risk Assessment
by Nan Wu, Lei Zhang, Tingting Zhou, Jiyuan Weng, Changliang Li, Wenjie Song, Yingying Zhou, Qi Li, Yu Lu, Pingping Zhou and Lirong Gao
Toxics 2026, 14(1), 60; https://doi.org/10.3390/toxics14010060 - 8 Jan 2026
Viewed by 444
Abstract
Chlorinated paraffins (CPs) are a class of persistent organic pollutants that pose potential human health risks through dietary exposure. In this study, we analyzed CPs in 55 chicken egg samples collected from five regions across China. Short-chain chlorinated paraffins (SCCPs) and medium-chain chlorinated paraffins (MCCPs) were detected using a two-dimensional gas chromatograph coupled with an electron-capture negative-ionization mass spectrometer. Dietary exposure risks were assessed using the margin of exposure (MOE) approach based on the food consumption data of Chinese residents from 2018 to 2020. The average concentrations of SCCPs and MCCPs in all samples were 28.4 ng/g wet weight (ww) and 176.5 ng/g ww, respectively. The congener profiles of SCCPs and MCCPs were similar across different regions, with C10–11 Cl6–7 as the dominant homologs. For MCCPs, the average contributions of C14-CP, C15-CP, C16-CP, and C17-CP were 25%, 21%, 27%, and 27%, respectively. The estimated daily intake (EDI) for the entire population was 18.3 ng/kg body weight (bw)/d for SCCPs and 118.3 ng/kg bw/d for MCCPs. In the consumer-only group, the average exposure levels of SCCPs and MCCPs were 27.8 ng/kg bw/d and 174.1 ng/kg bw/d, respectively. This preliminary risk assessment indicates that there is no health risk to the Chinese population from exposure to CPs through consumption of chicken eggs.
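The exposure arithmetic behind EDI and MOE figures like these follows a standard pattern: EDI = concentration × daily intake / body weight, and MOE = a toxicological reference point divided by the EDI. A sketch with placeholder intake, body weight, and reference values; only the 28.4 ng/g mean concentration comes from the abstract:

```python
def edi(conc_ng_per_g, intake_g_per_day, body_weight_kg):
    """Estimated daily intake in ng/kg bw/day (standard formula)."""
    return conc_ng_per_g * intake_g_per_day / body_weight_kg

def margin_of_exposure(reference_ng_per_kg_day, edi_ng_per_kg_day):
    """MOE: reference point (e.g., a NOAEL) over estimated exposure."""
    return reference_ng_per_kg_day / edi_ng_per_kg_day

# Illustrative only: 28.4 ng/g SCCPs (study mean), an assumed 40 g/day
# egg intake, a 60 kg adult, and an assumed 100 mg/kg bw/day reference.
exposure = edi(28.4, 40, 60)                       # ~18.9 ng/kg bw/d
print(exposure, margin_of_exposure(100e6, exposure))
```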

15 pages, 979 KB  
Article
Hybrid Skeleton-Based Motion Templates for Cross-View and Appearance-Robust Gait Recognition
by João Ferreira Nunes, Pedro Miguel Moreira and João Manuel R. S. Tavares
J. Imaging 2026, 12(1), 32; https://doi.org/10.3390/jimaging12010032 - 7 Jan 2026
Viewed by 221
Abstract
Gait recognition methods based on silhouette templates, such as the Gait Energy Image (GEI), achieve high accuracy under controlled conditions but often degrade when appearance varies due to viewpoint, clothing, or carried objects. In contrast, skeleton-based approaches provide interpretable motion cues but remain sensitive to pose-estimation noise. This work proposes two compact 2D skeletal descriptors—Gait Skeleton Images (GSIs)—that encode 3D joint trajectories into line-based and joint-based static templates compatible with standard 2D CNN architectures. A unified processing pipeline is introduced, including skeletal topology normalization, rigid view alignment, orthographic projection, and pixel-level rendering. Core design factors are analyzed on the GRIDDS dataset, where depth-based 3D coordinates provide stable ground truth for evaluating structural choices and rendering parameters. An extensive evaluation is then conducted on the widely used CASIA-B dataset, using 3D coordinates estimated via human pose estimation, to assess robustness under viewpoint, clothing, and carrying covariates. Results show that although GEIs achieve the highest same-view accuracy, GSI variants exhibit reduced degradation under appearance changes and demonstrate greater stability under severe cross-view conditions. These findings indicate that compact skeletal templates can complement appearance-based descriptors and may benefit further from continued advances in 3D human pose estimation.
(This article belongs to the Section Computer Vision and Pattern Recognition)
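The rendering stage — orthographic projection of aligned 3D joints followed by rasterizing bones into a fixed-size template — can be sketched as below. The COCO-style edge list, canvas size, and per-frame normalization are assumptions; the paper's topology normalization and view alignment steps are omitted:

```python
import numpy as np
import cv2

# Assumed COCO-style limb pairs; the paper's skeleton topology may differ.
EDGES = [(5, 7), (7, 9), (6, 8), (8, 10), (5, 6),
         (11, 13), (13, 15), (12, 14), (14, 16), (11, 12)]

def render_gsi(joints3d_seq, size=64):
    """Average orthographic line renderings of a 3D joint sequence into
    one grayscale template -- a sketch of the line-based GSI idea."""
    canvas = np.zeros((size, size), np.float32)
    for joints in joints3d_seq:                    # joints: (17, 3) per frame
        xy = joints[:, :2] - joints[:, :2].min(0)  # drop depth: orthographic
        xy = xy / (xy.max() + 1e-9) * (size - 1)
        frame = np.zeros((size, size), np.uint8)
        for i, j in EDGES:
            p = tuple(int(v) for v in xy[i])
            q = tuple(int(v) for v in xy[j])
            cv2.line(frame, p, q, 255, 1)
        canvas += frame.astype(np.float32) / 255.0
    return canvas / max(len(joints3d_seq), 1)      # temporal average, GEI-style
```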

29 pages, 3921 KB  
Article
A Semantic Priors-Based Non-Euclidean Topological Enhancement Method for 3D Human Pose Estimation in Multi-Class Complex Human Actions
by Xiaowei Han, Chaolong Fei, Yibo Feng, Wenbao Si and Guilin Yao
Electronics 2026, 15(1), 155; https://doi.org/10.3390/electronics15010155 - 29 Dec 2025
Viewed by 190
Abstract
Three-dimensional human pose estimation (3D HPE) aims to recover the three-dimensional coordinates of human joints from 2D images or videos to achieve precise quantification of human movement. In 3D HPE tasks based on multi-class complex human action datasets, the performance of existing Graph Convolutional Network (GCN) and Transformer fusion models is constrained by the fixed physical connections of the skeleton, which impedes the modeling of cross-joint long-range semantic dependencies and hinders further performance gains. To address this issue, this study proposes a semantic prior-based non-Euclidean topology enhancement method for multi-class complex human actions, built upon a GCN–Transformer fusion model. The proposed method retains the original physical connections while introducing semantic prior edges; by constructing a hybrid topology structure, it explicitly models long-range semantic dependencies between non-adjacent joints, thereby facilitating the extraction of cross-joint semantic information. Experimental results on the Human3.6M and HumanEva-I datasets surpass those of state-of-the-art (SOTA) baseline models. On the Human3.6M dataset, MPJPE and P-MPJPE are reduced by 1.25% and 0.63%, respectively. For the Walk and Jog actions on the HumanEva-I dataset, MPJPE is reduced by approximately 6.5%. These results demonstrate that the proposed method offers significant advantages for 3D HPE tasks based on multi-class complex human action data.
(This article belongs to the Section Artificial Intelligence)
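The hybrid topology amounts to adding semantic prior edges to the skeleton's physical adjacency before the usual symmetric normalization for graph convolution. A minimal sketch, where the Human3.6M-style joint indexing and the particular long-range pairs are illustrative rather than the paper's selected edges:

```python
import numpy as np

N = 17  # joints, Human3.6M-style indexing (assumed)
physical = [(0, 1), (1, 2), (2, 3), (0, 4), (4, 5), (5, 6), (0, 7),
            (7, 8), (8, 9), (9, 10), (8, 11), (11, 12), (12, 13),
            (8, 14), (14, 15), (15, 16)]
# Illustrative long-range semantic pairs: wrist-wrist, wrist-ankle.
semantic = [(13, 16), (13, 3), (16, 6)]

def hybrid_adjacency(n, phys, sem):
    """Physical skeleton edges plus semantic prior edges, symmetrically
    normalized (D^-1/2 (A + I) D^-1/2) for use in a GCN layer."""
    A = np.eye(n)
    for i, j in phys + sem:
        A[i, j] = A[j, i] = 1.0
    D = np.diag(A.sum(1) ** -0.5)
    return D @ A @ D

A_hat = hybrid_adjacency(N, physical, semantic)
```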

31 pages, 9622 KB  
Article
View-Aware Pose Analysis: A Robust Pipeline for Multi-Person Joint Injury Prediction from Single Camera
by Basant Adel, Ahmad Salah, Mahmoud A. Mahdi and Heba Mohsen
AI 2026, 7(1), 7; https://doi.org/10.3390/ai7010007 - 27 Dec 2025
Viewed by 617
Abstract
This paper presents a novel, accessible pipeline for the prediction and prevention of motion-related joint injuries in multiple individuals. Current methodologies for biomechanical analysis often rely on complex, restrictive setups such as multi-camera systems, wearable sensors, or markers, limiting their applicability in everyday environments. To overcome these limitations, we propose a comprehensive solution that utilizes only single-camera 2D images. Our pipeline comprises four distinct stages: (1) extraction of 2D human pose keypoints for multiple persons using a pretrained Human Pose Estimation model; (2) a novel ensemble learning model for person-view classification—distinguishing between front, back, and side perspectives—which is critical for accurate subsequent analysis; (3) a view-specific module that calculates body-segment angles, robustly handling movement pairs (e.g., flexion–extension) and mirrored joints; and (4) a pose assessment module that evaluates calculated angles against established biomechanical Range of Motion (ROM) standards to detect potentially injurious movements. Evaluated on a custom dataset of high-risk poses and diverse images, the end-to-end pipeline demonstrated an 87% success rate in identifying dangerous postures. The view classification stage, a key contribution of this work, achieved a 90% overall accuracy. The system delivers individualized, joint-specific feedback, offering a scalable and deployable solution for enhancing human health and safety in various settings, from home environments to workplaces, without the need for specialized equipment.
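The final assessment stage reduces to checking each computed angle against a joint- and movement-specific ROM interval. A sketch with a placeholder ROM table; the thresholds below are illustrative, not the biomechanical standards used in the paper:

```python
# Illustrative ROM limits in degrees per (joint, movement); the paper's
# biomechanical standards and its view-specific handling are not shown.
ROM = {
    ("knee", "flexion"): (0, 140),
    ("elbow", "flexion"): (0, 150),
    ("shoulder", "abduction"): (0, 180),
}

def assess(joint, movement, angle_deg):
    """Flag angles falling outside the joint's ROM interval."""
    lo, hi = ROM[(joint, movement)]
    if lo <= angle_deg <= hi:
        return f"{joint} {movement} {angle_deg:.0f} deg: within ROM"
    return f"{joint} {movement} {angle_deg:.0f} deg: potentially injurious"

print(assess("knee", "flexion", 152.0))
```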

13 pages, 1221 KB  
Article
A 2D Hand Pose Estimation System Accuracy for Finger Tapping Test Monitoring: A Pilot Study
by Saeid Edriss, Cristian Romagnoli, Rossella Rotondo, Maria Francesca De Pandis, Elvira Padua, Vincenzo Bonaiuto, Giuseppe Annino and Lloyd Smith
Appl. Sci. 2026, 16(1), 229; https://doi.org/10.3390/app16010229 - 25 Dec 2025
Viewed by 746
Abstract
Accurate and accessible quantification of motor function is important for monitoring the progression of movement disorders. Manual muscle testing models and wearable sensors can be costly or restrict degrees of freedom. Artificial intelligence, especially human pose estimation (PE), offers promising alternatives. This work compares the accuracy of a 2D PE tool for the Finger Tapping Test (FTT) against a 3D infrared motion capture system (MoCap). PE tracked three anatomical landmarks (wrist, thumb, index finger), while reflective markers were placed at corresponding locations so that both tools measured wrist-centered angles. Trials of slow and rapid FTT sessions were statistically analyzed with rank correlation, Friedman, Bland–Altman, and Kruskal–Wallis tests to assess agreement and repeatability. PE and MoCap measurements showed no significant differences (p > 0.05), with high reliability (ICC 0.87–0.91), low variability (CV 6–8.6%), and negligible effect size. Bland–Altman slopes indicated minor amplitude-dependent bias, while RMSE (2.92–4.48°) and MAPE (6.38–8.22%) errors were observed in the slow and rapid conditions. These results demonstrate that 2D PE provides a reliable, accessible, and low-cost alternative for quantifying finger movement. The findings suggest that PE can serve as an assistive method for monitoring motor function. Future studies could extend this approach to population-level cohorts of patients with neurological disorders.
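Once per-frame wrist-centered angles are extracted from both systems, the reported agreement metrics are direct to compute; a sketch with illustrative angle values, not the study's data:

```python
import numpy as np

def rmse(a, b):
    """Root-mean-square error between two angle series (degrees)."""
    return float(np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2)))

def mape(a, b):
    """Mean absolute percentage error of a against reference b."""
    a, b = np.asarray(a), np.asarray(b)
    return float(np.mean(np.abs((a - b) / b)) * 100)

# Illustrative per-frame wrist-centered angles from the 2D PE tool
# and the MoCap reference (made-up values for the example).
pe_deg = [42.1, 55.3, 61.0, 48.7]
mocap_deg = [40.0, 57.9, 63.2, 47.1]
print(rmse(pe_deg, mocap_deg), mape(pe_deg, mocap_deg))
```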

15 pages, 2618 KB  
Article
Multi-Agent Collaboration for 3D Human Pose Estimation and Its Potential in Passenger-Gathering Behavior Early Warning
by Xirong Chen, Hongxia Lv, Lei Yin and Jie Fang
Electronics 2026, 15(1), 78; https://doi.org/10.3390/electronics15010078 - 24 Dec 2025
Viewed by 363
Abstract
Passenger-gathering behavior often triggers safety incidents such as stampedes due to overcrowding, posing significant challenges to public order maintenance and passenger safety. Traditional early warning algorithms for passenger-gathering behavior typically perform only global modeling of image appearance, neglecting the analysis of individual passenger actions in practical 3D physical space, leading to high false-alarm and missed-alarm rates. To address this issue, we decompose the modeling process into two stages: human pose estimation and gathering behavior recognition. Specifically, the pose of each individual in 3D space is first estimated from images, and then fused with global features to complete the early warning. This work focuses on the former stage and aims to develop an accurate and efficient human pose estimation model capable of real-time inference on resource-constrained devices. To this end, we propose a 3D human pose estimation framework that integrates a hybrid spatio-temporal Transformer with three collaborative agents. First, a reinforcement learning-based architecture search agent is designed to adaptively select among Global Self-Attention, Window Attention, and External Attention for each block to optimize the model structure. Second, a feedback optimization agent is developed to dynamically adjust the search process, balancing exploration and convergence. Third, a quantization agent is employed that leverages quantization-aware training (QAT) to generate an INT8 deployment-ready model with minimal loss in accuracy. Experiments conducted on the Human3.6M dataset demonstrate that the proposed method achieves a mean per joint position error (MPJPE) of 42.15 mm with only 4.38 M parameters and 19.39 GFLOPs under FP32 precision, indicating substantial potential for subsequent gathering behavior recognition tasks.
(This article belongs to the Section Computer Science & Engineering)
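MPJPE, the metric quoted here, is the mean Euclidean distance between predicted and ground-truth 3D joints after root-relative alignment; a reference-style implementation (the root-joint convention is assumed):

```python
import numpy as np

def mpjpe(pred, gt, root=0):
    """Mean per-joint position error in the ground-truth units (mm for
    Human3.6M), after root-relative alignment at joint index `root`.

    pred, gt: arrays of shape (frames, joints, 3).
    """
    pred = pred - pred[:, root:root + 1]   # center each frame on the root
    gt = gt - gt[:, root:root + 1]
    return float(np.linalg.norm(pred - gt, axis=-1).mean())
```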

14 pages, 4961 KB  
Article
Human Pose Intelligent Detection Algorithm Based on Spatiotemporal Hybrid Dilated Convolution Model
by Lili Zhang, Shenxi Dai, Lihuang She and Shuwei Huo
Electronics 2025, 14(24), 4798; https://doi.org/10.3390/electronics14244798 - 5 Dec 2025
Viewed by 405
Abstract
Three-dimensional human pose estimation (3D HPE) refers to converting an input image or video into the coordinates of the keypoints of a 3D human body in a given coordinate system. At present, the mainstream scheme for a 3D HPE task takes the 2D pose estimation result as an intermediate step and then regresses it to the 3D pose. The general difficulty of this scheme is how to effectively extract the features between 2D joint points and regress them to 3D coordinates in a highly nonlinear 3D space. In this paper, we propose a new algorithm, called TSHDC, to address this difficulty by considering the temporal and spatial characteristics of human joint points. By introducing the self-attention mechanism and the temporal convolutional network (TCN) into the 3D HPE task, the model needs only a 27-frame temporal receptive field, giving it fewer parameters and faster convergence while its accuracy differs little from SOTA-level algorithms (+6.8 mm). The TSHDC model is deployed on the embedded Jetson TX2 platform, and with TensorRT the inference speed is greatly improved (13.7×) with only a small loss of accuracy (5%). Comprehensive experimental results on representative benchmarks show that our method achieves a favorable accuracy–efficiency trade-off compared with state-of-the-art methods in both quantitative and qualitative evaluation.
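A 27-frame receptive field is exactly what a short stack of dilated temporal convolutions provides: with kernel size 3 and dilations 1, 3, and 9, three layers cover 1 + 2·(1 + 3 + 9) = 27 frames. A quick check of that arithmetic; the layer configuration is an assumption consistent with the abstract, not the published architecture:

```python
def receptive_field(kernel, dilations):
    """Frames seen by a stack of dilated 1D convolutions."""
    rf = 1
    for d in dilations:
        rf += (kernel - 1) * d
    return rf

# Kernel 3 with exponentially growing dilation reaches 27 frames in
# three layers -- an assumed configuration matching the abstract's
# 27-frame temporal receptive field.
print(receptive_field(3, [1, 3, 9]))  # -> 27
```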

23 pages, 15360 KB  
Article
A Mobile Robotic System Design and Approach for Autonomous Targeted Disinfection
by Mohammed Z. Shaqura, Linyan Han, Mohammadali Javaheri Koopaee, Wissem Haouas, Moustafa Motawei, Peter Mooney, Nick Fry, Tony Wiese, Bilal Kaddouh and Robert C. Richardson
Robotics 2025, 14(12), 178; https://doi.org/10.3390/robotics14120178 - 30 Nov 2025
Viewed by 530
Abstract
The recent global pandemic has posed unprecedented challenges to public health systems and has highlighted the critical need for effective, contactless disinfection strategies in shared environments. This study investigates the use of autonomous robotics to enhance disinfection efficiency and safety in public spaces through the development of a custom-built mobile spraying platform. The proposed robotic system is equipped with an integrated 3D object pose estimation framework that fuses RGB-based object detection with point cloud segmentation to accurately identify and localize high-contact surfaces. To facilitate autonomous operation, both local and global motion planning algorithms are implemented, enabling the robot to navigate complex environments and execute disinfection tasks with minimal human intervention. Experimental results demonstrate the feasibility of the proposed disinfection robotic system.
(This article belongs to the Section Sensors and Control in Robotics)
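Fusing RGB detection with point cloud data typically means projecting the cloud into the image, keeping the points that fall inside a detected box, and localizing the object at their centroid. A pinhole-model sketch; the intrinsics, box format, and centroid step are assumptions, and the paper's segmentation refinement is omitted:

```python
import numpy as np

def points_in_box(cloud, box, fx, fy, cx, cy):
    """Return the 3D points (camera frame) whose pinhole projection
    lands inside a 2D detection box (x0, y0, x1, y1)."""
    X, Y, Z = cloud[:, 0], cloud[:, 1], cloud[:, 2]
    u = fx * X / Z + cx
    v = fy * Y / Z + cy
    x0, y0, x1, y1 = box
    m = (Z > 0) & (u >= x0) & (u <= x1) & (v >= y0) & (v <= y1)
    return cloud[m]

def object_position(cloud, box, fx, fy, cx, cy):
    """Centroid of the in-box points: a simple 3D localization of a
    detected high-contact surface (outlier rejection omitted)."""
    pts = points_in_box(cloud, box, fx, fy, cx, cy)
    return pts.mean(axis=0) if len(pts) else None
```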

17 pages, 3038 KB  
Article
Research on Deep Learning-Based Human–Robot Static/Dynamic Gesture-Driven Control Framework
by Gong Zhang, Jiahong Su, Shuzhong Zhang, Jianzheng Qi, Zhicheng Hou and Qunxu Lin
Sensors 2025, 25(23), 7203; https://doi.org/10.3390/s25237203 - 25 Nov 2025
Cited by 1 | Viewed by 860
Abstract
For human–robot gesture-driven control, this paper proposes a deep learning-based approach that employs both static and dynamic gestures to drive and control robots for object-grasping and delivery tasks. The method utilizes two-dimensional Convolutional Neural Networks (2D-CNNs) for static gesture recognition and a hybrid architecture combining three-dimensional Convolutional Neural Networks (3D-CNNs) and Long Short-Term Memory networks (3D-CNN+LSTM) for dynamic gesture recognition. Results on a custom gesture dataset demonstrate validation accuracies of 95.38% for static gestures and 93.18% for dynamic gestures. Then, in order to control and drive the robot to perform corresponding tasks, hand pose estimation was performed. The MediaPipe machine learning framework was first employed to extract hand feature points. These 2D feature points were then converted into 3D coordinates using a depth camera-based pose estimation method, followed by coordinate system transformation to obtain hand poses relative to the robot’s base coordinate system. Finally, an experimental platform for human–robot gesture-driven interaction was established, deploying both gesture recognition models. Four participants were invited to perform 100 trials each of gesture-driven object-grasping and delivery tasks under three lighting conditions: natural light, low light, and strong light. Experimental results show that the average success rates for completing tasks via static and dynamic gestures are no less than 96.88% and 94.63%, respectively, with task completion times consistently within 20 s. These findings demonstrate that the proposed approach enables robust vision-based robotic control through natural hand gestures, showing great prospects for human–robot collaboration applications.
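The 2D-to-3D conversion and base-frame transformation are both standard steps: deproject a pixel using the depth camera's intrinsics, then apply the camera-to-base extrinsic from hand-eye calibration. A sketch in which the intrinsics and the 4 × 4 transform are placeholders, not calibrated values:

```python
import numpy as np

def deproject(u, v, depth_m, fx, fy, cx, cy):
    """Pixel + depth -> 3D point in the camera frame (pinhole model)."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# Placeholder hand-eye calibration result: camera frame -> robot base.
T_base_cam = np.array([[0, 0, 1, 0.40],
                       [-1, 0, 0, 0.00],
                       [0, -1, 0, 0.55],
                       [0, 0, 0, 1.0]])

# Illustrative pixel, depth, and intrinsics (not calibrated values).
p_cam = deproject(412, 251, 0.83, fx=615.0, fy=615.0, cx=320.0, cy=240.0)
p_base = (T_base_cam @ np.append(p_cam, 1.0))[:3]  # hand point in base frame
```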

17 pages, 43598 KB  
Review
Body Measurements for Digital Forensic Comparison of Individuals—An Overview of Current Developments
by Sabine Richter and Dirk Labudde
Appl. Sci. 2025, 15(23), 12518; https://doi.org/10.3390/app152312518 - 25 Nov 2025
Viewed by 639
Abstract
Forensic identification of individuals faces significant challenges, particularly when conventional biometric features such as the face are hidden. This paper examines the historical development and revival of body patterns (the anthropometric rig) as a biometric comparison feature, from historical Bertillonage to modern, computer-assisted methods such as digital anthropometric rig matching and its connection to 3D human pose estimation (HPE). It highlights both the mathematical and methodological foundations of this revival and the potential and limitations of applying artificial intelligence (AI) in the context of digital anthropometric rig matching. The aim is to trace the development of, and the challenges to, the forensic validity of the person-specific digital skeleton. The review makes clear how much manual work is currently required, which underlines the need for automation; AI-based approaches can reduce this effort, but they often do not yet meet the requirements of a forensic context.

13 pages, 8294 KB  
Article
Occurrence, Homologue Profiles and Risk Assessment of Short- and Medium-Chain Chlorinated Paraffins in Edible Vegetable Oils
by Yu Lu, Nan Wu, Lirong Gao, Lei Zhang, Tingting Zhou, Pei Cao, Jinyao Chen and Pingping Zhou
Foods 2025, 14(23), 3988; https://doi.org/10.3390/foods14233988 - 21 Nov 2025
Cited by 1 | Viewed by 564
Abstract
Dietary intake is the major route of human exposure to fat-soluble and persistent chlorinated paraffins (CPs), which tend to accumulate in lipid-rich foods such as edible vegetable oils. This study investigated the levels of short-chain (SCCPs) and medium-chain chlorinated paraffins (MCCPs) in commercially available vegetable oils and assessed their potential health risks. The concentrations of SCCPs and MCCPs in 29 commercial edible vegetable oils were analyzed using comprehensive two-dimensional gas chromatography coupled with electron capture negative ionization mass spectrometry (GC × GC-ECNI-MS). Dietary exposure levels were estimated through a probabilistic assessment integrating the analytical results with dietary consumption data from the Chinese Total Diet Study (2017–2020). The margin of exposure (MOE) approach was employed for risk characterization. The average concentrations of SCCPs and MCCPs were 112 ng/g and 139 ng/g, respectively. The highest SCCP and MCCP concentrations were found in sesame oil and peanut oil, respectively. Overall, MCCP levels were generally higher than SCCP levels. The estimated daily intakes (EDIs) of SCCPs and MCCPs were 56.06 and 73.63 ng/kg bw/d on average, with high consumers (P95) exposed to 180.91 and 230.49 ng/kg bw/d, respectively. The corresponding MOEs at P95 were 1.27 × 10⁴ for SCCPs and 1.56 × 10⁵ for MCCPs. The current dietary intake of SCCPs and MCCPs from edible vegetable oils does not pose a significant health risk. This study provides the first probabilistic exposure assessment of CPs in Chinese edible vegetable oils, offering current contamination profiles.
(This article belongs to the Section Food Quality and Safety)
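The probabilistic assessment can be sketched as a Monte Carlo draw over concentration and consumption distributions, reading the mean and P95 off the simulated EDI. The lognormal parameters below are illustrative, not fitted to the study's data; only the 112 ng/g mean concentration appears in the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Illustrative distributions, not the study's fitted ones:
conc = rng.lognormal(mean=np.log(112), sigma=0.8, size=n)   # ng/g oil
intake = rng.lognormal(mean=np.log(30), sigma=0.5, size=n)  # g oil/day
bw = rng.normal(60, 10, size=n).clip(30)                    # kg body weight

edi = conc * intake / bw                                    # ng/kg bw/d
print(f"mean EDI {edi.mean():.1f}, P95 {np.percentile(edi, 95):.1f}")
```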

22 pages, 5377 KB  
Article
Hand–Object Pose Estimation Based on Anchor Regression from a Single Egocentric Depth Image
by Jingang Lin, Dongnian Li, Chengjun Chen and Zhengxu Zhao
Sensors 2025, 25(22), 6881; https://doi.org/10.3390/s25226881 - 11 Nov 2025
Viewed by 809
Abstract
To precisely understand the interaction behaviors of humans, a computer vision system needs to accurately acquire the poses of the hand and its manipulated object. Vision-based hand–object pose estimation has become an important research topic. However, it is still a challenging task due to severe occlusion. In this study, a hand–object pose estimation method based on anchor regression is proposed to address this problem. First, a hand–object 3D center detection method was established to extract hand–object foreground images from the original depth images. Second, a method based on anchor regression is proposed to simultaneously estimate the poses of the hand and object in a single framework. A convolutional neural network with ResNet-50 as the backbone was built to predict the position deviations and weights of the uniformly distributed anchor points in the image to the keypoints of the hand and the manipulated object. According to the experimental results on the FPHA-HO dataset, the mean keypoint errors of the hand and object of the proposed method were 11.85 mm and 18.97 mm, respectively. The proposed hand–object pose estimation method can accurately estimate the poses of the hand and the manipulated object based on a single egocentric depth image.
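Anchor regression of this kind decodes each keypoint as a confidence-weighted average of (anchor position + predicted offset) over a uniform anchor grid; a minimal decoding sketch with an assumed tensor layout, network omitted:

```python
import numpy as np

def decode_keypoints(anchors, offsets, logits):
    """Weighted anchor-regression decoding (a sketch).

    anchors: (A, D) grid positions; offsets: (A, K, D) predicted
    deviations from each anchor to each keypoint; logits: (A, K)
    per-anchor confidence. Returns (K, D) keypoint estimates.
    """
    w = np.exp(logits - logits.max(0))           # softmax over anchors
    w = w / w.sum(0)                             # (A, K)
    candidates = anchors[:, None, :] + offsets   # (A, K, D)
    return (w[..., None] * candidates).sum(0)    # (K, D)
```

Because every anchor votes, occluded keypoints still receive estimates from anchors with an unobstructed view, which is what makes the scheme attractive under severe hand–object occlusion.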
