Search Results (208)

Search Parameters:
Keywords = human pose tracking

25 pages, 2860 KiB  
Review
Multimodal Sensing-Enabled Large Language Models for Automated Emotional Regulation: A Review of Current Technologies, Opportunities, and Challenges
by Liangyue Yu, Yao Ge, Shuja Ansari, Muhammad Imran and Wasim Ahmad
Sensors 2025, 25(15), 4763; https://doi.org/10.3390/s25154763 - 1 Aug 2025
Abstract
Emotion regulation is essential for mental health. However, many people neglect their own emotional regulation or are deterred by the high cost of psychological counseling, which poses significant challenges to making effective support widely available. This review systematically examines the convergence of multimodal sensing technologies and large language models (LLMs) for the development of Automated Emotional Regulation (AER) systems. The review draws upon a comprehensive analysis of the existing literature, encompassing research papers, technical reports, and relevant theoretical frameworks. Key findings indicate that multimodal sensing offers the potential for rich, contextualized data pertaining to emotional states, while LLMs provide improved capabilities for interpreting these inputs and generating nuanced, empathetic, and actionable regulatory responses. The integration of these technologies, including physiological sensors, behavioral tracking, and advanced LLM architectures, moves AER beyond simpler, rule-based systems towards more adaptive, context-aware, and human-like interventions. Opportunities for personalized interventions, real-time support, and novel applications in mental healthcare and other domains are considerable. However, these prospects are counterbalanced by significant challenges and limitations. In summary, this review synthesizes current technological advancements, identifies substantial opportunities for innovation and application, and critically analyzes the multifaceted technical, ethical, and practical challenges inherent in this domain. It concludes that while the integration of multimodal sensing and LLMs holds significant potential for AER, the field is nascent and requires concerted research efforts to realize its full capacity to enhance human well-being. Full article
(This article belongs to the Section Intelligent Sensors)
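
A minimal sketch of the sensing-to-LLM pattern this review surveys: sensor-derived affective features are summarized into a structured prompt for an LLM that proposes a regulation strategy. The feature names and the `query_llm` stub are illustrative assumptions; no specific model API is implied.

```python
def build_prompt(features: dict) -> str:
    """Turn a sensor-feature summary into an LLM prompt (illustrative)."""
    lines = [f"- {name}: {value}" for name, value in features.items()]
    return ("You are an emotion-regulation assistant. Sensor summary:\n"
            + "\n".join(lines)
            + "\nSuggest one brief, evidence-based regulation exercise.")

def query_llm(prompt: str) -> str:
    """Stub standing in for any LLM backend (hypothetical)."""
    return "(model response here)"

features = {"heart_rate_bpm": 96, "skin_conductance_uS": 7.2,
            "voice_arousal": "elevated", "posture": "slumped"}
print(query_llm(build_prompt(features)))
```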

22 pages, 1470 KiB  
Article
An NMPC-ECBF Framework for Dynamic Motion Planning and Execution in Vision-Based Human–Robot Collaboration
by Dianhao Zhang, Mien Van, Pantelis Sopasakis and Seán McLoone
Machines 2025, 13(8), 672; https://doi.org/10.3390/machines13080672 - 1 Aug 2025
Abstract
To enable safe and effective human–robot collaboration (HRC) in smart manufacturing, it is critical to seamlessly integrate sensing, cognition, and prediction into the robot controller for real-time awareness, response, and communication inside a heterogeneous environment (robots, humans, and equipment). The proposed approach takes advantage of the prediction capabilities of nonlinear model predictive control (NMPC) to execute safe path planning based on feedback from a vision system. To satisfy the requirements of real-time path planning, an embedded solver based on a penalty method is applied. However, due to tight sampling times, NMPC solutions are approximate; therefore, the safety of the system cannot be guaranteed. To address this, we formulate a novel safety-critical paradigm that uses an exponential control barrier function (ECBF) as a safety filter. Several common human–robot assembly subtasks have been integrated into a real-life HRC assembly task to validate the performance of the proposed controller and to investigate whether integrating human pose prediction can help with safe and efficient collaboration. The robot uses OptiTrack cameras for perception and dynamically generates collision-free trajectories to the predicted target interactive position. Results for a number of different configurations confirm the efficiency of the proposed motion planning and execution framework, with a 23.2% reduction in execution time achieved for the HRC task compared to an implementation without human motion prediction. Full article
(This article belongs to the Special Issue Visual Measurement and Intelligent Robotic Manufacturing)
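
A minimal one-dimensional sketch of the ECBF safety-filter idea described above, assuming a double-integrator robot and illustrative gains; the paper's actual NMPC and multi-degree-of-freedom formulation is more involved.

```python
import numpy as np

K1, K2 = 4.0, 4.0   # assumed ECBF gains (critically damped barrier dynamics)
D_SAFE = 0.5        # assumed minimum human-robot distance (m)

def ecbf_filter(p: float, v: float, u_nom: float) -> float:
    """Minimally modify the nominal acceleration so the barrier
    h(p) = p - D_SAFE stays nonnegative. For p' = v, v' = u the ECBF
    condition h'' + K2*h' + K1*h >= 0 reduces to a bound on u, so the
    safety-filter QP has a closed-form clamp solution in one dimension."""
    u_min = -(K1 * (p - D_SAFE) + K2 * v)
    return max(u_nom, u_min)

# Robot 0.8 m from the human, closing at 0.6 m/s, while the planner
# (a stand-in for the NMPC) keeps commanding acceleration toward it.
p, v, dt = 0.8, -0.6, 0.01
for _ in range(300):
    u = ecbf_filter(p, v, u_nom=-1.0)
    v += u * dt
    p += v * dt
print(f"final distance: {p:.3f} m (never drops below {D_SAFE} m)")
```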

18 pages, 3281 KiB  
Article
A Preprocessing Pipeline for Pupillometry Signal from Multimodal iMotion Data
by Jingxiang Ong, Wenjing He, Princess Maglanque, Xianta Jiang, Lawrence M. Gillman, Ashley Vergis and Krista Hardy
Sensors 2025, 25(15), 4737; https://doi.org/10.3390/s25154737 - 31 Jul 2025
Abstract
Pupillometry is commonly used to evaluate cognitive effort, attention, and facial expression response, offering valuable insights into human performance. The combination of eye tracking and facial expression data under the iMotions platform provides great opportunities for multimodal research. However, there is a lack of standardized pipelines for managing pupillometry data on a multimodal platform. Preprocessing pupil data in multimodal platforms poses challenges such as timestamp misalignment, missing data, and inconsistencies across multiple data sources. To address these challenges, the authors introduce a systematic preprocessing pipeline for pupil diameter measurements collected using iMotions 10 (version 10.1.38911.4) during an endoscopy simulation task. The pipeline involves artifact removal, outlier detection using methods such as Median Absolute Deviation (MAD) and Moving Average (MA) filtering, interpolation of missing data using the Piecewise Cubic Hermite Interpolating Polynomial (PCHIP), mean pupil diameter calculation through linear regression, normalization of mean pupil diameter, and integration of the pupil diameter dataset with facial expression data. By following these steps, the pipeline enhances data quality, reduces noise, and facilitates the seamless integration of pupillometry with other multimodal datasets. In conclusion, this pipeline provides a detailed and organized preprocessing method that improves data reliability while preserving important information for further analysis. Full article
(This article belongs to the Section Intelligent Sensors)
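
A hedged sketch of two core steps named in the abstract, MAD-based outlier rejection and PCHIP gap interpolation, using SciPy; the threshold and the subtractive normalization are assumptions, not the authors' exact iMotions settings.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def mad_outliers(x: np.ndarray, thresh: float = 3.5) -> np.ndarray:
    """Flag samples whose robust (MAD-based) z-score exceeds `thresh`."""
    med = np.nanmedian(x)
    mad = np.nanmedian(np.abs(x - med))
    robust_z = 0.6745 * (x - med) / (mad + 1e-12)
    return np.abs(robust_z) > thresh

def preprocess_pupil(t: np.ndarray, diameter: np.ndarray) -> np.ndarray:
    d = np.asarray(diameter, dtype=float)
    d[mad_outliers(d)] = np.nan                         # artifact/outlier removal
    valid = ~np.isnan(d)
    d_clean = PchipInterpolator(t[valid], d[valid])(t)  # shape-preserving fill
    return d_clean - d_clean.mean()                     # subtractive normalization

t = np.linspace(0, 10, 600)        # timestamps from a 60 Hz eye tracker
d = 3.5 + 0.2 * np.sin(t)          # synthetic pupil diameter (mm)
d[100:103] = 8.0                   # simulated blink artifact
print(preprocess_pupil(t, d)[:5].round(3))
```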

54 pages, 1242 KiB  
Review
Optical Sensor-Based Approaches in Obesity Detection: A Literature Review of Gait Analysis, Pose Estimation, and Human Voxel Modeling
by Sabrine Dhaouadi, Mohamed Moncef Ben Khelifa, Ala Balti and Pascale Duché
Sensors 2025, 25(15), 4612; https://doi.org/10.3390/s25154612 - 25 Jul 2025
Abstract
Optical sensor technologies are reshaping obesity detection by enabling non-invasive, dynamic analysis of biomechanical and morphological biomarkers. This review synthesizes recent advances in three key areas: optical gait analysis, vision-based pose estimation, and depth-sensing voxel modeling. Gait analysis leverages optical sensor arrays and video systems to identify obesity-specific deviations, such as reduced stride length and asymmetric movement patterns. Pose estimation algorithms—including markerless frameworks like OpenPose and MediaPipe—track kinematic patterns indicative of postural imbalance and altered locomotor control. Human voxel modeling reconstructs 3D body composition metrics, such as waist–hip ratio, through infrared-depth sensing, offering precise, contactless anthropometry. Despite their potential, challenges persist in sensor robustness under uncontrolled environments, algorithmic biases in diverse populations, and scalability for widespread deployment in existing health workflows. Emerging solutions such as federated learning and edge computing aim to address these limitations by enabling multimodal data harmonization and portable, real-time analytics. Future priorities involve standardizing validation protocols to ensure reproducibility, optimizing cost-efficacy for scalable deployment, and integrating optical systems with wearable technologies for holistic health monitoring. By shifting obesity diagnostics from static metrics to dynamic, multidimensional profiling, optical sensing paves the way for scalable public health interventions and personalized care strategies. Full article
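
As an illustration of the markerless gait analysis the review covers, the sketch below tracks ankle keypoints with MediaPipe Pose and derives a crude stride proxy; the video path and the normalized-coordinate heuristic are assumptions.

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose
cap = cv2.VideoCapture("walking.mp4")   # hypothetical side-view walking clip

ankle_sep = []
with mp_pose.Pose(static_image_mode=False) as pose:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        res = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if res.pose_landmarks:
            lm = res.pose_landmarks.landmark
            # horizontal ankle separation in normalized image coordinates
            ankle_sep.append(abs(lm[mp_pose.PoseLandmark.LEFT_ANKLE].x
                                 - lm[mp_pose.PoseLandmark.RIGHT_ANKLE].x))
cap.release()

# peaks of this signal mark step events; peak height is a camera-dependent
# stride proxy of the kind compared across groups in the surveyed studies
print(f"max ankle separation: {max(ankle_sep):.3f}" if ankle_sep else "no frames")
```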

14 pages, 2434 KiB  
Article
Rapid Detection of VOCs from Pocket Park Surfaces for Health Risk Monitoring Using SnO2/Nb2C Sensors
by Peng Wang, Yuhang Liu, Sheng Hu, Haoran Han, Liangchao Guo and Yan Xiao
Biosensors 2025, 15(7), 457; https://doi.org/10.3390/bios15070457 - 15 Jul 2025
Abstract
Volatile organic compounds (VOCs) emitted by rubber running tracks in parks pose a threat to human health. The current challenge lies in detecting VOC concentrations to ensure they remain below levels harmful to human health. This study developed a low-power acetone gas sensor based on SnO2/Nb2C MXene composites, designed for monitoring acetone gas in pocket park rubber tracks at room temperature. Nb2C MXene was combined with SnO2 nanoparticles through a hydrothermal method, and the results showed that the SnO2/Nb2C MXene composite sensor (SnM-2) exhibited a response value of 146.5% in detecting 1 ppm acetone gas, with a response time of 155 s and a recovery time of 295 s. This performance was significantly better than that of the pure SnO2 sensor, with a 6-fold increase in response value. Additionally, the sensor exhibits excellent selectivity against VOCs such as ethanol, formaldehyde, and isopropanol, with good stability (~20 days) and reversibility (~50). It can accurately recognize acetone gas concentrations and has been successfully used in simulated rubber track environments to provide accurate acetone concentration data. This study provides a feasible solution for monitoring VOCs from rubber tracks and lays the foundation for the development of low-power, high-performance 2D MXene gas sensors. Full article
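
For orientation, a sensor response value like the reported 146.5% is commonly computed from the baseline and gas-exposed resistances; the sketch below uses one common convention with made-up resistance values, and the paper's exact definition may differ.

```python
def response_percent(r_air: float, r_gas: float) -> float:
    """Response S = |Ra - Rg| / Rg * 100, one common convention for
    chemiresistive sensors exposed to a reducing gas such as acetone."""
    return abs(r_air - r_gas) / r_gas * 100.0

# made-up baseline and 1 ppm acetone resistances, for illustration only
print(f"response: {response_percent(r_air=2.4e6, r_gas=0.97e6):.1f} %")
```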

16 pages, 4481 KiB  
Article
Construction and Validation of a Digital Twin-Driven Virtual-Reality Fusion Control Platform for Industrial Robots
by Wenxuan Chang, Wenlei Sun, Pinghui Chen and Huangshuai Xu
Sensors 2025, 25(13), 4153; https://doi.org/10.3390/s25134153 - 3 Jul 2025
Abstract
Traditional industrial robot programming methods often present high barriers to use due to their inherent complexity and lack of standardization. Manufacturers typically employ proprietary programming languages or user interfaces, resulting in steep learning curves and limited interoperability. Moreover, conventional systems generally lack capabilities for remote control and real-time status monitoring. In this study, a novel approach is proposed that integrates digital twin technology with traditional robot control methodologies to establish a virtual–real mapping architecture. A high-precision and efficient digital twin-based control platform for industrial robots is developed using the Unity3D (2022.3.53f1c1) engine, offering enhanced visualization, interaction, and system adaptability. The high-precision twin environment is constructed across three dimensions: the physical layer, the digital layer, and the information fusion layer. The system adopts a socket communication mechanism based on the TCP/IP protocol to enable real-time acquisition of robot state information and synchronous issuance of control commands, establishing a bidirectional virtual–real mapping. A visual human–computer interaction interface is developed on the Unity3D platform, and its user-oriented graphical interface and modular command system effectively lower the barrier to robot use. A welding experiment on a spatially curved part verifies the adaptability and control accuracy of the system in complex trajectory tracking and flexible welding tasks, and the results show that the system achieves high accuracy as well as good interactivity and stability. Full article
(This article belongs to the Section Sensors and Robotics)
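
A minimal Python stand-in for the robot-side half of the described TCP/IP link: a server streams joint states as JSON lines that a Unity3D twin could parse and replay. The port, message schema, and update rate are assumptions.

```python
import json
import socket
import time

HOST, PORT = "0.0.0.0", 9760  # assumed endpoint for the twin link

def serve_joint_state() -> None:
    """Stream robot joint angles as JSON lines to one connected client
    (the Unity3D twin would connect, parse each line, and pose the model)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    with conn:
        t0 = time.time()
        while True:
            # placeholder kinematics; a real controller reads encoders here
            q = [0.1 * (time.time() - t0)] * 6
            msg = json.dumps({"joints": q, "stamp": time.time()}) + "\n"
            conn.sendall(msg.encode())
            time.sleep(0.02)  # ~50 Hz state updates

if __name__ == "__main__":
    serve_joint_state()
```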

26 pages, 8991 KiB  
Article
Learning-Based Variable Admittance Control Combined with NMPC for Contact Force Tracking in Unknown Environments
by Yikun Zhang, Jianjun Yao and Chen Qian
Actuators 2025, 14(7), 323; https://doi.org/10.3390/act14070323 - 30 Jun 2025
Abstract
With the development of robotics, robots are playing an increasingly critical role in complex tasks such as flexible manufacturing, physical human–robot interaction, and intelligent assembly. These tasks place higher demands on the force control performance of robots, particularly in scenarios where the environment is unknown, making constant force control challenging. This study first analyzes the robot and its interaction model with the environment, highlighting the limitations of traditional force control methods in addressing unknown environmental stiffness. Based on this analysis, a variable admittance control strategy is proposed using the deep deterministic policy gradient algorithm, enabling the online tuning of admittance parameters through reinforcement learning. Furthermore, this strategy is integrated with a quaternion-based nonlinear model predictive control scheme, ensuring coordination between pose tracking and constant-force control and enhancing overall control performance. The experimental results demonstrate that the proposed method improves constant force control accuracy and task execution stability, validating the feasibility of the proposed approach. Full article
(This article belongs to the Special Issue Motion Planning, Trajectory Prediction, and Control for Robotics)
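
A one-axis sketch of the variable admittance idea, assuming illustrative gains and a simple spring contact model: the damping parameter is switched by a stand-in rule where the paper's DDPG policy would act.

```python
M, K = 1.0, 0.0     # virtual mass and stiffness (K = 0 for pure force tasks)
f_des = 10.0        # desired contact force (N)
k_env = 2000.0      # unknown environment stiffness (N/m, assumed)

x, xd, dt = 0.0, 0.0, 0.001
for _ in range(5000):
    f_ext = k_env * max(x, 0.0)      # simple spring contact at x = 0
    # stand-in for the DDPG policy: soften damping while force error is large
    B = 80.0 if abs(f_ext - f_des) > 1.0 else 200.0
    xdd = (f_des - f_ext - B * xd - K * x) / M   # admittance dynamics
    xd += xdd * dt
    x += xd * dt
print(f"steady-state contact force: {k_env * max(x, 0.0):.2f} N")
```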

30 pages, 1362 KiB  
Article
Resilient AI in Therapeutic Rehabilitation: The Integration of Computer Vision and Deep Learning for Dynamic Therapy Adaptation
by Egidia Cirillo, Claudia Conte, Alberto Moccardi and Mattia Fonisto
Appl. Sci. 2025, 15(12), 6800; https://doi.org/10.3390/app15126800 - 17 Jun 2025
Abstract
Resilient artificial intelligence (Resilient AI) is relevant in many areas where technology needs to adapt quickly to changing and unexpected conditions, such as in the medical, environmental, security, and agrifood sectors. In the case study involving the therapeutic rehabilitation of patients with motor problems, the Resilient AI system is crucial to ensure that systems can effectively respond to changes, maintain high performance, cope with uncertainties and complex variables, and enable the dynamic monitoring and adaptation of therapy in real time. The proposed system integrates advanced technologies, such as computer vision and deep learning models, focusing on non-invasive solutions for monitoring and adapting rehabilitation therapies. The system combines the Microsoft Kinect v3 sensor with MoveNet Thunder – SinglePose, a state-of-the-art deep-learning model for human pose estimation. Kinect’s 3D skeletal tracking and MoveNet’s high-precision 2D keypoint detection together improve the accuracy and reliability of postural analysis. The main objective is to develop an intelligent system that captures and analyzes a patient’s movements in real time using Motion Capture techniques and artificial intelligence (AI) models to improve the effectiveness of therapies. Computer vision tracks human movement, identifying crucial biomechanical parameters and improving the quality of rehabilitation. Full article
(This article belongs to the Special Issue eHealth Innovative Approaches and Applications: 2nd Edition)
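
A small example of the kind of biomechanical parameter such a system extracts from pose keypoints: a joint angle computed from three 2D points, here made-up normalized coordinates standing in for MoveNet output.

```python
import numpy as np

def joint_angle(a, b, c) -> float:
    """Angle at vertex b (degrees) formed by the segments b-a and b-c."""
    ba = np.asarray(a) - np.asarray(b)
    bc = np.asarray(c) - np.asarray(b)
    cosang = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

# assumed hip/knee/ankle keypoints in normalized image coordinates
hip, knee, ankle = (0.45, 0.50), (0.47, 0.70), (0.44, 0.90)
print(f"knee angle: {joint_angle(hip, knee, ankle):.1f} deg")
# deviation from ~180 deg indicates knee flexion during the exercise
```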

14 pages, 2017 KiB  
Article
The Simulation of Offshore Radioactive Substances Diffusion Based on MIKE21: A Case Study of Jiaozhou Bay
by Zhilin Hu, Feng Ye, Ziao Jiao, Junjun Chen and Junjun Gong
Sustainability 2025, 17(12), 5315; https://doi.org/10.3390/su17125315 - 9 Jun 2025
Abstract
Nuclear accident-derived radionuclide dispersion poses critical challenges to marine ecological sustainability and human–ocean interdependence. While existing studies focus on hydrodynamic modeling of pollutant transport, the link between nuclear safety and sustainable ocean governance remains underexplored. This study investigates radionuclide diffusion patterns in semi-enclosed bays using a high-resolution coupled hydrodynamic particle-tracking model, explicitly addressing threats to marine ecosystem stability and coastal socioeconomic resilience. Simulations revealed that tidal oscillations and topographic constraints prolong pollutant retention by 40% compared to open seas, elevating local concentration peaks by 2–3× and intensifying bioaccumulation risks in benthic organisms. These findings directly inform sustainable marine resource management: the identified high-risk zones enable targeted monitoring of fishery resources, while diffusion pathways guide coastal zoning policies to decouple economic activities from contamination hotspots. Compared to Fukushima’s open-ocean dispersion models, our framework uniquely quantifies how semi-enclosed geomorphology exacerbates localized ecological degradation, providing actionable metrics for balancing nuclear energy development with UN Sustainable Development Goals (SDGs) 14 and 3. By integrating hydrodynamic specificity with ecosystem vulnerability thresholds, this work advances science-based protocols for sustainable nuclear facility siting and marine spatial planning. Full article
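
A toy random-walk particle-tracking analogue of the transport mechanism behind such coupled models (MIKE21 itself is commercial software); the tidal current and diffusivity below are illustrative, not calibrated to Jiaozhou Bay.

```python
import numpy as np

rng = np.random.default_rng(0)
N, dt, steps = 5000, 60.0, 720           # particles, step (s), ~12 h total
D = 5.0                                   # horizontal diffusivity (m^2/s)
T_TIDE = 12.42 * 3600                     # M2 tidal period (s)

pos = np.zeros((N, 2))                    # release all particles at the outfall
for k in range(steps):
    t = k * dt
    # oscillating tidal current plus a weak residual drift (m/s, assumed)
    u = np.array([0.3 * np.sin(2 * np.pi * t / T_TIDE), 0.05])
    # advection step + diffusive random-walk step
    pos += u * dt + rng.normal(0.0, np.sqrt(2 * D * dt), size=(N, 2))

# radius containing 95% of particles approximates the plume extent
r95 = np.quantile(np.linalg.norm(pos, axis=1), 0.95)
print(f"95% plume radius after 12 h: {r95 / 1000:.2f} km")
```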

23 pages, 1664 KiB  
Article
Seeing the Unseen: Real-Time Micro-Expression Recognition with Action Units and GPT-Based Reasoning
by Gabriela Laura Sălăgean, Monica Leba and Andreea Cristina Ionica
Appl. Sci. 2025, 15(12), 6417; https://doi.org/10.3390/app15126417 - 6 Jun 2025
Abstract
This paper presents a real-time system for the detection and classification of facial micro-expressions, evaluated on the CASME II dataset. Micro-expressions are brief and subtle indicators of genuine emotions, posing significant challenges for automatic recognition due to their low intensity, short duration, and inter-subject variability. To address these challenges, the proposed system integrates advanced computer vision techniques, rule-based classification grounded in the Facial Action Coding System, and artificial intelligence components. The architecture employs MediaPipe for facial landmark tracking and action unit extraction, expert rules to resolve common emotional confusions, and deep learning modules for optimized classification. Experimental validation demonstrated a classification accuracy of 93.30% on CASME II, highlighting the effectiveness of the hybrid design. The system also incorporates mechanisms for amplifying weak signals and adapting to new subjects through continuous knowledge updates. These results confirm the advantages of combining domain expertise with AI-driven reasoning to improve micro-expression recognition. The proposed methodology has practical implications for various fields, including clinical psychology, security, marketing, and human-computer interaction, where the accurate interpretation of emotional micro-signals is essential. Full article
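
A minimal illustration of the rule-based FACS layer described: action-unit intensities are mapped to emotion labels by textbook AU combinations; the threshold and the small rule set are illustrative, not the paper's expert rules.

```python
FACS_RULES = [
    ({"AU6", "AU12"}, "happiness"),        # cheek raiser + lip-corner puller
    ({"AU1", "AU2", "AU5"}, "surprise"),   # brow raisers + upper-lid raiser
    ({"AU4", "AU5", "AU7"}, "anger"),      # brow lowerer + lid raiser/tightener
]

def classify(au_intensities: dict, threshold: float = 0.3) -> str:
    """Return the first emotion whose required AUs all fire above threshold."""
    active = {au for au, v in au_intensities.items() if v >= threshold}
    for required, label in FACS_RULES:
        if required <= active:
            return label
    return "neutral"

print(classify({"AU6": 0.5, "AU12": 0.4, "AU4": 0.1}))  # -> happiness
```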

27 pages, 22376 KiB  
Article
Performance Evaluation of Monocular Markerless Pose Estimation Systems for Industrial Exoskeletons
by Soocheol Yoon, Ya-Shian Li-Baboud, Ann Virts, Roger Bostelman, Mili Shah and Nishat Ahmed
Sensors 2025, 25(9), 2877; https://doi.org/10.3390/s25092877 - 2 May 2025
Cited by 1
Abstract
Industrial exoskeletons (a.k.a. wearable robots) have been developed to reduce musculoskeletal fatigue and work injuries. Human joint kinematics and human–robot alignment are important measurements in understanding the effects of industrial exoskeletons. Recently, markerless pose estimation systems based on monocular color (red, green, blue—RGB) and depth cameras have been used to estimate human joint positions. This study analyzes the performance of monocular markerless pose estimation systems on human skeletal joint estimation while wearing exoskeletons. Two pose estimation systems producing RGB and depth images from ten viewpoints are evaluated for one subject in 14 industrial poses. The experiment was repeated for three different types of exoskeletons on the same subject. An optical tracking system (OTS) was used as a reference system. The image acceptance rate was 56% for the RGB, 22% for the depth, and 78% for the OTS pose estimation system. The key sources of pose estimation error were occlusions from the exoskeletons, industrial poses, and viewpoints. The reference system showed decreased performance when the optical markers were occluded by the exoskeleton or when the markers' position shifted with the exoskeleton. This study performs a systematic comparison of two types of monocular markerless pose estimation systems and an optical tracking system, and proposes a metric based on a tracking quality ratio to assess whether a skeletal joint estimate would be acceptable for human kinematics analysis in exoskeleton studies. Full article
(This article belongs to the Special Issue Wearable Robotics and Assistive Devices)
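
A sketch in the spirit of this evaluation, on synthetic data: per-joint errors against an OTS reference are thresholded, and a frame is accepted when enough joints track; the tolerance and quota are assumptions, since the paper defines its own tracking quality ratio.

```python
import numpy as np

rng = np.random.default_rng(1)
F, J = 500, 17                        # frames, skeletal joints
ref = rng.uniform(-1, 1, (F, J, 3))   # stand-in OTS joint positions (m)
est = ref + rng.normal(0, 0.04, (F, J, 3))  # markerless estimates + noise
est[:, 5] += 0.3                      # one joint occluded by the exoskeleton

err = np.linalg.norm(est - ref, axis=2)     # per-frame, per-joint error (m)
ok_joints = err < 0.10                      # 10 cm tolerance (assumed)
accepted = ok_joints.mean(axis=1) >= 0.8    # accept frame if >=80% joints track
print(f"acceptance rate: {accepted.mean():.0%}")
print(f"per-joint quality ratio: {ok_joints.mean(axis=0).round(2)}")
```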

27 pages, 32676 KiB  
Article
Action Recognition via Multi-View Perception Feature Tracking for Human–Robot Interaction
by Chaitanya Bandi and Ulrike Thomas
Robotics 2025, 14(4), 53; https://doi.org/10.3390/robotics14040053 - 19 Apr 2025
Abstract
Human–Robot Interaction (HRI) depends on robust perception systems that enable intuitive and seamless interaction between humans and robots. This work introduces a multi-view perception framework designed for HRI, incorporating object detection and tracking, human body and hand pose estimation, unified hand–object pose estimation, and action recognition. We use a state-of-the-art detection architecture to understand the scene through object detection and segmentation, ensuring high accuracy and real-time performance. Since interaction environments require 3D whole-body pose estimation, we integrate an existing method with high inference speed. We propose a novel architecture for 3D unified hand–object pose estimation and tracking, capturing real-time spatial relationships between hands and objects. Furthermore, we incorporate action recognition by leveraging whole-body pose, unified hand–object pose estimation, and object tracking to determine the handover interaction state. The proposed architecture is evaluated on large-scale, open-source datasets, demonstrating competitive accuracy and faster inference times, making it well-suited for real-time HRI applications. Full article
(This article belongs to the Special Issue Human–AI–Robot Teaming (HART))
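
A toy sketch of the final fusion step mentioned, deriving a handover interaction state from hand and object tracks; the states and distance thresholds are assumptions for illustration.

```python
import numpy as np

def handover_state(hand_xyz, obj_xyz, obj_held_by_robot: bool) -> str:
    """Fuse hand and object tracks into a coarse interaction state."""
    d = float(np.linalg.norm(np.asarray(hand_xyz) - np.asarray(obj_xyz)))
    if obj_held_by_robot and d < 0.05:
        return "transfer"   # human hand has reached the offered object
    if obj_held_by_robot and d < 0.30:
        return "reach"      # human hand approaching the object
    return "idle"

print(handover_state((0.10, 0.00, 0.20), (0.12, 0.02, 0.21), True))  # transfer
```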

21 pages, 85270 KiB  
Article
Multi-Humanoid Robot Arm Motion Imitation and Collaboration Based on Improved Retargeting
by Xisheng Jiang, Baolei Wu, Simin Li, Yongtong Zhu, Guoxiang Liang, Ye Yuan, Qingdu Li and Jianwei Zhang
Biomimetics 2025, 10(3), 190; https://doi.org/10.3390/biomimetics10030190 - 19 Mar 2025
Cited by 1
Abstract
Human–robot interaction (HRI) is a key technology in the field of humanoid robotics, and motion imitation is one of the most direct ways to achieve efficient HRI. However, due to significant differences in structure, range of motion, and joint torques between the human body and robots, motion imitation remains a challenging task. Traditional retargeting algorithms, while effective in mapping human motion to robots, typically either ensure similarity in arm configuration (joint space-based) or focus solely on tracking the end-effector position (Cartesian space-based). This creates a conflict between the liveliness and accuracy of robot motion. To address this issue, this paper proposes an improved retargeting algorithm that ensures both the similarity of the robot’s arm configuration to that of the human body and accurate end-effector position tracking. Additionally, a multi-person pose estimation algorithm is introduced, enabling real-time capture of multiple imitators’ movements using a single RGB-D camera. The captured motion data are used as input to the improved retargeting algorithm, enabling multi-robot collaboration tasks. Experimental results demonstrate that the proposed algorithm effectively ensures consistency in arm configuration and precise end-effector position tracking. Furthermore, the collaborative experiments validate the generalizability of the improved retargeting algorithm and the superior real-time performance of the multi-person pose estimation algorithm. Full article
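
A toy two-link version of the trade-off the improved retargeting resolves, implemented as a weighted objective over Cartesian end-effector tracking and joint-space similarity; link lengths, weights, and targets are illustrative, not the paper's humanoid model.

```python
import numpy as np
from scipy.optimize import minimize

L1 = L2 = 0.3                        # robot link lengths (m, assumed)

def fk(q):
    """Planar 2-link forward kinematics: joint angles -> hand position."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

q_human = np.array([0.6, 0.9])       # captured human arm configuration (rad)
p_target = fk(q_human) + np.array([0.02, -0.01])  # desired hand position

def cost(q, w_cart=1.0, w_joint=0.1):
    # Cartesian tracking term + joint-space (arm configuration) similarity term
    return (w_cart * np.sum((fk(q) - p_target) ** 2)
            + w_joint * np.sum((q - q_human) ** 2))

q_star = minimize(cost, x0=q_human).x
print("retargeted joints:", q_star.round(3),
      "| end-effector error:", np.linalg.norm(fk(q_star) - p_target).round(4))
```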

23 pages, 6311 KiB  
Article
Green-Engineered Montmorillonite Clays for the Adsorption, Detoxification, and Mitigation of Aflatoxin B1 Toxicity
by Johnson O. Oladele, Xenophon Xenophontos, Gustavo M. Elizondo, Yash Daasari, Meichen Wang, Phanourios Tamamis, Natalie M. Johnson and Timothy D. Phillips
Toxins 2025, 17(3), 131; https://doi.org/10.3390/toxins17030131 - 11 Mar 2025
Cited by 2
Abstract
Dietary and environmental exposure to aflatoxins via contaminated food items can pose major health challenges to both humans and animals. Studies have reported the coexistence of aflatoxins and other environmental toxins. This emphasizes the urgent need for efficient and effective mitigation strategies for aflatoxins. Previous reports from our laboratory have demonstrated the potency of green-engineered clays (GECs) on ochratoxin and other toxic chemicals. Therefore, this study sought to investigate the binding and detoxification potential of chlorophyll (CMCH and SMCH) and chlorophyllin (CMCHin and SMCHin)-amended montmorillonite clays for aflatoxin B1 (AFB1). In addition to analyzing binding metrics including affinity, capacity, free energy, and enthalpy, the sorption mechanisms of AFB1 onto the surfaces of engineered clays were also investigated. Computational and experimental studies were performed to validate the efficacy and safety of the clays. CMCH showed the highest binding capacity (Qmax) of 0.43 mol/kg compared to the parent clays CM (0.34 mol/kg) and SM (0.32 mol/kg). Interestingly, there were no significant changes in the binding capacity of the clays at pH 2 and pH 6, suggesting that the clays can bind AFB1 throughout the gastrointestinal tract. In silico investigations employing molecular dynamics simulations also demonstrated that CMCH enhanced AFB1 binding compared to the parent clay and predicted hydrophobic interactions as the main mode of interaction between AFB1 and CMCH. This was corroborated by the kinetic results, which indicated that the interaction was best defined by chemisorption with favorable thermodynamics, the Gibbs free energy (∆G) being negative. In vitro experiments in HepG2 cells showed that clay treatment mitigated AFB1-induced cytotoxicity, with the exception of 0.5% (w/v) SMCH. Finally, the in vivo results validated the protection of all the clays against AFB1-induced toxicities in Hydra vulgaris. This study showed that these clays significantly detoxified AFB1 (86% to 100%) and provided complete protection at levels as low as 0.1%, suggesting that they may be used as AFB1 binders in feed and food. Full article
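
For context, a binding capacity Qmax like the reported 0.43 mol/kg is typically obtained by fitting a Langmuir isotherm to equilibrium sorption data; the sketch below fits synthetic data generated around assumed parameters, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, q_max, k):
    """Langmuir isotherm: q = Qmax * K * c / (1 + K * c)."""
    return q_max * k * c / (1.0 + k * c)

# synthetic equilibrium concentrations (mol/L) and sorbed amounts (mol/kg)
c = np.array([0.5, 1, 2, 4, 8, 16, 32]) * 1e-5
rng = np.random.default_rng(3)
q = langmuir(c, 0.43, 2.0e5) * rng.normal(1.0, 0.03, c.size)  # 3% noise

(q_max, k), _ = curve_fit(langmuir, c, q, p0=(0.4, 1e5))
print(f"fitted Qmax = {q_max:.2f} mol/kg, K = {k:.2e} L/mol")
```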

43 pages, 5343 KiB  
Review
Wearable and Flexible Sensor Devices: Recent Advances in Designs, Fabrication Methods, and Applications
by Shahid Muhammad Ali, Sima Noghanian, Zia Ullah Khan, Saeed Alzahrani, Saad Alharbi, Mohammad Alhartomi and Ruwaybih Alsulami
Sensors 2025, 25(5), 1377; https://doi.org/10.3390/s25051377 - 24 Feb 2025
Cited by 10
Abstract
The development of wearable sensor devices brings significant benefits to patients by offering real-time healthcare via wireless body area networks (WBANs). These wearable devices have gained significant traction due to advantageous features, including their lightweight nature, comfortable feel, stretchability, flexibility, low power consumption, and cost-effectiveness. Wearable devices play a pivotal role in healthcare, defence, sports, health monitoring, disease detection, and subject tracking. However, the irregular nature of the human body poses a significant challenge in the design of such wearable systems. This manuscript provides a comprehensive review of recent advancements in wearable and flexible smart sensor devices that can support the next generation of such sensor devices. Further, the development of direct ink writing (DIW) and direct writing (DW) methods has revolutionised new high-resolution integrated smart structures, enabling the design of next-generation soft, flexible, and stretchable wearable sensor devices. Recognising the importance of keeping academia and industry informed about cutting-edge technology and time-efficient fabrication tools, this manuscript also provides a thorough overview of the latest progress in various fabrication methods for wearable sensor devices utilised in WBAN and their evaluation using body phantoms. An overview of emerging challenges and future research directions is also discussed in the conclusion. Full article
