Search Results (410)

Search Parameters:
Keywords = wearable interaction device

22 pages, 1688 KB  
Article
LumiCare: A Context-Aware Mobile System for Alzheimer’s Patients Integrating AI Agents and 6G
by Nicola Dall’Ora, Lorenzo Felli, Stefano Aldegheri, Nicola Vicino and Romeo Giuliano
Electronics 2025, 14(17), 3516; https://doi.org/10.3390/electronics14173516 - 2 Sep 2025
Abstract
Alzheimer’s disease is a growing global health concern, demanding innovative solutions for early detection, continuous monitoring, and patient support. This article reviews recent advances in Smart Wearable Medical Devices (SWMDs), Internet of Things (IoT) systems, and mobile applications used to monitor physiological, behavioral, and cognitive changes in Alzheimer’s patients. We highlight the role of wearable sensors in detecting vital signs, falls, and geolocation data, alongside IoT architectures that enable real-time alerts and remote caregiver access. Building on these technologies, we present LumiCare, a conceptual, context-aware mobile system that integrates multimodal sensor data, chatbot-based interaction, and emerging 6G network capabilities. LumiCare uses machine learning for behavioral analysis, delivers personalized cognitive prompts, and enables emergency response through adaptive alerts and caregiver notifications. The system includes the LumiCare Companion, an interactive mobile app designed to support daily routines, cognitive engagement, and safety monitoring. By combining local AI processing with scalable edge-cloud architectures, LumiCare balances latency, privacy, and computational load. While promising, this work remains at the design stage and has not yet undergone clinical validation. Our analysis underscores the potential of wearable, IoT, and mobile technologies to improve the quality of life for Alzheimer’s patients, support caregivers, and reduce healthcare burdens. Full article
(This article belongs to the Special Issue Smart Bioelectronics, Wearable Systems and E-Health)

47 pages, 15579 KB  
Article
Geometric Symmetry and Temporal Optimization in Human Pose and Hand Gesture Recognition for Intelligent Elderly Individual Monitoring
by Pongsarun Boonyopakorn and Mahasak Ketcham
Symmetry 2025, 17(9), 1423; https://doi.org/10.3390/sym17091423 - 1 Sep 2025
Abstract
This study introduces a real-time, non-intrusive monitoring system designed to support elderly care through vision-based pose estimation and hand gesture recognition. The proposed framework integrates convolutional neural networks (CNNs), temporal modeling using LSTM networks, and symmetry-aware keypoint analysis to enhance the accuracy and reliability of behavior detection under varied real-world conditions. By leveraging the bilateral symmetry of human anatomy, the system improves the robustness of posture and gesture classification, even in the presence of partial occlusion or variable lighting. A total of 21 hand landmarks and 33 body pose points are used to recognize predefined actions and communication gestures, enabling seamless interaction without wearable devices. Experimental evaluations across four distinct lighting environments confirm a consistent accuracy above 90%, with real-time alerts triggered via IoT messaging platforms. The system’s modular architecture, interpretability, and adaptability make it a scalable solution for intelligent elderly individual monitoring, offering a novel application of spatial symmetry and optimized deep learning in healthcare technology. Full article
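The symmetry-aware keypoint analysis above exploits the bilateral symmetry of human anatomy. As a rough sketch of the idea, not the authors' implementation (the keypoint pairs, midline parameter, and error metric below are illustrative assumptions), one can score how well left-side keypoints mirror their right-side partners across a vertical midline:

```python
import numpy as np

def symmetry_error(keypoints, pairs, axis_x):
    """Mean distance between each left-side keypoint and the mirror image
    of its right-side partner across the vertical midline x = axis_x.
    Returns 0.0 for a perfectly bilaterally symmetric pose."""
    errors = []
    for left_idx, right_idx in pairs:
        left = np.asarray(keypoints[left_idx], dtype=float)
        right = np.asarray(keypoints[right_idx], dtype=float)
        mirrored = np.array([2.0 * axis_x - right[0], right[1]])  # reflect x only
        errors.append(np.linalg.norm(left - mirrored))
    return float(np.mean(errors))

# Toy pose (normalized coordinates): shoulders and hips mirrored about x = 0.5
pose = {0: (0.25, 0.5), 1: (0.75, 0.5),    # left / right shoulder
        2: (0.375, 0.8), 3: (0.625, 0.8)}  # left / right hip
print(symmetry_error(pose, [(0, 1), (2, 3)], axis_x=0.5))  # 0.0
```

A low mirrored-distance error supports robust posture classification under partial occlusion, since a missing keypoint can be approximated from its reflected partner.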

26 pages, 3346 KB  
Article
Virtual Reality as a Stress Measurement Platform: Real-Time Behavioral Analysis with Minimal Hardware
by Audrey Rah and Yuhua Chen
Sensors 2025, 25(17), 5323; https://doi.org/10.3390/s25175323 - 27 Aug 2025
Abstract
With the growing use of digital technologies and interactive games, there is rising interest in how people respond to challenges, stress, and decision-making in virtual environments. Studying human behavior in such settings helps to improve design, training, and user experience. Instead of relying on complex devices, Virtual Reality (VR) creates new ways to observe and understand these responses in a simple and engaging format. This study introduces a lightweight method for monitoring stress levels that uses VR as the primary sensing platform. Detection relies on behavioral signals from VR. A minimal sensor such as Galvanic Skin Response (GSR), which measures skin conductance as a sign of physiological body response, supports the Sensor-Assisted Unity Architecture. The proposed Sensor-Assisted Unity Architecture focuses on analyzing the user’s behavior inside the virtual environment along with physical sensory measurements. Most existing systems rely on physiological wearables, which add both cost and complexity. The Sensor-Assisted Unity Architecture shifts the focus to behavioral analysis in VR supplemented by minimal physiological input. Behavioral cues captured within the VR environment are analyzed in real time by an embedded processor, which then triggers simple physical feedback. Results show that combining VR behavioral data with a minimal sensor can improve detection in cases where behavioral or physiological signals alone may be insufficient. While this study does not quantitatively compare the Sensor-Assisted Unity Architecture to multi-sensor setups, it highlights VR as the main platform, with sensor input offering targeted enhancements without significantly increasing system complexity. Full article
(This article belongs to the Special Issue Virtual Reality and Sensing Techniques for Human)

22 pages, 1706 KB  
Review
Integrating Precision Medicine and Digital Health in Personalized Weight Management: The Central Role of Nutrition
by Xiaoguang Liu, Miaomiao Xu, Huiguo Wang and Lin Zhu
Nutrients 2025, 17(16), 2695; https://doi.org/10.3390/nu17162695 - 20 Aug 2025
Abstract
Obesity is a global health challenge marked by substantial inter-individual differences in responses to dietary and lifestyle interventions. Traditional weight loss strategies often overlook critical biological variations in genetics, metabolic profiles, and gut microbiota composition, contributing to poor adherence and variable outcomes. Our primary aim is to identify key biological and behavioral effectors relevant to precision medicine for weight control, with a particular focus on nutrition, while also discussing their current and potential integration into digital health platforms. Thus, this review aligns more closely with the identification of influential factors within precision medicine (e.g., genetic, metabolic, and microbiome factors) but also explores how these factors are currently integrated into digital health tools. We synthesize recent advances in nutrigenomics, nutritional metabolomics, and microbiome-informed nutrition, highlighting how tailored dietary strategies—such as high-protein, low-glycemic, polyphenol-enriched, and fiber-based diets—can be aligned with specific genetic variants (e.g., FTO and MC4R), metabolic phenotypes (e.g., insulin resistance), and gut microbiota profiles (e.g., Akkermansia muciniphila abundance, SCFA production). In parallel, digital health tools—including mobile health applications, wearable devices, and AI-supported platforms—enhance self-monitoring, adherence, and dynamic feedback in real-world settings. Mechanistic pathways such as gut–brain axis regulation, microbial fermentation, gene–diet interactions, and anti-inflammatory responses are explored to explain inter-individual differences in dietary outcomes. However, challenges such as cost, accessibility, and patient motivation remain and should be addressed to ensure the effective implementation of these integrated strategies in real-world settings. Collectively, these insights underscore the pivotal role of precision nutrition as a cornerstone for personalized, scalable, and sustainable obesity interventions. Full article
(This article belongs to the Section Nutrition and Public Health)

27 pages, 7152 KB  
Review
Application of Large AI Models in Safety and Emergency Management of the Power Industry in China
by Wenxiang Guang, Yin Yuan, Shixin Huang, Fan Zhang, Jingyi Zhao and Fan Hu
Processes 2025, 13(8), 2569; https://doi.org/10.3390/pr13082569 - 14 Aug 2025
Abstract
Under the framework of the “dual-carbon” goals of China (“carbon peak” by 2030 and “carbon neutrality” by 2060), the escalating complexity of emerging power systems presents significant challenges to safety governance. Traditional management models are now confronting bottlenecks, notably in knowledge inheritance breakdown and lagging risk prevention and control. This paper explores the application of large AI models in safety and emergency management in the power industry. Through core capabilities—such as natural language processing (NLP), knowledge reasoning, multimodal interaction, and auxiliary decision making—it achieves full-process optimization from data fusion to intelligent decision making. The study, anchored by 18 cases across five core scenarios, identifies three-dimensional challenges (including “soft”—dimension computing power, algorithm, and data bottlenecks; “hard”—dimension inspection equipment and wearable device constraints; and “risk”—dimension responsibility ambiguity, data bias accumulation, and model “hallucination” risks). It further outlines future directions for large-AI-model application innovation in power industry safety and management from a four-pronged outlook, covering technology, computing power, management, and macro-level perspectives. This work aims to provide theoretical and practical guidance for the industry’s shift from “passive response” to “intelligent proactive prevention”, leveraging quantified scenario-case analysis. Full article

11 pages, 697 KB  
Data Descriptor
A Multi-Sensor Dataset for Human Activity Recognition Using Inertial and Orientation Data
by Jhonathan L. Rivas-Caicedo, Laura Saldaña-Aristizabal, Kevin Niño-Tejada and Juan F. Patarroyo-Montenegro
Data 2025, 10(8), 129; https://doi.org/10.3390/data10080129 - 14 Aug 2025
Abstract
Human Activity Recognition (HAR) using wearable sensors is an increasingly relevant area for applications in healthcare, rehabilitation, and human–computer interaction. However, publicly available datasets that provide multi-sensor, synchronized data combining inertial and orientation measurements are still limited. This work introduces a publicly available dataset for Human Activity Recognition, captured using wearable sensors placed on the chest, hands, and knees. Each device recorded inertial and orientation data during controlled activity sessions involving participants aged 20 to 70. A standardized acquisition protocol ensured consistent temporal alignment across all signals. The dataset was preprocessed and segmented using a sliding window approach. An initial baseline classification experiment, employing a Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) model, demonstrated an average accuracy of 93.5% in classifying activities. The dataset is publicly available in CSV format and includes raw sensor signals, activity labels, and metadata. This dataset offers a valuable resource for evaluating machine learning models, studying distributed HAR approaches, and developing robust activity recognition pipelines utilizing wearable technologies. Full article
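The sliding-window segmentation mentioned above can be sketched as follows. Window length and overlap here are arbitrary illustrative choices, not the dataset's published preprocessing parameters:

```python
import numpy as np

def sliding_windows(signal, win_len, step):
    """Segment a (T, C) multi-channel signal into overlapping windows.

    Returns an array of shape (num_windows, win_len, C); trailing samples
    that do not fill a complete window are dropped.
    """
    signal = np.asarray(signal)
    starts = range(0, signal.shape[0] - win_len + 1, step)
    return np.stack([signal[s:s + win_len] for s in starts])

# 10 s of placeholder 3-axis accelerometer data at 50 Hz, 2 s windows, 50% overlap
data = np.random.randn(500, 3)
windows = sliding_windows(data, win_len=100, step=50)
print(windows.shape)  # (9, 100, 3)
```

Each window then becomes one training example for the CNN-LSTM baseline, paired with the activity label of its time span.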

34 pages, 11523 KB  
Article
Hand Kinematic Model Construction Based on Tracking Landmarks
by Yiyang Dong and Shahram Payandeh
Appl. Sci. 2025, 15(16), 8921; https://doi.org/10.3390/app15168921 - 13 Aug 2025
Abstract
Visual body-tracking techniques have seen widespread adoption in applications such as motion analysis, human–machine interaction, tele-robotics and extended reality (XR). These systems typically provide 2D landmark coordinates corresponding to key limb positions. However, to construct a meaningful 3D kinematic model for body joint reconstruction, a mapping must be established between these visual landmarks and the underlying joint parameters of individual body parts. This paper presents a method for constructing a 3D kinematic model of the human hand using calibrated 2D landmark-tracking data augmented with depth information. The proposed approach builds a hierarchical model in which the palm serves as the root coordinate frame, and finger landmarks are used to compute both forward and inverse kinematic solutions. Through step-by-step examples, we demonstrate how measured hand landmark coordinates are used to define the palm reference frame and solve for joint angles for each finger. These solutions are then used in a visualization framework to qualitatively assess the accuracy of the reconstructed hand motion. As a future work, the proposed model offers a foundation for model-based hand kinematic estimation and has utility in scenarios involving occlusion or missing data. In such cases, the hierarchical structure and kinematic solutions can be used as generative priors in an optimization framework to estimate unobserved landmark positions and joint configurations. The novelty of this work lies in its model-based approach using real sensor data, without relying on wearable devices or synthetic assumptions. Although current validation is qualitative, the framework provides a foundation for future robust estimation under occlusion or sensor noise. It may also serve as a generative prior for optimization-based methods and be quantitatively compared with joint measurements from wearable motion-capture systems. Full article
(This article belongs to the Special Issue Human Activity Recognition (HAR) in Healthcare, 3rd Edition)
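As an illustration of how a palm root coordinate frame might be derived from measured landmarks (the landmark choice and axis conventions below are assumptions, not the paper's exact formulation), three non-collinear palm points suffice to fix an orthonormal frame:

```python
import numpy as np

def palm_frame(wrist, index_base, pinky_base):
    """Orthonormal palm frame from three 3D hand landmarks.

    x-axis: wrist -> index-finger base; z-axis: palm normal (from the cross
    product of the two palm edge vectors); y-axis completes a right-handed
    frame. Returns a 3x3 rotation matrix whose columns are the frame axes
    expressed in camera coordinates.
    """
    w, i, p = (np.asarray(v, dtype=float) for v in (wrist, index_base, pinky_base))
    x = (i - w) / np.linalg.norm(i - w)
    normal = np.cross(i - w, p - w)          # perpendicular to the palm plane
    z = normal / np.linalg.norm(normal)
    y = np.cross(z, x)                       # completes the right-handed frame
    return np.column_stack([x, y, z])

R = palm_frame([0, 0, 0], [1, 0, 0], [0, 1, 0])
print(R)  # identity for these axis-aligned landmarks
```

Finger joint angles can then be solved in this root frame, which is what makes forward and inverse kinematic solutions well defined regardless of global hand pose.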

24 pages, 10671 KB  
Article
Evaluating Cultural Heritage Preservation Through Augmented Reality: Insights from the Kaisareia-AR Application
by Hatice Dogan Turkoglu and Nese Cakıcı Alp
Architecture 2025, 5(3), 59; https://doi.org/10.3390/architecture5030059 - 11 Aug 2025
Abstract
This study investigates how augmented-reality (AR) technology can enhance the presentation and preservation of cultural heritage, using Kayseri Castle as a case study. Although previous studies have explored AR applications in heritage contexts, few have addressed their role in representing multi-layered architectural histories of complex sites. The research focuses on the development and evaluation of the KAISAREIA-AR application, which integrates historical, architectural, and cultural narratives into an interactive AR platform. By reconstructing the castle’s distinct historical layers—spanning the Roman, Seljuk, Ottoman, and Republic periods—the study seeks to assess AR’s effectiveness in providing immersive visitor experiences while maintaining the authenticity of heritage sites. Three-dimensional models of the castle were created using 3ds Max, enriched with visual and auditory information, and deployed via Unity software on wearable AR devices. The study employed the Unified Theory of Acceptance and Use of Technology (UTAUT) framework and Structural Equation Modeling (SEM) to evaluate the application’s usability and impact on user engagement. The findings indicate that AR significantly enhances the accessibility, understanding, and appreciation of cultural heritage by providing dynamic, immersive experiences. The KAISAREIA-AR application demonstrated its potential to bridge historical authenticity with modern technology, offering a replicable model for integrating AR into cultural heritage conservation and education. Full article

22 pages, 706 KB  
Article
Technological Innovation and the Role of Smart Surveys in the Industrial Context
by Massimiliano Giacalone, Chiara Marciano, Claudia Pipino, Gianfranco Piscopo and Stefano Marra
Appl. Sci. 2025, 15(16), 8832; https://doi.org/10.3390/app15168832 - 11 Aug 2025
Abstract
Technological innovation has significantly transformed the field of statistics, not only in data analysis but also in data collection. Traditional methods based on direct observation have evolved into hybrid approaches that combine passively collected data (e.g., from GPS or accelerometers) with active user input through digital interfaces. This evolution has led to Smart Surveys—next-generation tools that leverage smart devices, such as smartphones and wearables, to collect data actively (via questionnaires or images) and passively (via embedded sensors). Smart Surveys offer strategic value in industrial contexts by enabling real-time data collection on worker behavior, environments, and operational conditions. However, the heterogeneity of such data poses challenges in management, integration, and quality assurance. This study proposes a modular system architecture incorporating gamification elements to enhance user participation, particularly among hard-to-reach worker segments, such as mobile or shift workers. By leveraging motivational strategies and interactive feedback mechanisms, the system seeks to foster greater engagement while addressing critical data security and privacy concerns within industrial Internet of Things (IoT) environments. Full article
(This article belongs to the Special Issue Applications of Industrial Internet of Things (IIoT) Platforms)

45 pages, 10039 KB  
Article
Design of an Interactive System by Combining Affective Computing Technology with Music for Stress Relief
by Chao-Ming Wang and Ching-Hsuan Lin
Electronics 2025, 14(15), 3087; https://doi.org/10.3390/electronics14153087 - 1 Aug 2025
Abstract
In response to the stress commonly experienced by young people in high-pressure daily environments, a music-based stress-relief interactive system was developed by integrating music-assisted care with emotion-sensing technology. The design principles of the system were established through a literature review on stress, music listening, emotion detection, and interactive devices. A prototype was created accordingly and refined through interviews with four experts and eleven users participating in a preliminary experiment. The system is grounded in a four-stage guided imagery and music framework, along with a static activity model focused on relaxation-based stress management. Emotion detection was achieved using a wearable EEG device (NeuroSky’s MindWave Mobile device) and a two-dimensional emotion model, and the emotional states were translated into visual representations using seasonal and weather metaphors. A formal experiment involving 52 users was conducted. The system was evaluated, and its effectiveness confirmed, through user interviews and questionnaire surveys, with statistical analysis conducted using SPSS 26 and AMOS 23. The findings reveal that: (1) integrating emotion sensing with music listening creates a novel and engaging interactive experience; (2) emotional states can be effectively visualized using nature-inspired metaphors, enhancing user immersion and understanding; and (3) the combination of music listening, guided imagery, and real-time emotional feedback successfully promotes emotional relaxation and increases self-awareness. Full article
(This article belongs to the Special Issue New Trends in Human-Computer Interactions for Smart Devices)
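The system translates points in a two-dimensional valence-arousal emotion model into seasonal metaphors. The abstract does not specify the mapping, so the quadrant assignment below is purely hypothetical, shown only to make the idea concrete:

```python
def season_metaphor(valence, arousal):
    """Map a (valence, arousal) point to a seasonal metaphor.
    The quadrant assignments are illustrative guesses, not the
    mapping used in the paper."""
    if valence >= 0 and arousal >= 0:
        return "summer"   # excited / joyful
    if valence >= 0:
        return "spring"   # calm / content
    if arousal >= 0:
        return "winter"   # tense / angry
    return "autumn"       # low-energy / sad

print(season_metaphor(0.6, 0.7))    # summer
print(season_metaphor(-0.5, -0.2))  # autumn
```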

14 pages, 4639 KB  
Article
CNTs/CNPs/PVA–Borax Conductive Self-Healing Hydrogel for Wearable Sensors
by Chengcheng Peng, Ziyan Shu, Xinjiang Zhang and Cailiu Yin
Gels 2025, 11(8), 572; https://doi.org/10.3390/gels11080572 - 23 Jul 2025
Abstract
The development of multifunctional conductive hydrogels with rapid self-healing capabilities and powerful sensing functions is crucial for advancing wearable electronics. This study designed and prepared a polyvinyl alcohol (PVA)–borax hydrogel incorporating carbon nanotubes (CNTs) and biomass carbon nanospheres (CNPs) as dual-carbon fillers. This hydrogel exhibits excellent conductivity, mechanical flexibility, and self-recovery properties. Serving as a highly sensitive piezoresistive sensor, it efficiently converts mechanical stimuli into reliable electrical signals. Sensing tests demonstrate that the CNT/CNP/PVA–borax hydrogel sensor possesses an extremely fast response time (88 ms) and rapid recovery time (88 ms), enabling the detection of subtle and rapid human motions. Furthermore, the hydrogel sensor also exhibits outstanding cyclic stability, maintaining stable signal output throughout continuous loading–unloading cycles exceeding 3200 repetitions. The hydrogel sensor’s characteristics, including rapid self-healing, fast-sensing response/recovery, and high fatigue resistance, make the CNT/CNP/PVA–borax conductive hydrogel an ideal choice for multifunctional wearable sensors. It successfully monitored various human motions. This study provides a promising strategy for high-performance self-healing sensing devices, suitable for next-generation wearable health monitoring and human–machine interaction systems. Full article

16 pages, 1434 KB  
Article
Utilizing Tympanic Membrane Temperature for Earphone-Based Emotion Recognition
by Kaita Furukawa, Xinyu Shui, Ming Li and Dan Zhang
Sensors 2025, 25(14), 4411; https://doi.org/10.3390/s25144411 - 15 Jul 2025
Abstract
Emotion recognition by wearable devices is essential for advancing emotion-aware human–computer interaction in real life. Earphones have the potential to naturally capture brain activity and its lateralization, which is associated with emotion. In this study, we newly introduced tympanic membrane temperature (TMT), previously used as an index of lateralized brain activation, for earphone-based emotion recognition. We developed custom earphones to measure bilateral TMT and conducted two experiments consisting of emotion induction by autobiographical recall and scenario imagination. Using features derived from the right–left TMT difference, we trained classifiers for both four-class discrete emotion and valence (positive vs. negative) classification tasks. The classifiers achieved 36.2% and 42.5% accuracy for four-class classification and 72.5% and 68.8% accuracy for binary classification, respectively, in the two experiments, confirmed by leave-one-participant-out cross-validation. Notably, consistent improvement in accuracy was specific to models utilizing right–left TMT and not observed in models utilizing the right–left wrist skin temperature. These findings suggest that lateralization in TMT provides unique information about emotional state, making it valuable for emotion recognition. With the ease of measurement by earphones, TMT has significant potential for real-world application of emotion recognition. Full article
(This article belongs to the Special Issue Advancements in Wearable Sensors for Affective Computing)
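Leave-one-participant-out cross-validation, used to confirm the accuracies above, holds out every sample from one participant per fold, so reported accuracy reflects generalization to unseen users rather than memorization of individual temperature baselines. A minimal sketch of the splitting and of the right-minus-left lateralization feature (function names are our own, not from the paper):

```python
import numpy as np

def tmt_asymmetry(right_tmt, left_tmt):
    """Right-minus-left tympanic membrane temperature: the lateralization
    feature family the study's classifiers are built on."""
    return np.asarray(right_tmt, dtype=float) - np.asarray(left_tmt, dtype=float)

def lopo_splits(participant_ids):
    """Leave-one-participant-out folds: yield (train_idx, test_idx) index
    arrays, holding out all samples of one participant at a time."""
    ids = np.asarray(participant_ids)
    for pid in np.unique(ids):
        yield np.where(ids != pid)[0], np.where(ids == pid)[0]

diff = tmt_asymmetry([37.1, 37.0], [36.9, 37.1])  # per-sample asymmetry in degC
for train_idx, test_idx in lopo_splits([1, 1, 2, 2, 3]):
    pass  # fit a classifier on train_idx samples, evaluate on test_idx samples
```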

19 pages, 1635 KB  
Article
Integrating AI-Driven Wearable Metaverse Technologies into Ubiquitous Blended Learning: A Framework Based on Embodied Interaction and Multi-Agent Collaboration
by Jiaqi Xu, Xuesong Zhai, Nian-Shing Chen, Usman Ghani, Andreja Istenic and Junyi Xin
Educ. Sci. 2025, 15(7), 900; https://doi.org/10.3390/educsci15070900 - 15 Jul 2025
Abstract
Ubiquitous blended learning, leveraging mobile devices, has democratized education by enabling autonomous and readily accessible knowledge acquisition. However, its reliance on traditional interfaces often limits learner immersion and meaningful interaction. The emergence of the wearable metaverse offers a compelling solution, promising enhanced multisensory experiences and adaptable learning environments that transcend the constraints of conventional ubiquitous learning. This research proposes a novel framework for ubiquitous blended learning in the wearable metaverse, aiming to address critical challenges, such as multi-source data fusion, effective human–computer collaboration, and efficient rendering on resource-constrained wearable devices, through the integration of embodied interaction and multi-agent collaboration. This framework leverages a real-time multi-modal data analysis architecture, powered by the MobileNetV4 and xLSTM neural networks, to facilitate the dynamic understanding of the learner’s context and environment. Furthermore, we introduced a multi-agent interaction model, utilizing CrewAI and spatio-temporal graph neural networks, to orchestrate collaborative learning experiences and provide personalized guidance. Finally, we incorporated lightweight SLAM algorithms, augmented using visual perception techniques, to enable accurate spatial awareness and seamless navigation within the metaverse environment. This innovative framework aims to create immersive, scalable, and cost-effective learning spaces within the wearable metaverse. Full article

12 pages, 8520 KB  
Article
Integrated Haptic Feedback with Augmented Reality to Improve Pinching and Fine Moving of Objects
by Jafar Hamad, Matteo Bianchi and Vincenzo Ferrari
Appl. Sci. 2025, 15(13), 7619; https://doi.org/10.3390/app15137619 - 7 Jul 2025
Abstract
Hand gestures are essential for interaction in augmented and virtual reality (AR/VR), allowing users to intuitively manipulate virtual objects and engage with human–machine interfaces (HMIs). Accurate gesture recognition is critical for effective task execution. However, users often encounter difficulties due to the lack of immediate and clear feedback from head-mounted displays (HMDs). Current tracking technologies cannot always guarantee reliable recognition, leaving users uncertain about whether their gestures have been successfully detected. To address this limitation, haptic feedback can play a key role by confirming gesture recognition and compensating for discrepancies between the visual perception of fingertip contact with virtual objects and the actual system recognition. The goal of this paper is to compare a simple vibrotactile ring with a full glove device and identify their possible improvements for a fundamental gesture like pinching and fine moving of objects using Microsoft HoloLens 2. Where the pinch action is considered an essential fine motor skill, augmented reality integrated with haptic feedback can be useful to notify the user of the recognition of the gestures and compensate for misaligned visual perception between the tracked fingertip with respect to virtual objects to determine better performance in terms of spatial precision. In our experiments, the participants’ median distance error using bare hands over all axes was 10.3 mm (interquartile range [IQR] = 13.1 mm) in a median time of 10.0 s (IQR = 4.0 s). While both haptic devices demonstrated improvement in participants’ precision with respect to the bare-hands case, participants achieved with the full glove median errors of 2.4 mm (IQR = 5.2 mm) in a median time of 8.0 s (IQR = 6.0 s), and with the haptic rings they achieved even better performance with median errors of 2.0 mm (IQR = 2.0 mm) in an even better median time of only 6.0 s (IQR = 5.0 s). Our outcomes suggest that simple devices like the described haptic rings can be better than glove-like devices, offering better performance in terms of accuracy, execution time, and wearability. The haptic glove probably compromises hand and finger tracking with the Microsoft HoloLens 2. Full article
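The spatial-precision results are summarized as medians with interquartile ranges (IQR), which are robust to the outliers typical of pointing and placement tasks. For reference, these statistics can be computed as:

```python
import numpy as np

def median_iqr(errors_mm):
    """Median and interquartile range (75th minus 25th percentile) of a
    sample of placement errors, the robust summary statistics reported
    for the pinch experiments."""
    e = np.asarray(errors_mm, dtype=float)
    q1, med, q3 = np.percentile(e, [25, 50, 75])
    return med, q3 - q1

# Toy sample: one large outlier barely moves the median or the IQR
med, iqr = median_iqr([1.0, 2.0, 2.0, 3.0, 10.0])
print(med, iqr)  # 2.0 1.0
```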

15 pages, 2628 KB  
Article
High Anti-Swelling Zwitterion-Based Hydrogel with Merit Stretchability and Conductivity for Motion Detection and Information Transmission
by Qingyun Zheng, Jingyuan Liu, Rongrong Chen, Qi Liu, Jing Yu, Jiahui Zhu and Peili Liu
Nanomaterials 2025, 15(13), 1027; https://doi.org/10.3390/nano15131027 - 2 Jul 2025
Abstract
Hydrogel sensors show unique advantages in underwater detection, ocean monitoring, and human–computer interaction because of their excellent flexibility, biocompatibility, high sensitivity, and environmental adaptability. However, due to the water environment, hydrogels will dissolve to a certain extent, resulting in insufficient mechanical strength, poor long-term stability, and signal interference. In this paper, a double-network structure was constructed by polyvinyl alcohol (PVA) and poly([2-(methacryloyloxy) ethyl] dimethyl-(3-sulfopropyl) ammonium hydroxide) (PSBMA). The resultant PVA/PSBMA-PA hydrogel demonstrated notable swelling resistance, a property attributable to the incorporation of non-covalent interactions (electrostatic interactions and hydrogen bonding) through the addition of phytic acid (PA). The hydrogel exhibited high stretchability (maximum tensile strength up to 304 kPa), high conductivity (5.8 mS/cm), and anti-swelling (only 1.8% swelling occurred after 14 days of immersion in artificial seawater). Assembled as a sensor, it exhibited high strain sensitivity (0.77), a low detection limit (1%), and stable electrical properties after multiple tensile cycles. The utilization of PVA/PSBMA-PA hydrogel as a wearable sensor shows promise for detecting human joint movements, including those of the fingers, wrists, elbows, and knees. Due to the excellent resistance to swelling, the PVA/PSBMA-PA-based sensors are also suitable for underwater applications, enabling the detection of underwater mannequin motion. This study proposes an uncomplicated and pragmatic methodology for producing hydrogel sensors suitable for use within subaquatic environments, thereby concomitantly broadening the scope of applications for wearable electronic devices. Full article
(This article belongs to the Special Issue Nanomaterials in Flexible Sensing and Devices)
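The strain sensitivity quoted above is conventionally the gauge factor of a piezoresistive sensor: the relative resistance change per unit strain. Assuming the standard definition GF = (ΔR/R0)/ε, which the abstract does not spell out:

```python
def gauge_factor(r0_ohm, r_ohm, strain):
    """Gauge factor of a piezoresistive sensor: relative resistance change
    divided by the applied strain, GF = ((R - R0) / R0) / epsilon."""
    return ((r_ohm - r0_ohm) / r0_ohm) / strain

# Hypothetical numbers: resistance rising from 100 to 107.7 ohm at 10% strain
print(round(gauge_factor(100.0, 107.7, 0.10), 2))  # 0.77
```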
