Search Results (266)

Search Parameters:
Keywords = human haptics

20 pages, 10126 KB  
Article
Impact of Audio Feedback on User Experience in Haptic-Visual Mixed Reality Pulse Palpation Training Environments
by Nikitha Donekal Chandrashekar, Shawn D. Safford and Denis Gračanin
Information 2026, 17(5), 399; https://doi.org/10.3390/info17050399 - 22 Apr 2026
Abstract
Background: Mixed Reality (MR) environments rely on multimodal feedback to enrich sensory integration and realism, which enhances User Experience (UX). Prior studies have shown the benefits of haptic feedback in audio–visual MR medical training environments, but researchers have not fully examined how audio cues influence Haptic–Visual (HV) training environments. Methods: We built a high-fidelity MR medical training environment that synchronized visual, haptic, and audio representations of the human pulse. We conducted a between-subjects study with thirty novice participants who performed pulse palpation tasks in HV and Haptic–Audio–Visual (HAV) modalities. We employed a multidimensional UX evaluation, measuring task performance, presence, usability, and task workload, to assess the impact of adding audio feedback in MR pulse palpation training environments. Results: Participants in the HAV modality performed tasks more accurately and reported stronger presence and higher usability. They did not report any significant increase in workload compared to the HV modality. Conclusions: Audio feedback improved perceptual coherence and enhanced UX in pulse palpation tasks. Our findings highlight the training value of integrating multimodal feedback in MR pulse palpation training systems and provide practical guidelines for designing more immersive and effective MR environments. Full article
(This article belongs to the Topic Extended Reality: Models and Applications)
17 pages, 2172 KB  
Article
Combining Augmented Reality Guidance and Virtual Constraints for Skilled Epidural Needle Placement
by Daniel Haro-Mendoza, Marcos Lopez-Magaña, Luis Jimenez-Angeles and Victor J. Gonzalez-Villela
Machines 2026, 14(4), 446; https://doi.org/10.3390/machines14040446 - 17 Apr 2026
Viewed by 244
Abstract
Accurate needle insertion during epidural anesthesia is challenging due to strong dependence on clinician experience and the limited integration of guidance modalities that simultaneously provide visual feedback and physical motion constraints. Current approaches, including ultrasound guidance and augmented reality visualization, mainly offer passive assistance and do not actively regulate insertion trajectory and depth, which may lead to variability in accuracy and increased risk of complications. This work presents a multimodal human–machine assistance system that combines augmented reality guidance with virtual fixtures to support lumbar epidural needle placement. A Tuohy needle is coupled to a haptic device interacting with a patient-specific L3–L4 lumbar phantom fabricated using 3D printing and ballistic gel. A model-based force profile reproduces the mechanical response of anatomical layers during insertion. Three experimental conditions are evaluated: freehand execution, augmented reality guidance with trajectory and depth visualization, and cooperative guidance using virtual fixtures defined by a cylindrical corridor and a depth-limiting plane. Results show a progressive reduction in mean depth error from 6.82 ± 3.46 mm (freehand) to 4.96 ± 2.41 mm (augmented reality) and 2.21 ± 1.73 mm (virtual fixtures). These findings indicate that the integration of visual and haptic guidance significantly enhances insertion precision and control. The proposed approach highlights the potential of multimodal human–machine cooperation for safer training and assisted interventions. Full article

17 pages, 1496 KB  
Article
Assessing Spatial and Spatiotemporal Tactile Working Memory Using Adaptive Staircase Procedures
by Nashmin Yeganeh, Ivan Makarov, Runar Unnthorsson and Árni Kristjánsson
Sensors 2026, 26(8), 2361; https://doi.org/10.3390/s26082361 - 11 Apr 2026
Viewed by 278
Abstract
Tactile working memory limits the amount of information that can be processed through touch, with important implications for the design of haptic communication systems. Although visual and auditory working memory have been extensively investigated, tactile working memory, particularly for spatial and spatiotemporal sequences, remains less well understood. The present study examined tactile working memory capacity in two psychophysical experiments. Participants reproduced sequential vibrotactile stimuli delivered to the forearm via a 3 × 3 array of voice-coil actuators by entering responses through keypresses. Both experiments employed an adaptive 3-up/1-down staircase procedure, in which sequence length was adjusted according to response accuracy, and thresholds were estimated from reversal points. In Experiment 1 (Ordered Recall), participants reproduced both the spatial locations and the temporal order of stimulation, yielding a memory capacity threshold of approximately four items. In Experiment 2 (Unordered Recall), participants recalled only the set of stimulated locations without regard to order, resulting in a higher threshold of approximately five items. These results demonstrate that incorporating temporal sequencing demands into spatial recall substantially increases cognitive load and reduces effective tactile memory capacity. The findings clarify fundamental limits of tactile working memory and provide practical guidance for the development of haptic interfaces, wearable feedback systems, and sensory substitution technologies that must balance information complexity with human cognitive constraints. Full article
(This article belongs to the Section Wearables)
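The adaptive 3-up/1-down staircase described in this abstract (sequence length rises after three consecutive correct responses, falls after any error, threshold estimated from reversal points) can be sketched in a few lines. The observer model, starting length, and stopping rule below are illustrative assumptions, not the authors' implementation:

```python
import random

def staircase_3up_1down(p_correct, start_len=3, max_reversals=8, rng=None):
    """Simulate a 3-up/1-down staircase over tactile sequence length.

    Length increases after 3 consecutive correct responses, decreases
    after any error; the threshold is the mean length at reversals.
    """
    rng = rng or random.Random(0)          # fixed seed for repeatability
    length, correct_run, direction = start_len, 0, 0
    reversals = []
    while len(reversals) < max_reversals:
        correct = rng.random() < p_correct(length)
        if correct:
            correct_run += 1
            if correct_run == 3:           # 3 in a row -> make it harder
                if direction == -1:        # was descending: a reversal
                    reversals.append(length)
                direction, correct_run = +1, 0
                length += 1
        else:                              # any miss -> make it easier
            if direction == +1:            # was ascending: a reversal
                reversals.append(length)
            direction, correct_run = -1, 0
            length = max(1, length - 1)
    return sum(reversals) / len(reversals)

# Hypothetical observer whose accuracy collapses beyond ~4 items,
# roughly mimicking the ~4-item capacity reported for ordered recall:
threshold = staircase_3up_1down(lambda n: 0.95 if n <= 4 else 0.25)
```

With a step-like observer the estimate settles between the last reliably reproduced length and the first unreliable one, which is exactly what the reversal-point average is meant to capture.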

14 pages, 3018 KB  
Article
Optimized Haptic Feedback and Natural Prehension System for Robotics and Virtual Reality Applications
by Eve Hirel, Odin Le Morvan, Marwan Mahdouf, Prune Picot, Matteo Quinquis and Christophe Delebarre
Sensors 2026, 26(7), 2222; https://doi.org/10.3390/s26072222 - 3 Apr 2026
Viewed by 420
Abstract
As robotics prehension systems and virtual reality applications are in constant evolution, the need for high-fidelity haptic interaction increases. This helps ensure and enhance user immersion and handling precision. While commercial haptic interfaces offer high performance, their prohibitive cost limits their widespread adoption in general-purpose robotics. Furthermore, many low-cost solutions suffer from limited transparency, where the operator constantly fights the friction of the actuator even during free motion. This article presents the design and development of an innovative, cost-effective master–slave robotic system aimed at democratizing efficient haptic feedback devices. The solution is intended for remote manipulation of objects with a maximum mass of 1 kg, while limiting the gripping force to 50 N, thus ensuring the integrity of objects being manipulated. The device includes a master haptic module in the form of a clamp that reproduces the thumb–index–middle finger gripping motion performed by the user. The system relies on a custom haptic interface measuring the angular position of the master gripper, which is transmitted in real time to the slave gripper, so as to adjust the position of the clamp accordingly, thus optimizing the grasping control loop. As soon as an object is detected, using a force sensor integrated into the slave gripper, the master motor renders a resistive force, preventing the user from closing the haptic module. The other part of the system is the slave mechanical gripper with three fingers, each with three phalanges based on human anatomy, allowing the clamp to mechanically conform to irregular object geometries with a single actuator. A final innovative aspect is the use of current sensing to provide the haptic feedback: the force applied by the user is reproduced by the slave gripper via current sensors, eliminating the need for expensive force-torque sensors while maintaining a responsive feedback loop. Full article
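The position-forward / force-back loop described in this abstract can be sketched as below. The 50 N grip limit comes from the abstract; the torque constant, lever arm, and contact threshold are invented placeholders, and the real controller is certainly more elaborate:

```python
from dataclasses import dataclass

KT_NM_PER_A = 0.05   # assumed motor torque constant (Nm/A), placeholder
MAX_GRIP_N = 50.0    # grip force limit stated in the abstract

@dataclass
class HapticLoopStep:
    slave_angle_cmd: float   # position forwarded to the slave gripper
    master_resist_n: float   # resistive force rendered on the master

def teleop_step(master_angle: float, slave_current_a: float,
                lever_arm_m: float = 0.02, contact_threshold_n: float = 1.0):
    """One cycle of the loop: forward the master angle, estimate grip
    force from motor current (F ~ Kt * I / r) instead of a force-torque
    sensor, and render it back once contact is detected."""
    est_force = min(KT_NM_PER_A * slave_current_a / lever_arm_m, MAX_GRIP_N)
    resist = est_force if est_force > contact_threshold_n else 0.0
    return HapticLoopStep(slave_angle_cmd=master_angle, master_resist_n=resist)

# In contact: 1.2 A of motor current maps to a rendered resistive force.
step = teleop_step(master_angle=0.6, slave_current_a=1.2)
# Free motion: tiny current stays below the contact threshold, so the
# master renders no force and the operator moves freely.
free = teleop_step(master_angle=0.3, slave_current_a=0.1)
```

Gating the rendered force on a contact threshold is one simple way to get the "transparency" the abstract mentions: below threshold the master motor is passive, so the operator does not fight actuator friction during free motion.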

34 pages, 181429 KB  
Article
SENSASEA: Fostering Positive Behavioral Manifestations and Social Collaboration in Children Through an Interactive Multimodal Environment
by Yanjun Lyu, Ripon Kumar Saha, Assegid Kidane, Lauren Hayes and Xin Wei Sha
Multimedia 2026, 2(2), 5; https://doi.org/10.3390/multimedia2020005 - 31 Mar 2026
Viewed by 390
Abstract
The SensaSea System is a responsive multisensory environment, specifically, a room-sized interactive installation that incorporates wearable devices, interactive visual floor projections and auditory and tactile modalities. SensaSea is designed as a physical environment for embodied interaction and free play suitable for multiple players; the system uses social proximity as the primary mechanism. Our objective is to promote active peer interaction and social connectedness among elementary school children through sensory-guided approaches which include digitized and projected interactive sea creatures. The multi-modal system also features an interactive soundscape and innovative real-time haptic feedback. We conducted eight group user studies (24 children in total). Our usability and feasibility tests demonstrated that the system results in positive emotions and elicits multiple pro-social behaviors. Full article

26 pages, 3131 KB  
Article
Haptic Flow as a Symmetry-Bearing Invariant in Skilled Human Movement: A Screw-Theoretic Extension of Gibson’s Optic Flow
by Wangdo Kim
Symmetry 2026, 18(3), 471; https://doi.org/10.3390/sym18030471 - 10 Mar 2026
Viewed by 449
Abstract
Gibson’s concept of optic flow established that perception is grounded in lawful structure generated by action. However, no formal mechanical framework has described the invariant structure of action-generated kinesthetic information during skilled manipulation. This study introduces haptic flow as a screw-theoretic invariant defined by the coupled rotational–translational organization of a body–object system. Motion capture data from a two-case comparison (one proficient and one novice golfer) were analyzed using instantaneous screw axes (ISA), pitch evolution, and cylindroid geometry derived from a linear line-complex formulation. The proficient golfer exhibited (1) progressive convergence of ISAs toward a coherent bundle, (2) stabilization of screw pitch through impact, and (3) co-cylindrical alignment of harmonic screws consistent with inertial–restoring conjugacy. In contrast, the novice golfer showed fragmented ISA organization and elevated pitch variability. These differences were descriptive rather than inferential and do not imply population-level generalization. The findings suggest that skilled manipulation is characterized by stabilization of symmetry-bearing screw invariants rather than by independent joint control. Interpreted ecologically, haptic flow is proposed as a mechanically specified candidate invariant generated by lawful body–object coupling. The present study establishes a geometric framework for quantifying such invariants while identifying the need for cross-task and perceptual validation. Full article

21 pages, 2871 KB  
Article
From Signal to Semantics: The Multimodal Haptic Informatics Index for Triangulating Haptic Intent at the Edge
by Song Xu, Chen Li, Jia-Rong Li and Teng-Wen Chang
Electronics 2026, 15(4), 832; https://doi.org/10.3390/electronics15040832 - 15 Feb 2026
Viewed by 395
Abstract
Modern interaction with smart devices is hindered by the “Midas Touch” problem, where sensors frequently misinterpret incidental physical movements as intentional commands due to a lack of human context. This research addresses this conflict by introducing the Multimodal Haptic Informatics (MHI) index within a novel Scene–Action–Trigger (SAT) framework. The goal is to contextualize mechanical movements as human intent by integrating physical, spatial, and cognitive data locally at the edge. The methodology employs an “Action-as-primary indexing” mechanism where the Action channel (IMU) serves as a temporal anchor t, triggering high-resolution Scene (computer vision) and Trigger (audio) processing only during critical haptic events. Validated through a complex origami crane task generating 29,408 data frames, the framework utilizes a three-stage informatics derivation process: single-modal scoring, score weighting, and hand state mapping. Results demonstrate that applying an adaptive “Speedometer” logic successfully reclassifies the “Transitional State”. While this state constitutes over half of the behavioral dataset (54.76% on average), it is effectively disambiguated into meaningful intent using a self-trained local Large Language Model (LLM) for semantic verification. Furthermore, the event-driven sampling of 93 keyframes reduces the processing overhead by 99.68% compared to linear annotation. This study contributes a low-latency, privacy-preserving “Protocol of Assent” that maintains user agency by providing intelligent system suggestions based on confirmed haptic intensity. Full article
(This article belongs to the Special Issue New Trends in Human-Computer Interactions for Smart Devices)
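The 99.68% overhead reduction reported in this abstract follows directly from the two frame counts it gives (93 event-triggered keyframes out of 29,408 total frames):

```python
total_frames = 29_408   # data frames generated in the origami crane task
keyframes = 93          # event-driven keyframes actually processed

# Fraction of frames the edge pipeline no longer needs to annotate.
reduction = 1 - keyframes / total_frames
print(f"{reduction:.2%}")
```

Only the IMU "Action" channel runs continuously; the expensive vision and audio channels wake up for those 93 keyframes, which is where the saving comes from.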

13 pages, 1124 KB  
Article
Comparative Performance of Haptic Virtual Simulation vs. Conventional Training in Class V Cavity Preparation: A Paired In Vitro Study
by Aitor Basterra López, Sebastiana Arroyo Bote, Ángel Arturo López-González, Raúl Cuesta Román, Joan Obrador de Hevia and Pere Riutord-Sbert
Dent. J. 2026, 14(2), 109; https://doi.org/10.3390/dj14020109 - 13 Feb 2026
Viewed by 312
Abstract
Background: Haptic virtual simulation (HVS) has emerged as a promising tool in dental education, yet evidence comparing its performance to conventional preclinical training remains limited. Establishing its effectiveness is essential to support its integration into competency-based curricula. Objective: The aim of this study was to compare Class V cavity preparations performed using conventional training on extracted teeth with those performed using a haptic virtual simulator, evaluating preparation time and cavity volume. Methods: Sixty-one extracted human molars were digitized using cone-beam computed tomography (CBCT) to generate corresponding virtual replicas. A calibrated operator prepared 122 standardized Class V cavities (61 real and 61 virtual). The simulator automatically recorded preparation time and cavity volume. For natural teeth, cavity volume was calculated by digital superimposition of pre- and post-operative STL models using Blender. Paired means were compared using Student’s t-test (α = 0.05). Results: Preparation time was significantly shorter when using HVS compared with the conventional method (p < 0.001). Virtual preparations resulted in slightly larger cavity volumes than real preparations, with a statistically significant yet clinically small difference (p = 0.047). Conclusions: Haptic virtual simulation enables more time-efficient Class V cavity preparation while producing cavity volumes comparable to those obtained through conventional training. These findings support the implementation of haptic simulators as a valid and effective complement for preclinical skill acquisition in operative dentistry. Full article
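The paired comparison used here (Student's t on matched real/virtual preparations of the same digitized tooth) reduces to a short computation. The preparation times below are fabricated illustrative data, not the study's measurements:

```python
import math

def paired_t(a, b):
    """Paired Student's t statistic: t = mean(d) / (sd(d) / sqrt(n)),
    where d are the per-pair differences (same tooth, two methods)."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)   # sample variance
    return mean / math.sqrt(var / n)

# Hypothetical preparation times (minutes): conventional vs. simulator.
conventional = [12.1, 10.8, 13.5, 11.9, 12.7, 11.2]
simulator    = [9.8, 9.1, 10.5, 9.9, 10.2, 9.4]
t = paired_t(conventional, simulator)
```

Pairing on the same tooth removes between-tooth variability from the comparison, which is why the design uses CBCT-digitized replicas rather than two independent samples.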

23 pages, 4117 KB  
Perspective
Haptic and Palpation Sensing for Robotic Surgery: Engineering Perspectives on Design and Integration
by Michael H. Friebe
Sensors 2026, 26(4), 1126; https://doi.org/10.3390/s26041126 - 10 Feb 2026
Viewed by 1630
Abstract
Robotic-assisted surgery (RAS) provides enhanced dexterity and visualisation but remains constrained by the absence of clinically meaningful palpation and haptic feedback. This perspective examines palpation sensing in RAS from an engineering and system-integration standpoint, identifying the lack of tactile information as a major contributor to increased cognitive load, prolonged training, and risk of tissue injury. Recent advances in force, tactile, vibroacoustic, audio, and optical sensor technologies enable quantitative assessment of tissue mechanical properties and often exceed human tactile sensitivity. However, clinical translation is limited by challenges in sensor miniaturisation, sterilisation, robustness and integration and the absence of standardised evaluation metrics. The integration of artificial intelligence and multimodal sensor fusion with intra-operative imaging and augmented visualisation is highlighted as a key strategy to compensate for sensor limitations and biological variability. Dedicated robotic palpation devices and wireless or magnetically coupled probes are discussed as promising transitional solutions. Overall, the restoration of palpation sensing is presented as a prerequisite for improving safety and efficiency and enabling higher levels of autonomy in future RAS platforms. Full article
(This article belongs to the Special Issue Intelligent Optical Sensors in Biomedicine and Robotics)

37 pages, 1544 KB  
Article
From Spontaneous Ignitions to Sensorimotor Cell Assemblies via Dopamine: A Spiking Neurocomputational Model of Infants’ Hand Action Acquisition
by Nick Griffin, Andrea Mattera, Gianluca Baldassarre and Max Garagnani
Brain Sci. 2026, 16(2), 158; https://doi.org/10.3390/brainsci16020158 - 29 Jan 2026
Viewed by 509
Abstract
Background/Objectives: From birth, infants learn how to interact with the world through exploration. It has been proposed that this early learning phase is driven by motor babbling: the spontaneous generation of exploratory movements that are progressively consolidated through associative mechanisms. This process leads to the acquisition of a repertoire of hand movements such as single- or multi-finger flexion, extension, touching, and pushing. Later, in a second phase, some of these movements (e.g., those that happen to enable access to biologically salient stimuli, such as grasping food) are further reinforced and consolidated through rewards obtained from the environment. However, the neural mechanisms underlying these processes remain unclear. Here, we used a fully neuroanatomically and neurophysiologically constrained neural network model to investigate the brain correlates of these processes. Methods: The model consists of six neural maps simulating six human brain areas, including three pre-central (motor-related) and three post-central (sensory-related) regions. Each map is composed of excitatory and inhibitory spiking neurons, with biologically constrained within- and between-area connectivity forming recurrent circuits. Hand action execution and corresponding haptic perception are simulated simply as activity in primary motor and somatosensory model areas, respectively. During an initial “exploratory” phase, the network learned, via Hebbian mechanisms, associations—as emerging distributed cell assembly (CA) circuits—linking “motor” to corresponding “haptic feedback” patterns. As a result of this initial training, the model began to exhibit spontaneous ignitions of these CA circuits, an emergent phenomenon taken to represent internally generated, non-stimulus-driven attempts at hand action exploitation. In a second phase, a global reward signal, simulating dopamine-mediated reward encoding, was applied to only a subset of “successful” actions upon their noise-driven ignition. Results: During the first exploratory phase, the neural architecture autonomously developed “action-perception” circuits corresponding to multiple possible hand actions. During the subsequent exploitation phase, positively reinforced circuits increased in size and, consequently, in frequency of spontaneous ignition, when compared to non-rewarded “actions”. Conclusions: These results provide a mechanistic account, at the cortical-circuit level, of the early acquisition of hand actions, of their subsequent consolidation, and of the spontaneous transition of an agent’s behavior from exploration to reward-seeking, as typically observed in humans and animals during development. Full article

16 pages, 395 KB  
Review
Haptic Signals as a Communication Tool Between Handlers and Dogs: Review of a New Field
by Hillary Jean-Joseph and Dalila Bovet
Animals 2026, 16(2), 323; https://doi.org/10.3390/ani16020323 - 21 Jan 2026
Viewed by 1270
Abstract
Developing new haptic communication tools to enhance communication between dogs and their handlers during field operations has garnered interest in recent years. It is a promising field that could ameliorate dog–handler interactions in the field while addressing practical challenges, such as the need for discreet communication during operations. When extended to the public, such technology could improve communication with impaired dogs. With this review, we aim to (1) give an overview of dogs’ understanding and discrimination of haptic signals, (2) highlight the need to investigate the possible impact of such tools on dogs’ welfare, as well as (3) point out current caveats and future research directions. Full article

26 pages, 1616 KB  
Systematic Review
AI-Powered Procedural Haptics for Narrative VR: A Systematic Literature Review
by Vimala Perumal and Zeeshan Jawed Shah
Multimodal Technol. Interact. 2026, 10(1), 9; https://doi.org/10.3390/mti10010009 - 9 Jan 2026
Viewed by 1236
Abstract
Haptic feedback is important for narrative virtual reality (VR), yet authoring remains costly and difficult to scale due to device-specific tuning, placement constraints, and the need for semantically congruent timing. We systematically reviewed user studies on haptics in narrative VR to establish an empirical baseline and identify gaps for AI-powered procedural haptics. Following PRISMA 2020, we searched IEEE Xplore, ACM Digital Library, Scopus, Web of Science, PubMed, and PsycINFO (English; human participants; haptics synchronized to narrative events) and performed backward/forward citation chasing (final search: 31 July 2025). We also conducted a parallel scoping scan of grey literature (arXiv and CHI/SIGGRAPH workshops/demos), finalized on 7 September 2025; these records are summarized separately and were not included in the evidence synthesis. Of 493 records screened, 26 full texts were assessed, and 10 studies were included. Quantitatively, presence improved in 6/8 studies that measured it and immersion improved in 3/3; sample sizes ranged 8–108. Across varied modalities and placements, haptics improved presence and immersion and often enhanced affect; validated measures of narrative comprehension were rare. None of the included studies evaluated AI-generated procedural haptics in user studies. We conclude by proposing a structured, three-phase research roadmap designed to bridge this critical gap, moving the field from theoretical promise to the empirical validation of intelligent systems capable of making rich, adaptive, and scalable haptic narratives a reality. Full article

23 pages, 6094 KB  
Systematic Review
Toward Smart VR Education in Media Production: Integrating AI into Human-Centered and Interactive Learning Systems
by Zhi Su, Tse Guan Tan, Ling Chen, Hang Su and Samer Alfayad
Biomimetics 2026, 11(1), 34; https://doi.org/10.3390/biomimetics11010034 - 4 Jan 2026
Viewed by 1645
Abstract
Smart virtual reality (VR) systems are becoming central to media production education, where immersive practice, real-time feedback, and hands-on simulation are essential. This review synthesizes the integration of artificial intelligence (AI) into human-centered, interactive VR learning for television and media production. Searches in Scopus, Web of Science, IEEE Xplore, ACM Digital Library, and SpringerLink (2013–2024) identified 790 records; following PRISMA screening, 94 studies met the inclusion criteria and were synthesized using a systematic scoping review approach. Across this corpus, common AI components include learner modeling, adaptive task sequencing (e.g., RL-based orchestration), affect sensing (vision, speech, and biosignals), multimodal interaction (gesture, gaze, voice, haptics), and growing use of LLM/NLP assistants. Reported benefits span personalized learning trajectories, high-fidelity simulation of studio workflows, and more responsive feedback loops that support creative, technical, and cognitive competencies. Evaluation typically covers usability and presence, workload and affect, collaboration, and scenario-based learning outcomes, leveraging interaction logs, eye tracking, and biofeedback. Persistent challenges include latency and synchronization under multimodal sensing, data governance and privacy for biometric/affective signals, limited transparency/interpretability of AI feedback, and heterogeneous evaluation protocols that impede cross-system comparison. We highlight essential human-centered design principles—teacher-in-the-loop orchestration, timely and explainable feedback, and ethical data governance—and outline a research agenda to support standardized evaluation and scalable adoption of smart VR education in the creative industries. Full article
(This article belongs to the Special Issue Biomimetic Innovations for Human–Machine Interaction)

15 pages, 2369 KB  
Article
The Effect of Tactile Feedback on the Manipulation of a Remote Robotic Arm via a Haptic Glove
by Christos Papakonstantinou, Konstantinos Giannakos, George Kokkonis and Maria S. Papadopoulou
Electronics 2025, 14(24), 4964; https://doi.org/10.3390/electronics14244964 - 18 Dec 2025
Viewed by 1260
Abstract
This paper investigates the effect of tactile feedback on the power efficiency and timing of controlling a remote robotic arm using a custom-built haptic glove. The glove integrates flex sensors to monitor finger movements and vibration motors to provide tactile feedback to the user. Communication with the robotic arm is established via the ESP-NOW protocol using an Arduino Nano ESP32 microcontroller (Arduino, Turin, Italy). This study examines the impact of tactile feedback on task performance by comparing precision, completion time, and power efficiency in object manipulation tasks with and without feedback. Experimental results demonstrate that tactile feedback significantly enhances the user’s control accuracy, reduces task execution time, and enables precise control of hand movement during object grasping, underscoring the importance of tactile feedback in teleoperation systems. These findings have implications for improving human–robot interaction in remote manipulation scenarios, such as assistive robotics, remote surgery, and hazardous environment operations. Full article
(This article belongs to the Special Issue Advanced Research in Technology and Information Systems, 2nd Edition)

20 pages, 2429 KB  
Article
The Effects of Pneumatic Stimulation on Human Tactile Perceptions
by Tzu-Ying Li, Tzu-Chieh Hsieh, Shana Smith, Chen-Tsai Yang, Hung-Hsien Ko and Wan-Hsin Hsieh
Appl. Sci. 2025, 15(24), 13087; https://doi.org/10.3390/app152413087 - 12 Dec 2025
Viewed by 872
Abstract
Pneumatic actuators are promising for wearable tactile interfaces, yet human perception of pneumatic stimulation is not well understood. This study examined how pressure and frequency affect tactile perception and emotional responses through three experiments. Experiment 1 measured the minimum perceivable pressure and just noticeable difference (JND). The perceptual threshold remained stable across low-frequency stimuli, while both upward and downward JNDs increased with pressure and frequency, indicating reduced sensitivity under stronger or faster stimulation. Experiment 2 evaluated perceived tactile intensity and found pressure to be the dominant factor, with frequency also contributing significantly. Experiment 3 examined emotional responses using the PAD model. Pressure and frequency jointly affected Pleasure and Arousal but minimally influenced Dominance. Moderate pressure and mid-range frequency (50 kPa, 5 Hz) produced the most positive, alert states; high-pressure, high-frequency stimulation (≥75 kPa, 10 Hz) generated unpleasant high-arousal responses; and low-pressure, low-frequency input (25 kPa, 1 Hz) led to low-arousal, negative affective states. These results offer quantitative and emotional insights that can inform the design of more realistic and expressive pneumatic haptic interfaces. Full article
(This article belongs to the Special Issue Emerging Technologies in Innovative Human–Computer Interactions)