Search Results (34)

Search Parameters:
Keywords = multisensory interface

19 pages, 1635 KiB  
Article
Integrating AI-Driven Wearable Metaverse Technologies into Ubiquitous Blended Learning: A Framework Based on Embodied Interaction and Multi-Agent Collaboration
by Jiaqi Xu, Xuesong Zhai, Nian-Shing Chen, Usman Ghani, Andreja Istenic and Junyi Xin
Educ. Sci. 2025, 15(7), 900; https://doi.org/10.3390/educsci15070900 - 15 Jul 2025
Abstract
Ubiquitous blended learning, leveraging mobile devices, has democratized education by enabling autonomous and readily accessible knowledge acquisition. However, its reliance on traditional interfaces often limits learner immersion and meaningful interaction. The emergence of the wearable metaverse offers a compelling solution, promising enhanced multisensory experiences and adaptable learning environments that transcend the constraints of conventional ubiquitous learning. This research proposes a novel framework for ubiquitous blended learning in the wearable metaverse, aiming to address critical challenges, such as multi-source data fusion, effective human–computer collaboration, and efficient rendering on resource-constrained wearable devices, through the integration of embodied interaction and multi-agent collaboration. This framework leverages a real-time multi-modal data analysis architecture, powered by the MobileNetV4 and xLSTM neural networks, to facilitate the dynamic understanding of the learner’s context and environment. Furthermore, we introduced a multi-agent interaction model, utilizing CrewAI and spatio-temporal graph neural networks, to orchestrate collaborative learning experiences and provide personalized guidance. Finally, we incorporated lightweight SLAM algorithms, augmented using visual perception techniques, to enable accurate spatial awareness and seamless navigation within the metaverse environment. This innovative framework aims to create immersive, scalable, and cost-effective learning spaces within the wearable metaverse. Full article

37 pages, 7361 KiB  
Review
Evolution and Knowledge Structure of Wearable Technologies for Vulnerable Road User Safety: A CiteSpace-Based Bibliometric Analysis (2000–2025)
by Gang Ren, Zhihuang Huang, Tianyang Huang, Gang Wang and Jee Hang Lee
Appl. Sci. 2025, 15(12), 6945; https://doi.org/10.3390/app15126945 - 19 Jun 2025
Abstract
This study presents a systematic bibliometric review of wearable technologies aimed at vulnerable road user (VRU) safety, covering publications from 2000 to 2025. Guided by PRISMA procedures and a PICo-based search strategy, 58 records were extracted and analyzed in CiteSpace, yielding visualizations of collaboration networks, publication trajectories, and intellectual structures. The results indicate a clear evolution from single-purpose, stand-alone devices to integrated ecosystem solutions that address the needs of diverse VRU groups. Six dominant knowledge clusters emerged—street-crossing assistance, obstacle avoidance, human–computer interaction, cyclist safety, blind navigation, and smart glasses. Comparative analysis across pedestrians, cyclists and motorcyclists, and persons with disabilities shows three parallel transitions: single- to multisensory interfaces, reactive to predictive systems, and isolated devices to V2X-enabled ecosystems. Contemporary research emphasizes context-adaptive interfaces, seamless V2X integration, and user-centered design, and future work should focus on lightweight communication protocols, adaptive sensory algorithms, and personalized safety profiles. The review provides a consolidated knowledge map to inform researchers, practitioners, and policy-makers striving for inclusive and proactive road safety solutions. Full article
(This article belongs to the Section Computing and Artificial Intelligence)

55 pages, 552 KiB  
Article
Creating Non-Visual Non-Verbal Social Interactions in Virtual Reality
by Brandon Biggs, Steve Murgaski, Peter Coppin and Bruce N. Walker
Virtual Worlds 2025, 4(2), 25; https://doi.org/10.3390/virtualworlds4020025 - 4 Jun 2025
Abstract
Although virtual reality (VR) was originally conceived of as a multi-sensory experience, most developers of the technology have focused on its visual aspects to the detriment of other senses such as hearing. This paper presents design patterns for making virtual reality fully accessible to non-visual users, including totally blind users, with a focus on non-verbal social interactions. Non-visual VR has been present in the blindness audio game community since the early 2000s, but the conventions from those interfaces have never been described to a sighted audience, outside of a few limited sonification interface papers. This paper presents non-visual design patterns created by five of the top English-speaking audio game developers through a three-round Delphi method, encompassing 29 non-verbal social interactions in VR grouped into 12 categories, including movement, emotes, and self-expression. This paper will be useful to developers of VR experiences who wish to represent non-verbal social information to their users through non-visual conventions. These methods have only been rigorously tested through the commercial market, and not through scientific approaches. These design patterns can serve as a foundation for future investigation of non-visual non-verbal social interactions in VR. Full article
20 pages, 76650 KiB  
Article
Enhancing Cultural Heritage Engagement with Novel Interactive Extended-Reality Multisensory System
by Adolfo Muñoz, Juan José Climent-Ferrer, Ana Martí-Testón, J. Ernesto Solanes and Luis Gracia
Electronics 2025, 14(10), 2039; https://doi.org/10.3390/electronics14102039 - 16 May 2025
Abstract
Extended-reality (XR) tools are increasingly used to revitalise museum experiences, but typical head-mounted or smartphone solutions tend to fragment audiences and suppress the social dialogue that makes cultural heritage memorable. This article addresses that gap on two fronts. First, it proposes a four-phase design methodology—spanning artifact selection, narrative framing, tangible-interface fabrication, spatial installation, software integration, validation, and deployment—that helps curators, designers, and technologists to co-create XR exhibitions in which co-presence, embodied action, and multisensory cues are treated as primary design goals rather than afterthoughts. Second, the paper reports LanternXR, a proof-of-concept built with the methodology: visitors share a 3D-printed replica of the fourteenth-century Virgin of Boixadors while wielding a tracked “camera” and a candle-like lantern that lets them illuminate, photograph, and annotate the sculpture inside a life-sized Gothic nave rendered on large 4K displays with spatial audio and responsive lighting. To validate the approach, the article presents an analytical synthesis of feedback from curators, museologists, and XR technologists, underscoring the system’s capacity to foster collaboration, deepen engagement, and broaden accessibility. The findings show how XR can move museum audiences from isolated immersion to collective, multisensory exploration. Full article

18 pages, 5529 KiB  
Article
Interactive Soundscape Mapping for 18th-Century Naples: A Historically Informed Approach
by Hasan Baran Firat, Massimiliano Masullo and Luigi Maffei
Acoustics 2025, 7(2), 28; https://doi.org/10.3390/acoustics7020028 - 15 May 2025
Abstract
This paper explores the application of a specialized end-to-end framework, crafted to study historical soundscapes, with a specific focus on 18th-century Naples. The framework combines historical research, natural language processing, architectural acoustics, and virtual acoustic modelling to achieve historically accurate and physically based soundscape reconstructions. Central to this study is the development of a Historically Informed Soundscape (HIS) map, which concentrates on the urban spaces of Largo di Palazzo and Via Toledo in Naples. Using virtual and audio-augmented reality, the HIS map provides 3D spatialized audio, offering an immersive experience of the acoustic environment of 18th-century Naples. This interdisciplinary approach not only contributes to the field of sound studies but also represents a significant methodological innovation in the analysis and interpretation of historical urban soundscapes. By incorporating historical maps as interactive graphical user interfaces, the project fosters a dynamic, multisensory engagement with the past, offering a valuable tool for scholars, educators, and the public to explore and understand historical sensory environments. Full article
(This article belongs to the Special Issue The Past Has Ears: Archaeoacoustics and Acoustic Heritage)

22 pages, 13198 KiB  
Article
Design of an Environment for Virtual Training Based on Digital Reconstruction: From Real Vegetation to Its Tactile Simulation
by Alessandro Martinelli, Davide Fabiocchi, Francesca Picchio, Hermes Giberti and Marco Carnevale
Designs 2025, 9(2), 32; https://doi.org/10.3390/designs9020032 - 10 Mar 2025
Abstract
The exploitation of immersive simulation platforms to improve traditional training techniques in the agricultural industry sector would enable year-round accessibility, flexibility, safety, and consistent high-quality training for agricultural operators. An innovative workflow in virtual simulations for training and educational purposes includes an immersive environment in which the operator can interact with plants through haptic interfaces, following instructions imparted by a non-playing character (NPC) instructor. This study allows simulating the pruning of a complex case study, a hazelnut tree, reproduced in very high detail to offer agricultural operators a more realistic and immersive training environment than those currently existing. The process of creating a multisensorial environment starts with the integrated survey of the plant with a laser scanner and photogrammetry and then generates a controllable parametric model from roots to leaves with the exact positioning of the original branches. The model is finally inserted into a simulation, where haptic gloves with tactile resistance responsive to model collisions are tested. The results of the experimentation demonstrate the correct execution of this innovative design simulation, in which branches and leaves can be cut using a shear, with immediate sensory feedback. The project therefore aims to finalize this product as a realistic training platform for pruning, but not limited to it, paving the way for high-fidelity simulation for many other types of operations and specializations. Full article
(This article belongs to the Special Issue Mixture of Human and Machine Intelligence in Digital Manufacturing)

25 pages, 4043 KiB  
Article
Interface Design for Responsible Remote Driving: A Study on Technological Mediation
by Gabriella Emma Variati, Fabio Fossa, Jai Prakash, Federico Cheli and Giandomenico Caruso
Appl. Sci. 2025, 15(5), 2611; https://doi.org/10.3390/app15052611 - 28 Feb 2025
Abstract
Remote driving, i.e., the capacity of controlling road vehicles at a distance, is an innovative transportation technology often associated with potential ethical benefits, especially when deployed to tackle urban traffic issues. However, prospected benefits could only be reaped if remote driving can be executed in a safe and responsible way. This paper builds on notions elaborated in the philosophical literature on technological mediation to offer a systematic examination of the extent to which current and emerging Human–Machine Interfaces contribute to hindering or supporting the exercise of responsibility behind the remote wheel. More specifically, the analysis discusses how video, audio, and haptic interfaces co-shape the remote driving experience and, at the same time, the operators’ capacity to drive responsibly. The multidisciplinary approach explored in this research offers a novel methodological framework to structure future empirical inquiries while identifying finely tuned multi-sensory HMIs and dedicated training as critical presuppositions to the remote drivers’ exercise of responsibility. Full article
(This article belongs to the Special Issue Trends and Prospects in Intelligent Automotive Systems)

20 pages, 1460 KiB  
Article
Using Tangible User Interfaces (TUIs): Preliminary Evidence on Memory and Comprehension Skills in Children with Autism Spectrum Disorder
by Mariagiovanna De Luca, Ciro Rosario Ilardi, Pasquale Dolce, Angelo Rega, Raffaele Di Fuccio, Franco Rubinacci, Maria Gallucci and Paola Marangolo
Behav. Sci. 2025, 15(3), 267; https://doi.org/10.3390/bs15030267 - 25 Feb 2025
Abstract
Autism spectrum disorder (ASD) is a complex neurodevelopmental condition involving persistent challenges with social communication, as well as memory and language comprehension difficulties. This study investigated the effects of a storytelling paradigm on language comprehension and memory skills in children with ASD. A traditional approach, using an illustrated book to deliver the narrative, was compared to a novel paradigm based on Tangible User Interfaces (TUIs) combined with multisensory stimulation. A group of 28 children (ages between 6 and 10 years old) was asked to listen to a story over four weeks, two times a week, in two different experimental conditions. The experimental group (n = 14) engaged with the story using TUIs, while the control group (n = 14) interacted with a corresponding illustrated book. Pre- and post-intervention assessments were conducted using NEPSY-II subtests on language comprehension and memory. At the end of the intervention, a trend of improved performance was found. In particular, a greater number of subjects benefited from the intervention in the experimental group compared with the control group in instruction comprehension and narrative memory-cued recall. These preliminary findings suggest that TUIs may enhance learning outcomes for children with ASD, warranting further investigation into their potential benefits. Full article
(This article belongs to the Special Issue Neural Correlates of Cognitive and Affective Processing)

19 pages, 5533 KiB  
Article
An Innovative Coded Language for Transferring Data via a Haptic Thermal Interface
by Yosef Y. Shani and Simon Lineykin
Bioengineering 2025, 12(2), 209; https://doi.org/10.3390/bioengineering12020209 - 19 Feb 2025
Abstract
The objective of this research was to develop a coded language, similar to Morse code or Braille, conveyed via a haptic thermal interface. The method relies on the human thermal sense to receive and decode messages and is intended as an alternative or complementary channel for scenarios in which conventional channels are not applicable or not sufficient (e.g., communication with people with disabilities, or in noisy/silent environments). For the method to be effective, it must include a large variety of short, recognizable cues. Hence, we designed twenty-two temporally short (<3 s) cues, each composed of a sequence of thermal pulses, i.e., a combination of warm and/or cool pulses with several levels of intensity. The thermal cues were generated using specially designed equipment in a laboratory environment and displayed in random order to eleven independent participants. The participants identified all 22 cues with 95% accuracy, and 16 of them with 98.3% accuracy. These results reflect exceptional reliability, indicating that the method can be used to create an effective new communication capability. It has many potential applications and is immediately applicable to the development of a new communication channel, either as a single-modality thermal interface or combined with tactile sensing to form a full haptic multisensory interface. This report presents the testing and evaluation process of the proposed set of thermal cues and lays out directions for possible implementation and further investigation. Full article
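The kind of cue alphabet this abstract describes can be sketched in code. The following is a minimal, hypothetical illustration: the pulse polarities, intensity levels, and four example cues below are invented for this sketch and are not the paper's actual 22-cue set.

```python
# Hypothetical thermal-cue alphabet: each cue is a short sequence of
# warm/cool pulses, each pulse a (polarity, intensity) pair.
# Intensity levels here: 1 = mild, 2 = strong (illustrative only).

WARM, COOL = "+", "-"

def cue(*pulses):
    """A cue is an ordered tuple of (polarity, intensity) pulses."""
    return tuple(pulses)

# A few invented cues, not the paper's set:
cues = {
    "A": cue((WARM, 1)),             # single mild warm pulse
    "B": cue((COOL, 2)),             # single strong cool pulse
    "C": cue((WARM, 1), (COOL, 1)),  # mild warm, then mild cool
    "D": cue((WARM, 2), (WARM, 1)),  # strong warm, then mild warm
}

def decode(sequence, alphabet):
    """Map a received pulse sequence back to its symbol, or None."""
    inverse = {pulses: symbol for symbol, pulses in alphabet.items()}
    return inverse.get(tuple(sequence))

print(decode([(WARM, 1), (COOL, 1)], cues))  # prints "C"
```

Because each cue is a tuple of discrete pulses, decoding is an exact lookup; a real receiver would first have to classify each sensed pulse into one of the discrete polarity/intensity bins.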

20 pages, 21227 KiB  
Article
ShapeBand: Design of a Shape-Changing Wristband with Soft Materials and Physiological Sensors for Anxiety Regulation
by Yanting Liu, Zihan Xu, Ben Oldfrey and Youngjun Cho
Information 2025, 16(2), 101; https://doi.org/10.3390/info16020101 - 4 Feb 2025
Abstract
We introduce ShapeBand, a new shape-changing wristband designed for exploring multisensory and interactive anxiety regulation with soft materials and physiological sensing. Our approach takes a core principle of self-help psychotherapeutic intervention, aiming to help users to recognize anxiety triggers and engage in regulation with attentional distraction. We conducted user-centered design activities to iteratively refine our design requirements and delve into users’ rich experiences, preferences, and feelings. With ShapeBand, we explored bidirectional and dynamic interaction flow in anxiety regulation and subjective factors influencing its use. Our findings suggest that integrating both active and passive modulations can significantly enhance user engagement for effective anxiety intervention. Further, different interactions, characterized by dynamic alterations in bubbles and water flow in the ShapeBand, can provide users with a gamified experience and convey more potent effects. This study provides valuable insights into the future design of tangible anxiety regulation interfaces that can be tailored to subjective feelings and individual needs. Full article
(This article belongs to the Special Issue Multimodal Human-Computer Interaction)

53 pages, 23387 KiB  
Article
Design of a Gaze-Controlled Interactive Art System for the Elderly to Enjoy Life
by Chao-Ming Wang and Wei-Chih Hsu
Sensors 2024, 24(16), 5155; https://doi.org/10.3390/s24165155 - 9 Aug 2024
Abstract
The impact of global population aging on older adults’ health and emotional well-being is examined in this study, emphasizing innovative technological solutions to address their diverse needs. Changes in physical and mental functions due to aging, along with emotional challenges that necessitate attention, are highlighted. Gaze estimation and interactive art are utilized to develop an interactive system tailored for elderly users, where interaction is simplified through eye movements to reduce technological barriers and provide a soothing art experience. By employing multi-sensory stimulation, the system aims to evoke positive emotions and facilitate meaningful activities, promoting active aging. Named “Natural Rhythm through Eyes”, it allows for users to interact with nature-themed environments via eye movements. User feedback via questionnaires and expert interviews was collected during public demonstrations in elderly settings to validate the system’s effectiveness in providing usability, pleasure, and interactive experience for the elderly. Key findings include the following: (1) Enhanced usability of the gaze estimation interface for elderly users. (2) Increased enjoyment and engagement through nature-themed interactive art. (3) Positive influence on active aging through the integration of gaze estimation and interactive art. These findings underscore technology’s potential to enhance well-being and quality of life for older adults navigating aging challenges. Full article

21 pages, 4986 KiB  
Article
Optimization Approach for Multisensory Feedback in Robot-Assisted Pouring Task
by Mandira S. Marambe, Bradley S. Duerstock and Juan P. Wachs
Actuators 2024, 13(4), 152; https://doi.org/10.3390/act13040152 - 18 Apr 2024
Abstract
Individuals with disabilities and persons operating in inaccessible environments can greatly benefit from the aid of robotic manipulators in performing daily living activities and other remote tasks. Users relying on robotic manipulators to interact with their environment are restricted by the lack of sensory information available through traditional operator interfaces. These interfaces deprive users of somatosensory feedback that would typically be available through direct contact. Multimodal sensory feedback can bridge these perceptual gaps effectively. Given a set of object properties (e.g., temperature, weight) to be conveyed and sensory modalities (e.g., visual, haptic) available, it is necessary to determine which modality should be assigned to each property for an effective interface design. The goal of this study was to develop an effective multisensory interface for robot-assisted pouring tasks, which delivers nuanced sensory feedback while permitting the high visual demand necessary for precise teleoperation. To that end, an optimization approach was employed to generate a combination of feedback properties to modality assignments that maximizes effective feedback perception and minimizes cognitive load. A set of screening experiments tested twelve possible individual assignments to form this optimal combination. The resulting perceptual accuracy, load, and user preference measures were input into a cost function. Formulating and solving as a linear assignment problem, a minimum cost combination was generated. Results from experiments evaluating efficacy in practical use cases for pouring tasks indicate that the solution was significantly more effective than no feedback and had considerable advantage over an arbitrary design. Full article
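The optimization step this abstract describes — assigning each feedback property to one modality so that total cost is minimized — is an instance of the linear assignment problem. A minimal brute-force sketch follows; the property names, modality names, and cost values are invented for illustration and are not the study's data (the study derived its costs from perceptual accuracy, load, and preference measures).

```python
from itertools import permutations

# Hypothetical cost matrix: rows = object properties, columns = feedback
# modalities. Lower cost = better perception at lower cognitive load.
properties = ["temperature", "weight", "fill_level"]
modalities = ["visual", "haptic", "auditory"]
cost = [
    [4, 1, 3],  # temperature: cheapest via haptic
    [2, 5, 4],  # weight: cheapest via visual
    [3, 2, 1],  # fill_level: cheapest via auditory
]

def min_cost_assignment(cost):
    """Brute-force linear assignment: try every one-to-one mapping of
    properties to modalities and keep the cheapest total cost."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_cost:
            best_perm, best_cost = perm, total
    return best_perm, best_cost

perm, total = min_cost_assignment(cost)
mapping = {properties[i]: modalities[perm[i]] for i in range(len(perm))}
print(mapping, total)  # each property gets its cheapest distinct modality
```

Brute force is fine at this scale (n! mappings for n properties); for larger problems the Hungarian algorithm solves the same formulation in polynomial time.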

31 pages, 1943 KiB  
Article
Review on Gaps and Challenges in Prediction Outdoor Thermal Comfort Indices: Leveraging Industry 4.0 and ‘Knowledge Translation’
by Mohamed H. Elnabawi and Neveen Hamza
Buildings 2024, 14(4), 879; https://doi.org/10.3390/buildings14040879 - 25 Mar 2024
Abstract
The current outdoor thermal comfort index assessment is either based on thermal sensation votes collected through field surveys/questionnaires or using equations fundamentally backed by thermodynamics, such as the widely used UTCI and PET indices. The predictive ability of all methods suffers from discrepancies as multi-sensory attributes, cultural, emotional, and psychological cognition factors are ignored. These factors are proven to influence the thermal sensation and duration people spend outdoors, and are equally prominent factors as air temperature, solar radiation, and relative humidity. The studies that adopted machine learning models, such as Artificial Neural Networks (ANNs), concentrated on improving the predictive capability of PET, thereby making the field of Artificial Intelligence (AI) domain underexplored. Furthermore, universally adopted outdoor thermal comfort indices under-predict a neutral thermal range, for a reason that is linked to the fact that all indices were validated on European/American subjects living in temperate, cold regions. The review highlighted gaps and challenges in outdoor thermal comfort prediction accuracy by comparing traditional methods and Industry 4.0. Additionally, a further recommendation to improve prediction accuracy by exploiting Industry 4.0 (machine learning, artificial reality, brain–computer interface, geo-spatial digital twin) is examined through Knowledge Translation. Full article

25 pages, 12415 KiB  
Article
EEG Investigation on the Tactile Perceptual Performance of a Pneumatic Wearable Display of Softness
by Federico Carpi, Michele C. Valles, Gabriele Frediani, Tanita Toci and Antonello Grippo
Actuators 2023, 12(12), 431; https://doi.org/10.3390/act12120431 - 21 Nov 2023
Abstract
Multisensory human–machine interfaces for virtual- or augmented-reality systems are lacking wearable actuated devices that can provide users with tactile feedback on the softness of virtual objects. They are needed for a variety of uses, such as medical simulators, tele-operation systems and tele-presence environments. Such interfaces require actuators that can generate proper tactile feedback, by stimulating the fingertips via quasi-static (non-vibratory) forces, delivered through a deformable surface, so as to control both the contact area and the indentation depth. The actuators should combine a compact and lightweight structure with ease and safety of use, as well as low costs. Among the few actuation technologies that can comply with such requirements, pneumatic driving appears to be one of the most promising. Here, we present an investigation on a new type of pneumatic wearable tactile displays of softness, recently described by our group, which consist of small inflatable chambers arranged at the fingertips. In order to objectively assess the perceptual response that they can elicit, a systematic electroencephalographic study was conducted on ten healthy subjects. Somatosensory evoked potentials (SEPs) were recorded from eight sites above the somatosensory cortex (Fc2, Fc4, C2 and C4, and Fc1, Fc3, C1 and C3), in response to nine conditions of tactile stimulation delivered by the displays: stimulation of either only the thumb, the thumb and index finger simultaneously, or the thumb, index and middle finger simultaneously, each repeated at tactile pressures of 10, 20 and 30 kPa. An analysis of the latency and amplitude of the six components of SEP signals that typically characterise tactile sensing (P50, N100, P200, N300, P300 and N450) showed that this wearable pneumatic device is able to elicit predictable perceptual responses, consistent with the stimulation conditions. 
This proved that the device is capable of adequate actuation performance, which enables adequate tactile perceptual performance. Moreover, this shows that SEPs may effectively be used with this technology in the future, to assess variable perceptual experiences (especially with combinations of visual and tactile stimuli), in objective terms, complementing subjective information gathered from psychophysical tests. Full article
(This article belongs to the Special Issue Actuators for Haptic Feedback Applications)

25 pages, 3543 KiB  
Review
Review of Studies on User Research Based on EEG and Eye Tracking
by Ling Zhu and Jiufang Lv
Appl. Sci. 2023, 13(11), 6502; https://doi.org/10.3390/app13116502 - 26 May 2023
Abstract
Under the development of interdisciplinary fusion, user research has been greatly influenced by technology-driven neuroscience and sensory science, in terms of thinking and methodology. The use of technical methods, such as EEG and eye-tracking, has gradually become a research trend and hotspot in this field, in order to explore the deep cognitive states behind users’ objective behaviors. This review outlines the applications of EEG and eye-tracking technology in the field of user research, with the aim of promoting future research and proposing reliable reference indicators and a research scope. It provides important reference information for other researchers in the field. The article summarizes the key reference indicators and research paradigms of EEG and eye-tracking in current user research, focusing on the user research situation in industrial products, digital interfaces and spatial environments. The limitations and research trends in current technological applications are also discussed. The feasibility of experimental equipment in outdoor environments, the long preparation time of EEG experimental equipment, and the accuracy error of physiological signal acquisition are currently existing problems. In the future, research on multi-sensory and behavioral interactions and universal studies of multiple technology fusions will be the next stage of research topics. The measurement of different user differentiation needs can be explored by integrating various physiological measurements such as EEG signals and eye-tracking signals, skin electrical signals, respiration, and heart rate. Full article
