Search Results (27)

Search Parameters:
Keywords = hands-free user interfaces

14 pages, 1197 KB  
Article
An Inclusive Offline Learning Platform Integrating Gesture Recognition and Local AI Models
by Marius-Valentin Drăgoi, Ionuț Nisipeanu, Roxana-Adriana Puiu, Florentina-Geanina Tache, Teodora-Mihaela Spiridon-Mocioacă, Alexandru Hank and Cozmin Cristoiu
Biomimetics 2025, 10(10), 693; https://doi.org/10.3390/biomimetics10100693 - 14 Oct 2025
Viewed by 282
Abstract
This paper introduces a gesture-controlled conversational interface driven by a local AI model, aimed at improving accessibility and facilitating hands-free interaction within digital environments. The technology utilizes real-time hand gesture recognition via a typical laptop camera and connects with a local AI engine to produce customized learning materials. Users can peruse educational documents, obtain topic summaries, and generate automated quizzes with intuitive gestures, including lateral finger movements, a two-finger gesture, or an open palm, without the need for conventional input devices. Upon selection of a file, the AI model analyzes its entire content, producing a structured summary and a multiple-choice assessment, both of which are immediately saved for subsequent inspection. A unified set of gestures facilitates seamless navigation within the user interface and the opened documents. The system underwent testing with university students and faculty (n = 31), utilizing assessment measures such as gesture detection accuracy, command-response latency, and user satisfaction. The findings demonstrate that the system offers a seamless, hands-free user experience with significant potential for use in accessibility, human–computer interaction, and intelligent interface design. This work advances the creation of multimodal AI-driven educational aids, providing a pragmatic framework for gesture-based document navigation and intelligent content enhancement. Full article
(This article belongs to the Special Issue Biomimicry for Optimization, Control, and Automation: 3rd Edition)
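
The abstract does not name the gesture-recognition stack it uses; below is a minimal sketch of the webcam side only, assuming OpenCV with MediaPipe Hands and a hypothetical hand-off to the local model when an open palm is detected.

```python
import cv2
import mediapipe as mp

def count_extended_fingers(landmarks):
    # A fingertip counts as extended when it sits above its PIP joint in image
    # coordinates (y grows downward); the thumb is ignored for simplicity.
    tips, pips = (8, 12, 16, 20), (6, 10, 14, 18)
    return sum(landmarks[t].y < landmarks[p].y for t, p in zip(tips, pips))

cap = cv2.VideoCapture(0)
with mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.7) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.multi_hand_landmarks:
            lm = result.multi_hand_landmarks[0].landmark
            if count_extended_fingers(lm) == 4:        # open palm
                print("open palm -> ask the local model for a summary")  # hypothetical hand-off
        cv2.imshow("camera", frame)
        if cv2.waitKey(1) & 0xFF == 27:                # Esc quits
            break
cap.release()
cv2.destroyAllWindows()
```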

11 pages, 1005 KB  
Proceeding Paper
Multimodal Fusion for Enhanced Human–Computer Interaction
by Ajay Sharma, Isha Batra, Shamneesh Sharma and Anggy Pradiftha Junfithrana
Eng. Proc. 2025, 107(1), 81; https://doi.org/10.3390/engproc2025107081 - 10 Sep 2025
Viewed by 597
Abstract
Our paper introduces a virtual mouse driven by gesture detection, eye tracking, and voice recognition. This system uses cutting-edge computer vision and machine learning technology to let users command and control the mouse pointer using eye motions, voice commands, or hand gestures. Its main goal is to provide users who want a more natural, hands-free approach to interacting with their computers, as well as those with impairments that limit their bodily motions, such as paralysis, with an easy and engaging interface. The system improves accessibility and usability by combining many input modalities, thereby providing a flexible solution for a wide range of users. While the speech recognition function permits hands-free operation via voice instructions, the eye-tracking component detects and responds to the user’s gaze, thereby providing precise cursor control. Gesture recognition complements these features by letting users execute mouse operations with simple hand movements. This technology not only enhances the user experience for people with impairments but also marks a major development in human–computer interaction. It shows how computer vision and machine learning may be used to provide more inclusive and flexible user interfaces, thereby improving the accessibility and efficiency of computer use for everyone. Full article
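
As a rough illustration of the fusion idea (not the authors' implementation), the sketch below arbitrates between stubbed gaze, voice, and gesture sources and drives the cursor with pyautogui; every input function is a placeholder.

```python
import pyautogui  # moves the real cursor; gaze, voice, and gesture sources are stubbed

def get_gaze_point():
    # Stub: a real system would return the fixation point from an eye tracker.
    return (640, 360)

def get_voice_command():
    # Stub: a real system would return the last decoded voice command, if any.
    return "click"

def get_hand_gesture():
    # Stub: a real system would return a gesture label from a hand-tracking model.
    return None

def fuse_once():
    # Gaze steers the pointer; a gesture, if present, wins over voice for discrete actions.
    x, y = get_gaze_point()
    pyautogui.moveTo(x, y)
    command = get_hand_gesture() or get_voice_command()
    if command == "click":
        pyautogui.click()
    elif command == "double_click":
        pyautogui.doubleClick()

fuse_once()
```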

15 pages, 2127 KB  
Article
Accessible Interface for Museum Geological Exhibitions: PETRA—A Gesture-Controlled Experience of Three-Dimensional Rocks and Minerals
by Andrei Ionuţ Apopei
Minerals 2025, 15(8), 775; https://doi.org/10.3390/min15080775 - 24 Jul 2025
Cited by 1 | Viewed by 854
Abstract
The increasing integration of 3D technologies and machine learning is fundamentally reshaping mineral sciences and cultural heritage, establishing the foundation for an emerging “Mineralogy 4.0” framework. However, public engagement with digital 3D collections is often limited by complex or costly interfaces, such as VR/AR systems and traditional touchscreen kiosks, creating a clear need for more intuitive, accessible, and more engaging and inclusive solutions. This paper presents PETRA, an open-source, gesture-controlled system for exploring 3D rocks and minerals. Developed in the TouchDesigner environment, PETRA utilizes a standard webcam and the MediaPipe framework to translate natural hand movements into real-time manipulation of digital specimens, requiring no specialized hardware. The system provides a customizable, node-based framework for creating touchless, interactive exhibits. Successfully evaluated during a “Long Night of Museums” public event with 550 visitors, direct qualitative observations confirmed high user engagement, rapid instruction-free learnability across diverse age groups, and robust system stability in a continuous-use setting. As a practical case study, PETRA demonstrates that low-cost, webcam-based gesture control is a viable solution for creating accessible and immersive learning experiences. This work offers a significant contribution to the fields of digital mineralogy, human–machine interaction, and cultural heritage by providing a hygienic, scalable, and socially engaging method for interacting with geological collections. This research confirms that as digital archives grow, the development of human-centered interfaces is paramount in unlocking their full scientific and educational potential. Full article
(This article belongs to the Special Issue 3D Technologies and Machine Learning in Mineral Sciences)
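
PETRA itself runs inside TouchDesigner; rather than guess at its node network, the standalone sketch below shows the core mapping the abstract describes, from MediaPipe wrist-position deltas to yaw/pitch rotation of a specimen, with apply_rotation() as a hypothetical hook.

```python
import cv2
import mediapipe as mp

SENSITIVITY = 180.0                  # degrees of rotation per normalized unit of hand travel
yaw, pitch, prev = 0.0, 0.0, None

def apply_rotation(yaw_deg, pitch_deg):
    # Hypothetical hook: in PETRA this would drive the TouchDesigner scene / 3D viewer.
    print(f"yaw={yaw_deg:.1f}  pitch={pitch_deg:.1f}")

cap = cv2.VideoCapture(0)
with mp.solutions.hands.Hands(max_num_hands=1) as hands:
    for _ in range(600):             # a few hundred frames is enough for a demo
        ok, frame = cap.read()
        if not ok:
            break
        res = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if res.multi_hand_landmarks:
            wrist = res.multi_hand_landmarks[0].landmark[0]     # landmark 0 = wrist
            if prev is not None:
                yaw += (wrist.x - prev[0]) * SENSITIVITY        # horizontal motion -> yaw
                pitch += (wrist.y - prev[1]) * SENSITIVITY      # vertical motion -> pitch
                apply_rotation(yaw, pitch)
            prev = (wrist.x, wrist.y)
cap.release()
```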

28 pages, 5168 KB  
Article
GazeHand2: A Gaze-Driven Virtual Hand Interface with Improved Gaze Depth Control for Distant Object Interaction
by Jaejoon Jeong, Soo-Hyung Kim, Hyung-Jeong Yang, Gun Lee and Seungwon Kim
Electronics 2025, 14(13), 2530; https://doi.org/10.3390/electronics14132530 - 22 Jun 2025
Viewed by 1222
Abstract
Research on Virtual Reality (VR) interfaces for distant object interaction has been carried out to improve user experience. Since hand-only interfaces and gaze-only interfaces have limitations such as physical fatigue or restricted usage, VR interaction interfaces using both gaze and hand input have been proposed. However, current gaze + hand interfaces still have restrictions such as difficulty in translating along the gaze ray direction, less realistic manipulation methods, or limited rotation support. This study aims to design a new distant object interaction technique that supports hand-based interaction with a high degree of freedom in immersive VR. We developed GazeHand2, a hand-based object interaction technique featuring a new depth control that enables free object manipulation in VR. Building on the strengths of the original GazeHand, GazeHand2 controls the rate of change of the gaze depth using the relative position of the hand, allowing users to translate an object to any position. To validate our design, we conducted a user study on object manipulation comparing it with other gaze + hand interfaces (Gaze+Pinch and ImplicitGaze). Results showed that, compared to the other conditions, GazeHand2 reduced hand movements by 39.3% to 54.3% and head movements by 27.8% to 47.1% in the 3 m and 5 m tasks. It also significantly improved overall user experience ratings (0.69 to 1.12 pt higher than Gaze+Pinch and 1.18 to 1.62 pt higher than ImplicitGaze). Furthermore, over half of the participants preferred GazeHand2 because it supports convenient and efficient object translation and realistic hand-based object manipulation. We concluded that GazeHand2 can support simple and effective distant object interaction with reduced physical fatigue and a better user experience than the other interfaces in immersive VR. We also suggested design directions for improving interaction accuracy and user convenience in future work. Full article
(This article belongs to the Section Computer Science & Engineering)
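
The abstract's key idea, using the hand's relative position to set the rate of change of gaze depth, can be captured in a few lines; the sketch below is an illustrative rate controller with assumed parameter names, not the authors' implementation.

```python
import numpy as np

DEAD_ZONE = 0.05   # metres of hand offset ignored around the rest position (assumed)
GAIN = 2.0         # depth change (m/s) per metre of hand offset beyond the dead zone (assumed)

def update_gaze_depth(depth, hand_offset_forward, dt, min_depth=0.3, max_depth=10.0):
    """Rate control: the hand's forward offset sets the *velocity* of the gaze depth,
    so holding the hand forward keeps pushing the manipulated object farther away."""
    if abs(hand_offset_forward) < DEAD_ZONE:
        rate = 0.0
    else:
        rate = GAIN * (hand_offset_forward - np.sign(hand_offset_forward) * DEAD_ZONE)
    return float(np.clip(depth + rate * dt, min_depth, max_depth))

# Example: hand held 15 cm forward of its rest pose for one 90 Hz frame
depth = 3.0
depth = update_gaze_depth(depth, hand_offset_forward=0.15, dt=1 / 90)
print(depth)
```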

18 pages, 5112 KB  
Article
Gaze–Hand Steering for Travel and Multitasking in Virtual Environments
by Mona Zavichi, André Santos, Catarina Moreira, Anderson Maciel and Joaquim Jorge
Multimodal Technol. Interact. 2025, 9(6), 61; https://doi.org/10.3390/mti9060061 - 13 Jun 2025
Viewed by 867
Abstract
As head-mounted displays (HMDs) with eye tracking become increasingly accessible, the need for effective gaze-based interfaces in virtual reality (VR) grows. Traditional gaze- or hand-based navigation often limits user precision or impairs free viewing, making multitasking difficult. We present a gaze–hand steering technique that combines eye tracking with hand pointing: users steer only when gaze aligns with a hand-defined target, reducing unintended actions and enabling free look. Speed is controlled via either a joystick or a waist-level speed circle. We evaluated our method in a user study (n = 20) across multitasking and single-task scenarios, comparing it to a similar technique. Results show that gaze–hand steering maintains performance and enhances user comfort and spatial awareness during multitasking. Our findings support using gaze–hand steering in gaze-dominant VR applications requiring precision and simultaneous interaction. Our method significantly improves VR navigation in gaze–dominant, multitasking-intensive applications, supporting immersion and efficient control. Full article
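
A minimal sketch of the steering rule described above, assuming unit-free direction vectors and an arbitrary alignment threshold: motion is applied only while the gaze ray agrees with the hand-defined direction, which suppresses unintended travel during free viewing.

```python
import numpy as np

ALIGN_THRESHOLD_DEG = 10.0   # steer only when gaze and hand rays agree this closely (assumed)

def angle_between(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    cosang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def steering_velocity(gaze_dir, hand_dir, speed):
    # Move along the gaze ray only while it aligns with the hand-defined target.
    if angle_between(gaze_dir, hand_dir) <= ALIGN_THRESHOLD_DEG:
        g = np.asarray(gaze_dir, float)
        return speed * g / np.linalg.norm(g)
    return np.zeros(3)

print(steering_velocity([0, 0, 1], [0.05, 0, 1], speed=2.0))  # aligned -> moves forward
print(steering_velocity([0, 0, 1], [1, 0, 0], speed=2.0))     # looking away -> stops
```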

17 pages, 914 KB  
Systematic Review
Systematic Review of Mecanum and Omni Wheel Technologies for Motor Impairments
by Michał Burkacki, Ilona Łysy, Sławomir Suchoń, Miłosz Chrzan and Rafał Kowolik
Appl. Sci. 2025, 15(9), 4773; https://doi.org/10.3390/app15094773 - 25 Apr 2025
Cited by 1 | Viewed by 2538
Abstract
Mecanum and omni wheel-based assistive technologies present an alternative to conventional mobility devices for individuals with motor impairments, owing to their omnidirectional movement capabilities and high maneuverability in constrained environments. This systematic review identifies and categorizes the key challenges and emerging trends in the development of such systems. Primary obstacles include limited stability and maneuverability on uneven terrain, high energy consumption, complex control requirements, and elevated production costs. In response, recent studies have introduced several innovative approaches, such as advanced suspension systems to enhance terrain adaptability, modular mechanical designs to reduce manufacturing complexity, energy-efficient motor control strategies such as field-oriented control, AI-driven autonomous navigation, and hands-free user interfaces—including gesture recognition and brain–computer interfaces. By synthesizing findings from 26 peer-reviewed studies, this review outlines current technical limitations, surveys state-of-the-art solutions, and offers strategic recommendations to inform future research in intelligent assistive mobility technologies. Full article
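
For readers unfamiliar with mecanum platforms, the standard inverse kinematics (not taken from any of the reviewed papers) maps a body-frame velocity command to four wheel speeds; the sketch below uses one common sign convention and made-up geometry parameters.

```python
def mecanum_wheel_speeds(vx, vy, wz, r=0.05, lx=0.20, ly=0.15):
    """Inverse kinematics for a mecanum platform (one common X-roller convention):
    vx forward (m/s), vy left (m/s), wz counter-clockwise yaw rate (rad/s);
    r is wheel radius, lx/ly the half wheelbase/track. Returns FL, FR, RL, RR
    wheel angular velocities in rad/s."""
    k = lx + ly
    fl = (vx - vy - k * wz) / r
    fr = (vx + vy + k * wz) / r
    rl = (vx + vy - k * wz) / r
    rr = (vx - vy + k * wz) / r
    return fl, fr, rl, rr

# Pure sideways (strafe) motion: wheels on each diagonal spin in opposite directions
print(mecanum_wheel_speeds(vx=0.0, vy=0.3, wz=0.0))
```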

25 pages, 2844 KB  
Article
Real-Time Gesture-Based Hand Landmark Detection for Optimized Mobile Photo Capture and Synchronization
by Pedro Marques, Paulo Váz, José Silva, Pedro Martins and Maryam Abbasi
Electronics 2025, 14(4), 704; https://doi.org/10.3390/electronics14040704 - 12 Feb 2025
Cited by 2 | Viewed by 2961
Abstract
Gesture recognition technology has emerged as a transformative solution for natural and intuitive human–computer interaction (HCI), offering touch-free operation across diverse fields such as healthcare, gaming, and smart home systems. In mobile contexts, where hygiene, convenience, and the ability to operate under resource constraints are critical, hand gesture recognition provides a compelling alternative to traditional touch-based interfaces. However, implementing effective gesture recognition in real-world mobile settings involves challenges such as limited computational power, varying environmental conditions, and the requirement for robust offline–online data management. In this study, we introduce ThumbsUp, which is a gesture-driven system, and employ a partially systematic literature review approach (inspired by core PRISMA guidelines) to identify the key research gaps in mobile gesture recognition. By incorporating insights from deep learning–based methods (e.g., CNNs and Transformers) while focusing on low resource consumption, we leverage Google’s MediaPipe in our framework for real-time detection of 21 hand landmarks and adaptive lighting pre-processing, enabling accurate recognition of a “thumbs-up” gesture. The system features a secure queue-based offline–cloud synchronization model, which ensures that the captured images and metadata (encrypted with AES-GCM) remain consistent and accessible even with intermittent connectivity. Experimental results under dynamic lighting, distance variations, and partially cluttered environments confirm the system’s superior low-light performance and decreased resource consumption compared to baseline camera applications. Additionally, we highlight the feasibility of extending ThumbsUp to incorporate AI-driven enhancements for abrupt lighting changes and, in the future, electromyographic (EMG) signals for users with motor impairments. Our comprehensive evaluation demonstrates that ThumbsUp maintains robust performance on typical mobile hardware, showing resilience to unstable network conditions and minimal reliance on high-end GPUs. These findings offer new perspectives for deploying gesture-based interfaces in the broader IoT ecosystem, thus paving the way toward secure, efficient, and inclusive mobile HCI solutions. Full article
(This article belongs to the Special Issue AI-Driven Digital Image Processing: Latest Advances and Prospects)
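
The paper builds on MediaPipe's 21 hand landmarks; the heuristic below is one plausible thumbs-up test (not the authors' exact rule), using the published landmark indices and a synthetic hand pose for a quick check.

```python
from collections import namedtuple

P = namedtuple("P", "x y")                    # stand-in for a MediaPipe NormalizedLandmark
WRIST, THUMB_IP, THUMB_TIP = 0, 3, 4          # indices from MediaPipe's 21-landmark hand model
FINGER_TIPS, FINGER_PIPS = (8, 12, 16, 20), (6, 10, 14, 18)

def is_thumbs_up(lm):
    """One plausible heuristic (not the authors' exact rule): thumb extended upward
    while the other four fingers are curled. Image y grows downward, so 'above'
    means a smaller y value."""
    thumb_up = lm[THUMB_TIP].y < lm[THUMB_IP].y < lm[WRIST].y
    others_curled = all(lm[tip].y > lm[pip].y for tip, pip in zip(FINGER_TIPS, FINGER_PIPS))
    return thumb_up and others_curled

# Minimal check with synthetic landmarks (only y matters for this heuristic)
fake = [P(0.5, 0.9)] * 21
fake[THUMB_TIP], fake[THUMB_IP], fake[WRIST] = P(0.5, 0.2), P(0.5, 0.4), P(0.5, 0.9)
for tip, pip in zip(FINGER_TIPS, FINGER_PIPS):
    fake[tip], fake[pip] = P(0.5, 0.8), P(0.5, 0.6)
print(is_thumbs_up(fake))                     # True
```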

10 pages, 4558 KB  
Proceeding Paper
An IoT-Based Smart Wheelchair with EEG Control and Vital Sign Monitoring
by Rowida Meligy, Anton Royanto Ahmad and Samir Mekid
Eng. Proc. 2024, 82(1), 46; https://doi.org/10.3390/ecsa-11-20489 - 26 Nov 2024
Cited by 2 | Viewed by 5153
Abstract
This study introduces an innovative smart wheelchair designed to improve mobility and health monitoring for individuals with disabilities. Overcoming the limitations of traditional wheelchairs, this smart wheelchair integrates a tri-wheel mechanism, enabling smooth navigation across various terrains, including stairs, thus providing greater autonomy and flexibility. The wheelchair is equipped with two smart Internet of Things (IoT)-based subsystems for control and vital sign monitoring. Besides a joystick, the wheelchair features an electroencephalography (EEG)-based brain–computer interface (BCI) for hands-free control. Utilizing support vector machine (SVM) algorithms has proven effective in classifying EEG signals. This feature is especially beneficial for users with severe physical disabilities, allowing them to navigate more independently. In addition, the smart wheelchair has comprehensive health monitoring capabilities, continuously tracking vital signs such as heart rate, blood oxygen levels (SpO2), and electrocardiogram (ECG) data. The system implements an SVM algorithm to recognize premature ventricular contractions (PVC) from ECG data. These metrics are transmitted to healthcare providers through a secure IoT platform, allowing for real-time monitoring and timely interventions. In the event of an emergency, the system is programmed to automatically send alerts, including the patient’s location, to caregivers and authorized relatives. This innovation is a step forward in developing assistive technologies that support independent living and proactive health management in smart cities. Full article
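
The SVM classification step described above could look roughly like the scikit-learn pipeline below; the features and labels are synthetic stand-ins, not the study's EEG or ECG data.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-ins: 200 epochs x 16 features (e.g., EEG band powers or ECG beat morphology)
X = rng.normal(size=(200, 16))
y = rng.integers(0, 2, size=200)     # e.g., left/right command, or PVC vs. normal beat

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print(cross_val_score(clf, X, y, cv=5).mean())   # around chance level on random features
```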

21 pages, 2546 KB  
Article
Assessing the Acceptance of a Mid-Air Gesture Syntax for Smart Space Interaction: An Empirical Study
by Ana M. Bernardos, Xian Wang, Luca Bergesio, Juan A. Besada and José R. Casar
J. Sens. Actuator Netw. 2024, 13(2), 25; https://doi.org/10.3390/jsan13020025 - 9 Apr 2024
Cited by 3 | Viewed by 3138
Abstract
Mid-air gesture interfaces have become popular for specific scenarios, such as interactions with augmented reality via head-mounted displays, specific controls over smartphones, or gaming platforms. This article explores the use of a location-aware mid-air gesture-based command triplet syntax to interact with a smart space. The syntax, inspired by human language, is built as a vocative case with an imperative structure. In a sentence like “Light, please switch on!”, the object being activated is invoked by making a gesture that mimics its initial letter/acronym (vocative, coincident with the sentence’s elliptical subject). A geometrical or directional gesture then identifies the action (imperative verb) and may include an object feature or a second object with which to network (complement), which is also represented by the initial or acronym letter. Technically, an interpreter relying on a trainable multidevice gesture recognition layer makes the pair/triplet syntax decoding possible. The recognition layer works on acceleration and position input signals from graspable (smartphone) and free-hand devices (smartwatch and external depth cameras), as well as a specific compiler. On a specific deployment at a Living Lab facility, the syntax has been instantiated via the use of a lexicon derived from English (with respect to the initial letters and acronyms). A within-subject analysis with twelve users has enabled the analysis of the syntax acceptance (in terms of usability, gesture agreement for actions over objects, and social acceptance) and technology preference of the gesture syntax within its three device implementations (graspable, wearable, and device-free ones). Participants expressed consensus regarding the simplicity of learning the syntax and its potential effectiveness in managing smart resources. Socially, participants favoured the Watch for outdoor activities and the Phone for home and work settings, underscoring the importance of social context in technology design. The Phone emerged as the preferred option for gesture recognition due to its efficiency and familiarity. The system, which can be adapted to different sensing technologies, addresses scalability concerns (as it can be easily extended for new objects and actions) and allows for personalised interaction. Full article
(This article belongs to the Special Issue Machine-Environment Interaction, Volume II)
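
The vocative-imperative pair/triplet syntax can be illustrated with a toy interpreter; the gesture-recognition layer is assumed to have already decoded gestures into tokens, and the object/action tables are hypothetical.

```python
# Toy interpreter for the pair/triplet syntax: the trainable gesture-recognition layer is
# assumed to have already decoded each gesture into a token; tables are hypothetical.
OBJECTS = {"L": "light", "T": "thermostat", "S": "speaker"}
ACTIONS = {"up_swipe": "switch_on", "down_swipe": "switch_off", "circle": "link_to"}

def interpret(tokens):
    """tokens: (object_letter, action_gesture) or (object_letter, action_gesture, complement)."""
    obj = OBJECTS[tokens[0]]                     # vocative: which object is being addressed
    act = ACTIONS[tokens[1]]                     # imperative: what it should do
    if len(tokens) == 3:                         # optional complement: a second object
        return f"{obj}.{act}({OBJECTS[tokens[2]]})"
    return f"{obj}.{act}()"

print(interpret(("L", "up_swipe")))              # "Light, please switch on!"
print(interpret(("S", "circle", "T")))           # network the speaker with the thermostat
```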

35 pages, 11716 KB  
Article
Digital Twin for a Multifunctional Technology of Flexible Assembly on a Mechatronics Line with Integrated Robotic Systems and Mobile Visual Sensor—Challenges towards Industry 5.0
by Eugenia Mincă, Adrian Filipescu, Daniela Cernega, Răzvan Șolea, Adriana Filipescu, Dan Ionescu and Georgian Simion
Sensors 2022, 22(21), 8153; https://doi.org/10.3390/s22218153 - 25 Oct 2022
Cited by 36 | Viewed by 4695
Abstract
A digital twin for a multifunctional technology for flexible manufacturing on an assembly, disassembly, and repair mechatronics line (A/D/RML), assisted by a complex autonomous system (CAS), is presented in the paper. The hardware architecture consists of the A/D/RML and a six-workstation (WS) mechatronics line (ML) connected to a flexible cell (FC) and equipped with a six-degree of freedom (DOF) industrial robotic manipulator (IRM). The CAS is built around a wheeled mobile robot (WMR) with two driving wheels and one free wheel (2DW/1FW), equipped with a 7-DOF robotic manipulator (RM). On the end effector of the RM, a mobile visual servoing system (eye-in-hand MVSS) is mounted. The multifunctionality is provided by the three actions, assembly, disassembly, and repair, while the flexibility is due to the assembly of different products. After disassembly or repair, the CAS picks up the disassembled components and transports them to the appropriate storage depots for reuse. Disassembling or repairing starts after assembling, when the final assembled product fails the quality test. The virtual world that serves as the digital counterpart consists of task assignment, planning and synchronization of the A/D/RML with the integrated robotic systems, IRM, and CAS. Additionally, the virtual world includes hybrid modeling with synchronized hybrid Petri nets (SHPN), simulation of the SHPN models, modeling of the MVSS, and simulation of the trajectory-tracking sliding-mode control (TTSMC) of the CAS. The real world, as counterpart of the digital twin, consists of communication, synchronization, and control of the A/D/RML and CAS. In addition, the real world includes control of the MVSS, the inverse kinematic control (IKC) of the RM, and a graphical user interface (GUI) for monitoring and real-time control of the whole system. The “Digital twin” approach has been designed to meet all the requirements and attributes of Industry 4.0 and beyond towards Industry 5.0, the target being a closer collaboration between the human operator and the production line. Full article
(This article belongs to the Special Issue ICSTCC 2022: Advances in Monitoring and Control)
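
Among the simulated components is trajectory-tracking sliding-mode control (TTSMC); the sketch below is a generic one-dimensional sliding-mode tracking law with a boundary layer, offered only as an illustration of the control family, not the paper's WMR controller.

```python
import numpy as np

LAMBDA, K, PHI = 2.0, 5.0, 0.1     # surface slope, switching gain, boundary-layer width (assumed)

def smc_accel(pos, vel, ref_pos, ref_vel, ref_acc):
    """1D trajectory-tracking sliding-mode control for a double integrator:
    drive the tracking error to the surface s = de + LAMBDA * e, using a
    saturated switching term (boundary layer) to limit chattering."""
    e, de = pos - ref_pos, vel - ref_vel
    s = de + LAMBDA * e
    return ref_acc - LAMBDA * de - K * np.clip(s / PHI, -1.0, 1.0)

# Track x_ref(t) = sin(t) starting from rest
dt, x, v = 0.01, 0.0, 0.0
for i in range(300):
    t = i * dt
    u = smc_accel(x, v, np.sin(t), np.cos(t), -np.sin(t))
    v += u * dt
    x += v * dt
print(abs(x - np.sin(3.0)))        # small tracking error after ~3 s of simulated time
```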

14 pages, 3820 KB  
Article
BARI: An Affordable Brain-Augmented Reality Interface to Support Human–Robot Collaboration in Assembly Tasks
by Andrea Sanna, Federico Manuri, Jacopo Fiorenza and Francesco De Pace
Information 2022, 13(10), 460; https://doi.org/10.3390/info13100460 - 28 Sep 2022
Cited by 19 | Viewed by 3983
Abstract
Human–robot collaboration (HRC) is a new and challenging discipline that plays a key role in Industry 4.0. Digital transformation of industrial plants aims to introduce flexible production lines able to adapt to different products quickly. In this scenario, HRC can be a booster to support flexible manufacturing, thus introducing new interaction paradigms between humans and machines. Augmented reality (AR) can convey much important information to users: for instance, information related to the status and the intention of the robot/machine the user is collaborating with. On the other hand, traditional input interfaces based on physical devices, gestures, and voice might be precluded in industrial environments. Brain–computer interfaces (BCIs) can be profitably used with AR devices to provide technicians solutions to effectively collaborate with robots. This paper introduces a novel BCI–AR user interface based on the NextMind and the Microsoft Hololens 2. Compared to traditional BCI interfaces, the NextMind provides an intuitive selection mechanism based on visual cortex signals. This interaction paradigm is exploited to guide a collaborative robotic arm for a pick and place selection task. Since the ergonomic design of the NextMind allows its use in combination with the Hololens 2, users can visualize through AR the different parts composing the artifact to be assembled, the visual elements used by the NextMind to enable the selections, and the robot status. In this way, users’ hands are always free, and the focus can be always on the objects to be assembled. Finally, user tests are performed to evaluate the proposed system, assessing both its usability and the task’s workload; preliminary results are very encouraging, and the proposed solution can be considered a starting point to design and develop affordable hybrid-augmented interfaces to foster real-time human–robot collaboration. Full article
(This article belongs to the Collection Augmented Reality Technologies, Systems and Applications)
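
A rough sketch of the glue layer such a system needs, with the NextMind/Hololens side stubbed out: a BCI selection event enqueues pick-and-place commands for the robot. All names and poses are hypothetical, not the BARI implementation.

```python
import queue

PART_POSES = {                       # hypothetical pick poses (x, y, z in metres)
    "base_plate": (0.40, -0.10, 0.02),
    "gear":       (0.35,  0.05, 0.02),
    "cover":      (0.30,  0.15, 0.02),
}
ASSEMBLY_POSE = (0.55, 0.00, 0.05)   # hypothetical place pose

commands = queue.Queue()

def on_bci_selection(part_name):
    """Called when the BCI reports that the user focused on a part's visual tag;
    the NextMind / Hololens side is stubbed out and not modelled here."""
    commands.put(("pick", PART_POSES[part_name]))
    commands.put(("place", ASSEMBLY_POSE))

on_bci_selection("gear")
while not commands.empty():
    print(commands.get())            # a real system would forward these to the robot controller
```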

18 pages, 2761 KB  
Article
Effects of Low Mental Energy from Long Periods of Work on Brain-Computer Interfaces
by Kaixuan Liu, Yang Yu, Ling-Li Zeng, Xinbin Liang, Yadong Liu, Xingxing Chu, Gai Lu and Zongtan Zhou
Brain Sci. 2022, 12(9), 1152; https://doi.org/10.3390/brainsci12091152 - 29 Aug 2022
Cited by 2 | Viewed by 2230
Abstract
Brain-computer interfaces (BCIs) provide novel hands-free interaction strategies. However, the performance of BCIs is affected by the user’s mental energy to some extent. In this study, we aimed to analyze the combined effects of decreased mental energy and lack of sleep on BCI performance and how to reduce these effects. We defined the low-mental-energy (LME) condition as a combined condition of decreased mental energy and lack of sleep. We used a long period of work (>=18 h) to induce the LME condition, and then P300- and SSVEP-based BCI tasks were conducted in LME or normal conditions. Ten subjects were recruited in this study. Each subject participated in the LME- and normal-condition experiments within one week. For the P300-based BCI, we used two decoding algorithms: stepwise linear discriminant (SWLDA) and least square regression (LSR). For the SSVEP-based BCI, we used two decoding algorithms: canonical correlation analysis (CCA) and filter bank canonical correlation analysis (FBCCA). Accuracy and information transfer rate (ITR) were used as performance metrics. The experimental results showed that for the P300-based BCI, the average accuracy was reduced by approximately 35% (with a SWLDA classifier) and approximately 40% (with a LSR classifier); the average ITR was reduced by approximately 6 bits/min (with a SWLDA classifier) and approximately 7 bits/min (with an LSR classifier). For the SSVEP-based BCI, the average accuracy was reduced by approximately 40% (with a CCA classifier) and approximately 40% (with a FBCCA classifier); the average ITR was reduced by approximately 20 bits/min (with a CCA classifier) and approximately 19 bits/min (with a FBCCA classifier). Additionally, the amplitude and signal-to-noise ratio of the evoked electroencephalogram signals were lower in the LME condition, while the degree of fatigue and the task load of each subject were higher. Further experiments suggested that increasing stimulus size, flash duration, and flash number could improve BCI performance in LME conditions to some extent. Our experiments showed that the LME condition reduced BCI performance, the effects of LME on BCI did not rely on specific BCI types and specific decoding algorithms, and optimizing BCI parameters (e.g., stimulus size) can reduce these effects. Full article
(This article belongs to the Section Neurorehabilitation)
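
The SSVEP decoders mentioned above (CCA and FBCCA) follow a standard recipe; the sketch below shows plain CCA frequency detection with scikit-learn on synthetic data, as an outline of the technique rather than the authors' implementation.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

FS, N_HARMONICS = 250, 2             # sampling rate (Hz) and number of reference harmonics

def reference_signals(freq, n_samples):
    t = np.arange(n_samples) / FS
    refs = []
    for h in range(1, N_HARMONICS + 1):
        refs += [np.sin(2 * np.pi * h * freq * t), np.cos(2 * np.pi * h * freq * t)]
    return np.column_stack(refs)

def detect_ssvep(eeg, candidate_freqs):
    """eeg: (n_samples, n_channels). Pick the stimulus frequency whose sine/cosine
    references have the largest first canonical correlation with the EEG."""
    scores = []
    for f in candidate_freqs:
        refs = reference_signals(f, len(eeg))
        cca = CCA(n_components=1)
        cca.fit(eeg, refs)
        u, v = cca.transform(eeg, refs)
        scores.append(abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1]))
    return candidate_freqs[int(np.argmax(scores))]

# Synthetic 10 Hz SSVEP buried in noise across 8 channels
t = np.arange(2 * FS) / FS
eeg = 0.5 * np.sin(2 * np.pi * 10 * t)[:, None] + np.random.randn(len(t), 8)
print(detect_ssvep(eeg, [8.0, 10.0, 12.0, 15.0]))   # typically 10.0
```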

19 pages, 3083 KB  
Article
Asymmetric Free-Hand Interaction on a Large Display and Inspirations for Designing Natural User Interfaces
by Xiaolong Lou, Ziye Chen, Preben Hansen and Ren Peng
Symmetry 2022, 14(5), 928; https://doi.org/10.3390/sym14050928 - 2 May 2022
Cited by 6 | Viewed by 2766
Abstract
Hand motion sensing-based interaction, abbreviated as ‘free-hand interaction’, provides a natural and intuitive method for touch-less interaction on a large display. But due to inherent usability deficiencies of the unconventional size of the large display and the kinematic limitations of the user’s arm joint movement, a large display-based free-hand interaction is suspected to have different performance across the whole areas of the large display. To verify this, a multi-directional target pointing and selection experiment was designed and conducted based on the ISO 9241-9 evaluation criteria. Results show that (1) free-hand interaction in display areas close to the center of the body had a higher accuracy than that in peripheral-body areas; (2) free-hand interaction was asymmetric at the left side and the right side of the body. More specifically, left-hand interaction in the left-sided display area was more efficient and accurate than in the right-sided display area. For the right-hand interaction, the result was converse; moreover, (3) the dominant hand generated a higher interaction accuracy than the non-dominant hand. Lessons and strategies are discussed for designing user-friendly natural user interfaces in large displays-based interactive applications. Full article
(This article belongs to the Section Computer)
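
Studies following the ISO 9241-9 protocol are normally scored with effective throughput; the short worked computation below uses the standard formulas (We = 4.133 * SDx, IDe = log2(De/We + 1), TP = IDe/MT) on made-up trial data, not the paper's measurements.

```python
import math
import statistics

def throughput(distances, movement_times, endpoint_errors):
    """ISO 9241-9 style effective throughput (bits/s): the effective width is
    We = 4.133 * SD of the selection end-point errors along the approach axis,
    the effective index of difficulty is IDe = log2(De / We + 1), and TP = IDe / MT."""
    De = statistics.mean(distances)                 # mean movement amplitude
    We = 4.133 * statistics.stdev(endpoint_errors)  # effective target width
    IDe = math.log2(De / We + 1)
    MT = statistics.mean(movement_times)            # mean movement time (s)
    return IDe / MT

# Toy trial block: ~250 px amplitude, end-point scatter of a few px, ~0.8 s per selection
print(round(throughput(
    distances=[250, 252, 248, 251],
    movement_times=[0.82, 0.79, 0.85, 0.80],
    endpoint_errors=[3.0, -4.0, 5.0, -2.0]), 2))    # roughly 4.8 bits/s
```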

20 pages, 2620 KB  
Article
The Investigation of Adoption of Voice-User Interface (VUI) in Smart Home Systems among Chinese Older Adults
by Yao Song, Yanpu Yang and Peiyao Cheng
Sensors 2022, 22(4), 1614; https://doi.org/10.3390/s22041614 - 18 Feb 2022
Cited by 59 | Viewed by 8616
Abstract
Driven by advanced voice interaction technology, the voice-user interface (VUI) has gained popularity in recent years. VUI has been integrated into various devices in the context of the smart home system. In comparison with traditional interaction methods, VUI provides multiple benefits. VUI allows for hands-free and eyes-free interaction. It also enables users to perform multiple tasks while interacting. Moreover, as VUI is highly similar to a natural conversation in daily lives, it is intuitive to learn. The advantages provided by VUI are particularly beneficial to older adults, who suffer from decreases in physical and cognitive abilities, which hinder their interaction with electronic devices through traditional methods. However, the factors that influence older adults’ adoption of VUI remain unknown. This study addresses this research gap by proposing a conceptual model. On the basis of the technology adoption model (TAM) and the senior technology adoption model (STAM), this study considers the characteristic of VUI and the characteristic of older adults through incorporating the construct of trust and aging-related characteristics (i.e., perceived physical conditions, mobile self-efficacy, technology anxiety, self-actualization). A survey was designed and conducted. A total of 420 Chinese older adults participated in this survey, and they were current or potential users of VUI. Through structural equation modeling, data were analyzed. Results showed a good fit with the proposed conceptual model. Path analysis revealed that three factors determine Chinese older adults’ adoption of VUI: perceived usefulness, perceived ease of use, and trust. Aging-related characteristics also influence older adults’ adoption of VUI, but they are mediated by perceived usefulness, perceived ease of use, and trust. Specifically, mobile self-efficacy is demonstrated to positively influence trust and perceived ease of use but negatively influence perceived usefulness. Self-actualization exhibits positive influences on perceived usefulness and perceived ease of use. Technology anxiety only exerts influence on perceived ease of use in a marginal way. No significant influences of perceived physical conditions were found. This study extends the TAM and STAM by incorporating additional variables to explain Chinese older adults’ adoption of VUI. These results also provide valuable implications for developing suitable VUI for older adults as well as planning actionable communication strategies for promoting VUI among Chinese older adults. Full article
(This article belongs to the Special Issue Human–Smarthome Interaction)
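
The kind of path analysis described above can be specified in lavaan-style syntax; the sketch below uses the semopy package on synthetic composite scores purely for illustration. It is an assumed setup, not the authors' model, constructs, or tooling.

```python
import numpy as np
import pandas as pd
import semopy   # assumption: semopy is available; the study's SEM software is not stated

# Hypothetical observed composite scores (not the study's data)
rng = np.random.default_rng(1)
n = 420
data = pd.DataFrame({
    "MSE":   rng.normal(size=n),    # mobile self-efficacy
    "ANX":   rng.normal(size=n),    # technology anxiety
    "SA":    rng.normal(size=n),    # self-actualization
    "PU":    rng.normal(size=n),    # perceived usefulness
    "PEOU":  rng.normal(size=n),    # perceived ease of use
    "TRUST": rng.normal(size=n),
    "ADOPT": rng.normal(size=n),    # intention to adopt VUI
})

model_desc = """
PU    ~ MSE + SA
PEOU  ~ MSE + SA + ANX
TRUST ~ MSE
ADOPT ~ PU + PEOU + TRUST
"""
model = semopy.Model(model_desc)
model.fit(data)
print(model.inspect())              # path estimates, standard errors, p-values
```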

21 pages, 3466 KB  
Article
AnyGesture: Arbitrary One-Handed Gestures for Augmented, Virtual, and Mixed Reality Applications
by Alexander Schäfer, Gerd Reis and Didier Stricker
Appl. Sci. 2022, 12(4), 1888; https://doi.org/10.3390/app12041888 - 11 Feb 2022
Cited by 17 | Viewed by 6227
Abstract
Natural user interfaces based on hand gestures are becoming increasingly popular. The need for expensive hardware left a wide range of interaction possibilities that hand tracking enables largely unexplored. Recently, hand tracking has been built into inexpensive and widely available hardware, allowing more and more people access to this technology. This work provides researchers and users with a simple yet effective way to implement various one-handed gestures to enable deeper exploration of gesture-based interactions and interfaces. To this end, this work provides a framework for design, prototyping, testing, and implementation of one-handed gestures. The proposed framework was implemented with two main goals: First, it should be able to recognize any one-handed gesture. Secondly, the design and implementation of gestures should be as simple as performing the gesture and pressing a button to record it. The contribution of this paper is a simple yet unique way to record and recognize static and dynamic one-handed gestures. A static gesture can be captured with a template matching approach, while dynamic gestures use previously captured spatial information. The presented approach was evaluated in a user study with 33 participants and the implementable gestures received high accuracy and user acceptance. Full article
(This article belongs to the Special Issue Applications of Virtual, Augmented, and Mixed Reality)
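
The template-matching idea for static gestures can be made concrete as below: a hedged sketch (one plausible realization, not the authors' code) that normalizes landmarks relative to the wrist and hand span, then accepts a pose when the mean per-joint distance to the stored template falls under a tuned threshold.

```python
import numpy as np

def normalize(landmarks):
    """landmarks: (21, 3) array of hand joints. Centre on the wrist (row 0) and
    divide by the hand span so the template is translation- and scale-invariant."""
    pts = np.asarray(landmarks, dtype=float)
    pts = pts - pts[0]
    span = np.linalg.norm(pts, axis=1).max()
    return pts / (span if span > 0 else 1.0)

def matches(template, landmarks, threshold=0.1):
    """Static-gesture test: accept when the mean per-joint distance between the
    stored template and the current normalized pose falls below a tuned threshold."""
    dist = np.linalg.norm(template - normalize(landmarks), axis=1).mean()
    return dist < threshold

rng = np.random.default_rng(0)
pose = rng.normal(size=(21, 3))
template = normalize(pose)                                              # "press a button to record"
print(matches(template, pose + rng.normal(scale=0.01, size=(21, 3))))   # True: near-identical pose
print(matches(template, rng.normal(size=(21, 3))))                      # False: unrelated pose
```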
