Search Results (19)

Search Parameters:
Keywords = hands-free computer interface

14 pages, 1287 KB  
Article
Comparative Evaluation of Two Dynamic Navigation Systems vs. Freehand Approach and Different Operator Skills in Endodontic Microsurgery: A Cadaver Study
by Umberto Gibello, Elina Mekhdieva, Mario Alovisi, Luca Cortese, Andrea Cemenasco, Anna Cassisa, Caterina Chiara Bianchi, Vittorio Monasterolo, Allegra Comba, Andrea Baldi, Vittorio Fenoglio, Elio Berutti and Damiano Pasqualini
Appl. Sci. 2025, 15(21), 11405; https://doi.org/10.3390/app152111405 - 24 Oct 2025
Abstract
Background/Objectives: The purpose of the study is to determine and compare the accuracy and efficiency of two dynamic navigation systems (DNS), Navident (ClaroNav, Canada) and X-Guide (Nobel Biocare, Switzerland), versus a free-hand (FH) approach in performing endodontic microsurgery (EMS) on human cadavers. Methods: A total of 119 roots from six cadavers were randomly divided into three groups (Navident/X-Guide/FH). The cadavers' jaws were scanned pre-operatively with computed tomography. The DICOM data were uploaded and digitally managed with software interfaces for registration, calibration, and virtual planning of EMS. Osteotomy was performed under DNS control or, in the FH control group, using a dental operating microscope. Post-operative scans were taken with the same settings as the pre-operative ones. Accuracy was then determined by comparing pre- and post-operative scans for coronal and apical linear deviation, angular deviation, and the angle, length, and depth of the apical resection. Efficiency was determined by measuring the procedural times of osteotomy, apicectomy, and retro-cavity preparation, the volume of substance and cortical bone loss, and iatrogenic complications. Outcomes were also evaluated in relation to the operators' skill levels. Descriptive statistics and inferential analyses were conducted using R software (4.2.1). Results: DNS demonstrated better efficiency in osteotomy and apicectomy, and was second only to FH in substance and cortical bone loss. Both DNS approaches had similar accuracy. Experts were faster and more accurate than non-experts in FH, apart from resection angle, length, and depth and retro-cavity preparation time, for which the comparison was not statistically significant. The Navident and X-Guide groups showed similar trends in increasing the efficiency and accuracy of EMS. All complications in the FH group were caused by non-experts, and the X-Guide group had fewer complications than the Navident group. Conclusions: Both DNS appear beneficial for EMS in terms of accuracy and efficiency compared with FH, and they narrow the skill gap between expert and novice operators. Owing to its more convenient use, X-Guide also produces fewer iatrogenic complications than Navident.
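As a rough illustration of the accuracy metrics described above, the sketch below computes coronal and apical linear deviations and the angular deviation between a planned and an achieved osteotomy axis from their entry and apex coordinates. It is a minimal NumPy example; the coordinates, coordinate frame, and function names are illustrative assumptions, not the study's planning software.

```python
import numpy as np

def deviation_metrics(planned_entry, planned_apex, placed_entry, placed_apex):
    """Linear deviations (mm) at the coronal entry point and at the apex,
    plus the angular deviation (degrees) between the two axes."""
    planned_entry = np.asarray(planned_entry, float)
    planned_apex = np.asarray(planned_apex, float)
    placed_entry = np.asarray(placed_entry, float)
    placed_apex = np.asarray(placed_apex, float)

    coronal_dev = np.linalg.norm(placed_entry - planned_entry)  # 3D distance at entry
    apical_dev = np.linalg.norm(placed_apex - planned_apex)     # 3D distance at apex

    v_planned = planned_apex - planned_entry
    v_placed = placed_apex - placed_entry
    cos_angle = np.dot(v_planned, v_placed) / (
        np.linalg.norm(v_planned) * np.linalg.norm(v_placed))
    angular_dev = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return coronal_dev, apical_dev, angular_dev

# Example: planned vs. achieved osteotomy axes in scanner coordinates (mm).
print(deviation_metrics([0, 0, 0], [0, 0, 10], [0.4, 0.2, 0.1], [0.9, 0.5, 10.2]))
```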

14 pages, 1197 KB  
Article
An Inclusive Offline Learning Platform Integrating Gesture Recognition and Local AI Models
by Marius-Valentin Drăgoi, Ionuț Nisipeanu, Roxana-Adriana Puiu, Florentina-Geanina Tache, Teodora-Mihaela Spiridon-Mocioacă, Alexandru Hank and Cozmin Cristoiu
Biomimetics 2025, 10(10), 693; https://doi.org/10.3390/biomimetics10100693 - 14 Oct 2025
Abstract
This paper introduces a gesture-controlled conversational interface driven by a local AI model, aimed at improving accessibility and facilitating hands-free interaction within digital environments. The system uses real-time hand gesture recognition via a typical laptop camera and connects with a local AI engine to produce customized learning materials. Users can browse educational documents, obtain topic summaries, and generate automated quizzes with intuitive gestures, including lateral finger movements, a two-finger gesture, or an open palm, without the need for conventional input devices. Upon selection of a file, the AI model analyzes its entire content, producing a structured summary and a multiple-choice assessment, both of which are immediately saved for subsequent inspection. A unified set of gestures enables seamless navigation within the user interface and the opened documents. The system was tested with university students and faculty (n = 31), using assessment measures such as gesture detection accuracy, command-response latency, and user satisfaction. The findings demonstrate that the system offers a seamless, hands-free user experience with significant potential for use in accessibility, human–computer interaction, and intelligent interface design. This work advances the creation of multimodal AI-driven educational aids, providing a pragmatic framework for gesture-based document navigation and intelligent content enhancement.
(This article belongs to the Special Issue Biomimicry for Optimization, Control, and Automation: 3rd Edition)
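As a minimal sketch of how a unified gesture set like the one described above could drive document navigation and content generation, the snippet below maps gesture labels to handler functions. The gesture names and actions are assumptions for illustration, not the paper's actual command set or implementation.

```python
from typing import Callable, Dict

def next_page() -> str:      return "advanced to next page"
def previous_page() -> str:  return "returned to previous page"
def summarize() -> str:      return "requested summary from local model"
def make_quiz() -> str:      return "requested auto-generated quiz"

# Illustrative mapping from gesture labels (as produced by some recognizer)
# to actions; the real system's gesture vocabulary may differ.
GESTURE_ACTIONS: Dict[str, Callable[[], str]] = {
    "swipe_right": next_page,
    "swipe_left": previous_page,
    "two_finger": summarize,
    "open_palm": make_quiz,
}

def dispatch(gesture: str) -> str:
    handler = GESTURE_ACTIONS.get(gesture)
    return handler() if handler else f"ignored unknown gesture: {gesture}"

if __name__ == "__main__":
    for g in ["swipe_right", "open_palm", "fist"]:
        print(g, "->", dispatch(g))
```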

11 pages, 1005 KB  
Proceeding Paper
Multimodal Fusion for Enhanced Human–Computer Interaction
by Ajay Sharma, Isha Batra, Shamneesh Sharma and Anggy Pradiftha Junfithrana
Eng. Proc. 2025, 107(1), 81; https://doi.org/10.3390/engproc2025107081 - 10 Sep 2025
Abstract
Our paper introduces a novel concept of a virtual mouse driven by gesture detection, eye-tracking, and voice recognition. The system uses computer vision and machine learning to let users command and control the mouse pointer with eye motions, voice commands, or hand gestures. Its main goal is to provide an easy and engaging interface both for users who want a more natural, hands-free way of interacting with their computers and for those with impairments that limit their bodily motions, such as paralysis. The system improves accessibility and usability by combining several input modalities, providing a flexible solution for a wide range of users. The speech recognition function permits hands-free operation via voice instructions, while the eye-tracking component detects and responds to the user's gaze, providing precise cursor control. Gesture recognition extends these features further by letting users execute mouse operations with their hands alone. This technology not only enhances the user experience for people with impairments but also marks a major development in human–computer interaction. It shows how computer vision and machine learning may be used to build more inclusive and flexible user interfaces, improving the accessibility and efficiency of computer use for everyone.
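To make the cursor-control idea concrete, the sketch below shows one common way to map normalized gaze or hand coordinates onto the screen with simple exponential smoothing, here using the pyautogui library. The tracker supplying the coordinates, the smoothing factor, and the function names are assumptions; this is not the system described in the paper.

```python
import pyautogui  # pip install pyautogui

SCREEN_W, SCREEN_H = pyautogui.size()
ALPHA = 0.3  # smoothing factor: lower = steadier, higher = more responsive
_sx, _sy = SCREEN_W / 2, SCREEN_H / 2  # smoothed cursor state

def move_cursor(nx: float, ny: float) -> None:
    """Map a normalized (0..1) gaze/hand coordinate to screen pixels and
    low-pass filter it so small tracking jitter does not shake the cursor."""
    global _sx, _sy
    tx, ty = nx * SCREEN_W, ny * SCREEN_H
    _sx = (1 - ALPHA) * _sx + ALPHA * tx
    _sy = (1 - ALPHA) * _sy + ALPHA * ty
    pyautogui.moveTo(_sx, _sy)

# Example: feed normalized coordinates from any eye- or hand-tracker here.
for nx, ny in [(0.50, 0.50), (0.52, 0.49), (0.55, 0.51)]:
    move_cursor(nx, ny)
```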

20 pages, 2732 KB  
Article
Redesigning Multimodal Interaction: Adaptive Signal Processing and Cross-Modal Interaction for Hands-Free Computer Interaction
by Bui Hong Quan, Nguyen Dinh Tuan Anh, Hoang Van Phi and Bui Trung Thanh
Sensors 2025, 25(17), 5411; https://doi.org/10.3390/s25175411 - 2 Sep 2025
Abstract
Hands-free computer interaction is a key topic in assistive technology, with camera-based and voice-based systems being the most common methods. Recent camera-based solutions leverage facial expressions or head movements to simulate mouse clicks or key presses, while voice-based systems enable control via speech commands, wake-word detection, and vocal gestures. However, existing systems often suffer from limitations in responsiveness and accuracy, especially under real-world conditions. In this paper, we present 3-Modal Human-Computer Interaction (3M-HCI), a novel interaction system that dynamically integrates facial, vocal, and eye-based inputs through a new signal processing pipeline and a cross-modal coordination mechanism. This approach not only enhances recognition accuracy but also reduces interaction latency. Experimental results demonstrate that 3M-HCI outperforms several recent hands-free interaction solutions in both speed and precision, highlighting its potential as a robust assistive interface. Full article
(This article belongs to the Section Sensing and Imaging)

17 pages, 914 KB  
Systematic Review
Systematic Review of Mecanum and Omni Wheel Technologies for Motor Impairments
by Michał Burkacki, Ilona Łysy, Sławomir Suchoń, Miłosz Chrzan and Rafał Kowolik
Appl. Sci. 2025, 15(9), 4773; https://doi.org/10.3390/app15094773 - 25 Apr 2025
Abstract
Mecanum and omni wheel-based assistive technologies present an alternative to conventional mobility devices for individuals with motor impairments, owing to their omnidirectional movement capabilities and high maneuverability in constrained environments. This systematic review identifies and categorizes the key challenges and emerging trends in the development of such systems. Primary obstacles include limited stability and maneuverability on uneven terrain, high energy consumption, complex control requirements, and elevated production costs. In response, recent studies have introduced several innovative approaches, such as advanced suspension systems to enhance terrain adaptability, modular mechanical designs to reduce manufacturing complexity, energy-efficient motor control strategies such as field-oriented control, AI-driven autonomous navigation, and hands-free user interfaces—including gesture recognition and brain–computer interfaces. By synthesizing findings from 26 peer-reviewed studies, this review outlines current technical limitations, surveys state-of-the-art solutions, and offers strategic recommendations to inform future research in intelligent assistive mobility technologies. Full article

25 pages, 2844 KB  
Article
Real-Time Gesture-Based Hand Landmark Detection for Optimized Mobile Photo Capture and Synchronization
by Pedro Marques, Paulo Váz, José Silva, Pedro Martins and Maryam Abbasi
Electronics 2025, 14(4), 704; https://doi.org/10.3390/electronics14040704 - 12 Feb 2025
Abstract
Gesture recognition technology has emerged as a transformative solution for natural and intuitive human–computer interaction (HCI), offering touch-free operation across diverse fields such as healthcare, gaming, and smart home systems. In mobile contexts, where hygiene, convenience, and the ability to operate under resource constraints are critical, hand gesture recognition provides a compelling alternative to traditional touch-based interfaces. However, implementing effective gesture recognition in real-world mobile settings involves challenges such as limited computational power, varying environmental conditions, and the requirement for robust offline–online data management. In this study, we introduce ThumbsUp, a gesture-driven system, and employ a partially systematic literature review approach (inspired by core PRISMA guidelines) to identify the key research gaps in mobile gesture recognition. By incorporating insights from deep learning–based methods (e.g., CNNs and Transformers) while focusing on low resource consumption, we leverage Google's MediaPipe in our framework for real-time detection of 21 hand landmarks and adaptive lighting pre-processing, enabling accurate recognition of a "thumbs-up" gesture. The system features a secure queue-based offline–cloud synchronization model, which ensures that the captured images and metadata (encrypted with AES-GCM) remain consistent and accessible even with intermittent connectivity. Experimental results under dynamic lighting, distance variations, and partially cluttered environments confirm the system's superior low-light performance and decreased resource consumption compared to baseline camera applications. Additionally, we highlight the feasibility of extending ThumbsUp to incorporate AI-driven enhancements for abrupt lighting changes and, in the future, electromyographic (EMG) signals for users with motor impairments. Our comprehensive evaluation demonstrates that ThumbsUp maintains robust performance on typical mobile hardware, showing resilience to unstable network conditions and minimal reliance on high-end GPUs. These findings offer new perspectives for deploying gesture-based interfaces in the broader IoT ecosystem, paving the way toward secure, efficient, and inclusive mobile HCI solutions.
(This article belongs to the Special Issue AI-Driven Digital Image Processing: Latest Advances and Prospects)
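The abstract above mentions MediaPipe's 21 hand landmarks and a "thumbs-up" trigger. The sketch below shows a plain MediaPipe Hands loop with a simple geometric heuristic for that gesture; the heuristic, thresholds, and capture action are illustrative assumptions, not ThumbsUp's actual pipeline (which also includes lighting pre-processing and cloud synchronization).

```python
import cv2                      # pip install opencv-python
import mediapipe as mp          # pip install mediapipe

mp_hands = mp.solutions.hands
H = mp_hands.HandLandmark

def is_thumbs_up(lm) -> bool:
    """Heuristic: thumb extended upward, the other four fingers curled.
    `lm` is MediaPipe's 21-landmark list (normalized, y grows downward)."""
    thumb_up = lm[H.THUMB_TIP].y < lm[H.THUMB_IP].y
    curled = all(
        lm[tip].y > lm[pip].y
        for tip, pip in [(H.INDEX_FINGER_TIP, H.INDEX_FINGER_PIP),
                         (H.MIDDLE_FINGER_TIP, H.MIDDLE_FINGER_PIP),
                         (H.RING_FINGER_TIP, H.RING_FINGER_PIP),
                         (H.PINKY_TIP, H.PINKY_PIP)])
    return thumb_up and curled

cap = cv2.VideoCapture(0)
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.6) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.multi_hand_landmarks:
            lm = result.multi_hand_landmarks[0].landmark
            if is_thumbs_up(lm):
                print("thumbs-up detected -> trigger photo capture")
        cv2.imshow("camera", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
            break
cap.release()
cv2.destroyAllWindows()
```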

10 pages, 4558 KB  
Proceeding Paper
An IoT-Based Smart Wheelchair with EEG Control and Vital Sign Monitoring
by Rowida Meligy, Anton Royanto Ahmad and Samir Mekid
Eng. Proc. 2024, 82(1), 46; https://doi.org/10.3390/ecsa-11-20489 - 26 Nov 2024
Abstract
This study introduces an innovative smart wheelchair designed to improve mobility and health monitoring for individuals with disabilities. Overcoming the limitations of traditional wheelchairs, this smart wheelchair integrates a tri-wheel mechanism, enabling smooth navigation across various terrains, including stairs, thus providing greater autonomy and flexibility. The wheelchair is equipped with two smart Internet of Things (IoT)-based subsystems for control and vital sign monitoring. Besides a joystick, the wheelchair features an electroencephalography (EEG)-based brain–computer interface (BCI) for hands-free control. Utilizing support vector machine (SVM) algorithms has proven effective in classifying EEG signals. This feature is especially beneficial for users with severe physical disabilities, allowing them to navigate more independently. In addition, the smart wheelchair has comprehensive health monitoring capabilities, continuously tracking vital signs such as heart rate, blood oxygen levels (SpO2), and electrocardiogram (ECG) data. The system implements an SVM algorithm to recognize premature ventricular contractions (PVC) from ECG data. These metrics are transmitted to healthcare providers through a secure IoT platform, allowing for real-time monitoring and timely interventions. In the event of an emergency, the system is programmed to automatically send alerts, including the patient’s location, to caregivers and authorized relatives. This innovation is a step forward in developing assistive technologies that support independent living and proactive health management in smart cities. Full article
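As a minimal illustration of the SVM-based EEG classification mentioned above, the sketch below trains a scikit-learn SVM on placeholder feature vectors. The synthetic features, label set, and hyperparameters are assumptions; the study's actual EEG preprocessing and feature extraction are not described in the abstract.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: 200 trials x 16 features (e.g., band powers per channel),
# labelled with 4 hypothetical movement-intent classes. Real EEG features
# extracted from the headset would replace this.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))
y = rng.integers(0, 4, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```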

18 pages, 7087 KB  
Article
Steady-State Visual Evoked Potential-Based Brain–Computer Interface System for Enhanced Human Activity Monitoring and Assessment
by Yuankun Chen, Xiyu Shi, Varuna De Silva and Safak Dogan
Sensors 2024, 24(21), 7084; https://doi.org/10.3390/s24217084 - 3 Nov 2024
Abstract
Advances in brain–computer interfaces (BCIs) have enabled direct and functional connections between human brains and computing systems. Recent developments in artificial intelligence have also significantly improved the ability to detect brain activity patterns. In particular, using steady-state visual evoked potentials (SSVEPs) in BCIs has enabled noticeable advances in human activity monitoring and identification. However, the lack of publicly available electroencephalogram (EEG) datasets has limited the development of SSVEP-based BCI systems (SSVEP-BCIs) for human activity monitoring and assisted living. This study aims to provide an open-access multicategory EEG dataset created under the SSVEP-BCI paradigm, with participants performing forward, backward, left, and right movements to simulate directional control commands in a virtual environment developed in Unity. The purpose of these actions is to explore how the brain responds to visual stimuli of control commands. An SSVEP-BCI system is proposed to enable hands-free control of a virtual target in the virtual environment allowing participants to maneuver the virtual target using only their brain activity. This work demonstrates the feasibility of using SSVEP-BCIs in human activity monitoring and assessment. The preliminary experiment results indicate the effectiveness of the developed system with high accuracy, successfully classifying 89.88% of brainwave activity. Full article
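A common way to classify SSVEP responses, in the spirit of the system above, is canonical correlation analysis (CCA) against sinusoidal references at each candidate stimulus frequency. The sketch below illustrates this with scikit-learn and NumPy; the sampling rate, harmonic count, and stimulus frequencies are assumptions, and the paper's own classifier may differ.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

FS = 250            # sampling rate in Hz (assumption)
N_HARMONICS = 2     # number of harmonics in the reference set (assumption)

def reference_signals(freq: float, n_samples: int) -> np.ndarray:
    """Sine/cosine references at the stimulus frequency and its harmonics."""
    t = np.arange(n_samples) / FS
    refs = []
    for h in range(1, N_HARMONICS + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.column_stack(refs)

def classify_ssvep(eeg: np.ndarray, stim_freqs) -> float:
    """eeg: (n_samples, n_channels). Returns the stimulus frequency whose
    reference set is most strongly canonically correlated with the EEG."""
    scores = []
    for f in stim_freqs:
        cca = CCA(n_components=1)
        u, v = cca.fit_transform(eeg, reference_signals(f, len(eeg)))
        scores.append(abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1]))
    return stim_freqs[int(np.argmax(scores))]

# Synthetic check: a noisy 8 Hz response should be classified as 8 Hz.
t = np.arange(2 * FS) / FS
eeg = np.column_stack([np.sin(2 * np.pi * 8 * t) + 0.5 * np.random.randn(len(t))
                       for _ in range(4)])
print(classify_ssvep(eeg, [8.0, 10.0, 12.0, 15.0]))
```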

18 pages, 10460 KB  
Article
Free Surface Motion of a Liquid Pool with Isothermal Sidewalls as a Benchmark for Marangoni Convection Problems
by Bruce E. Ciccotosto and Caleb S. Brooks
Energies 2023, 16(19), 6824; https://doi.org/10.3390/en16196824 - 26 Sep 2023
Abstract
In single phase flows, benchmarks like the lid driven cavity have become recognized as fundamental tests for newly developed computational fluid dynamics, CFD, codes. For multiphase free surface flows with variable surface tension, the presently studied pool with isothermal sidewalls is suggested as it is the simplest domain where Marangoni effects can dominate. It was also chosen due to its strange sensitivity to the initial setup which is discussed at length from a chosen number of ‘scenarios’. It was found that the fluid interface can reverse deformation by a change in the top boundary condition, the liquid equation of state, and the gravity level. For the top boundary condition, this reversal is due to vapor expansion within the closed volume, creating an additional convection mechanism. Not only does the interface reverse, but the peak height changes by more than an order of magnitude at the same Marangoni number. When including gravity, the peak velocity can increase significantly, but it can also cause a decrease when done in combination with a change in the top wall boundary condition. Finally, thermal expansion of the liquid phase causes the peak velocity to be reduced, with additional reductions from the gravity and top wall condition. The differences in each scenario could lead to significant errors in analyzing a practical application of Marangoni flows. Therefore, it is important to demonstrate that a new CFD code can not only resolve Marangoni convection, but also has the capability to resolve the scenario most relevant to the application at hand. Full article
(This article belongs to the Special Issue Research on Fluid Mechanics and Heat Transfer)

21 pages, 5162 KB  
Article
Experimental Evaluation of EMKEY: An Assistive Technology for People with Upper Limb Disabilities
by Mireya Zapata, Kevin Valencia-Aragón and Carlos Ramos-Galarza
Sensors 2023, 23(8), 4049; https://doi.org/10.3390/s23084049 - 17 Apr 2023
Abstract
Assistive technology can help people with disabilities to use computers more effectively and can enable them to access the same information and resources as people without disabilities. To gain more insight into the factors that can bring the design of an Emulator of Mouse and Keyboard (EMKEY) to higher levels of user satisfaction, an experimental study was conducted to analyse its effectiveness and efficiency. The study involved 27 participants (Mage = 20.81, SD = 1.14) who performed three experimental games under different conditions (using the mouse and using EMKEY with head movements and voice commands). According to the results, the use of EMKEY allowed for the successful performance of tasks such as matching stimuli (F(2,78) = 2.39, p = 0.10, η2 = 0.06). However, task execution times were found to be higher when using the emulator to drag an object on the screen (t(52,1) = −18.45, p ≤ 0.001, d = 9.60). These results indicate the effectiveness of this technological development for people with upper limb disabilities, although there is room for improvement in terms of efficiency. The findings are discussed in relation to previous research, and future studies are proposed to improve the operation of the EMKEY emulator.
(This article belongs to the Special Issue Human Computer Interaction in Emerging Technologies)

14 pages, 3820 KB  
Article
BARI: An Affordable Brain-Augmented Reality Interface to Support Human–Robot Collaboration in Assembly Tasks
by Andrea Sanna, Federico Manuri, Jacopo Fiorenza and Francesco De Pace
Information 2022, 13(10), 460; https://doi.org/10.3390/info13100460 - 28 Sep 2022
Abstract
Human–robot collaboration (HRC) is a new and challenging discipline that plays a key role in Industry 4.0. Digital transformation of industrial plants aims to introduce flexible production lines able to adapt to different products quickly. In this scenario, HRC can be a booster to support flexible manufacturing, thus introducing new interaction paradigms between humans and machines. Augmented reality (AR) can convey much important information to users: for instance, information related to the status and the intention of the robot/machine the user is collaborating with. On the other hand, traditional input interfaces based on physical devices, gestures, and voice might be precluded in industrial environments. Brain–computer interfaces (BCIs) can be profitably used with AR devices to provide technicians solutions to effectively collaborate with robots. This paper introduces a novel BCI–AR user interface based on the NextMind and the Microsoft Hololens 2. Compared to traditional BCI interfaces, the NextMind provides an intuitive selection mechanism based on visual cortex signals. This interaction paradigm is exploited to guide a collaborative robotic arm for a pick and place selection task. Since the ergonomic design of the NextMind allows its use in combination with the Hololens 2, users can visualize through AR the different parts composing the artifact to be assembled, the visual elements used by the NextMind to enable the selections, and the robot status. In this way, users’ hands are always free, and the focus can be always on the objects to be assembled. Finally, user tests are performed to evaluate the proposed system, assessing both its usability and the task’s workload; preliminary results are very encouraging, and the proposed solution can be considered a starting point to design and develop affordable hybrid-augmented interfaces to foster real-time human–robot collaboration. Full article
(This article belongs to the Collection Augmented Reality Technologies, Systems and Applications)

18 pages, 2761 KB  
Article
Effects of Low Mental Energy from Long Periods of Work on Brain-Computer Interfaces
by Kaixuan Liu, Yang Yu, Ling-Li Zeng, Xinbin Liang, Yadong Liu, Xingxing Chu, Gai Lu and Zongtan Zhou
Brain Sci. 2022, 12(9), 1152; https://doi.org/10.3390/brainsci12091152 - 29 Aug 2022
Abstract
Brain-computer interfaces (BCIs) provide novel hands-free interaction strategies. However, the performance of BCIs is affected by the user’s mental energy to some extent. In this study, we aimed to analyze the combined effects of decreased mental energy and lack of sleep on BCI performance and how to reduce these effects. We defined the low-mental-energy (LME) condition as a combined condition of decreased mental energy and lack of sleep. We used a long period of work (>=18 h) to induce the LME condition, and then P300- and SSVEP-based BCI tasks were conducted in LME or normal conditions. Ten subjects were recruited in this study. Each subject participated in the LME- and normal-condition experiments within one week. For the P300-based BCI, we used two decoding algorithms: stepwise linear discriminant (SWLDA) and least square regression (LSR). For the SSVEP-based BCI, we used two decoding algorithms: canonical correlation analysis (CCA) and filter bank canonical correlation analysis (FBCCA). Accuracy and information transfer rate (ITR) were used as performance metrics. The experimental results showed that for the P300-based BCI, the average accuracy was reduced by approximately 35% (with a SWLDA classifier) and approximately 40% (with a LSR classifier); the average ITR was reduced by approximately 6 bits/min (with a SWLDA classifier) and approximately 7 bits/min (with an LSR classifier). For the SSVEP-based BCI, the average accuracy was reduced by approximately 40% (with a CCA classifier) and approximately 40% (with a FBCCA classifier); the average ITR was reduced by approximately 20 bits/min (with a CCA classifier) and approximately 19 bits/min (with a FBCCA classifier). Additionally, the amplitude and signal-to-noise ratio of the evoked electroencephalogram signals were lower in the LME condition, while the degree of fatigue and the task load of each subject were higher. Further experiments suggested that increasing stimulus size, flash duration, and flash number could improve BCI performance in LME conditions to some extent. Our experiments showed that the LME condition reduced BCI performance, the effects of LME on BCI did not rely on specific BCI types and specific decoding algorithms, and optimizing BCI parameters (e.g., stimulus size) can reduce these effects. Full article
(This article belongs to the Section Neurorehabilitation)
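The abstract above reports drops in accuracy and information transfer rate (ITR). ITR in BCI studies is commonly computed with the Wolpaw formula, sketched below; whether this exact formula and these example parameters match the paper's computation is an assumption.

```python
import math

def itr_bits_per_min(n_classes: int, accuracy: float, trial_seconds: float) -> float:
    """Wolpaw information transfer rate.
    n_classes: number of selectable targets; accuracy in (0, 1]; trial_seconds > 0."""
    p, n = accuracy, n_classes
    bits = math.log2(n)
    if 0 < p < 1:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    elif p <= 0:
        bits = 0.0
    return bits * 60.0 / trial_seconds

# Example: a hypothetical 40-target SSVEP speller at 85% accuracy, 4 s per selection.
print(round(itr_bits_per_min(40, 0.85, 4.0), 1), "bits/min")
```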

16 pages, 1435 KB  
Article
Brain–Computer Interface and Hand-Guiding Control in a Human–Robot Collaborative Assembly Task
by Yevheniy Dmytriyev, Federico Insero, Marco Carnevale and Hermes Giberti
Machines 2022, 10(8), 654; https://doi.org/10.3390/machines10080654 - 5 Aug 2022
Abstract
Collaborative robots (Cobots) are compact machines programmable for a wide variety of tasks and able to ease operators’ working conditions. They can be therefore adopted in small and medium enterprises, characterized by small production batches and a multitude of different and complex tasks. To develop an actual collaborative application, a suitable task design and a suitable interaction strategy between human and cobot are required. The achievement of an effective and efficient communication strategy between human and cobot is one of the milestones of collaborative approaches, which can be based on several communication technologies, possibly in a multimodal way. In this work, we focus on a cooperative assembly task. A brain–computer interface (BCI) is exploited to supply commands to the cobot, to allow the operator the possibility to switch, with the desired timing, between independent and cooperative modality of assistance. The two kinds of control can be activated based on the brain commands gathered when the operator looks at two blinking screens corresponding to different commands, so that the operator does not need to have his hands free to give command messages to the cobot, and the assembly process can be sped up. The feasibility of the proposed approach is validated by developing and testing the interaction in an assembly application. Cycle times for the same assembling task, carried out with and without the cobot support, are compared in terms of average times, variability and learning trends. The usability and effectiveness of the proposed interaction strategy are therefore evaluated, to assess the advantages of the proposed solution in an actual industrial environment. Full article
(This article belongs to the Special Issue Industrial Process Improvement by Automation and Robotics)

15 pages, 2936 KB  
Article
A System for Neuromotor Based Rehabilitation on a Passive Robotic Aid
by Marco Righi, Massimo Magrini, Cristina Dolciotti and Davide Moroni
Sensors 2021, 21(9), 3130; https://doi.org/10.3390/s21093130 - 30 Apr 2021
Abstract
In the aging world population, the occurrence of neuromotor deficits arising from stroke and other medical conditions is expected to grow, demanding the design of new and more effective approaches to rehabilitation. In this paper, we show how the combination of robotic technologies with progress in exergaming methodologies may lead to the creation of new rehabilitation protocols favoring motor re-learning. To this end, we introduce the Track-Hold system for neuromotor rehabilitation based on a passive robotic arm and integrated software. A special configuration of weights on the robotic arm fully balances the weight of the patients’ arm, allowing them to perform a purely neurological task, overcoming the muscular effort of similar free-hand exercises. A set of adaptive and configurable exercises are proposed to patients through a large display and a graphical user interface. Common everyday tasks are also proposed for patients to learn again the associated actions in a persistent way, thus improving life independence. A data analysis module was also designed to monitor progress and compute indices of post-stroke neurological damage and Parkinsonian-type disorders. The system was tested in the lab and in a pilot project involving five patients in the post-stroke chronic stage with partial paralysis of the right upper limb, showing encouraging preliminary results. Full article
(This article belongs to the Special Issue Feedback-Based Balance, Gait Assistive and Rehabilitation Aids)

21 pages, 1942 KB  
Article
Design of Interactions for Handheld Augmented Reality Devices Using Wearable Smart Textiles: Findings from a User Elicitation Study
by Vijayakumar Nanjappan, Rongkai Shi, Hai-Ning Liang, Haoru Xiao, Kim King-Tong Lau and Khalad Hasan
Appl. Sci. 2019, 9(15), 3177; https://doi.org/10.3390/app9153177 - 5 Aug 2019
Abstract
Advanced developments in handheld devices' interactive 3D graphics capabilities, processing power, and cloud computing have provided great potential for handheld augmented reality (HAR) applications, which allow users to access digital information anytime, anywhere. Nevertheless, existing interaction methods are still confined to the touch display, device camera, and built-in sensors of these handheld devices, which makes interaction with AR content obtrusive. Wearable fabric-based interfaces promote the subtle, natural, and eyes-free interactions that are needed in dynamic environments. Prior studies explored the possibilities of using fabric-based wearable interfaces for head-mounted AR display (HMD) devices. The interface metaphors of HMD AR devices are inadequate for handheld AR devices, as a typical HAR application requires users to use only one hand to perform interactions. In this paper, we aim to investigate the use of a fabric-based wearable device as an alternative interface option for performing interactions with HAR applications. We elicited user-preferred gestures which are socially acceptable and comfortable to use for HAR devices. We also derived an interaction vocabulary of wrist and thumb-to-index touch gestures, and present broader design guidelines for fabric-based wearable interfaces for handheld augmented reality applications.
(This article belongs to the Special Issue Augmented Reality: Current Trends, Challenges and Prospects)
