Search Results (131)

Search Parameters:
Keywords = human-machine interaction (HMI)

23 pages, 2229 KiB  
Article
Assessing the Impact of Risk-Warning eHMI Information Content on Pedestrian Mental Workload, Situation Awareness, and Gap Acceptance in Full and Partial eHMI Penetration Vehicle Platoons
by Fang Yang, Xu Sun, Jiming Bai, Bingjian Liu, Luis Felipe Moreno Leyva and Sheng Zhang
Appl. Sci. 2025, 15(15), 8250; https://doi.org/10.3390/app15158250 - 24 Jul 2025
Abstract
External Human–Machine Interfaces (eHMIs) enhance pedestrian safety in interactions with autonomous vehicles (AVs) by signaling crossing risk based on time-to-arrival (TTA), categorized as low, medium, or high. This study compared five eHMI configurations (single-level low, medium, high; two-level low-medium, medium-high) against a three-level (low-medium-high) configuration to assess their impact on pedestrians’ crossing decisions, mental workload (MW), and situation awareness (SA) in vehicle platoon scenarios under full and partial eHMI penetration. In a video-based experiment with 24 participants, crossing decisions were evaluated via temporal gap selection, MW via P300 event-related potentials in an auditory oddball task, and SA via the Situation Awareness Rating Technique. The three-level configuration outperformed single-level medium, single-level high, two-level low-medium, and two-level medium-high in gap acceptance, promoting safer decisions by rejecting smaller gaps and accepting larger ones, and exhibited lower MW than the two-level medium-high configuration under partial penetration. No SA differences were observed. Although the three-level configuration was generally appreciated, future research should optimize presentation to mitigate issues from rapid signal changes. Notably, the single-level low configuration showed comparable performance, suggesting a simpler alternative for real-world eHMI deployment.

37 pages, 1823 KiB  
Review
Mind, Machine, and Meaning: Cognitive Ergonomics and Adaptive Interfaces in the Age of Industry 5.0
by Andreea-Ruxandra Ioniță, Daniel-Constantin Anghel and Toufik Boudouh
Appl. Sci. 2025, 15(14), 7703; https://doi.org/10.3390/app15147703 - 9 Jul 2025
Abstract
In the context of rapidly evolving industrial ecosystems, human–machine interaction (HMI) has shifted from basic interface control toward complex, adaptive, and human-centered systems. This review explores the multidisciplinary foundations and technological advancements driving this transformation within Industry 4.0 and the emerging paradigm of Industry 5.0. Through a comprehensive synthesis of the recent literature, we examine the cognitive, physiological, psychological, and organizational factors that shape operator performance, safety, and satisfaction. A particular emphasis is placed on ergonomic interface design, real-time physiological sensing (e.g., EEG, EMG, and eye-tracking), and the integration of collaborative robots, exoskeletons, and extended reality (XR) systems. We further analyze methodological frameworks such as RULA, OWAS, and Human Reliability Analysis (HRA), highlighting their digital extensions and applicability in industrial contexts. This review also discusses challenges related to cognitive overload, trust in automation, and the ethical implications of adaptive systems. Our findings suggest that an effective HMI must go beyond usability and embrace a human-centric philosophy that aligns technological innovation with sustainability, personalization, and resilience. This study provides a roadmap for researchers, designers, and practitioners seeking to enhance interaction quality in smart manufacturing through cognitive ergonomics and intelligent system integration.

12 pages, 8520 KiB  
Article
Integrated Haptic Feedback with Augmented Reality to Improve Pinching and Fine Moving of Objects
by Jafar Hamad, Matteo Bianchi and Vincenzo Ferrari
Appl. Sci. 2025, 15(13), 7619; https://doi.org/10.3390/app15137619 - 7 Jul 2025
Abstract
Hand gestures are essential for interaction in augmented and virtual reality (AR/VR), allowing users to intuitively manipulate virtual objects and engage with human–machine interfaces (HMIs). Accurate gesture recognition is critical for effective task execution. However, users often encounter difficulties due to the lack of immediate and clear feedback from head-mounted displays (HMDs). Current tracking technologies cannot always guarantee reliable recognition, leaving users uncertain about whether their gestures have been successfully detected. Haptic feedback can address this limitation by confirming gesture recognition and compensating for discrepancies between the visual perception of fingertip contact with virtual objects and the actual system recognition. The goal of this paper is to compare a simple vibrotactile ring with a full glove device, and to identify their possible improvements, for a fundamental task: pinching and finely moving objects using the Microsoft HoloLens 2. Because pinching is an essential fine motor skill, augmented reality integrated with haptic feedback can notify the user that a gesture has been recognized and compensate for misalignment between the tracked fingertip and virtual objects, improving spatial precision. In our experiments, the participants' median distance error using bare hands over all axes was 10.3 mm (interquartile range [IQR] = 13.1 mm) in a median time of 10.0 s (IQR = 4.0 s). Both haptic devices improved participants' precision relative to the bare-hands case: with the full glove, participants achieved median errors of 2.4 mm (IQR = 5.2 mm) in a median time of 8.0 s (IQR = 6.0 s), and with the haptic rings they performed even better, with median errors of 2.0 mm (IQR = 2.0 mm) in a median time of only 6.0 s (IQR = 5.0 s).
Our outcomes suggest that simple devices like the described haptic rings can outperform glove-like devices in accuracy, execution time, and wearability. The haptic glove probably interferes with hand and finger tracking on the Microsoft HoloLens 2.

22 pages, 4828 KiB  
Article
High-Fidelity Interactive Motorcycle Driving Simulator with Motion Platform Equipped with Tension Sensors
by Josef Svoboda, Přemysl Toman, Petr Bouchner, Stanislav Novotný and Vojtěch Thums
Sensors 2025, 25(13), 4237; https://doi.org/10.3390/s25134237 - 7 Jul 2025
Abstract
This paper presents an innovative approach to a high-fidelity motorcycle riding simulator based on VR (Virtual Reality) visualization, equipped with a Gough–Stewart 6-DOF (Degrees of Freedom) motion platform. The solution integrates a real-time tension sensor system as a source for highly realistic motion-cueing control, as well as a servomotor integrated into the steering system. Tension forces are measured at four points on the mock-up chassis, allowing a comprehensive analysis of rider interaction during various maneuvers. The simulator is designed to reproduce realistic riding scenarios with immersive motion and visual feedback, enhanced with the simulation of external influences such as headwind. This paper presents the results of a validation study: pilot experiments conducted to evaluate selected riding scenarios and validate the innovative simulator setup, focusing on force distribution and system responsiveness to support further research in motorcycle HMI (Human–Machine Interaction), rider behavior, and training.

20 pages, 2409 KiB  
Article
Spatio-Temporal Deep Learning with Adaptive Attention for EEG and sEMG Decoding in Human–Machine Interaction
by Tianhao Fu, Zhiyong Zhou and Wenyu Yuan
Electronics 2025, 14(13), 2670; https://doi.org/10.3390/electronics14132670 - 1 Jul 2025
Abstract
Electroencephalography (EEG) and surface electromyography (sEMG) signals are widely used in human–machine interaction (HMI) systems due to their non-invasive acquisition and real-time responsiveness, particularly in neurorehabilitation and prosthetic control. However, existing deep learning approaches often struggle to capture both fine-grained local patterns and long-range spatio-temporal dependencies within these signals, which limits classification performance. To address these challenges, we propose a lightweight deep learning framework that integrates adaptive spatial attention with multi-scale temporal feature extraction for end-to-end EEG and sEMG signal decoding. The architecture includes two core components: (1) an adaptive attention mechanism that dynamically reweights multi-channel time-series features based on spatial relevance, and (2) a multi-scale convolutional module that captures diverse temporal patterns through parallel convolutional filters. The proposed method achieves classification accuracies of 79.47% on the BCI-IV 2a EEG dataset (9 subjects, 22 channels) for motor intent decoding and 85.87% on the NinaPro DB2 sEMG dataset (40 subjects, 12 channels) for gesture recognition. Ablation studies confirm the effectiveness of each module, while comparative evaluations demonstrate that the proposed framework outperforms existing state-of-the-art methods across all tested scenarios. Together, these results demonstrate that our model not only achieves strong performance but also maintains a lightweight and resource-efficient design for EEG and sEMG decoding.
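The two components this abstract names (channel reweighting by spatial relevance, plus parallel temporal filters at several scales) can be illustrated with a minimal NumPy sketch. This is a toy illustration under assumed shapes, not the authors' architecture: the random input, the softmax scoring rule, and the kernel sizes are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multi-channel time series: (channels, timesteps), e.g. 22 EEG channels
# as in BCI-IV 2a. Values here are random stand-ins for real signals.
x = rng.standard_normal((22, 128))

def spatial_attention(x):
    """Reweight channels by a softmax over their mean absolute activity."""
    score = np.abs(x).mean(axis=1)        # one relevance score per channel
    w = np.exp(score - score.max())
    w /= w.sum()                          # softmax -> attention weights
    return x * w[:, None], w

def multi_scale_features(x, kernel_sizes=(3, 7, 15)):
    """Smooth each channel with parallel moving-average filters of several
    temporal widths and stack the results, mimicking parallel conv branches."""
    feats = [np.stack([np.convolve(c, np.ones(k) / k, mode="same") for c in x])
             for k in kernel_sizes]
    return np.concatenate(feats, axis=0)  # (scales * channels, timesteps)

attended, weights = spatial_attention(x)
features = multi_scale_features(attended)
print(features.shape)  # (66, 128): 3 scales x 22 channels
```

In a trained network the channel scores and filter weights would be learned rather than fixed, but the data flow (reweight channels, then extract features at multiple temporal scales in parallel) is the same.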

14 pages, 2424 KiB  
Article
Grasping Task in Teleoperation: Impact of Virtual Dashboard on Task Quality and Effectiveness
by Antonio Di Tecco, Daniele Leonardis, Antonio Frisoli and Claudio Loconsole
Robotics 2025, 14(7), 92; https://doi.org/10.3390/robotics14070092 - 30 Jun 2025
Abstract
This study investigates the impact of a virtual dashboard on the quality of task execution in robotic teleoperation; specifically, it examines how a virtual dashboard improves user awareness and grasp precision in a teleoperated pick-and-place task by providing users with critical information in real time. An experiment was conducted with 30 participants in a robotic teleoperated task to measure their task performance in two different experimental conditions: a control group used conventional interfaces, and an experimental group utilized the virtual dashboard with additional information. Research findings indicate that integrating a virtual dashboard improves grasping accuracy, reduces user fatigue, and speeds up task completion, thereby improving task effectiveness and the quality of the experience.
(This article belongs to the Special Issue Extended Reality and AI Empowered Robots)

21 pages, 480 KiB  
Perspective
Towards Predictive Communication: The Fusion of Large Language Models and Brain–Computer Interface
by Andrea Carìa
Sensors 2025, 25(13), 3987; https://doi.org/10.3390/s25133987 - 26 Jun 2025
Abstract
Integration of advanced artificial intelligence with neurotechnology offers transformative potential for assistive communication. This perspective article examines the emerging convergence between non-invasive brain–computer interface (BCI) spellers and large language models (LLMs), with a focus on predictive communication for individuals with motor or language impairments. First, I will review the evolution of language models—from early rule-based systems to contemporary deep learning architectures—and their role in enhancing predictive writing. Second, I will survey existing implementations of BCI spellers that incorporate language modeling and highlight recent pilot studies exploring the integration of LLMs into BCI. Third, I will examine how, despite advancements in typing speed, accuracy, and user adaptability, the fusion of LLMs and BCI spellers still faces key challenges such as real-time processing, robustness to noise, and the integration of neural decoding outputs with probabilistic language generation frameworks. Finally, I will discuss how fully integrating LLMs with BCI technology could substantially improve the speed and usability of BCI-mediated communication, offering a path toward more intuitive, adaptive, and effective neurotechnological solutions for both clinical and non-clinical users.
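The "integration of neural decoding outputs with probabilistic language generation" that this abstract identifies as a key challenge is often framed as Bayesian fusion: the decoder supplies a likelihood over candidate letters, the language model supplies a prior, and their normalized product gives the posterior. The sketch below is a toy illustration only; the letter set and all probability values are assumed, not drawn from any cited system.

```python
import numpy as np

# Candidate next letters, for illustration only.
letters = np.array(list("abcde"))

# Prior over the next letter from a language model (assumed values).
p_lm = np.array([0.50, 0.10, 0.25, 0.10, 0.05])

# Likelihood of each letter from the neural decoder (assumed values).
p_bci = np.array([0.15, 0.20, 0.40, 0.15, 0.10])

# Bayesian fusion: posterior proportional to decoder likelihood times LM prior.
posterior = p_bci * p_lm
posterior /= posterior.sum()

best = letters[posterior.argmax()]
print(best)  # 'c': the decoder's strong evidence outweighs the LM's preference for 'a'
```

With an LLM in place of the fixed prior, p_lm would be recomputed from the full typed context at every step, which is exactly where the real-time processing challenge arises.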
(This article belongs to the Section Biomedical Sensors)

15 pages, 6626 KiB  
Article
A Self-Powered Smart Glove Based on Triboelectric Sensing for Real-Time Gesture Recognition and Control
by Shuting Liu, Xuanxuan Duan, Jing Wen, Qiangxing Tian, Lin Shi, Shurong Dong and Liang Peng
Electronics 2025, 14(12), 2469; https://doi.org/10.3390/electronics14122469 - 18 Jun 2025
Abstract
Glove-based human–machine interfaces (HMIs) offer a natural, intuitive way to capture finger motions for gesture recognition, virtual interaction, and robotic control. However, many existing systems suffer from complex fabrication, limited sensitivity, and reliance on external power. Here, we present a flexible, self-powered glove HMI based on a minimalist triboelectric nanogenerator (TENG) sensor composed of a conductive fabric electrode and textured Ecoflex layer. Surface micro-structuring via 3D-printed molds enhances triboelectric performance without added complexity, achieving a peak power density of 75.02 μW/cm2 and stable operation over 13,000 cycles. The glove system enables real-time LED brightness control via finger-bending kinematics and supports intelligent recognition applications. A convolutional neural network (CNN) achieves 99.2% accuracy in user identification and 97.0% in object classification. By combining energy autonomy, mechanical simplicity, and machine learning capabilities, this work advances scalable, multi-functional HMIs for applications in assistive robotics, augmented reality (AR)/virtual reality (VR) environments, and secure interactive systems.

44 pages, 5969 KiB  
Article
iRisk: Towards Responsible AI-Powered Automated Driving by Assessing Crash Risk and Prevention
by Naomi Y. Mbelekani and Klaus Bengler
Electronics 2025, 14(12), 2433; https://doi.org/10.3390/electronics14122433 - 14 Jun 2025
Abstract
Advanced technology systems and neuroelectronics for crash risk assessment and anticipation may be a promising field for advancing responsible automated driving on urban roads. In principle, there are prospects for an artificially intelligent (AI)-powered automated vehicle (AV) system that tracks the degree of perceived crash risk (as low, mid, or high) and perceived safety. Accordingly, how this information is communicated (verbally or nonverbally) to the user should reflect human factors considerations. As humans and vehicle automation systems are prone to error, we need to design advanced information and communication technologies that monitor risks and act as a mediator when necessary. One possible approach is to design a crash risk classification and management system: a responsible AI that monitors the user's mental states associated with risk-taking behaviour and communicates this information to the user, in conjunction with the driving environment and AV states. This concept is based on a literature review and industry experts' perspectives on designing advanced technology systems that support users in preventing crash risk encounters due to long-term effects. Equally, learning strategies for responsible automated driving on urban roads were designed. In a sense, this paper offers the reader a meticulous discussion on conceptualising a safety-inspired 'ergonomically responsible AI' concept in the form of an intelligent risk assessment system (iRisk) and an AI-powered Risk information Human–Machine Interface (AI rHMI) as a useful concept for responsible automated driving and safe human–automation interaction.

30 pages, 5512 KiB  
Article
Making Autonomous Taxis Understandable: A Comparative Study of eHMI Feedback Modes and Display Positions for Pickup Guidance
by Gang Ren, Zhihuang Huang, Yaning Zhu, Wenshuo Lin, Tianyang Huang, Gang Wang and Jeehang Lee
Electronics 2025, 14(12), 2387; https://doi.org/10.3390/electronics14122387 - 11 Jun 2025
Abstract
Passengers often struggle to identify intended pickup locations when autonomous taxis (ATs) arrive, leading to confusion and delays. While prior external human–machine interface (eHMI) studies have focused on pedestrian crossings, few have systematically compared feedback modes and display positions for AT pickup guidance at varying distances. This study investigates the effectiveness of three eHMI feedback modes (Eye, Arrow, and Number) displayed at two positions (Body and Top) for communicating AT pickup locations. Through a controlled virtual reality experiment, we examined how these design variations impact user performance across key metrics including selection time, error rates, and decision confidence across varied parking distances. The results revealed distinct advantages for each feedback mode: Number feedback provided the fastest response times, particularly when displayed at the top position; Arrow feedback facilitated more confident decisions with lower error rates in close-range scenarios; and Eye feedback demonstrated superior performance in distant conditions by preventing severe identification errors. Body position displays consistently outperformed top-mounted ones, improving users' understanding of the vehicle's intended actions. These findings highlight the importance of context-aware eHMI systems that dynamically adapt to interaction distances and operational requirements. Based on our evidence, we propose practical design strategies for implementing these feedback modes in real-world AT services to optimize both system efficiency and user experience in urban mobility environments. Future work should address user learning challenges and validate these findings across diverse environmental conditions and implementation frameworks.
(This article belongs to the Section Computer Science & Engineering)

16 pages, 807 KiB  
Article
Frequently Used Vehicle Controls While Driving: A Real-World Driving Study Assessing Internal Human–Machine Interface Task Frequencies and Influencing Factors
by Ilse M. Harms, Daniël A. M. Auerbach, Eleonora Papadimitriou and Marjan P. Hagenzieker
Appl. Sci. 2025, 15(10), 5230; https://doi.org/10.3390/app15105230 - 8 May 2025
Abstract
Human–Machine Interfaces (HMIs) in passenger cars have become more complex over the years, with touch screens replacing physical buttons and with layered menu structures. This can lead to distraction. The purpose of this study is to investigate how often vehicle controls are used while driving and which underlying factors contribute to usage. Thirty drivers were observed while driving a familiar route twice: once in their own car and once in an unfamiliar car. In a 2 × 1 within-subject design, the experimenter rode along with each participant and used a predefined checklist to record how often participants interacted with specific functions of their vehicle while driving. The results showed that, in the familiar car, direction indicators are the most frequently used controls, followed by adjusting radio volume, moving the sun visor, adjusting temperature, and changing wiper speed. Factors that influenced task frequencies included car familiarity, gender, age, and weather conditions. The type of car also appears to impact task frequency. Participants interacted less with the unfamiliar car than with their own car, which may indicate drivers are regulating their mental load. These results help vehicle HMI designers understand which functions should be easily and swiftly available while driving to reduce distraction caused by the HMI design.
(This article belongs to the Special Issue Human–Vehicle Interactions)

24 pages, 2269 KiB  
Review
A Review of the Performance of Smart Lawnmower Development: Theoretical and Practical Implications
by Elwin Nesan Selvanesan, Kia Wai Liew, Chai Hua Tay, Jian Ai Yeow, Yu Jin Ng, Peng Lean Chong and Chun Quan Kang
Designs 2025, 9(3), 55; https://doi.org/10.3390/designs9030055 - 2 May 2025
Abstract
Smart lawnmowers are becoming increasingly integrated into daily life as their performance continues to improve. To ensure consistent advancement, it is important to conduct a comprehensive analysis of the performance of various modern smart lawnmowers. However, there appears to be a lack of thorough performance evaluation and analysis of their broader impact. This review explores the key performance indicators influencing smart lawnmower performance, particularly in navigation and obstacle avoidance, operational efficiency, and human–machine interaction (HMI). Key performance indicators identified for evaluation include operating time, Effective Field Capacity (FCe), and field efficiency (%). Additionally, it examines the theoretical and practical implications of smart lawnmower development. Smart lawnmowers have been found to contribute to advancements in machine learning algorithms and possibly swarm robotics. Environmental benefits, such as reduced emissions and noise pollution, were also highlighted in this review. Future research directions are discussed, both in the short and long term, to further optimize smart lawnmower performance. This review serves as a foundation for future studies and experimental investigations aimed at enhancing the real-world applicability of smart lawnmowers.
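The indicators named above (operating time, Effective Field Capacity, and field efficiency) follow standard agricultural-machinery definitions: effective capacity is area actually covered per hour worked, and field efficiency is its ratio to the theoretical capacity implied by working width and speed. A minimal sketch, where the mower's width, speed, covered area, and time are hypothetical values, not figures from the review:

```python
def theoretical_capacity(width_m, speed_kmh):
    """Theoretical field capacity in ha/h: (width [m] * speed [km/h]) / 10."""
    return width_m * speed_kmh / 10.0

def effective_capacity(area_ha, total_hours):
    """Effective Field Capacity (FCe): area actually covered per hour worked."""
    return area_ha / total_hours

def field_efficiency(fce, ct):
    """Field efficiency (%): ratio of effective to theoretical capacity."""
    return 100.0 * fce / ct

# Hypothetical mower: 0.5 m cutting width at 3 km/h, covering 0.12 ha in 1 h.
ct = theoretical_capacity(0.5, 3.0)    # 0.15 ha/h
fce = effective_capacity(0.12, 1.0)    # 0.12 ha/h
print(round(field_efficiency(fce, ct), 1))  # 80.0
```

The gap between theoretical and effective capacity (here 20%) absorbs turning, overlap between passes, and obstacle avoidance, which is why these metrics are useful for comparing navigation strategies.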

22 pages, 1742 KiB  
Systematic Review
Trust and Trustworthiness from Human-Centered Perspective in Human–Robot Interaction (HRI)—A Systematic Literature Review
by Debora Firmino de Souza, Sonia Sousa, Kadri Kristjuhan-Ling, Olga Dunajeva, Mare Roosileht, Avar Pentel, Mati Mõttus, Mustafa Can Özdemir and Žanna Gratšjova
Electronics 2025, 14(8), 1557; https://doi.org/10.3390/electronics14081557 - 11 Apr 2025
Abstract
The transition from Industry 4.0 to Industry 5.0 highlights recent European efforts to design intelligent devices, systems, and automation that can work alongside human intelligence and enhance human capabilities. In this vision, human–machine interaction (HMI) goes beyond simply deploying machines, such as autonomous robots, for economic advantage. It requires societal and educational shifts toward a human-centric research vision, revising how we perceive technological advancements to improve the benefits and convenience for individuals. It also requires determining what priority is given to users' preferences and their need to feel safe while collaborating with autonomous intelligent systems. This proposed human-centric vision aims to enhance human creativity and problem-solving abilities by leveraging machine precision and data processing, all while protecting human agency. Aligned with this perspective, we conducted a systematic literature review focusing on trust and trustworthiness in relation to characteristics of humans and systems in human–robot interaction (HRI). Our research explores the aspects that impact the potential for designing and fostering machine trustworthiness from a human-centered standpoint. A systematic analysis was conducted to review 34 articles in recent HRI-related studies. Then, through a standardized screening, we identified and categorized factors influencing trust in automation that can act as trust barriers and facilitators when implementing autonomous intelligent systems. Our study comments on the application areas in which trust is considered, how it is conceptualized, and how it is evaluated within the field. Our analysis underscores the significance of examining users' trust and the related factors impacting it as foundational elements for promoting secure and trustworthy HRI.
(This article belongs to the Special Issue Emerging Trends in Multimodal Human-Computer Interaction)

41 pages, 3049 KiB  
Review
Hydrogel-Based Biointerfaces: Recent Advances, Challenges, and Future Directions in Human–Machine Integration
by Aziz Ullah, Do Youn Kim, Sung In Lim and Hyo-Ryoung Lim
Gels 2025, 11(4), 232; https://doi.org/10.3390/gels11040232 - 23 Mar 2025
Abstract
Human–machine interfacing (HMI) has emerged as a critical technology in healthcare, robotics, and wearable electronics, with hydrogels offering unique advantages as multifunctional materials that seamlessly connect biological systems with electronic devices. This review provides a detailed examination of recent advancements in hydrogel design, focusing on their properties and potential applications in HMI. We explore the key characteristics such as biocompatibility, mechanical flexibility, and responsiveness, which are essential for effective and long-term integration with biological tissues. Additionally, we highlight innovations in conductive hydrogels, hybrid and composite materials, and fabrication techniques such as 3D/4D printing, which allow for the customization of hydrogel properties to meet the demands of specific HMI applications. Further, we discuss the diverse classes of polymers that contribute to hydrogel conductivity, including conducting, natural, synthetic, and hybrid polymers, emphasizing their role in enhancing electrical performance and mechanical adaptability. In addition to material design, we examine the regulatory landscape governing hydrogel-based biointerfaces for HMI applications, addressing the key considerations for clinical translation and commercialization. An analysis of the patent landscape provides insights into emerging trends and innovations shaping the future of hydrogel technologies in human–machine interactions. The review also covers a range of applications, including wearable electronics, neural interfaces, soft robotics, and haptic systems, where hydrogels play a transformative role in enhancing human–machine interactions. Finally, the review addresses the challenges hydrogels face in HMI applications, including issues related to stability, biocompatibility, and scalability, while offering future perspectives on the continued evolution of hydrogel-based systems for HMI technologies.
(This article belongs to the Special Issue Gel-Based Materials for Sensing and Monitoring)

25 pages, 1869 KiB  
Review
Envisioning Human–Machine Relationship Towards Mining of the Future: An Overview
by Peter Kolapo, Nafiu Olanrewaju Ogunsola, Kayode Komolafe and Dare Daniel Omole
Mining 2025, 5(1), 5; https://doi.org/10.3390/mining5010005 - 6 Jan 2025
Abstract
Automation is increasingly gaining attention as the global industry moves toward intelligent, unmanned approaches to perform hazardous tasks. Although the integration of autonomous technologies has revolutionized various industries for decades, the mining sector has only recently started to harness the potential of autonomous technology. Lately, the mining industry has been transforming by implementing automated systems to shape the future of mining and minimize human involvement in the process. Automated systems such as robotics, artificial intelligence (AI), the Industrial Internet of Things (IIoT), and data analytics have contributed immensely toward improving productivity and safety and promoting a sustainable mineral industry. Despite the substantial benefits and promising potential of automation in the mining sector, its adoption faces challenges due to concerns about human–machine interaction. This paper extensively reviews the current trends, attempts, and trials in converting traditional mining machines to automated systems with little or no human involvement. It also delves into the application of AI in mining operations from the exploration phase to the processing stage. To advance the knowledge base in this domain, the study describes the method used to develop the human–machine interface (HMI) that controls and monitors the activity of a six-degrees-of-freedom robotic arm, a roof bolter machine, and the status of the automated machine. The notable findings in this study draw attention to the critical roles of humans in automated mining operations. This study shows that human operators are still relevant and must control, operate, and maintain these innovative technologies in mining operations. Thus, establishing an effective interaction between human operators and machines can promote the acceptability and implementation of autonomous technologies in mineral extraction processes.
(This article belongs to the Special Issue Envisioning the Future of Mining, 2nd Edition)
