Search Results (358)

Search Parameters:
Keywords = human–machine interface performance

21 pages, 4223 KB  
Article
The Influence of Information Redundancy on Driving Behavior and Psychological Responses Under Different Fog and Risk Conditions: An Analysis of AR-HUD Interface Designs
by Junfeng Li, Kexin Chen and Mo Chen
Appl. Sci. 2025, 15(20), 11072; https://doi.org/10.3390/app152011072 - 15 Oct 2025
Abstract
Adverse road conditions, particularly foggy weather, significantly impair drivers’ abilities to gather information and make judgments in response to unexpected events. To investigate the impact of different Augmented Reality Head-Up Display (AR-HUD) interfaces (words-only, symbols-only, and words + symbols) on driving behavior, this study simulated driving scenarios under varying visibility and risk levels in foggy conditions, measuring reaction time (RT), time-to-collision (TTC), maximum lateral acceleration, maximum longitudinal acceleration, and subjective data. The results indicated that risk levels significantly affected drivers’ RT, TTC, and maximum longitudinal and lateral accelerations. The three interfaces differed significantly in RT and TTC across risk levels in heavy fog. In light fog, words-only and redundant interfaces significantly affected RT across risk levels; words-only and symbols-only interfaces significantly affected TTC across risk levels. In addition, participants responded faster when using text-based interfaces in their native language. Analysis of perceived usability data across the three interfaces indicated that under high-risk conditions, in both light and heavy fog, participants rated the redundant interface as more usable and preferred it. Based on these findings, this paper proposes the following design strategies for AR-HUD visual interfaces: (1) Under low-risk foggy driving conditions, all three interface types are effective and applicable. (2) Under high-risk foggy driving conditions, redundant interface design is recommended. Although it may not significantly improve driving performance, this interface type was subjectively perceived as more useful and preferred by the subjects.
The findings of this study support the design of AR-HUD interfaces, contributing to enhanced driving safety and human–machine interaction under complex meteorological conditions, and offer practical implications for the development and optimization of intelligent vehicle systems.
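As an illustrative aside, time-to-collision (TTC), one of the abstract's two main objective measures, is conventionally computed as the distance gap divided by the closing speed. A minimal sketch, with a hypothetical function name and numbers that are not from the paper:

```python
def time_to_collision(gap_m: float, ego_speed_ms: float, lead_speed_ms: float) -> float:
    """TTC = distance gap / closing speed; infinite when the gap is not closing."""
    closing = ego_speed_ms - lead_speed_ms
    if closing <= 0:  # ego vehicle is not approaching the lead vehicle
        return float("inf")
    return gap_m / closing

# Example: 30 m gap, ego at 20 m/s, lead braking to 10 m/s -> TTC = 3 s
print(time_to_collision(30.0, 20.0, 10.0))  # 3.0
```

Lower TTC values correspond to the higher-risk scenarios the study manipulates.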

28 pages, 13934 KB  
Article
Integration of Industrial Internet of Things (IIoT) and Digital Twin Technology for Intelligent Multi-Loop Oil-and-Gas Process Control
by Ali Saleh Allahloh, Mohammad Sarfraz, Atef M. Ghaleb, Abdulmajeed Dabwan, Adeeb A. Ahmed and Adel Al-Shayea
Machines 2025, 13(10), 940; https://doi.org/10.3390/machines13100940 - 13 Oct 2025
Abstract
The convergence of Industrial Internet of Things (IIoT) and digital twin technology offers new paradigms for process automation and control. This paper presents an integrated IIoT and digital twin framework for intelligent control of a gas–liquid separation unit with interacting flow, pressure, and differential pressure loops. A comprehensive dynamic model of the three-loop separator process is developed, linearized, and validated. Classical stability analyses using the Routh–Hurwitz criterion and Nyquist plots are employed to ensure stability of the control system. Decentralized multi-loop proportional–integral–derivative (PID) controllers are designed and optimized using the Integral Absolute Error (IAE) performance index. A digital twin of the separator is implemented to run in parallel with the physical process, synchronized via a Kalman filter to real-time sensor data for state estimation and anomaly detection. The digital twin also incorporates structured singular value (μ) analysis to assess robust stability under model uncertainties. The system architecture is realized with low-cost hardware (Arduino Mega 2560, MicroMotion Coriolis flowmeter, pneumatic control valves, DAC104S085 digital-to-analog converter, and ENC28J60 Ethernet module) and software tools (Proteus VSM 8.4 for simulation, a human–machine interface built with VB.Net 2022, and ML.Net 2022 for predictive analytics). Experimental results demonstrate improved control performance with reduced overshoot and faster settling times, confirming the effectiveness of the IIoT–digital twin integration in handling loop interactions and disturbances. The discussion includes a comparative analysis with conventional control and outlines how advanced strategies such as model predictive control (MPC) can further augment the proposed approach.
This work provides a practical pathway for applying IIoT and digital twins to industrial process control, with implications for enhanced autonomy, reliability, and efficiency in oil and gas operations.
(This article belongs to the Special Issue Digital Twins Applications in Manufacturing Optimization)
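The abstract cites the Routh–Hurwitz criterion for stability analysis. For a third-order characteristic polynomial the criterion reduces to a simple coefficient test; the sketch below illustrates that generic cubic case with made-up polynomials, not the paper's separator model:

```python
def routh_hurwitz_cubic(a3: float, a2: float, a1: float, a0: float) -> bool:
    """Routh-Hurwitz stability test for a3*s^3 + a2*s^2 + a1*s + a0:
    all coefficients must be positive and a2*a1 > a3*a0."""
    if min(a3, a2, a1, a0) <= 0:
        return False
    return a2 * a1 > a3 * a0

# s^3 + 6s^2 + 11s + 6 = (s+1)(s+2)(s+3): all roots in the left half-plane
print(routh_hurwitz_cubic(1, 6, 11, 6))   # True
# s^3 + s^2 + s + 10: 1*1 < 1*10, so unstable
print(routh_hurwitz_cubic(1, 1, 1, 10))   # False
```

Higher-order loops, like the three-loop model in the paper, need the full Routh array rather than this shortcut.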

26 pages, 1958 KB  
Article
Real-Time Heartbeat Classification on Distributed Edge Devices: A Performance and Resource Utilization Study
by Eko Sakti Pramukantoro, Kasyful Amron, Putri Annisa Kamila and Viera Wardhani
Sensors 2025, 25(19), 6116; https://doi.org/10.3390/s25196116 - 3 Oct 2025
Abstract
Early detection is crucial for preventing heart disease. Advances in health technology, particularly wearable devices for automated heartbeat detection and machine learning, can enhance early diagnosis efforts. However, previous studies on heartbeat classification inference systems have primarily relied on batch processing, which introduces delays. To address this limitation, a real-time system utilizing stream processing with a distributed computing architecture is needed for continuous, immediate, and scalable data analysis. Real-time ECG inference is particularly crucial for immediate heartbeat classification, as human heartbeats occur with durations between 0.6 and 1 s, requiring inference times significantly below this threshold for effective real-time processing. This study implements a real-time heartbeat classification inference system using distributed stream processing with LSTM-512, LSTM-256, and FCN models, incorporating RR-interval, morphology, and wavelet features. The system is developed as a distributed web-based application using the Flask framework with distributed backend processing, integrating Polar H10 sensors via Bluetooth and Web Bluetooth API in JavaScript. The implementation consists of a frontend interface, distributed backend services, and coordinated inference processing. The frontend handles sensor pairing and manages real-time streaming for continuous ECG data transmission. The backend processes incoming ECG streams, performing preprocessing and model inference. Performance evaluations demonstrate that LSTM-based heartbeat classification can achieve real-time performance on distributed edge devices by carefully selecting features and models. Wavelet-based features with an LSTM-Sequential architecture deliver optimal results, achieving 99% accuracy with balanced precision-recall metrics and an inference time of 0.12 s—well below the 0.6–1 s heartbeat duration requirement. 
Resource analysis on Jetson Orin devices reveals that Wavelet-FCN models offer exceptional efficiency with 24.75% CPU usage, minimal GPU utilization (0.34%), and 293 MB memory consumption. The distributed architecture’s dynamic load balancing ensures resilience under varying workloads, enabling effective horizontal scaling.
(This article belongs to the Special Issue Advanced Sensors for Human Health Management)
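The abstract's real-time constraint (inference well below the 0.6–1 s beat duration) and its RR-interval features can be illustrated with a stdlib-only sketch; the peak timestamps and helper names here are hypothetical, not the paper's pipeline:

```python
def rr_intervals(r_peaks_s):
    """Successive differences between R-peak timestamps, in seconds."""
    return [b - a for a, b in zip(r_peaks_s, r_peaks_s[1:])]

def meets_realtime_budget(inference_s: float, min_beat_s: float = 0.6) -> bool:
    """True if inference finishes within the shortest plausible beat duration."""
    return inference_s < min_beat_s

peaks = [0.00, 0.80, 1.62, 2.40]  # hypothetical R-peak times (s)
print([round(rr, 2) for rr in rr_intervals(peaks)])  # [0.8, 0.82, 0.78]
print(meets_realtime_budget(0.12))  # True: the reported 0.12 s fits the budget
```

In the paper these intervals are one of three feature families (alongside morphology and wavelet features) fed to the LSTM/FCN models.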

30 pages, 6459 KB  
Article
FREQ-EER: A Novel Frequency-Driven Ensemble Framework for Emotion Recognition and Classification of EEG Signals
by Dibya Thapa and Rebika Rai
Appl. Sci. 2025, 15(19), 10671; https://doi.org/10.3390/app151910671 - 2 Oct 2025
Abstract
Emotion recognition using electroencephalogram (EEG) signals has gained significant attention due to its potential applications in human–computer interaction (HCI), brain–computer interfaces (BCIs), mental health monitoring, etc. Although deep learning (DL) techniques have shown impressive performance in this domain, they often require large datasets and high computational resources and offer limited interpretability, limiting their practical deployment. To address these issues, this paper presents a novel frequency-driven ensemble framework for electroencephalogram-based emotion recognition (FREQ-EER), an ensemble of lightweight machine learning (ML) classifiers with a frequency-based data augmentation strategy tailored for effective emotion recognition in low-data EEG scenarios. Our work focuses on the targeted analysis of specific EEG frequency bands and brain regions, enabling a deeper understanding of how distinct neural components contribute to emotional states. To validate the robustness of the proposed FREQ-EER, the widely recognized DEAP (database for emotion analysis using physiological signals) dataset, SEED (SJTU emotion EEG dataset), and GAMEEMO (database for an emotion recognition system based on EEG signals and various computer games) were considered for the experiment. On the DEAP dataset, classification accuracies of up to 96% for specific emotion classes were achieved, while on SEED and GAMEEMO the framework maintained 97.04% and 98.6% overall accuracies, respectively, with nearly perfect AUC values confirming the framework’s efficiency, interpretability, and generalizability.
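Since the framework is frequency-driven, a stdlib-only sketch of per-band power extraction may help; the band limits and the naive DFT below are generic illustrations under my own assumptions, not FREQ-EER's actual feature pipeline:

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Average squared DFT magnitude within [f_lo, f_hi] Hz.
    Naive O(n^2) DFT: fine for short illustrative signals; use an
    FFT library for real EEG epochs."""
    n = len(signal)
    power, count = 0.0, 0
    for k in range(n // 2 + 1):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / (n * n)
            count += 1
    return power / max(count, 1)

# A 10 Hz sine (inside the classic 8-13 Hz alpha band) sampled at 128 Hz:
fs = 128
x = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]
alpha = band_power(x, fs, 8, 13)
beta = band_power(x, fs, 13, 30)
print(alpha > beta)  # alpha-band power dominates, as expected
```

Band-wise features like these are what let the paper attribute emotional states to specific frequency ranges and brain regions.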

17 pages, 4058 KB  
Article
Medical Imaging-Based Kinematic Modeling for Biomimetic Finger Joints and Hand Exoskeleton Validation
by Xiaochan Wang, Cheolhee Cho, Peng Zhang, Shuyuan Ge and Jiadi Chen
Biomimetics 2025, 10(10), 652; https://doi.org/10.3390/biomimetics10100652 - 1 Oct 2025
Abstract
Hand rehabilitation exoskeletons play a critical role in restoring motor function in patients with stroke or hand injuries. However, most existing designs rely on fixed-axis assumptions, neglecting the rolling–sliding coupling of finger joints that causes instantaneous center of rotation (ICOR) drift, leading to kinematic misalignment and localized pressure concentrations. This study proposes the Instant Radius Method (IRM) based on medical imaging to continuously model ICOR trajectories of the MCP, PIP, and DIP joints, followed by the construction of an equivalent ICOR through curve fitting. Crossing-type biomimetic kinematic pairs were designed according to the equivalent ICOR and integrated into a three-loop ten-linkage exoskeleton capable of dual DOFs per finger (flexion–extension and abduction–adduction, 10 DOFs in total). Kinematic validation was performed using IMU sensors (Delsys) to capture joint angles, and interface pressure distribution at MCP and PIP was measured using thin-film pressure sensors. Experimental results demonstrated that with biomimetic kinematic pairs, the exoskeleton’s fingertip trajectories matched physiological trajectories more closely, with significantly reduced RMSE. Pressure measurements showed a reduction of approximately 15–25% in mean pressure and 20–30% in peak pressure at MCP and PIP, with more uniform distributions. The integrated framework of IRM-based modeling–equivalent ICOR–biomimetic kinematic pairs–multi-DOF exoskeleton design effectively enhanced kinematic alignment and human–machine compatibility. This work highlights the importance and feasibility of ICOR alignment in rehabilitation robotics and provides a promising pathway toward personalized rehabilitation and clinical translation.
(This article belongs to the Special Issue Bionic Wearable Robotics and Intelligent Assistive Technologies)

12 pages, 4545 KB  
Article
Wearable Flexible Wireless Pressure Sensor Based on Poly(vinyl alcohol)/Carbon Nanotube/MXene Composite for Health Monitoring
by Lei Zhang, Junqi Pang, Xiaoling Lu, Xiaohai Zhang and Xinru Zhang
Micromachines 2025, 16(10), 1132; https://doi.org/10.3390/mi16101132 - 30 Sep 2025
Abstract
Accurate pressure monitoring is crucial for both human body applications and intelligent robotic arms, particularly for whole-body motion monitoring in human–machine interfaces. Conventional wearable electronic devices, however, often suffer from rigid connections, non-conformity, and inaccuracies. In this study, we propose a high-precision wireless flexible sensor using a poly(vinyl alcohol)/single-walled carbon nanotube/MXene composite as the sensitive material, combined with a randomly distributed wrinkle structure to accurately monitor pressure parameters. To validate the sensor’s performance, it was used to monitor movements of the vocal cords, bent fingers, and human pulse. The sensor exhibits a pressure measurement range of approximately 0–130 kPa and a minimum resolution of 20 Pa. At pressures below 1 kPa, the sensor exhibits high sensitivity, enabling the detection of transient pressure changes. Within the pressure range of 1–10 kPa, the sensitivity decreases to approximately 54.71 kPa−1. Additionally, the sensor demonstrates response times of 12.5 ms at 10 kPa. For wireless signal acquisition, the pressure sensor was integrated with a Bluetooth chip, enabling real-time high-precision pressure monitoring. A deep learning-based training model was developed, achieving over 98% accuracy in motion recognition without additional computing equipment. This advancement is significant for streamlined human motion monitoring systems and intelligent components.
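For context, the pressure sensitivity figures quoted here (kPa−1 units) follow the usual definition: relative change of the transduced signal per unit pressure, S = (ΔX/X0)/ΔP. A generic sketch with hypothetical numbers, not the paper's measured data:

```python
def sensitivity_kpa(x0: float, x1: float, dp_kpa: float) -> float:
    """S = (relative change in the transduced signal) / (pressure step), in kPa^-1.
    x0: baseline signal, x1: signal under load, dp_kpa: applied pressure step."""
    return ((x1 - x0) / x0) / dp_kpa

# Hypothetical: signal rises by 50% of baseline over a 0.01 kPa step -> S ~ 50 kPa^-1
print(sensitivity_kpa(1.0, 1.5, 0.01))
```

The abstract's pattern of high sensitivity below 1 kPa that falls off between 1 and 10 kPa corresponds to S decreasing as the pressure step moves up this curve.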

16 pages, 5609 KB  
Article
Deep Learning-Enabled Flexible PVA/CNPs Hydrogel Film Sensor for Abdominal Respiration Monitoring
by Chengcheng Peng, Xinjiang Zhang, Ziyan Shu, Cailiu Yin and Baorong Liu
Gels 2025, 11(9), 743; https://doi.org/10.3390/gels11090743 - 16 Sep 2025
Abstract
In this study, a flexible hydrogel film sensor based on the intermixing of poly(vinyl alcohol) (PVA) and biomass-derived carbon nanoparticles (CNPs) was prepared, and microstructures were constructed on its surface by replicating sandpaper templates. The sensor thus has good overall sensing performance, with a sensitivity of 101 kPa−1, a fast response/recovery time of 22 ms, and durability over 20,000 fatigue cycles. The sensor was experimentally verified to accurately capture human joint movements, current signals of written letters, and weight differences among spherical objects of different sizes. Based on this, a breathing phase classification framework was constructed using the 1D-CNN algorithm, achieving a synergistic enhancement between environmentally scalable materials and deep learning algorithms. This approach not only improves signal discrimination but also provides new ideas for wearable medical monitoring, haptic feedback, and intelligent-robot human–machine interfaces.
(This article belongs to the Section Gel Applications)

23 pages, 15956 KB  
Article
A Photovoltaic Light Sensor-Based Self-Powered Real-Time Hover Gesture Recognition System for Smart Home Control
by Nora Almania, Sarah Alhouli and Deepak Sahoo
Electronics 2025, 14(18), 3576; https://doi.org/10.3390/electronics14183576 - 9 Sep 2025
Abstract
Many gesture recognition systems with innovative interfaces have emerged for smart home control. However, these systems tend to be energy-intensive, bulky, and expensive. There is also a lack of real-time demonstrations of gesture recognition and subsequent evaluation of the user experience. Photovoltaic light sensors are self-powered, battery-free, flexible, portable, and easily deployable on various surfaces throughout the home. They enable natural, intuitive, hover-based interaction, which could create a positive user experience. In this paper, we present the development and evaluation of a real-time, hover gesture recognition system that can control multiple smart home devices via a self-powered photovoltaic interface. Five popular supervised machine learning algorithms were evaluated using gesture data from 48 participants. The random forest classifier achieved high accuracies. However, a one-size-fits-all model performed poorly in real-time testing. User-specific random forest models performed well with 10 participants, showing no significant difference in offline and real-time performance and under normal indoor lighting conditions. This paper demonstrates the technical feasibility of using photovoltaic surfaces as self-powered interfaces for gestural interaction systems that are perceived to be useful and easy to use. It establishes a foundation for future work in hover-based interaction and sustainable sensing, enabling human–computer interaction researchers to explore further applications.
(This article belongs to the Special Issue Human-Computer Interaction in Intelligent Systems, 2nd Edition)

22 pages, 4937 KB  
Article
Multimodal AI for UAV: Vision–Language Models in Human–Machine Collaboration
by Maroš Krupáš, Ľubomír Urblík and Iveta Zolotová
Electronics 2025, 14(17), 3548; https://doi.org/10.3390/electronics14173548 - 6 Sep 2025
Abstract
Recent advances in multimodal large language models (MLLMs)—particularly vision–language models (VLMs)—introduce new possibilities for integrating visual perception with natural-language understanding in human–machine collaboration (HMC). Unmanned aerial vehicles (UAVs) are increasingly deployed in dynamic environments, where adaptive autonomy and intuitive interaction are essential. Traditional UAV autonomy has relied mainly on visual perception or preprogrammed planning, offering limited adaptability and explainability. This study introduces a novel reference architecture, the multimodal AI–HMC system, based on which a dedicated UAV use case architecture was instantiated and experimentally validated in a controlled laboratory environment. The architecture integrates VLM-powered reasoning, real-time depth estimation, and natural-language interfaces, enabling UAVs to perform context-aware actions while providing transparent explanations. Unlike prior approaches, the system generates navigation commands while also communicating the underlying rationale and associated confidence levels, thereby enhancing situational awareness and fostering user trust. The architecture was implemented in a real-time UAV navigation platform and evaluated through laboratory trials. Quantitative results showed a 70% task success rate in single-obstacle navigation and 50% in a cluttered scenario, with safe obstacle avoidance at flight speeds of up to 0.6 m/s. Users approved 90% of the generated instructions and rated explanations as significantly clearer and more informative when confidence visualization was included. These findings demonstrate the novelty and feasibility of embedding VLMs into UAV systems, advancing explainable, human-centric autonomy and establishing a foundation for future multimodal AI applications in HMC, including robotics.

24 pages, 4050 KB  
Article
Maritime Operational Intelligence: AR-IoT Synergies for Energy Efficiency and Emissions Control
by Christos Spandonidis, Zafiris Tzioridis, Areti Petsa and Nikolaos Charanas
Sustainability 2025, 17(17), 7982; https://doi.org/10.3390/su17177982 - 4 Sep 2025
Abstract
In response to mounting regulatory and environmental pressures, the maritime sector must urgently improve energy efficiency and reduce greenhouse gas emissions. However, conventional operational interfaces often fail to deliver real-time, actionable insights needed for informed decision-making onboard. This work presents an innovative Augmented Reality (AR) interface integrated with an established shipboard data collection system to enhance real-time monitoring and operational decision-making on commercial vessels. The baseline data acquisition infrastructure is currently installed on over 800 vessels across various ship types, providing a robust foundation for this development. To validate the AR interface’s feasibility and performance, a field trial was conducted on a representative dry bulk carrier. Through hands-free AR smart glasses, crew members access real-time overlays of key performance indicators, such as fuel consumption, engine status, emissions levels, and energy load balancing, directly within their field of view. Field evaluations and scenario-based workshops demonstrate significant gains in energy efficiency (up to 28% faster decision-making), predictive maintenance accuracy, and emissions awareness. The system addresses human–machine interaction challenges in high-pressure maritime settings, bridging the gap between complex sensor data and crew responsiveness. By contextualizing IoT data within the physical environment, the AR-IoT platform transforms traditional workflows into proactive, data-driven practices. This study contributes to the emerging paradigm of digitally enabled sustainable operations and offers practical insights for scaling AR-IoT solutions across global fleets. Findings suggest that such convergence of AR and IoT not only enhances vessel performance but also accelerates compliance with decarbonization targets set by the International Maritime Organization (IMO).

22 pages, 7105 KB  
Article
Design of Control System for Underwater Inspection Robot in Hydropower Dam Structures
by Bing Zhao, Shuo Li, Xiangbin Wang, Mingyu Yang, Xin Yu, Zhaoxu Meng and Gang Wan
J. Mar. Sci. Eng. 2025, 13(9), 1656; https://doi.org/10.3390/jmse13091656 - 29 Aug 2025
Abstract
As critical infrastructure, hydropower dams require efficient and accurate detection of underwater structural surface defects to ensure their safety. This paper presents the design and implementation of a robotic control system specifically developed for underwater dam inspection in hydropower stations, aiming to enhance the robot’s operational capability under harsh hydraulic conditions. The study includes the hardware design of the control system and the development of a surface human–machine interface unit. At the software level, a modular architecture is adopted to ensure real-time performance and reliability. The solution employs a hierarchical architecture comprising hardware sensing, real-time interaction protocols, and an adaptive controller, and the integrated algorithm combining a fixed-time disturbance observer with adaptive super-twisting controller compensates for complex hydrodynamic forces. To validate the system’s effectiveness, field tests were conducted at the Baihetan Hydropower Station. Experimental results demonstrate that the proposed control system enables stable and precise dam inspection, with standard deviations of multi-degree-of-freedom automatic control below 0.5 and hovering control below 0.1. These findings confirm the system’s feasibility and superiority in performing high-precision, high-stability inspection tasks in complex underwater environments of real hydropower dams. The developed system provides reliable technical support for intelligent underwater dam inspection and holds significant practical value for improving the safety and maintenance of major hydraulic infrastructure.
(This article belongs to the Section Ocean Engineering)
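The adaptive super-twisting controller mentioned in the abstract builds on the standard super-twisting algorithm: u = −k1·√|s|·sign(s) + v with v̇ = −k2·sign(s), where s is the sliding variable. A discrete-time sketch with hypothetical gains and a toy plant; the paper's adaptive gain law and hydrodynamic disturbance model are omitted:

```python
import math

def super_twisting_step(s: float, v: float, k1: float, k2: float, dt: float):
    """One Euler step of the standard (non-adaptive) super-twisting law:
    u = -k1*sqrt(|s|)*sign(s) + v,  v_dot = -k2*sign(s)."""
    sgn = (s > 0) - (s < 0)
    u = -k1 * math.sqrt(abs(s)) * sgn + v
    return u, v - k2 * sgn * dt

# Toy plant: sliding variable with dynamics s_dot = u and no disturbance
s, v, dt = 1.0, 0.0, 0.01
for _ in range(2000):
    u, v = super_twisting_step(s, v, k1=1.5, k2=1.1, dt=dt)
    s += u * dt
print(abs(s) < 0.05)  # s is driven close to the sliding surface
```

The continuous integral term v is what lets super-twisting reject bounded disturbances, such as the hydrodynamic forces cited in the paper, without the chattering of plain sliding-mode control.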

26 pages, 10383 KB  
Review
Flexible and Wearable Tactile Sensors for Intelligent Interfaces
by Xu Cui, Wei Zhang, Menghui Lv, Tianci Huang, Jianguo Xi and Zuqing Yuan
Materials 2025, 18(17), 4010; https://doi.org/10.3390/ma18174010 - 27 Aug 2025
Abstract
Rapid developments in intelligent interfaces across service, healthcare, and industry have led to unprecedented demands for advanced tactile perception systems. Traditional tactile sensors often struggle with adaptability on curved surfaces and lack sufficient feedback for delicate interactions. Flexible and wearable tactile sensors are emerging as a revolutionary solution, driven by innovations in flexible electronics and micro-engineered materials. This paper reviews recent advancements in flexible tactile sensors, focusing on their mechanisms, multifunctional performance and applications in health monitoring, human–machine interactions, and robotics. The first section outlines the primary transduction mechanisms of piezoresistive (resistance changes), capacitive (capacitance changes), piezoelectric (piezoelectric effect), and triboelectric (contact electrification) sensors while examining material selection strategies for performance optimization. Next, we explore the structural design of multifunctional flexible tactile sensors and highlight potential applications in motion detection and wearable systems. Finally, a detailed discussion covers specific applications of these sensors in health monitoring, human–machine interactions, and robotics. This review examines their promising prospects across various fields, including medical care, virtual reality, precision agriculture, and ocean monitoring.
(This article belongs to the Special Issue Advances in Flexible Electronics and Electronic Devices)

31 pages, 2447 KB  
Article
Design and Development of Cost-Effective Humanoid Robots for Enhanced Human–Robot Interaction
by Khaled M. Salem, Mostafa S. Mohamed, Mohamed H. ElMessmary, Amira Ehsan, A. O. Elgharib and Haitham ElShimy
Automation 2025, 6(3), 41; https://doi.org/10.3390/automation6030041 - 27 Aug 2025
Abstract
The Fifth Industrial Revolution (Industry 5.0) shifts the focus away from technology alone and relies more on collaboration between humans and AI-powered robots. This approach emphasizes a more human-centric perspective, enhanced resilience, optimized workplace processes, and a stronger commitment to sustainability. The humanoid robot market has experienced substantial growth, fueled by technological advancements and the increasing need for automation in industries such as service, customer support, and education. However, challenges like high costs, complex maintenance, and societal concerns about job displacement remain. Despite these issues, the market is expected to continue expanding, supported by innovations that enhance both accessibility and performance. This article therefore proposes the design and implementation of low-cost humanoid robots, remotely controlled via a mobile application, for home-assistant applications. The humanoid robot boasts an advanced mechanical structure, high-performance actuators, and an array of sensors that empower it to execute a wide range of tasks with human-like dexterity and mobility. Incorporating sophisticated control algorithms and a user-friendly Graphical User Interface (GUI) provides precise and stable robot operation and control. Through in-house developed code, our research contributes to the growing field of humanoid robotics and underscores the significance of advanced control systems in fully harnessing the capabilities of these human-like machines. The implications of our findings extend to the future development and deployment of humanoid robots across various industries and societal contexts, making this an ideal area for students and researchers to explore innovative solutions.
(This article belongs to the Section Robotics and Autonomous Systems)

12 pages, 5061 KB  
Article
A Programmable Soft Electrothermal Actuator Based on a Functionally Graded Structure for Multiple Deformations
by Fan Bu, Feng Zhu, Zhengyan Zhang and Hanbin Xiao
Polymers 2025, 17(17), 2288; https://doi.org/10.3390/polym17172288 - 24 Aug 2025
Viewed by 753
Abstract
Soft electrothermal actuators have attracted increasing attention in soft robotics and wearable systems due to their simple structure, low driving voltage, and ease of integration. However, traditional designs based on homogeneous or layered composites often suffer from interfacial failure and limited deformation modes, restricting their long-term stability and actuation versatility. In this study, we present a programmable soft electrothermal actuator based on a functionally graded structure composed of a polydimethylsiloxane (PDMS)/multiwalled carbon nanotube (MWCNT) composite and an embedded EGaIn conductive circuit. Rheological and mechanical characterization shows that viscosity, modulus, and tensile strength increase with MWCNT content, confirming that the gradient structure improves mechanical performance. The device exhibits excellent actuation performance (bending angle up to 117°), fast response (8 s), and durability (100 cycles). Through circuit pattern design, the actuator achieves L-shaped, U-shaped, and V-shaped bending deformations, demonstrating precise programmability and reconfigurability. This work provides a new strategy for realizing programmable, multimodal deformation in soft systems and offers promising applications in adaptive robotics, smart devices, and human–machine interfaces. Full article
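The abstract reports an 8 s response for the Joule-heated actuator. Electrothermal actuators of this kind are commonly approximated by a first-order lumped thermal model; the sketch below illustrates that standard approximation, with voltage, resistance, loss coefficient, and heat capacity chosen purely for illustration (they are not values from the paper):

```python
import math

def joule_power(voltage: float, resistance: float) -> float:
    """Electrical power dissipated in the heater circuit, P = V^2 / R."""
    return voltage ** 2 / resistance

def temperature(t: float, power: float, t_amb: float,
                k_loss: float, heat_cap: float) -> float:
    """First-order lumped thermal response to a power step:
    T(t) = T_amb + (P / k) * (1 - exp(-k * t / C)),
    where k is the heat-loss coefficient (W/K) and C the heat capacity (J/K)."""
    return t_amb + (power / k_loss) * (1.0 - math.exp(-k_loss * t / heat_cap))

# Illustrative numbers only: 5 V across a 2-ohm trace gives 12.5 W,
# and we evaluate the temperature after 8 s of heating.
P = joule_power(voltage=5.0, resistance=2.0)
T8 = temperature(t=8.0, power=P, t_amb=25.0, k_loss=0.5, heat_cap=2.0)
```

In this model the steady-state temperature is T_amb + P/k and the time constant is C/k, which is why thinner, lower-heat-capacity actuators respond faster.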

28 pages, 1856 KB  
Article
Trust-Based Modular Cyber–Physical–Human Robotic System for Collaborative Manufacturing: Modulating Communications
by S. M. Mizanoor Rahman
Machines 2025, 13(8), 731; https://doi.org/10.3390/machines13080731 - 17 Aug 2025
Viewed by 488
Abstract
The objective was to propose a human–robot bidirectional trust-triggered cyber–physical–human (CPH) system framework for human–robot collaborative assembly in flexible manufacturing and to investigate the impact of modulating communications in the CPH system on system performance and human–robot interactions (HRIs). As the research method, we developed a hybrid cell in which one human and one robot collaborated to assemble different manufacturing components in a flexible manufacturing setup. We configured the human–robot collaborative system as three interconnected components of a CPH system: (i) a cyber system, (ii) a physical system, and (iii) a human system. We divided the functions of the CPH system into three interconnected modules: (i) communication, (ii) computing or computation, and (iii) control. We derived a model to compute the human's and the robot's bidirectional trust in each other in real time. We implemented the trust-triggered CPH framework on the human–robot collaborative assembly setup and modulated the communication methods among the cyber, physical, and human components of the CPH system in different innovative ways in three separate experiments. The results show that modulating the communication methods triggered by bidirectional trust affects the effectiveness of the CPH system differently in terms of human–robot interactions and task performance (efficiency and quality), and that communication methods with an appropriate combination of a larger number of communication modes (cues) produce better HRIs and task performance. Based on a comparative study, it was concluded that the results prove the efficacy and superiority of configuring the HRC system as a modular CPH system over conventional HRC systems in terms of HRI and task performance.
Configuring human–robot collaborative systems in the form of a CPH system can transform the design, development, analysis, and control of the systems and enhance their scope, ease, and effectiveness for various applications, such as industrial manufacturing, construction, transport and logistics, forestry, etc. Full article
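The abstract does not state the authors' trust model explicitly. Time-series trust models in the HRI literature often take a linear update form, T(k) = a·T(k−1) + b·P(k) + c·ΔP(k), driven by the partner's observed task performance; the sketch below illustrates that generic form only, with coefficients chosen arbitrarily for illustration:

```python
def update_trust(trust_prev: float, perf: float, perf_prev: float,
                 a: float = 0.7, b: float = 0.25, c: float = 0.05) -> float:
    """One step of a linear time-series trust update:
    T(k) = a*T(k-1) + b*P(k) + c*(P(k) - P(k-1)), clipped to [0, 1].
    P is the partner's observed task performance in [0, 1]; the same
    form can be evaluated in both directions (human->robot, robot->human).
    """
    t = a * trust_prev + b * perf + c * (perf - perf_prev)
    return max(0.0, min(1.0, t))

# Simulate one agent's trust in its partner over four assembly cycles
# as observed performance improves.
trust, perf_prev = 0.5, 0.5
for perf in [0.6, 0.7, 0.9, 0.9]:
    trust = update_trust(trust, perf, perf_prev)
    perf_prev = perf
```

In a trust-triggered framework, a running value like this could gate which communication modes (cues) are activated between the cyber, physical, and human components; the thresholds and coefficients would have to come from the experiments, not from this sketch.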
(This article belongs to the Section Robotics, Mechatronics and Intelligent Machines)
