Search Results (1,603)

Search Parameters:
Keywords = human-robot interactions

16 pages, 5104 KiB  
Article
Integrating OpenPose for Proactive Human–Robot Interaction Through Upper-Body Pose Recognition
by Shih-Huan Tseng, Jhih-Ciang Chiang, Cheng-En Shiue and Hsiu-Ping Yueh
Electronics 2025, 14(15), 3112; https://doi.org/10.3390/electronics14153112 - 5 Aug 2025
Abstract
This paper introduces a novel system that utilizes OpenPose for skeleton estimation to enable a tabletop robot to interact with humans proactively. By accurately recognizing upper-body poses based on the skeleton information, the robot autonomously approaches individuals and initiates conversations. The contributions of this paper can be summarized in three main features. Firstly, we conducted a comprehensive data collection process, capturing five different table-front poses: looking down, looking at the screen, looking at the robot, resting the head on hands, and stretching both hands. These poses were selected to represent common interaction scenarios. Secondly, we designed the robot’s dialog content and movement patterns to correspond with the identified table-front poses. By aligning the robot’s responses with the specific pose, we aimed to create a more engaging and intuitive interaction experience for users. Finally, we performed an extensive evaluation by exploring the performance of three classification models—a non-linear support vector machine (SVM), an artificial neural network (ANN), and a convolutional neural network (CNN)—for accurately recognizing table-front poses. We used an Asus Zenbo Junior robot to acquire images and leveraged OpenPose to extract 12 upper-body skeleton points as input for training the classification models. The experimental results indicate that the ANN model outperformed the other models, demonstrating its effectiveness in pose recognition. Overall, the proposed system not only showcases the potential of utilizing OpenPose for proactive human–robot interaction but also demonstrates its real-world applicability. By combining advanced pose recognition techniques with carefully designed dialog and movement patterns, the tabletop robot successfully engages with humans in a proactive manner.
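
As a rough illustration of the classification step described in this abstract (not the authors' implementation), the sketch below flattens 12 OpenPose upper-body keypoints into a feature vector and trains a small neural network on the five table-front poses; the feature normalization and network size are assumptions.

```python
# Minimal sketch (not the authors' code): classifying table-front poses
# from 12 upper-body keypoints, assuming OpenPose output is available
# as (x, y) pixel coordinates per keypoint.
import numpy as np
from sklearn.neural_network import MLPClassifier

POSES = ["looking_down", "looking_at_screen", "looking_at_robot",
         "head_on_hands", "stretching_hands"]  # classes named in the abstract

def to_feature(keypoints):
    """Flatten 12 (x, y) keypoints into a 24-dim feature vector,
    normalized by one reference keypoint (assumed index 0 = neck)."""
    kp = np.asarray(keypoints, dtype=float)   # shape (12, 2)
    kp = kp - kp[0]                           # remove translation
    return kp.ravel()

# Hypothetical training data standing in for the collected dataset
X_train = np.random.rand(100, 24)
y_train = np.random.choice(POSES, 100)
ann = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)
ann.fit(X_train, y_train)
print(ann.predict(to_feature(np.random.rand(12, 2)).reshape(1, -1)))
```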

27 pages, 427 KiB  
Article
ROS-Compatible Robotics Simulators for Industry 4.0 and Industry 5.0: A Systematic Review of Trends and Technologies
by Jose M. Flores Gonzalez, Enrique Coronado and Natsuki Yamanobe
Appl. Sci. 2025, 15(15), 8637; https://doi.org/10.3390/app15158637 - 4 Aug 2025
Abstract
Simulators play a critical role in the development and testing of Industry 4.0 and Industry 5.0 applications. However, few studies have examined their capabilities beyond physics modeling, particularly in terms of connectivity and integration within broader robotic ecosystems. This review addresses this gap by focusing on ROS-compatible simulators. Using the SEGRESS methodology in combination with the PICOC framework, this study systematically analyzes 65 peer-reviewed articles published between 2021 and 2025 to identify key trends, capabilities, and application domains of ROS-integrated robotic simulators in industrial and manufacturing contexts. Our findings indicate that Gazebo is the most commonly used simulator in Industry 4.0, primarily due to its strong compatibility with ROS, while Unity is most prevalent in Industry 5.0 for its advanced visualization, support for human interaction, and extended reality (XR) features. Additionally, the study examines the adoption of ROS and ROS 2, and identifies complementary communication and integration technologies that help address the current interoperability challenges of ROS. These insights are intended to inform researchers and practitioners about the current landscape of simulation platforms and the core technologies frequently incorporated into robotics research.
(This article belongs to the Special Issue Intelligent Robotics in the Era of Industry 5.0)

28 pages, 21813 KiB  
Article
Adaptive RGB-D Semantic Segmentation with Skip-Connection Fusion for Indoor Staircase and Elevator Localization
by Zihan Zhu, Henghong Lin, Anastasia Ioannou and Tao Wang
J. Imaging 2025, 11(8), 258; https://doi.org/10.3390/jimaging11080258 - 4 Aug 2025
Abstract
Accurate semantic segmentation of indoor architectural elements, such as staircases and elevators, is critical for safe and efficient robotic navigation, particularly in complex multi-floor environments. Traditional fusion methods struggle with occlusions, reflections, and low-contrast regions. In this paper, we propose a novel feature fusion module, Skip-Connection Fusion (SCF), that dynamically integrates RGB (Red, Green, Blue) and depth features through an adaptive weighting mechanism and skip-connection integration. This approach enables the model to selectively emphasize informative regions while suppressing noise, effectively addressing challenging conditions such as partially blocked staircases, glossy elevator doors, and dimly lit stair edges, which improves obstacle detection and supports reliable human–robot interaction in complex environments. Extensive experiments on a newly collected dataset demonstrate that SCF consistently outperforms state-of-the-art methods, including PSPNet and DeepLabv3, in both overall mIoU (mean Intersection over Union) and challenging-case performance. Specifically, our SCF module improves segmentation accuracy by 5.23% in the top 10% of challenging samples, highlighting its robustness in real-world conditions. Furthermore, we conduct a sensitivity analysis on the learnable weights, demonstrating their impact on segmentation quality across varying scene complexities. Our work provides a strong foundation for real-world applications in autonomous navigation, assistive robotics, and smart surveillance.
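
The following is a minimal sketch of what an adaptive, skip-connected RGB-D fusion module of the kind described above could look like in PyTorch; the gating design is an assumption, not the paper's SCF architecture.

```python
# Sketch (assumed design): adaptive weighted fusion of RGB and depth
# feature maps with a skip connection back to the RGB branch.
import torch
import torch.nn as nn

class SkipConnectionFusion(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # 1x1 conv predicts a per-pixel gate from the concatenated features
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, rgb_feat, depth_feat):
        w = self.gate(torch.cat([rgb_feat, depth_feat], dim=1))
        fused = w * rgb_feat + (1 - w) * depth_feat   # adaptive weighting
        return fused + rgb_feat                        # skip connection

scf = SkipConnectionFusion(64)
out = scf(torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32))
print(out.shape)  # torch.Size([1, 64, 32, 32])
```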

22 pages, 1470 KiB  
Article
An NMPC-ECBF Framework for Dynamic Motion Planning and Execution in Vision-Based Human–Robot Collaboration
by Dianhao Zhang, Mien Van, Pantelis Sopasakis and Seán McLoone
Machines 2025, 13(8), 672; https://doi.org/10.3390/machines13080672 - 1 Aug 2025
Abstract
To enable safe and effective human–robot collaboration (HRC) in smart manufacturing, it is critical to seamlessly integrate sensing, cognition, and prediction into the robot controller for real-time awareness, response, and communication inside a heterogeneous environment (robots, humans, and equipment). The proposed approach takes advantage of the prediction capabilities of nonlinear model predictive control (NMPC) to execute safe path planning based on feedback from a vision system. To satisfy the requirements of real-time path planning, an embedded solver based on a penalty method is applied. However, due to tight sampling times, NMPC solutions are approximate; therefore, the safety of the system cannot be guaranteed. To address this, we formulate a novel safety-critical paradigm that uses an exponential control barrier function (ECBF) as a safety filter. Several common human–robot assembly subtasks have been integrated into a real-life HRC assembly task to validate the performance of the proposed controller and to investigate whether integrating human pose prediction can help with safe and efficient collaboration. The robot uses OptiTrack cameras for perception and dynamically generates collision-free trajectories to the predicted target interactive position. Results for a number of different configurations confirm the efficiency of the proposed motion planning and execution framework, with a 23.2% reduction in execution time achieved for the HRC task compared to an implementation without human motion prediction.
(This article belongs to the Special Issue Visual Measurement and Intelligent Robotic Manufacturing)
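
To make the safety-filter idea concrete, here is a heavily simplified first-order control barrier function clamp on a 1-D approach speed; the paper itself uses an exponential CBF combined with NMPC, so this sketch only conveys the principle, with hypothetical parameters.

```python
# Simplified sketch: first-order CBF safety filter on a single-integrator
# approach speed toward a human (not the paper's NMPC-ECBF formulation).
def cbf_filter(u_nom, dist_to_human, d_min=0.3, alpha=2.0):
    """Clamp the nominal approach speed so that the barrier
    h = dist_to_human - d_min never decays faster than alpha * h."""
    h = dist_to_human - d_min
    u_max = alpha * h        # from h_dot = -u >= -alpha * h
    return min(u_nom, u_max)

print(cbf_filter(u_nom=1.0, dist_to_human=0.5))  # 0.4: slows near the human
```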

15 pages, 10795 KiB  
Article
DigiHortiRobot: An AI-Driven Digital Twin Architecture for Hydroponic Greenhouse Horticulture with Dual-Arm Robotic Automation
by Roemi Fernández, Eduardo Navas, Daniel Rodríguez-Nieto, Alain Antonio Rodríguez-González and Luis Emmi
Future Internet 2025, 17(8), 347; https://doi.org/10.3390/fi17080347 - 31 Jul 2025
Abstract
The integration of digital twin technology with robotic automation holds significant promise for advancing sustainable horticulture in controlled environment agriculture. This article presents DigiHortiRobot, a novel AI-driven digital twin architecture tailored for hydroponic greenhouse systems. The proposed framework integrates real-time sensing, predictive modeling, task planning, and dual-arm robotic execution within a modular, IoT-enabled infrastructure. DigiHortiRobot is structured into three progressive implementation phases: (i) monitoring and data acquisition through a multimodal perception system; (ii) decision support and virtual simulation for scenario analysis and intervention planning; and (iii) autonomous execution with feedback-based model refinement. The physical layer encompasses crops, infrastructure, and a mobile dual-arm robot; the virtual layer incorporates semantic modeling and simulation environments; and the synchronization layer enables continuous bi-directional communication via a nine-tier IoT architecture inspired by FIWARE standards. A robot task assignment algorithm is introduced to support operational autonomy while maintaining human oversight. The system is designed to optimize horticultural workflows such as seeding and harvesting while allowing farmers to interact remotely through cloud-based interfaces. Compared to previous digital agriculture approaches, DigiHortiRobot enables closed-loop coordination among perception, simulation, and action, supporting real-time task adaptation in dynamic environments. Experimental validation in a hydroponic greenhouse confirmed robust performance in both seeding and harvesting operations, achieving over 90% accuracy in localizing target elements and successfully executing planned tasks. The platform thus provides a strong foundation for future research in predictive control, semantic environment modeling, and scalable deployment of autonomous systems for high-value crop production.
(This article belongs to the Special Issue Advances in Smart Environments and Digital Twin Technologies)
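
As an illustration of what a robot task assignment step might look like (the paper's actual algorithm is not reproduced here; names and fields are hypothetical), a priority queue can order planner tasks for dispatch:

```python
# Hypothetical sketch of a priority-based task assignment loop for the
# dual-arm robot; task names and the availability check are illustrative.
import heapq

def assign_tasks(tasks, robot_available):
    """tasks: list of (priority, name) tuples from the digital twin's
    planner; returns the execution order while the robot is available."""
    queue = list(tasks)
    heapq.heapify(queue)            # lowest number = highest priority
    order = []
    while queue and robot_available():
        _, name = heapq.heappop(queue)
        order.append(name)          # dispatch to the robot executor
    return order

plan = assign_tasks([(2, "harvest_row_3"), (1, "seed_tray_A")], lambda: True)
print(plan)  # ['seed_tray_A', 'harvest_row_3']
```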

19 pages, 1753 KiB  
Article
EMG-Driven Shared Control Architecture for Human–Robot Co-Manipulation Tasks
by Francesca Patriarca, Paolo Di Lillo and Filippo Arrichiello
Machines 2025, 13(8), 669; https://doi.org/10.3390/machines13080669 - 31 Jul 2025
Abstract
The paper presents a shared control strategy that allows a human operator to physically guide the end-effector of a robotic manipulator to perform different tasks, possibly in interaction with the environment. To switch among the different operational modes, organized as a finite state machine, electromyographic (EMG) signals from the user’s arm are used to detect muscular contractions and to adapt a variable admittance control strategy. Specifically, a Support Vector Machine (SVM) classifier processes the raw EMG data to identify three classes of contractions that trigger the activation of different sets of admittance control parameters corresponding to the envisaged operational modes. The proposed architecture has been experimentally validated using a Kinova Jaco2 manipulator, equipped with a force/torque sensor at the end-effector, and with a limited group of users wearing Delsys Trigno Avanti EMG sensors on the dominant upper limb, demonstrating promising results.
(This article belongs to the Special Issue Design and Control of Assistive Robots)
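
A minimal sketch of the described pipeline, under the assumption of RMS features and three contraction classes: an SVM selects among admittance parameter sets. All values are illustrative, not the authors' tuning.

```python
# Sketch (assumed features and parameters): EMG contraction class selects
# an admittance parameter set for the variable admittance controller.
import numpy as np
from sklearn.svm import SVC

ADMITTANCE = {  # hypothetical (virtual mass, damping) per operational mode
    "rest":   (8.0, 40.0),   # stiff: hold position
    "soft":   (4.0, 15.0),   # compliant free guidance
    "strong": (2.0, 5.0),    # very compliant for large motions
}

def rms(window):
    return np.sqrt(np.mean(np.square(window), axis=0))  # per-channel RMS

X = np.random.rand(60, 4)                  # RMS features, 4 EMG channels
y = np.random.choice(list(ADMITTANCE), 60) # placeholder labels
clf = SVC(kernel="rbf").fit(X, y)

mode = clf.predict(rms(np.random.rand(200, 4)).reshape(1, -1))[0]
mass, damping = ADMITTANCE[mode]           # fed to the admittance controller
print(mode, mass, damping)
```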

26 pages, 6831 KiB  
Article
Human–Robot Interaction and Tracking System Based on Mixed Reality Disassembly Tasks
by Raúl Calderón-Sesmero, Adrián Lozano-Hernández, Fernando Frontela-Encinas, Guillermo Cabezas-López and Mireya De-Diego-Moro
Robotics 2025, 14(8), 106; https://doi.org/10.3390/robotics14080106 - 30 Jul 2025
Abstract
Disassembly is a crucial process in industrial operations, especially in tasks requiring high precision and strict safety standards when handling components with collaborative robots. However, traditional methods often rely on rigid and sequential task planning, which makes it difficult to adapt to unforeseen changes or dynamic environments. This rigidity not only limits flexibility but also leads to prolonged execution times, as operators must follow predefined steps that do not allow for real-time adjustments. Although techniques like teleoperation have attempted to address these limitations, they often hinder direct human–robot collaboration within the same workspace, reducing effectiveness in dynamic environments. In response to these challenges, this research introduces an advanced human–robot interaction (HRI) system leveraging a mixed-reality (MR) interface embedded in a head-mounted device (HMD). The system enables operators to issue real-time control commands using multimodal inputs, including voice, gestures, and gaze tracking. These inputs are synchronized and processed via the Robot Operating System (ROS2), enabling dynamic and flexible task execution. Additionally, the integration of deep learning algorithms ensures precise detection and validation of disassembly components, enhancing accuracy. Experimental evaluations demonstrate significant improvements, including reduced task completion times, an enhanced operator experience, and strict adherence to safety standards. This scalable solution offers broad applicability for general-purpose disassembly tasks, making it well-suited for complex industrial scenarios.
(This article belongs to the Special Issue Robot Teleoperation Integrating with Augmented Reality)
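
As a hypothetical sketch of how voice, gesture, and gaze inputs might be arbitrated into a single disassembly command (the paper's ROS2 message types and fusion logic are not reproduced here):

```python
# Illustrative only: merging multimodal inputs into one robot command.
from dataclasses import dataclass
from typing import Optional

@dataclass
class MultimodalInput:
    voice: Optional[str]        # e.g., "unscrew"
    gesture: Optional[str]      # e.g., "point_left"
    gaze_target: Optional[str]  # component id under the user's gaze

def arbitrate(m: MultimodalInput) -> Optional[str]:
    """Voice selects the action; gaze (preferred) or gesture selects the
    target component of the disassembly task."""
    if m.voice is None:
        return None
    target = m.gaze_target or m.gesture
    return f"{m.voice}:{target}" if target else None

print(arbitrate(MultimodalInput("unscrew", None, "bolt_07")))  # unscrew:bolt_07
```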

28 pages, 3441 KiB  
Article
Which AI Sees Like Us? Investigating the Cognitive Plausibility of Language and Vision Models via Eye-Tracking in Human-Robot Interaction
by Khashayar Ghamati, Maryam Banitalebi Dehkordi and Abolfazl Zaraki
Sensors 2025, 25(15), 4687; https://doi.org/10.3390/s25154687 - 29 Jul 2025
Abstract
As large language models (LLMs) and vision–language models (VLMs) become increasingly used in robotics, a crucial question arises: to what extent do these models replicate human-like cognitive processes, particularly within socially interactive contexts? Whilst these models demonstrate impressive multimodal reasoning and perception capabilities, their cognitive plausibility remains underexplored. In this study, we address this gap by using human visual attention as a behavioural proxy for cognition in a naturalistic human-robot interaction (HRI) scenario. Eye-tracking data were previously collected from participants engaging in social human-human interactions, providing frame-level gaze fixations as a human attentional ground truth. We then prompted a state-of-the-art VLM (LLaVA) to generate scene descriptions, which were processed by four LLMs (DeepSeek-R1-Distill-Qwen-7B, Qwen1.5-7B-Chat, LLaMA-3.1-8b-instruct, and Gemma-7b-it) to infer saliency points. Critically, we evaluated each model in both stateless and memory-augmented (short-term memory, STM) modes to assess the influence of temporal context on saliency prediction. Our results show that whilst stateless LLaVA most closely replicates human gaze patterns, STM confers measurable benefits only for DeepSeek, whose lexical anchoring mirrors human rehearsal mechanisms. Other models exhibited degraded performance with memory due to prompt interference or limited contextual integration. This work introduces a novel, empirically grounded framework for assessing cognitive plausibility in generative models and underscores the role of short-term memory in shaping human-like visual attention in robotic systems.
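
One plausible way to score how closely a model's saliency points match human fixations is sketched below, using a mean Euclidean distance per frame; the paper's exact metric is not assumed here.

```python
# Sketch of an assumed evaluation metric: frame-matched distance between
# model-inferred saliency points and eye-tracking fixations.
import numpy as np

def mean_gaze_error(saliency_xy, fixation_xy):
    """Both arrays have shape (n_frames, 2) in pixel coordinates."""
    s, f = np.asarray(saliency_xy), np.asarray(fixation_xy)
    return float(np.mean(np.linalg.norm(s - f, axis=1)))

pred = np.array([[320, 240], [300, 260]])  # model saliency points
gaze = np.array([[330, 250], [290, 255]])  # eye-tracking ground truth
print(mean_gaze_error(pred, gaze))         # lower = more human-like
```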

32 pages, 6323 KiB  
Article
Design, Implementation and Evaluation of an Immersive Teleoperation Interface for Human-Centered Autonomous Driving
by Irene Bouzón, Jimena Pascual, Cayetana Costales, Aser Crespo, Covadonga Cima and David Melendi
Sensors 2025, 25(15), 4679; https://doi.org/10.3390/s25154679 - 29 Jul 2025
Abstract
As autonomous driving technologies advance, the need for human-in-the-loop systems becomes increasingly critical to ensure safety, adaptability, and public confidence. This paper presents the design and evaluation of a context-aware immersive teleoperation interface that integrates real-time simulation, virtual reality, and multimodal feedback to support remote interventions in emergency scenarios. Built on a modular ROS2 architecture, the system allows seamless transition between simulated and physical platforms, enabling safe and reproducible testing. The experimental results show a high task success rate and user satisfaction, highlighting the importance of intuitive controls, gesture recognition accuracy, and low-latency feedback. Our findings contribute to the understanding of human-robot interaction (HRI) in immersive teleoperation contexts and provide insights into the role of multisensory feedback and control modalities in building trust and situational awareness for remote operators. Ultimately, this approach is intended to support the broader acceptability of autonomous driving technologies by enhancing human supervision, control, and confidence.
(This article belongs to the Special Issue Human-Centred Smart Manufacturing - Industry 5.0)

20 pages, 3334 KiB  
Article
Brush Stroke-Based Writing Trajectory Control Model for Robotic Chinese Calligraphy
by Dongmei Guo, Wenjun Fang and Wenwen Yang
Electronics 2025, 14(15), 3000; https://doi.org/10.3390/electronics14153000 - 28 Jul 2025
Abstract
Engineering innovations play a critical role in achieving the United Nations’ Sustainable Development Goals, especially in human–robot interaction and precision engineering. For a robot, writing Chinese calligraphy with a hairy brush pen is a form of precision operation. Existing approaches mainly model the overall writing trajectory; fine-grained trajectory control based on brush strokes has not been studied, so the problem of establishing writing trajectory control on top of a brush stroke model remains open. On the basis of the proposed composite-curve-dilation brush stroke model (CCD-BSM), this study investigates control methods for intelligent calligraphy robots and proposes fine-grained writing trajectory control models that conform to the rules of brush calligraphy and reflect local writing characteristics. By decomposing and refining each writing process, the control models governing brush movement are analyzed and modeled. According to the writing rules, fine-grained writing trajectory control models for strokes are established based on the CCD-BSM. Parametric representations of the control models are built for the three stages of stroke writing: initiation, execution, and completion. Experimental results demonstrate that the proposed fine-grained control models perform well on both basic strokes and complete Chinese characters. Compared with existing models, the writing results show the advantages of our proposed model in terms of high average similarity under two quantitative indicators, cosine similarity (CSIM, 99.54%) and the structural similarity index measure (SSIM, 97.57%).
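
The two reported indicators follow standard definitions; a sketch using NumPy and scikit-image is shown below, with placeholder stroke images since the paper's rendering and preprocessing are not reproduced.

```python
# Sketch of the two similarity metrics named in the abstract (standard
# definitions; input images are placeholders for rendered strokes).
import numpy as np
from skimage.metrics import structural_similarity as ssim

def cosine_similarity(a, b):
    a, b = a.ravel().astype(float), b.ravel().astype(float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

written = np.random.rand(64, 64)   # robot-written stroke (placeholder)
target  = np.random.rand(64, 64)   # reference calligraphy stroke
print(cosine_similarity(written, target))   # CSIM
print(ssim(written, target, data_range=1.0))  # SSIM
```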

27 pages, 31172 KiB  
Article
Digital Twin for Analog Mars Missions: Investigating Local Positioning Alternatives for GNSS-Denied Environments
by Benjamin Reimeir, Amelie Leininger, Raimund Edlinger, Andreas Nüchter and Gernot Grömer
Sensors 2025, 25(15), 4615; https://doi.org/10.3390/s25154615 - 25 Jul 2025
Abstract
Future planetary exploration missions will rely heavily on efficient human–robot interaction to ensure astronaut safety and maximize scientific return. In this context, digital twins offer a promising tool for planning, simulating, and optimizing extravehicular activities. This study presents the development and evaluation of a digital twin for the AMADEE-24 analog Mars mission, organized by the Austrian Space Forum and conducted in Armenia in March 2024. Alternative local positioning methods were evaluated to enhance the system’s utility in Global Navigation Satellite System (GNSS)-denied environments. The digital twin integrates telemetry from the Aouda space suit simulators, inertial measurement unit motion capture (IMU-MoCap), and sensor data from the Intuitive Rover Operation and Collecting Samples (iROCS) rover. All nine experiment runs were reconstructed successfully by the developed digital twin. A comparative analysis of localization methods found that Simultaneous Localization and Mapping (SLAM)-based rover positioning and IMU-MoCap localization of the astronaut matched Global Positioning System (GPS) performance. Adaptive Cluster Detection showed significantly higher deviations than the other GNSS alternatives. However, the IMU-MoCap method was limited by discontinuous segment-wise measurements, which required intermittent GPS recalibration. Despite these limitations, the results highlight the potential of alternative localization techniques for digital twin integration.
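
A simple sketch of how deviation from the GPS reference could be quantified for an alternative localization track, assuming timestamp-aligned positions in a common local frame (the study's exact deviation metric is not assumed):

```python
# Sketch of an assumed comparison metric: RMSE of a localization track
# against the GPS reference after timestamp alignment.
import numpy as np

def track_rmse(estimate_xy, reference_xy):
    """Both arrays: (n_samples, 2) positions in a common local frame."""
    d = np.asarray(estimate_xy) - np.asarray(reference_xy)
    return float(np.sqrt(np.mean(np.sum(d**2, axis=1))))

slam = np.array([[0.0, 0.0], [1.1, 0.1], [2.0, -0.1]])
gps  = np.array([[0.0, 0.0], [1.0, 0.0], [2.0,  0.0]])
print(track_rmse(slam, gps))  # metres; lower = closer to GPS
```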

30 pages, 2228 KiB  
Article
Controlling Industrial Robotic Arms Using Gyroscopic and Gesture Inputs from a Smartwatch
by Carmen-Cristiana Cazacu, Mihail Hanga, Florina Chiscop, Dragos-Alexandru Cazacu and Costel Emil Cotet
Appl. Sci. 2025, 15(15), 8297; https://doi.org/10.3390/app15158297 - 25 Jul 2025
Abstract
This paper presents a novel interface that leverages a smartwatch for controlling industrial robotic arms. By harnessing the gyroscope and advanced gesture recognition capabilities of the smartwatch, our solution facilitates intuitive, real-time manipulation that caters to users ranging from novices to seasoned professionals. A dedicated application is implemented to aggregate sensor data via an open-source library, providing a streamlined alternative to conventional control systems. The experimental setup consists of a smartwatch equipped with a data collection application, a robotic arm, and a communication module programmed in Python. Our aim is to evaluate the practicality and effectiveness of smartwatch-based control in a real-world industrial context. The experimental results indicate that this approach significantly enhances accessibility while concurrently minimizing the complexity typically associated with automation systems.
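
As a hypothetical illustration of the control mapping (not the paper's code), smartwatch gyroscope rates can be scaled into end-effector velocity commands with a deadband to suppress hand tremor:

```python
# Illustrative only: mapping smartwatch gyroscope rates (rad/s) to robot
# end-effector velocity commands; gain and deadband are assumptions.
def gyro_to_velocity(gyro, gain=0.05, deadband=0.1):
    """gyro: (wx, wy, wz) angular rates; returns (vx, vy, vz) in m/s."""
    return tuple(0.0 if abs(w) < deadband else gain * w for w in gyro)

print(gyro_to_velocity((0.05, 1.2, -2.0)))  # (0.0, 0.06, -0.1)
```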

24 pages, 6228 KiB  
Article
Quantification of the Mechanical Properties in the Human–Exoskeleton Upper Arm Interface During Overhead Work Postures in Healthy Young Adults
by Jonas Schiebl, Nawid Elsner, Paul Birchinger, Jonas Aschenbrenner, Christophe Maufroy, Mark Tröster, Urs Schneider and Thomas Bauernhansl
Sensors 2025, 25(15), 4605; https://doi.org/10.3390/s25154605 - 25 Jul 2025
Abstract
Exoskeletons transfer loads to the human body via physical human–exoskeleton interfaces (pHEI). However, the human–exoskeleton interaction remains poorly understood, and the mechanical properties of the pHEI are not well characterized. Therefore, we present a novel methodology to precisely characterize pHEI interaction stiffnesses under various loading conditions. Forces and torques were applied in three orthogonal axes to the upper arm pHEI of 21 subjects using an electromechanical apparatus. Interaction loads and displacements were measured, and stiffness data were derived and described mathematically using linear and non-linear regression models, yielding all the diagonal elements of the stiffness tensor. We find that the non-linear nature of pHEI stiffness is best described using exponential functions, though we also provide linear approximations for simplified modeling. We identify statistically significant differences between loading conditions and report median translational stiffnesses of 2.1 N/mm along and 4.5 N/mm perpendicular to the arm axis, as well as rotational stiffnesses of 0.2 N·m/° perpendicular to the arm, while rotational stiffness around the longitudinal axis is almost an order of magnitude smaller (0.03 N·m/°). The resulting stiffness models are suitable for use in digital human–exoskeleton models, potentially leading to more accurate estimations of biomechanical efficacy and discomfort of exoskeletons.
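
The exponential stiffness description can be reproduced in spirit with a curve fit; the sketch below uses synthetic data and an assumed model form from the exponential family named in the abstract.

```python
# Sketch (assumed model form, synthetic data): fitting an exponential
# force-displacement curve, whose derivative is the local stiffness.
import numpy as np
from scipy.optimize import curve_fit

def force_model(x, a, b):
    return a * (np.exp(b * x) - 1.0)   # stiffness k(x) = dF/dx = a*b*exp(b*x)

x = np.linspace(0, 10, 50)             # displacement in mm
f = 2.0 * (np.exp(0.15 * x) - 1) + np.random.normal(0, 0.05, x.size)
(a, b), _ = curve_fit(force_model, x, f, p0=(1.0, 0.1))
print(f"tangent stiffness at x=0: {a * b:.2f} N/mm")
```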

25 pages, 13994 KiB  
Article
A Semi-Autonomous Aerial Platform Enhancing Non-Destructive Tests
by Simone D’Angelo, Salvatore Marcellini, Alessandro De Crescenzo, Michele Marolla, Vincenzo Lippiello and Bruno Siciliano
Drones 2025, 9(8), 516; https://doi.org/10.3390/drones9080516 - 23 Jul 2025
Abstract
The use of aerial robots for inspection and maintenance in industrial settings demands high maneuverability, precise control, and reliable measurements. This study explores the development of a fully customized unmanned aerial manipulator (UAM), composed of a tilting drone and an articulated robotic arm, designed to perform non-destructive in-contact inspections of iron structures. The system is intended to operate in complex and potentially hazardous environments, where autonomous execution is supported by shared-control strategies that include human supervision. A parallel force–impedance control framework is implemented to enable smooth and repeatable contact between an ultrasonic testing (UT) sensor and the inspected surface. During interaction, the arm applies a controlled push to create a vacuum seal, allowing accurate thickness measurements. The control strategy is validated through repeated trials in both indoor and outdoor scenarios, demonstrating consistency and robustness. The paper also addresses the mechanical and control integration of the complex robotic system, highlighting the challenges and solutions in achieving a responsive and reliable aerial platform. The combination of semi-autonomous control and human-in-the-loop operation significantly improves the effectiveness of inspection tasks in hard-to-reach environments, enhancing both human safety and task performance.
(This article belongs to the Special Issue Unmanned Aerial Manipulation with Physical Interaction)
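
A highly simplified 1-D sketch of the parallel force-impedance idea follows; the UAM's actual controller, axes, and gains are not reproduced, so all values are hypothetical.

```python
# Illustrative only: impedance tracking of a reference position, with an
# outer force loop shifting the reference into the surface until the
# measured contact force reaches the value needed for the vacuum seal.
def control_step(x, v, x_ref, f_meas, f_des, k=100.0, d=20.0, kf=0.002):
    x_ref_new = x_ref + kf * (f_des - f_meas)  # parallel force correction
    u = k * (x_ref_new - x) - d * v            # spring-damper impedance term
    return u, x_ref_new

u, x_ref = control_step(x=0.0, v=0.0, x_ref=0.0, f_meas=2.0, f_des=10.0)
print(u, x_ref)  # reference creeps toward the surface until f_des is met
```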

13 pages, 2020 KiB  
Article
Micro-Gas Flow Sensor Utilizing Surface Network Density Regulation for Humidity-Modulated Ion Transport
by Chuanjie Liu and Zhihong Liu
Gels 2025, 11(8), 570; https://doi.org/10.3390/gels11080570 - 23 Jul 2025
Abstract
As a bridge for human–machine interaction, the performance improvement of sensors relies on an in-depth understanding of ion transport mechanisms. This study focuses on the surface effect of resistive gel sensors and designs a polyacrylic acid/ferric ion hydrogel (PAA/Fe³⁺) gas flow sensor. Prepared by one-pot polymerization, PAA/Fe³⁺ forms a three-dimensional network through the entanglement of crosslinked and uncrosslinked PAA chains, where the coordination between Fe³⁺ and carboxyl groups endows the material with excellent mechanical properties (tensile strength of 80 kPa and elongation at break of 1100%). Experiments show that when a gas flow acts on the hydrogel surface, changes in surface humidity alter the density of the network structure, thereby regulating ion migration rates: the network loosens to promote ion transport during water absorption, while it tightens to hinder transport during water loss. This mechanism enables the sensor to exhibit significant resistance responses (ΔR/R₀ up to 0.55) to gentle breezes (0–13 m/s), with a response time of approximately 166 ms and a sensitivity 40 times higher than that of bulk deformation. The surface ion transport model proposed in this study provides a new strategy for ultrasensitive gas flow sensing, showing potential applications in intelligent robotics, electronic skin, and other fields.
(This article belongs to the Special Issue Polymer Gels for Sensor Applications)