Search Results (145)

Search Parameters:
Keywords = assistive robotic manipulator

20 pages, 12851 KiB  
Article
Evaluation of a Vision-Guided Shared-Control Robotic Arm System with Power Wheelchair Users
by Breelyn Kane Styler, Wei Deng, Cheng-Shiu Chung and Dan Ding
Sensors 2025, 25(15), 4768; https://doi.org/10.3390/s25154768 - 2 Aug 2025
Viewed by 165
Abstract
Wheelchair-mounted assistive robotic manipulators can provide reach and grasp functions for power wheelchair users. This in-lab study evaluated a vision-guided shared control (VGS) system with twelve users completing two multi-step kitchen tasks: a drinking task and a popcorn-making task. Using a mixed-methods approach, participants compared VGS and manual joystick control, providing performance metrics, qualitative insights, and lessons learned. Data collection included demographic questionnaires, the System Usability Scale (SUS), the NASA Task Load Index (NASA-TLX), and exit interviews. No significant SUS differences were found between control modes, but NASA-TLX scores revealed that VGS control significantly reduced workload during both the drinking task and the popcorn task. VGS control reduced operation time and improved task success but was not universally preferred. Six participants preferred VGS, five preferred manual control, and one had no preference. In addition, participants expressed interest in robotic arms for daily tasks and described two main operation challenges: distinguishing wrist orientation from rotation modes and managing depth perception. They also shared perspectives on how a personal robotic arm could complement caregiver support in their home. Full article
(This article belongs to the Special Issue Intelligent Sensors and Robots for Ambient Assisted Living)
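As an editorial illustration of the paired workload comparison this abstract reports, the sketch below applies a Wilcoxon signed-rank test to hypothetical NASA-TLX scores for the two control modes. The scores, sample handling, and the choice of test are assumptions for illustration, not the study's data or analysis code.

```python
# Hypothetical sketch of a paired workload comparison between two control modes.
# The scores below are invented placeholders, not data from the study.
import numpy as np
from scipy.stats import wilcoxon

# NASA-TLX overall workload (0-100) for 12 participants on one task.
tlx_manual = np.array([62, 55, 70, 48, 66, 59, 73, 51, 64, 58, 69, 60])
tlx_vgs    = np.array([41, 38, 52, 35, 49, 44, 57, 33, 47, 40, 55, 43])

# Wilcoxon signed-rank test is a common choice for paired, non-normal scores.
stat, p = wilcoxon(tlx_manual, tlx_vgs)
print(f"median manual={np.median(tlx_manual)}, median VGS={np.median(tlx_vgs)}, p={p:.4f}")
```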

24 pages, 5534 KiB  
Article
Enhancing Healthcare Assistance with a Self-Learning Robotics System: A Deep Imitation Learning-Based Solution
by Yagna Jadeja, Mahmoud Shafik, Paul Wood and Aaisha Makkar
Electronics 2025, 14(14), 2823; https://doi.org/10.3390/electronics14142823 - 14 Jul 2025
Viewed by 390
Abstract
This paper presents a Self-Learning Robotic System (SLRS) for healthcare assistance using Deep Imitation Learning (DIL). The proposed SLRS solution can observe and replicate human demonstrations, thereby acquiring complex skills without the need for explicit task-specific programming. It incorporates modular components for perception (i.e., advanced computer vision methodologies), actuation (i.e., dynamic interaction with patients and healthcare professionals in real time), and learning. The hybrid model approach (i.e., deep imitation learning combined with pose estimation algorithms) facilitates autonomous learning and adaptive task execution. Environmental awareness and responsiveness were also enhanced using a Convolutional Neural Network (CNN)-based object detection mechanism built on YOLOv8 (94.3% accuracy, 18.7 ms latency) together with pose estimation algorithms, alongside a MediaPipe and Long Short-Term Memory (LSTM) framework for human action recognition. The developed solution was tested and validated in healthcare settings, with the aim of overcoming some of the current challenges, such as workforce shortages, ageing populations, and the rising prevalence of chronic diseases. CAD simulation, validation, and verification of the system’s functions (i.e., assistive functions, interactive scenarios, and object manipulation) demonstrated the robot’s adaptability and operational efficiency, achieving an 87.3% task-completion success rate and a grasp success rate of over 85%. This approach highlights the potential use of an SLRS for healthcare assistance. Further work will be undertaken in hospital, care home, and rehabilitation centre environments to generate complete holistic datasets to confirm the system’s reliability and efficiency. Full article
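In the spirit of the MediaPipe + LSTM action-recognition pipeline mentioned above, a minimal sketch of an LSTM classifier over pose-keypoint sequences follows. The input shape (33 MediaPipe Pose landmarks as x, y, z), sequence length, and class count are assumptions, not the paper's configuration.

```python
# Minimal sketch: LSTM action classifier over pose-keypoint sequences.
import torch
import torch.nn as nn

class ActionLSTM(nn.Module):
    def __init__(self, n_keypoints=33, n_classes=5, hidden=128):
        super().__init__()
        # MediaPipe Pose yields 33 landmarks; each is flattened here as (x, y, z).
        self.lstm = nn.LSTM(input_size=n_keypoints * 3, hidden_size=hidden,
                            num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):             # x: (batch, time, 33*3)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # classify from the last time step

model = ActionLSTM()
dummy = torch.randn(4, 60, 33 * 3)    # 4 clips, 60 frames each
print(model(dummy).shape)              # -> torch.Size([4, 5])
```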

15 pages, 7978 KiB  
Article
Improved Adaptive Sliding Mode Control Using Quasi-Convex Functions and Neural Network-Assisted Time-Delay Estimation for Robotic Manipulators
by Jin Woong Lee, Jae Min Rho, Sun Gene Park, Hyuk Mo An, Minhyuk Kim and Seok Young Lee
Sensors 2025, 25(14), 4252; https://doi.org/10.3390/s25144252 - 8 Jul 2025
Viewed by 284
Abstract
This study presents an adaptive sliding mode control strategy tailored for robotic manipulators, featuring a quasi-convex function-based control gain and time-delay estimation (TDE) enhanced by neural networks. To compensate for TDE errors, the proposed method utilizes both the previous TDE error and radial basis function neural networks with a weight update law that includes damping terms to prevent divergence. Additionally, a continuous, quasi-convex gain function of the magnitude of the sliding variable is proposed to replace the traditional switching control gain. This continuous gain is effective in suppressing the chattering phenomenon while guaranteeing the stability of the robotic manipulator in terms of uniform ultimate boundedness, which is demonstrated through both simulation and experimental results. Full article
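To make the idea of a continuous gain replacing a switching gain concrete, here is an illustrative sketch (not the paper's exact control law): a gain that grows with the magnitude of the sliding variable and saturates, combined with a one-step time-delay estimate of the unknown lumped dynamics. The gain shape, constants, and smoothing term are assumptions.

```python
# Illustrative sketch of a continuous sliding-mode gain plus time-delay estimation.
import numpy as np

def continuous_gain(s, k0=1.0, k1=5.0):
    # Small near the sliding surface (less chattering), saturating at k0 + k1 for large |s|.
    return k0 + k1 * (np.abs(s) ** 2) / (1.0 + np.abs(s) ** 2)

def tde_control(q_ddot_des, s, tau_prev, q_ddot_prev, M_bar):
    # Time-delay estimation: approximate the unknown lumped dynamics by the
    # previous sample, H_hat ~= tau_prev - M_bar * q_ddot_prev.
    H_hat = tau_prev - M_bar * q_ddot_prev
    # tanh replaces the discontinuous sign() of classical sliding-mode control.
    return M_bar * q_ddot_des + H_hat - continuous_gain(s) * np.tanh(10 * s)

print(continuous_gain(np.array([0.0, 0.1, 1.0, 10.0])))
```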

27 pages, 10314 KiB  
Article
Immersive Teleoperation via Collaborative Device-Agnostic Interfaces for Smart Haptics: A Study on Operational Efficiency and Cognitive Overflow for Industrial Assistive Applications
by Fernando Hernandez-Gobertti, Ivan D. Kudyk, Raul Lozano, Giang T. Nguyen and David Gomez-Barquero
Sensors 2025, 25(13), 3993; https://doi.org/10.3390/s25133993 - 26 Jun 2025
Viewed by 476
Abstract
This study presents a novel investigation into immersive teleoperation systems using collaborative, device-agnostic interfaces for advancing smart haptics in industrial assistive applications. The research focuses on evaluating the quality of experience (QoE) of users interacting with a teleoperation system comprising a local robotic arm, a robot gripper, and heterogeneous remote tracking and haptic feedback devices. By employing a modular device-agnostic framework, the system supports flexible configurations, including one-user-one-equipment (1U-1E), one-user-multiple-equipment (1U-ME), and multiple-users-multiple-equipment (MU-ME) scenarios. The experimental set-up involves participants manipulating predefined objects and placing them into designated baskets by following specified 3D trajectories. Performance is measured using objective QoE metrics, including temporal efficiency (time required to complete the task) and spatial accuracy (trajectory similarity to the predefined path). In addition, subjective QoE metrics are assessed through detailed surveys, capturing user perceptions of presence, engagement, control, sensory integration, and cognitive load. To ensure flexibility and scalability, the system integrates various haptic configurations, including (1) a Touch kinaesthetic device for precision tracking and grounded haptic feedback, (2) a DualSense tactile joystick as both a tracker and mobile haptic device, (3) a bHaptics DK2 vibrotactile glove with a camera tracker, and (4) a SenseGlove Nova force-feedback glove with VIVE trackers. The modular approach enables comparative analysis of how different device configurations influence user performance and experience. The results indicate that the objective QoE metrics varied significantly across device configurations, with the Touch and SenseGlove Nova set-ups providing the highest trajectory similarity and temporal efficiency. Subjective assessments revealed a strong correlation between presence and sensory integration, with users reporting higher engagement and control in scenarios utilizing force feedback mechanisms. Cognitive load varied across the set-ups, with more complex configurations (e.g., 1U-ME) requiring longer adaptation periods. This study contributes to the field by demonstrating the feasibility of a device-agnostic teleoperation framework for immersive industrial applications. It underscores the critical interplay between objective task performance and subjective user experience, providing actionable insights into the design of next-generation teleoperation systems. Full article
(This article belongs to the Special Issue Recent Development of Flexible Tactile Sensors and Their Applications)
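The two objective QoE metrics named above (spatial accuracy as trajectory similarity, temporal efficiency as completion time) can be illustrated with a simple sketch. The metric definition below, mean deviation from the nearest point of the reference path, is one plausible choice and an assumption, not the authors' exact formula.

```python
# Sketch: spatial accuracy as mean nearest-neighbour deviation from a reference 3D path.
import numpy as np

def trajectory_similarity(executed, reference):
    # executed: (N, 3) executed points; reference: (M, 3) predefined path points.
    d = np.linalg.norm(executed[:, None, :] - reference[None, :, :], axis=-1)
    return d.min(axis=1).mean()   # lower = closer to the predefined trajectory

reference = np.stack([np.linspace(0, 1, 50), np.zeros(50), np.zeros(50)], axis=1)
executed  = reference + np.random.normal(0, 0.01, reference.shape)
print("mean deviation [m]:", trajectory_similarity(executed, reference))
```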

17 pages, 5666 KiB  
Article
Mechatronic and Robotic Systems Utilizing Pneumatic Artificial Muscles as Actuators
by Željko Šitum, Juraj Benić and Mihael Cipek
Inventions 2025, 10(4), 44; https://doi.org/10.3390/inventions10040044 - 23 Jun 2025
Viewed by 405
Abstract
This article presents a series of innovative systems developed through student laboratory projects, comprising two autonomous vehicles, a quadrupedal walking robot, an active ankle-foot orthosis, a ball-on-beam balancing mechanism, a ball-on-plate system, and a manipulator arm, all actuated by pneumatic artificial muscles (PAMs). Due to their flexibility, low weight, and compliance, fluidic muscles demonstrate substantial potential for integration into various mechatronic systems, robotic platforms, and manipulators. Their capacity to generate smooth and adaptive motion is particularly advantageous in applications requiring natural and human-like movements, such as rehabilitation technologies and assistive devices. Despite the inherent challenges associated with nonlinear behavior in PAM-actuated control systems, their biologically inspired design remains promising for a wide range of future applications. Potential domains include industrial automation, the automotive and aerospace sectors, as well as sports equipment, medical assistive devices, entertainment systems, and animatronics. The integration of self-constructed laboratory systems powered by PAMs into control systems education provides a comprehensive pedagogical framework that merges theoretical instruction with practical implementation. This methodology enhances the skillset of future engineers by deepening their understanding of core technical principles and equipping them to address emerging challenges in engineering practice. Full article
(This article belongs to the Section Inventions and Innovation in Advanced Manufacturing)
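The nonlinear behaviour of PAM actuators mentioned above is often captured with a simple static force approximation for McKibben-type muscles (Chou-Hannaford style). The sketch below uses example geometry and pressure values that are not taken from the article; it is included only to show why PAM force varies strongly with contraction.

```python
# Commonly used static force approximation for a McKibben-type pneumatic muscle.
import numpy as np

def pam_force(pressure, contraction, d0=0.01, theta0=np.radians(25)):
    # pressure [Pa], contraction ratio in [0, ~0.3], d0 initial diameter [m],
    # theta0 initial braid angle [rad]. Parameters are illustrative placeholders.
    a = 3.0 / np.tan(theta0) ** 2
    b = 1.0 / np.sin(theta0) ** 2
    return (np.pi * d0 ** 2 * pressure / 4.0) * (a * (1 - contraction) ** 2 - b)

for eps in (0.0, 0.1, 0.2):
    print(f"eps={eps:.1f}  F={pam_force(4e5, eps):.1f} N")
```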

17 pages, 18945 KiB  
Article
Collaborative Robot Control Based on Human Gaze Tracking
by Francesco Di Stefano, Alice Giambertone, Laura Salamina, Matteo Melchiorre and Stefano Mauro
Sensors 2025, 25(10), 3103; https://doi.org/10.3390/s25103103 - 14 May 2025
Viewed by 595
Abstract
Gaze tracking is gaining relevance in collaborative robotics as a means to enhance human–machine interaction by enabling intuitive and non-verbal communication. This study explores the integration of human gaze into collaborative robotics by demonstrating the possibility of controlling a robotic manipulator with a practical and non-intrusive setup made up of a vision system and gaze-tracking software. After presenting a comparison between the major available systems on the market, OpenFace 2.0 was selected as the primary gaze-tracking software and integrated with a UR5 collaborative robot through a MATLAB-based control framework. Validation was conducted through real-world experiments, analyzing the effects of raw and filtered gaze data on system accuracy and responsiveness. The results indicate that gaze tracking can effectively guide robot motion, though signal processing significantly impacts responsiveness and control precision. This work establishes a foundation for future research on gaze-assisted robotic control, highlighting its potential benefits and challenges in enhancing human–robot collaboration. Full article
(This article belongs to the Special Issue Advanced Robotic Manipulators and Control Applications)
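To illustrate the kind of signal processing whose impact the abstract discusses, the sketch below smooths raw gaze angles with an exponential moving average and maps them to a planar velocity command with a dead-band. The filter constant, gain, and mapping are assumptions for illustration, not the authors' MATLAB framework.

```python
# Sketch: smoothing raw gaze angles and mapping them to a robot velocity command.
import numpy as np

class GazeFilter:
    def __init__(self, alpha=0.2):
        self.alpha = alpha            # lower alpha = smoother but laggier
        self.state = None

    def update(self, gaze_angles):    # gaze_angles: (yaw, pitch) in radians
        g = np.asarray(gaze_angles, dtype=float)
        self.state = g if self.state is None else self.alpha * g + (1 - self.alpha) * self.state
        return self.state

def gaze_to_velocity(yaw, pitch, gain=0.1, deadband=np.radians(2)):
    # Dead-band avoids drift from small fixational eye movements.
    vx = gain * yaw if abs(yaw) > deadband else 0.0
    vy = gain * pitch if abs(pitch) > deadband else 0.0
    return vx, vy

f = GazeFilter()
print(gaze_to_velocity(*f.update((np.radians(10), np.radians(-4)))))
```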

23 pages, 4734 KiB  
Article
Optimal Viewpoint Assistance for Cooperative Manipulation Using D-Optimality
by Kyosuke Kameyama, Kazuki Horie and Kosuke Sekiyama
Sensors 2025, 25(10), 3002; https://doi.org/10.3390/s25103002 - 9 May 2025
Viewed by 631
Abstract
This study proposes a D-optimality-based viewpoint selection method to improve visual assistance for a manipulator by optimizing camera placement. The approach maximizes the information gained from visual observations, reducing uncertainty in object recognition and localization. A mathematical framework utilizing D-optimality criteria is developed to determine the most informative camera viewpoint in real time. The proposed method is integrated into a robotic system where a mobile robot adjusts its viewpoint to support the manipulator in grasping and placing tasks. Experimental evaluations demonstrate that D-optimality-based viewpoint selection improves recognition accuracy and task efficiency. The results suggest that optimal viewpoint planning can enhance perception robustness, leading to better manipulation performance. Although tested in structured environments, the approach has the potential to be extended to dynamic or unstructured settings. This research contributes to the integration of viewpoint optimization in vision-based robotic cooperation, with promising applications in industrial automation, service robotics, and human–robot collaboration. Full article
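The D-optimality criterion at the core of the abstract can be sketched compactly: among candidate viewpoints, pick the one maximizing the determinant of the Fisher information matrix (equivalently, minimizing the volume of the estimate's uncertainty ellipsoid). The candidate Jacobians and noise covariance below are synthetic placeholders, not the paper's observation model.

```python
# Sketch: choose the viewpoint whose observation Jacobian maximizes det(J^T W J).
import numpy as np

def d_optimal_viewpoint(jacobians, noise_cov):
    # jacobians: list of (m, n) observation Jacobians, one per candidate viewpoint.
    W = np.linalg.inv(noise_cov)
    scores = [np.linalg.det(J.T @ W @ J) for J in jacobians]
    return int(np.argmax(scores)), scores

rng = np.random.default_rng(0)
candidates = [rng.normal(size=(4, 3)) for _ in range(5)]   # 5 candidate viewpoints
best, scores = d_optimal_viewpoint(candidates, np.eye(4) * 0.01)
print("best viewpoint index:", best)
```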

19 pages, 8698 KiB  
Article
The Design of a Vision-Assisted Dynamic Antenna Positioning Radio Frequency Identification-Based Inventory Robot Utilizing a 3-Degree-of-Freedom Manipulator
by Abdussalam A. Alajami and Rafael Pous
Sensors 2025, 25(8), 2418; https://doi.org/10.3390/s25082418 - 11 Apr 2025
Viewed by 806
Abstract
In contemporary warehouse logistics, the demand for efficient and precise inventory management is increasingly critical, yet traditional Radio Frequency Identification (RFID)-based systems often falter due to static antenna configurations that limit tag detection efficacy in complex environments with diverse object arrangements. Addressing this challenge, we introduce an advanced RFID-based inventory robot that integrates a 3-degree-of-freedom (3DOF) manipulator with vision-assisted dynamic antenna positioning to optimize tag detection performance. This autonomous system leverages a pretrained You Only Look Once (YOLO) model to detect objects in real time, employing forward and inverse kinematics to dynamically orient the RFID antenna toward identified items. The manipulator subsequently executes a tailored circular scanning motion, ensuring comprehensive coverage of each object’s surface and maximizing RFID tag readability. To evaluate the system’s efficacy, we conducted a comparative analysis of three scanning strategies: (1) a conventional fixed antenna approach, (2) a predefined path strategy with preprogrammed manipulator movements, and (3) our proposed vision-assisted dynamic positioning method. Experimental results, derived from controlled laboratory tests and Gazebo-based simulations, unequivocally demonstrate the superiority of the dynamic positioning approach. This method achieved detection rates of up to 98.0% across varied shelf heights and spatial distributions, significantly outperforming the fixed antenna (21.6%) and predefined path (88.5%) strategies, particularly in multitiered and cluttered settings. Furthermore, the approach balances energy efficiency, consuming 22.1 Wh per mission—marginally higher than the fixed antenna (18.2 Wh) but 9.8% less than predefined paths (24.5 Wh). By overcoming the limitations of static and preprogrammed systems, our robot offers a scalable, adaptable solution poised to elevate warehouse automation in the era of Industry 4.0. Full article
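As an illustration of the forward/inverse kinematics step used to orient the antenna toward a detected item, here is a sketch of a simple 3DOF solution (base yaw plus a two-link planar arm). Link lengths and the target point are assumptions; the actual manipulator geometry and solver are described in the article itself.

```python
# Illustrative 3DOF inverse kinematics: aim an antenna at a 3D target point.
import numpy as np

def ik_3dof(target, l1=0.3, l2=0.25):
    x, y, z = target
    base_yaw = np.arctan2(y, x)                   # joint 1: rotate toward target
    r, h = np.hypot(x, y), z                       # planar reach problem for joints 2-3
    d = (r ** 2 + h ** 2 - l1 ** 2 - l2 ** 2) / (2 * l1 * l2)
    d = np.clip(d, -1.0, 1.0)                      # guard against unreachable targets
    elbow = np.arccos(d)
    shoulder = np.arctan2(h, r) - np.arctan2(l2 * np.sin(elbow), l1 + l2 * np.cos(elbow))
    return base_yaw, shoulder, elbow

print([round(np.degrees(a), 1) for a in ik_3dof((0.35, 0.10, 0.20))])
```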

22 pages, 10948 KiB  
Article
Method of Forearm Muscles 3D Modeling Using Robotic Ultrasound Scanning
by Vladislava Kapravchuk, Albert Ishkildin, Andrey Briko, Anna Borde, Maria Kodenko, Anastasia Nasibullina and Sergey Shchukin
Sensors 2025, 25(7), 2298; https://doi.org/10.3390/s25072298 - 4 Apr 2025
Viewed by 1268
Abstract
The accurate assessment of muscle morphology and function is crucial for medical diagnostics, rehabilitation, and biomechanical research. This study presents a novel methodology for constructing volumetric models of forearm muscles based on three-dimensional ultrasound imaging integrated with a robotic system to ensure precise probe positioning and controlled pressure application. The proposed ultrasound scanning approach combined with a collaborative six-degrees-of-freedom robotic manipulator enabled reproducible and high-resolution imaging of muscle structures in both relaxed and contracted states. A custom-built phantom, acoustically similar to biological tissues, was developed to validate the method. The cross-sectional area of the muscles and the coordinates of the center of mass of the sections, as well as the volume and center of gravity of each muscle, were calculated for each cross-section of the reconstructed forearm muscle models at contraction. The method’s feasibility was confirmed by comparing the reconstructed volumes with anatomical data and phantom measurements. This study highlights the advantages of robotic-assisted ultrasound imaging for non-invasive muscle assessment and suggests its potential applications in neuromuscular diagnostics, prosthetics design, and rehabilitation monitoring. Full article
(This article belongs to the Special Issue 3D Sensing and Imaging for Biomedical Investigations: Second Edition)
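The per-slice quantities named above (cross-sectional area, centroid, and volume summed over slices) can be sketched directly from binary segmentation masks. Pixel spacing, slice thickness, and the toy mask below are placeholders, not the study's scan parameters.

```python
# Sketch: cross-sectional area, centroid, and volume from segmented ultrasound slices.
import numpy as np

def slice_area_centroid(mask, px_mm=0.2):
    ys, xs = np.nonzero(mask)                      # pixels labelled as muscle
    area = mask.sum() * px_mm ** 2                 # mm^2
    centroid = (xs.mean() * px_mm, ys.mean() * px_mm) if len(xs) else (np.nan, np.nan)
    return area, centroid

def volume_from_slices(masks, px_mm=0.2, slice_mm=1.0):
    return sum(m.sum() * px_mm ** 2 for m in masks) * slice_mm   # mm^3

mask = np.zeros((100, 100), dtype=np.uint8)
mask[30:70, 40:80] = 1                             # toy rectangular "muscle" region
print(slice_area_centroid(mask), volume_from_slices([mask] * 50))
```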

9 pages, 1287 KiB  
Proceeding Paper
Pioneering Sustainable Space Ecosystems Through Intelligent Robotics and Collaborative Effort
by Amrita Suresh, Mehmed Yüksel, Manuel Meder, Raúl Domínguez and Wiebke Brinkmann
Eng. Proc. 2025, 90(1), 76; https://doi.org/10.3390/engproc2025090076 - 26 Mar 2025
Viewed by 654
Abstract
Humanity’s long-term presence in space entails the establishment of sustainable space ecosystems in both orbital and planetary environments. Sustainable ecosystems are characterized by minimal resource depletion, reduction in space debris, reusable and renewable materials and components, among other factors. However, achieving sustainability in space is challenging due to limited resources, harsh environmental conditions, and the need for continuous operation. Intelligent robotic systems with diverse manipulation and locomotion capabilities using artificial intelligence (AI) are capable of In-Situ Resource Utilization and carrying out autonomous production and maintenance operations. Modular reconfigurable systems and heterogeneous teams allow for optimized task allocation strategies, thus expanding the task domain. Efficient human–robot interaction methods can assist astronauts and future space inhabitants in their routine tasks as well as during critical missions. We also emphasize the importance of collaboration among space agencies, roboticists and AI scientists for shared resources and knowledge, and the development of technology standards and interfaces for systems collaboration. Such cooperative efforts are vital to ensure the long-term viability of space exploration and settlement. This paper explores how AI-driven autonomous robots are being developed at the German Research Center for Artificial Intelligence and University of Bremen (Germany) to address these challenges. Full article

21 pages, 15017 KiB  
Article
End-to-End Intelligent Adaptive Grasping for Novel Objects Using an Assistive Robotic Manipulator
by Zhangchi Ding, Amirhossein Jabalameli, Mushtaq Al-Mohammed and Aman Behal
Machines 2025, 13(4), 275; https://doi.org/10.3390/machines13040275 - 26 Mar 2025
Viewed by 577
Abstract
This paper presents the design and implementation of the motion controller and adaptive interface for the second generation of the UCF-MANUS intelligent assistive robotic manipulator system. Based on extensive user studies of the system, several features were implemented in the interface that could reduce the complexity of the human–robot interaction while also compensating for the deficits in different human factors, such as working memory, response inhibition, processing speed, depth perception, spatial awareness, and contrast sensitivity. To effectively and safely control the robotic arm, we designed several new features, including an adaptive human–robot interaction framework. To provide the user with a less complex and safer interaction with the robot, we added new functionalities such as ‘One-click mode’, ‘Move suggestion mode’, and ‘Gripper Control Assistant’. Furthermore, to equip our assistive robotic system with an adaptive User Interface, we designed and implemented compensators such as ‘Contrast Enhancement’, ‘Object Proximity Velocity Reduction’, and ‘Orientation Indicator’. Results from a multitude of experiments show that the system is indeed robust, safe, and computationally efficient in addition to addressing the user’s highly desired capabilities. Full article
(This article belongs to the Special Issue Advances in Assistive Robotics)
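To illustrate the idea behind a compensator like 'Object Proximity Velocity Reduction', the sketch below scales the commanded end-effector velocity down as the gripper approaches the target. The scaling law, thresholds, and minimum speed are assumptions chosen for illustration, not the UCF-MANUS implementation.

```python
# Sketch: slow the end effector as it nears the target object.
import numpy as np

def proximity_scaled_velocity(v_cmd, distance, d_slow=0.15, d_stop=0.02, v_min=0.1):
    # Full speed beyond d_slow [m], linear ramp down toward v_min near d_stop [m].
    if distance >= d_slow:
        scale = 1.0
    else:
        scale = v_min + (1 - v_min) * max(distance - d_stop, 0.0) / (d_slow - d_stop)
    return np.asarray(v_cmd) * scale

for d in (0.30, 0.10, 0.03):
    print(d, proximity_scaled_velocity([0.05, 0.0, 0.02], d))
```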

22 pages, 13329 KiB  
Article
Intelligent Human–Robot Interaction Assistant for Collaborative Robots
by Oleksandr Sokolov, Vladyslav Andrusyshyn, Angelina Iakovets and Vitalii Ivanov
Electronics 2025, 14(6), 1160; https://doi.org/10.3390/electronics14061160 - 16 Mar 2025
Viewed by 908
Abstract
Collaborative robots are rapidly gaining popularity and are projected to occupy 33% of the industrial robot market by 2030 due to their ability to adapt to dynamic environments where traditional automation approaches lack flexibility. Available solutions on the market tend to be generic and do not consider the specifics of a particular collaborative workplace, which creates barriers to developing human–robot interaction (HRI) interfaces. The proposed study developed a Collaborative Robotics Assistant (CobRA) system to address these challenges. Taking the workplace’s peculiarities into account, this intelligent HRI system provides seamless programming for collaborative robots directly in the workplace. CobRA combines machine vision and convolutional neural networks to detect objects in real time using a depth-sensing camera and uses a projector to visualize the control interface interactively. The system supports high-level commands such as object manipulation and placement by automating programming. The solution was tested in the SmartTechLab and in a programming environment, where it demonstrated significant efficiency gains, reducing errors and programming time compared to traditional methods. This development opens new perspectives for improving the safety and efficiency of human–robot interaction in dynamic industrial environments. Full article
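One small piece of a projector-based interface like the one described is re-projecting a detection's image coordinates into projector coordinates so the interface can be drawn next to the object. The homography values below stand in for a calibration result and are purely hypothetical; this is not the CobRA implementation.

```python
# Sketch: map a depth-camera detection to projector coordinates via a homography.
import numpy as np

H_cam_to_proj = np.array([[1.05, 0.02, 30.0],
                          [0.01, 1.07, 12.0],
                          [0.0,  0.0,  1.0]])      # assumed calibration result

def camera_to_projector(u, v, H=H_cam_to_proj):
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]

# Highlight a detected object's bounding-box centre on the workbench.
print(camera_to_projector(640, 360))
```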

22 pages, 6057 KiB  
Article
Enhancing Telexistence Control Through Assistive Manipulation and Haptic Feedback
by Osama Halabi, Mohammed Al-Sada, Hala Abourajouh, Myesha Hoque, Abdullah Iskandar and Tatsuo Nakajima
Appl. Sci. 2025, 15(3), 1324; https://doi.org/10.3390/app15031324 - 27 Jan 2025
Viewed by 1423
Abstract
The COVID-19 pandemic brought telepresence systems into the spotlight, yet manually controlling remote robots often proves ineffective for handling complex manipulation tasks. To tackle this issue, we present a machine learning-based assistive manipulation approach. This method identifies target objects and computes an inverse kinematic solution for grasping them. The system integrates the generated solution with the user’s arm movements across varying inverse kinematic (IK) fusion levels. Given the importance of maintaining a sense of body ownership over the remote robot, we examine how haptic feedback and assistive functions influence ownership perception and task performance. Our findings indicate that incorporating assistance and haptic feedback significantly enhances the control of the robotic arm in telepresence environments, leading to improved precision and shorter task completion times. This research underscores the advantages of assistive manipulation techniques and haptic feedback in advancing telepresence technology. Full article
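The 'IK fusion level' idea, blending the operator's arm motion with an autonomously computed inverse-kinematics solution, can be sketched as a simple interpolation over joint targets. The linear blending rule and joint values below are assumptions for illustration, not the paper's method.

```python
# Sketch: blend user-driven joint targets with an assistive IK grasping solution.
import numpy as np

def fuse_joint_targets(q_user, q_ik, fusion_level):
    # fusion_level = 0.0 -> pure teleoperation, 1.0 -> fully assistive IK solution.
    q_user, q_ik = np.asarray(q_user), np.asarray(q_ik)
    return (1.0 - fusion_level) * q_user + fusion_level * q_ik

q_user = [0.10, -0.45, 0.30, 1.20, 0.00, 0.50]
q_ik   = [0.05, -0.50, 0.35, 1.10, 0.05, 0.40]
for level in (0.0, 0.5, 1.0):
    print(level, fuse_joint_targets(q_user, q_ik, level))
```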

16 pages, 4491 KiB  
Article
Compensating the Symptomatic Increase in Plantarflexion Torque and Mechanical Work for Dorsiflexion in Patients with Spastic Paresis Using the “Hermes” Ankle–Foot Orthosis
by Karen E. Rodriguez Hernandez, Jurriaan H. de Groot, Eveline R. M. Grootendorst-Heemskerk, Frank Baas, Marjon Stijntjes, Sven K. Schiemanck, Frans C. T. van der Helm, Herman van der Kooij and Winfred Mugge
Prosthesis 2025, 7(1), 12; https://doi.org/10.3390/prosthesis7010012 - 27 Jan 2025
Viewed by 962
Abstract
Background/Objectives: “Hermes” is an ankle–foot orthosis (AFO) with negative stiffness designed to mechanically compensate the symptomatic increase in plantarflexion (PF) torque (i.e., ankle joint torque resistance to dorsiflexion, DF) in patients with spastic paresis. Methods: The effectiveness of “Hermes” was evaluated in twelve patients with chronic unilateral spastic paresis after stroke. Using a robotic ankle manipulator, stiffness at the ankle joint was assessed across three conditions: ankle without Hermes (A), ankle with Hermes applying no torque compensation (A+H0%), and ankle with Hermes tuned to compensate 100% of the patients’ ankle joint stiffness (A+H100%). Results: A significant reduction in PF torque was found with Hermes applying compensation (A+H100%) compared to the conditions without Hermes (A) and with Hermes applying no compensation (A+H0%). Furthermore, a significant reduction in positive dorsiflexion work was found with Hermes applying compensation (A+H100%) compared to the condition with Hermes applying no compensation (A+H0%). Hermes did not significantly contribute to additional PF torque or positive work when applying no compensation (A+H0%). Conclusions: The reductions in PF torque achieved with Hermes are comparable to those seen with repeated ankle stretching programs and ankle robot training. Thus, Hermes is expected to assist voluntary dorsiflexion and improve walking in patients with spastic paresis. Full article
(This article belongs to the Special Issue Recent Advances in Foot Prosthesis and Orthosis)
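To make the negative-stiffness compensation principle concrete, the sketch below computes an assistive dorsiflexion torque proportional to an estimated passive plantarflexion stiffness, with a compensation level of 0% or 100% mirroring the A+H0% and A+H100% conditions. The stiffness value, angles, and linear torque model are illustrative assumptions, not the Hermes design equations.

```python
# Sketch: negative-stiffness compensation of passive plantarflexion torque.
import numpy as np

def compensation_torque(theta, theta_rest, k_ankle, level=1.0):
    # theta, theta_rest in rad (dorsiflexion positive); k_ankle in Nm/rad.
    # level = 0.0 mimics the A+H0% condition, 1.0 the A+H100% condition.
    passive_pf_torque = k_ankle * (theta - theta_rest)   # resists dorsiflexion
    return level * passive_pf_torque                      # orthosis cancels this resistance

for level in (0.0, 1.0):
    print(level, compensation_torque(np.radians(10), 0.0, k_ankle=40.0, level=level))
```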

37 pages, 10225 KiB  
Article
Cloud/VPN-Based Remote Control of a Modular Production System Assisted by a Mobile Cyber–Physical Robotic System—Digital Twin Approach
by Georgian Simion, Adrian Filipescu, Dan Ionescu and Adriana Filipescu
Sensors 2025, 25(2), 591; https://doi.org/10.3390/s25020591 - 20 Jan 2025
Cited by 2 | Viewed by 1444
Abstract
This paper deals with a “digital twin” (DT) approach for processing, reprocessing, and scrapping (P/R/S) technology running on a modular production system (MPS) assisted by a mobile cyber–physical robotic system (MCPRS). The main hardware architecture consists of four line-shaped workstations (WSs) and a wheeled mobile robot (WMR) equipped with a robotic manipulator (RM) and a mobile visual servoing system (MVSS) mounted on the end effector. The system architecture integrates a hierarchical control system in which each of the four WSs in the MPS is controlled by a Programmable Logic Controller (PLC), all connected via Profibus DP to a central PLC. In addition to the Profibus connection between the four workstation PLCs and the main PLC, other devices are connected to the local networks, LAN Profinet and LAN Ethernet. Connections to the Internet, the Cloud, and a Virtual Private Network (VPN) are made via WAN Ethernet using Open Platform Communications Unified Architecture (OPC-UA). The overall system follows a DT approach that enables task planning through augmented reality (AR) and uses virtual reality (VR) for visualization through Synchronized Hybrid Petri Net (SHPN) simulation. Timed Petri Nets (TPNs) are used to control the processes within the MPS’s workstations, while Continuous Petri Nets (CPNs) handle the movement of the MCPRS. Task planning in AR enables users to interact with the system in real time, using AR technology to visualize and plan tasks. The SHPN in VR is a combination of TPNs and CPNs used in the virtual representation of the system to synchronize tasks between the MPS and the MCPRS. The workpiece (WP) visits the stations successively as it is moved along the line for processing. If the processed WP does not pass the quality test, it is taken from the last WS and transported by the MCPRS to the first WS, where it is considered for reprocessing or scrapping. Full article
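The Petri-net control mentioned above rests on a simple firing rule: a transition fires when its input places hold enough tokens (with timing delays gating when this may happen). The minimal sketch below shows the untimed firing step on an invented two-place example; the net structure is not taken from the paper, and delay handling is omitted for brevity.

```python
# Minimal sketch of a Petri-net firing step for a workstation processing transition.

def fire(marking, pre, post, transition):
    # marking: dict place -> tokens; pre/post: dict transition -> {place: weight}.
    needs = pre[transition]
    if all(marking[p] >= w for p, w in needs.items()):
        for p, w in needs.items():
            marking[p] -= w
        for p, w in post[transition].items():
            marking[p] = marking.get(p, 0) + w
        return True
    return False

marking = {"WP_at_WS1": 1, "WS1_idle": 1, "WP_processed": 0}
pre  = {"process_WS1": {"WP_at_WS1": 1, "WS1_idle": 1}}
post = {"process_WS1": {"WP_processed": 1, "WS1_idle": 1}}
print(fire(marking, pre, post, "process_WS1"), marking)
```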
