Robotics, Volume 12, Issue 6 (December 2023) – 26 articles

Cover Story: This work proposes a “Learning by Demonstration” framework based on Dynamic Movement Primitives (DMPs) that could be effectively adopted to plan complex activities in robotics. The approach uses Lie theory and integrates the exponential and logarithmic maps into the DMP equations; these maps convert any element of the Lie group SO(3) into an element of the tangent space so(3) and vice versa. Moreover, it includes a dynamic parameterization of the tangent space elements to manage the discontinuity of the logarithmic map. The proposed approach was tested on the Tiago robot in agricultural activities such as digging, seeding, irrigation, and harvesting. The obtained results (100% success rate) demonstrate the method’s high capability to manage orientation discontinuity.
18 pages, 5038 KiB  
Article
A Novel Control Architecture Based on Behavior Trees for an Omni-Directional Mobile Robot
by Rodrigo Bernardo, João M. C. Sousa, Miguel Ayala Botto and Paulo J. S. Gonçalves
Robotics 2023, 12(6), 170; https://doi.org/10.3390/robotics12060170 - 16 Dec 2023
Cited by 1 | Viewed by 1567
Abstract
Robotic systems are increasingly present in dynamic environments. This paper proposes a hierarchical control structure wherein a behavior tree (BT) is used to improve the flexibility and adaptability of an omni-directional mobile robot for point stabilization. Flexibility and adaptability are crucial at each level of the sense–plan–act loop to implement robust and effective robotic solutions in dynamic environments. The proposed BT combines high-level decision making and continuous execution monitoring while applying non-linear model predictive control (NMPC) for the point stabilization of an omni-directional mobile robot. The proposed control architecture can guide the mobile robot to any configuration within the workspace while satisfying state constraints (e.g., obstacle avoidance) and input constraints (e.g., motor limits). The effectiveness of the controller was validated through a set of realistic simulation scenarios and experiments in a real environment, where an industrial omni-directional mobile robot performed a point stabilization task with obstacle avoidance in a workspace. Full article
(This article belongs to the Topic Industrial Robotics: 2nd Volume)
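As an illustration of the control pattern described above (not the authors' implementation), the sketch below shows a minimal behavior tree in Python: a sequence checks an obstacle condition, then a fallback keeps ticking an NMPC-style "move to pose" action until a goal-reached condition succeeds. The node names, the scalar pose error, and the decay factor standing in for the NMPC solve are all hypothetical.

```python
# Minimal behavior-tree sketch (illustrative, not the authors' implementation):
# a sequence checks for obstacles, then a fallback keeps ticking an NMPC-style
# "move to pose" action until the goal-reached condition succeeds.
from enum import Enum

class Status(Enum):
    SUCCESS = 1
    FAILURE = 2
    RUNNING = 3

class Sequence:
    def __init__(self, children):
        self.children = children
    def tick(self, bb):
        for child in self.children:
            status = child.tick(bb)
            if status != Status.SUCCESS:
                return status                 # FAILURE or RUNNING halts the sequence
        return Status.SUCCESS

class Fallback:
    def __init__(self, children):
        self.children = children
    def tick(self, bb):
        for child in self.children:
            status = child.tick(bb)
            if status != Status.FAILURE:
                return status                 # first non-failing child wins
        return Status.FAILURE

class ObstacleClear:
    def tick(self, bb):
        # Condition node: fail if an obstacle is too close (placeholder check).
        return Status.SUCCESS if bb["obstacle_distance"] > 0.3 else Status.FAILURE

class AtGoal:
    def tick(self, bb):
        # Condition node: succeed once the pose error is small enough.
        return Status.SUCCESS if bb["pose_error"] < 0.05 else Status.FAILURE

class NMPCMoveToPose:
    def tick(self, bb):
        # Action node: one control step toward the goal; the decay below merely
        # stands in for solving the constrained NMPC problem.
        bb["pose_error"] *= 0.8
        return Status.RUNNING if bb["pose_error"] > 0.05 else Status.SUCCESS

blackboard = {"pose_error": 1.0, "obstacle_distance": 1.2}
tree = Sequence([ObstacleClear(), Fallback([AtGoal(), NMPCMoveToPose()])])
while tree.tick(blackboard) != Status.SUCCESS:
    pass
print("point stabilization finished, pose error =", round(blackboard["pose_error"], 3))
```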

26 pages, 5465 KiB  
Article
NOHAS: A Novel Orthotic Hand Actuated by Servo Motors and Mobile App for Stroke Rehabilitation
by Ebenezer Raj Selvaraj Mercyshalinie, Akash Ghadge, Nneka Ifejika and Yonas Tadesse
Robotics 2023, 12(6), 169; https://doi.org/10.3390/robotics12060169 - 8 Dec 2023
Viewed by 2440
Abstract
The rehabilitation process after the onset of a stroke primarily deals with assisting in regaining mobility, communication skills, swallowing function, and activities of daily living (ADLs). This entirely depends on the specific regions of the brain that have been affected by the stroke. Patients can learn how to utilize adaptive equipment, regain movement, and reduce muscle spasticity through certain repetitive exercises and therapeutic interventions. These exercises can be performed by wearing soft robotic gloves on the impaired extremity. For post-stroke rehabilitation, we have designed and characterized an interactive hand orthosis with tendon-driven finger actuation mechanisms actuated by servo motors, which consists of a fabric glove and force-sensitive resistors (FSRs) at the tip. The robotic device moves the user’s hand when operated via a mobile phone to replicate normal gripping behavior. In this paper, the characterization of finger movements in response to step input commands from a mobile app was carried out for each finger at the proximal interphalangeal (PIP), distal interphalangeal (DIP), and metacarpophalangeal (MCP) joints. In general, servo motor-based hand orthoses are energy-efficient; however, they generate noise during actuation. Here, we quantified the noise generated by servo motor actuation for each finger as well as when a group of fingers is simultaneously activated. To test ADL ability, we evaluated the device’s effectiveness in holding different objects from the Action Research Arm Test (ARAT) kit. Our device, a novel orthotic hand actuated by servo motors (NOHAS), was tested on ten healthy human subjects and showed an average success rate of 90% in grasping tasks. Our orthotic hand shows promise for helping post-stroke subjects recover because of its simplicity of use, lightweight construction, and carefully designed components. Full article
(This article belongs to the Special Issue AI for Robotic Exoskeletons and Prostheses)

23 pages, 8736 KiB  
Article
Emotional Experience in Human–Robot Collaboration: Suitability of Virtual Reality Scenarios to Study Interactions beyond Safety Restrictions
by Franziska Legler, Jonas Trezl, Dorothea Langer, Max Bernhagen, Andre Dettmann and Angelika C. Bullinger
Robotics 2023, 12(6), 168; https://doi.org/10.3390/robotics12060168 - 8 Dec 2023
Viewed by 1841
Abstract
Today’s research on fenceless human–robot collaboration (HRC) is challenged by a relatively slow development of safety features. Simultaneously, design recommendations for HRC are requested by the industry. To simulate HRC scenarios in advance, virtual reality (VR) technology can be utilized while ensuring safety. VR also allows researchers to study the effects of safety-restricted features like close distance during movements and events of robotic malfunctions. In this paper, we present a VR experiment with 40 participants collaborating with a heavy-load robot and compare the results to a similar real-world experiment to study transferability and validity. The participants’ proximity to the robot, interaction level, and occurring system failures were varied. State anxiety, trust, and intention to use were used as dependent variables, and valence and arousal values were assessed over time. Overall, state anxiety was low and trust and intention to use were high. Only simulated failures significantly increased state anxiety, reduced trust, and resulted in reduced valence and increased arousal. In comparison with the real-world experiment, non-significant differences in all dependent variables and a similar progression of valence and arousal were found during scenarios without system failures. Therefore, the suitability of applying VR in HRC research to study safety-restricted features can be supported; however, further research should examine transferability for high-intensity emotional experiences. Full article
(This article belongs to the Special Issue Collection in Honor of Women's Contribution in Robotics)

30 pages, 5439 KiB  
Article
Evaluating the Performance of Mobile-Convolutional Neural Networks for Spatial and Temporal Human Action Recognition Analysis
by Stavros N. Moutsis, Konstantinos A. Tsintotas, Ioannis Kansizoglou and Antonios Gasteratos
Robotics 2023, 12(6), 167; https://doi.org/10.3390/robotics12060167 - 8 Dec 2023
Viewed by 1823
Abstract
Human action recognition is a computer vision task that identifies how a person or a group acts on a video sequence. Various methods that rely on deep-learning techniques, such as two- or three-dimensional convolutional neural networks (2D-CNNs, 3D-CNNs), recurrent neural networks (RNNs), and vision transformers (ViT), have been proposed to address this problem over the years. Motivated by the fact that most of the used CNNs in human action recognition present high complexity, and the necessity of implementations on mobile platforms that are characterized by restricted computational resources, in this article, we conduct an extensive evaluation protocol over the performance metrics of five lightweight architectures. In particular, we examine how these mobile-oriented CNNs (viz., ShuffleNet-v2, EfficientNet-b0, MobileNet-v3, and GhostNet) execute in spatial analysis compared to a recent tiny ViT, namely EVA-02-Ti, and a higher computational model, ResNet-50. Our models, previously trained on ImageNet and BU101, are measured for their classification accuracy on HMDB51, UCF101, and six classes of the NTU dataset. The average and max scores, as well as the voting approaches, are generated through three and fifteen RGB frames of each video, while two different rates for the dropout layers were assessed during the training. Last, a temporal analysis via multiple types of RNNs that employ features extracted by the trained networks is examined. Our results reveal that EfficientNet-b0 and EVA-02-Ti surpass the other mobile-CNNs, achieving comparable or superior performance to ResNet-50. Full article
(This article belongs to the Special Issue Towards Socially Intelligent Robots)
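The abstract mentions average-score, max-score, and voting schemes computed over 3 or 15 RGB frames per video. A minimal sketch of these aggregation rules is given below; the per-frame class scores are random placeholders standing in for the outputs of the trained CNNs, and the class count merely echoes HMDB51.

```python
# Sketch of aggregating per-frame class scores into a video-level prediction,
# as in average/max-score and majority-voting schemes over sampled RGB frames.
# The frame scores below are random placeholders standing in for CNN outputs.
import numpy as np

rng = np.random.default_rng(0)
num_frames, num_classes = 15, 51                        # e.g., 15 frames, HMDB51 classes
frame_scores = rng.random((num_frames, num_classes))    # softmax-like per-frame scores

avg_pred = frame_scores.mean(axis=0).argmax()           # average-score rule
max_pred = frame_scores.max(axis=0).argmax()            # max-score rule
votes = frame_scores.argmax(axis=1)                     # per-frame predictions
vote_pred = np.bincount(votes, minlength=num_classes).argmax()   # majority vote

print(avg_pred, max_pred, vote_pred)
```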

17 pages, 4938 KiB  
Article
Robot Learning by Demonstration with Dynamic Parameterization of the Orientation: An Application to Agricultural Activities
by Clemente Lauretti, Christian Tamantini, Hilario Tomè and Loredana Zollo
Robotics 2023, 12(6), 166; https://doi.org/10.3390/robotics12060166 - 7 Dec 2023
Cited by 3 | Viewed by 1502
Abstract
This work proposes a Learning by Demonstration framework based on Dynamic Movement Primitives (DMPs) that could be effectively adopted to plan complex activities in robotics, such as the ones to be performed in agricultural domains, and avoid orientation discontinuity during motion learning. The approach resorts to Lie theory and integrates into the DMP equations the exponential and logarithmic maps, which convert any element of the Lie group SO(3) into an element of the tangent space so(3) and vice versa. Moreover, it includes a dynamic parameterization for the tangent space elements to manage the discontinuity of the logarithmic map. The proposed approach was tested on the Tiago robot during the fulfillment of four agricultural activities, namely digging, seeding, irrigation, and harvesting. The obtained results were compared to those achieved by using the original formulation of the DMPs and demonstrated the high capability of the proposed method to manage orientation discontinuity (the success rate was 100% for all the tested poses). Full article
(This article belongs to the Special Issue Robotics and AI for Precision Agriculture)
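For readers unfamiliar with the exponential and logarithmic maps mentioned in the abstract, the following sketch implements them for SO(3) using Rodrigues' formula. It is illustrative only and omits the paper's dynamic parameterization that handles the logarithmic map's discontinuity near a rotation angle of pi.

```python
# Sketch of the SO(3) exponential and logarithmic maps used to move between
# rotation matrices and tangent-space vectors; illustrative only.
import numpy as np

def hat(w):
    """so(3) vector -> skew-symmetric matrix."""
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def exp_so3(w):
    """Tangent vector w (axis * angle) -> rotation matrix, via Rodrigues' formula."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    K = hat(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def log_so3(R):
    """Rotation matrix -> tangent vector; ill-conditioned as the angle nears pi,
    which is the discontinuity the paper's dynamic parameterization manages."""
    theta = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    if theta < 1e-12:
        return np.zeros(3)
    w_hat = (R - R.T) * theta / (2 * np.sin(theta))
    return np.array([w_hat[2, 1], w_hat[0, 2], w_hat[1, 0]])

w = np.array([0.1, -0.4, 0.3])
assert np.allclose(log_so3(exp_so3(w)), w)   # round trip away from the singularity
```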

16 pages, 40065 KiB  
Article
AgroCableBot: Reconfigurable Cable-Driven Parallel Robot for Greenhouse or Urban Farming Automation
by Andrés García-Vanegas, María J. García-Bonilla, Manuel G. Forero, Fernando J. Castillo-García and Antonio Gonzalez-Rodriguez
Robotics 2023, 12(6), 165; https://doi.org/10.3390/robotics12060165 - 1 Dec 2023
Cited by 1 | Viewed by 1850
Abstract
In this paper, a Cable-Driven Parallel Robot developed to automate repetitive and essential tasks in crop production in greenhouse and urban garden environments is introduced. The robot has a suspended configuration with five degrees of freedom, composed of a fixed platform (frame) and a moving platform known as the end-effector. To generate its movements and operations, eight cables are used, which move through eight pulley systems and are controlled by four winches. In addition, the robot is equipped with a seedbed that houses potted plants. Unlike conventional suspended cable robots, this robot incorporates four moving pulley systems in the frame, which significantly increases its workspace. The development of this type of robot requires precise control of the end-effector pose, which includes both the position and orientation of the robot extremity. To achieve this control, the analysis covers two fundamental aspects: kinematics and dynamics. In addition, an analysis of the effective workspace of the robot is carried out, taking into account the distribution of tensions in the cables. The aim of this analysis is to verify the increase in the working area, which makes it possible to cover a larger crop area. The robot has been validated through simulations, where possible trajectories that the robot could follow depending on the tasks to be performed in the crop are presented. This work supports the feasibility of using this type of robotic system to automate specific agricultural processes, such as sowing, irrigation, and crop inspection. This contribution aims to improve crop quality, reduce the consumption of critical resources such as water and fertilizers, and establish such robots as technological tools in the field of modern agriculture. Full article
(This article belongs to the Special Issue Robotics and AI for Precision Agriculture)
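A minimal sketch of the basic cable-robot geometry is given below: the inverse kinematics of a suspended configuration reduces to computing each cable length as the distance between a frame pulley exit point and the corresponding platform attachment point. The four-cable layout and all coordinates are made-up values, not AgroCableBot's eight-cable, moving-pulley geometry.

```python
# Inverse-kinematics sketch for a suspended cable-driven robot: each cable
# length is the distance from its frame pulley exit point to its attachment
# point on the moving platform. All geometry below is made up for illustration.
import numpy as np

frame_anchors = np.array([                # pulley exit points on the fixed frame [m]
    [0, 0, 2], [2, 0, 2], [2, 2, 2], [0, 2, 2],
])
platform_points = np.array([              # attachment points in the platform frame [m]
    [-0.05, -0.05, 0], [0.05, -0.05, 0], [0.05, 0.05, 0], [-0.05, 0.05, 0],
])

def cable_lengths(position, yaw):
    """Cable lengths for a platform at `position` with heading `yaw` (rad)."""
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    attachments = position + platform_points @ R.T
    return np.linalg.norm(frame_anchors - attachments, axis=1)

print(cable_lengths(np.array([1.0, 1.0, 0.5]), yaw=0.1))
```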

30 pages, 5705 KiB  
Article
Length Modelling of Spiral Superficial Soft Strain Sensors Using Geodesics and Covering Spaces
by Abdullah Al-Azzawi, Peter Stadler, He Kong and Salah Sukkarieh
Robotics 2023, 12(6), 164; https://doi.org/10.3390/robotics12060164 - 1 Dec 2023
Viewed by 1708
Abstract
Piecewise constant curvature soft actuators can generate various types of movements. These actuators can undergo extension, bending, rotation, twist, or a combination of these. Proprioceptive sensing provides the ability to track their movement or estimate their state in 3D space. Several proprioceptive sensing solutions were developed using soft strain sensors. However, current mathematical models are only capable of modelling the length of the soft sensors when they are attached to actuators subjected to extension, bending, and rotation movements. Furthermore, these models are limited to modelling straight sensors and incapable of modelling spiral sensors. In this study, for both the spiral and straight sensors, we utilise concepts in geodesics and covering spaces to present a mathematical length model that includes twist. This study is limited to the Piecewise constant curvature actuators and demonstrates, among other things, the advantages of our model and the accuracy when including and excluding twist. We verify the model by comparing the results to a finite element analysis. This analysis involves multiple simulation scenarios designed specifically for the verification process. Finally, we validate the theoretical results with previously published experimental results. Then, we discuss the limitations and possible applications of our model using examples from the literature. Full article
(This article belongs to the Special Issue Editorial Board Members' Collection Series: "Soft Robotics")
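The covering-space idea behind the length model can be illustrated with the simplest case: unrolling the cylindrical surface of an undeformed actuator turns a superficial spiral into a straight segment, so its length follows from the Pythagorean theorem, with twist entering as an extra wrap angle. This toy sketch ignores bending and is not the paper's full piecewise-constant-curvature model.

```python
# Covering-space sketch: unrolling a cylinder of radius r into a plane turns a
# superficial spiral path into a straight segment, so its length follows from
# the Pythagorean theorem. Twist simply adds to the total wrap angle.
import numpy as np

def spiral_sensor_length(r, height, wrap_angle, twist_angle=0.0):
    """Length of a constant-pitch spiral on a cylinder (angles in radians)."""
    circumferential = r * (wrap_angle + twist_angle)   # horizontal leg after unrolling
    return np.hypot(circumferential, height)           # straight line in the cover

# A straight axial sensor (no wrap, no twist) just measures the height:
print(spiral_sensor_length(r=0.01, height=0.10, wrap_angle=0.0))        # 0.10
# One full turn around a 1 cm radius, 10 cm tall actuator:
print(spiral_sensor_length(r=0.01, height=0.10, wrap_angle=2*np.pi))    # ~0.118
```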

26 pages, 8960 KiB  
Article
Virtual Reality Teleoperation System for Mobile Robot Manipulation
by Bryan R. Galarza, Paulina Ayala, Santiago Manzano and Marcelo V. Garcia
Robotics 2023, 12(6), 163; https://doi.org/10.3390/robotics12060163 - 29 Nov 2023
Cited by 1 | Viewed by 2113
Abstract
Over the past few years, the industry has experienced significant growth, leading to what is now known as Industry 4.0. This advancement has been characterized by the automation of robots. Industries have embraced mobile robots to enhance efficiency in specific manufacturing tasks, aiming for optimal results and reducing human errors. Moreover, robots can perform tasks in areas inaccessible to humans, such as hard-to-reach zones or hazardous environments. However, the challenge lies in the lack of knowledge about the operation and proper use of the robot. This work presents the development of a teleoperation system using HTC Vive Pro 2 virtual reality goggles. This allows individuals to immerse themselves in a fully virtual environment to become familiar with the operation and control of the KUKA youBot robot. The virtual reality experience is created in Unity, and through this, robot movements are executed, followed by a connection to ROS (Robot Operating System). To prevent potential damage to the real robot, a simulation is conducted in Gazebo, facilitating the understanding of the robot’s operation. Full article
(This article belongs to the Special Issue Digital Twin-Based Human–Robot Collaborative Systems)

13 pages, 1457 KiB  
Article
Are Friendly Robots Trusted More? An Analysis of Robot Sociability and Trust
by Travis Kadylak, Megan A. Bayles and Wendy A. Rogers
Robotics 2023, 12(6), 162; https://doi.org/10.3390/robotics12060162 - 29 Nov 2023
Viewed by 1638
Abstract
Older individuals prefer to maintain their autonomy while maintaining social connection and engagement with their family, peers, and community. Though individuals can encounter barriers to these goals, socially assistive robots (SARs) hold the potential for promoting aging in place and independence. Such domestic robots must be trusted, easy to use, and capable of behaving within the scope of accepted social norms for successful adoption to scale. We investigated perceived associations between robot sociability and trust in domestic robot support for instrumental activities of daily living (IADLs). In our multi-study approach, we collected responses from adults aged 65 years and older using two separate online surveys (Study 1, N = 51; Study 2, N = 43). We assessed the relationship between perceived robot sociability and robot trust. Our results consistently demonstrated a strong positive relationship between perceived robot sociability and robot trust for IADL tasks. These data have design implications for promoting robot trust and acceptance of SARs for use in the home by older adults. Full article
(This article belongs to the Special Issue Human Factors in Human–Robot Interaction)

16 pages, 4168 KiB  
Article
DDPG-Based Adaptive Sliding Mode Control with Extended State Observer for Multibody Robot Systems
by Hamza Khan, Sheraz Ali Khan, Min Cheol Lee, Usman Ghafoor, Fouzia Gillani and Umer Hameed Shah
Robotics 2023, 12(6), 161; https://doi.org/10.3390/robotics12060161 - 26 Nov 2023
Cited by 1 | Viewed by 1562
Abstract
This research introduces a robust control design for multibody robot systems, incorporating sliding mode control (SMC) for robustness against uncertainties and disturbances. SMC achieves this by directing system states toward a predefined sliding surface for finite-time stability. However, the challenge arises in selecting controller parameters, specifically the switching gain, as it depends on the upper bounds of perturbations, including nonlinearities, uncertainties, and disturbances, impacting the system. Consequently, gain selection becomes challenging when system dynamics are unknown. To address this issue, an extended state observer (ESO) is integrated with SMC, resulting in SMCESO, which treats system dynamics and disturbances as perturbations and estimates them to compensate for their effects on the system response, ensuring robust performance. To further enhance system performance, deep deterministic policy gradient (DDPG) is employed to fine-tune SMCESO, utilizing both actual and estimated states as input states for the DDPG agent and reward selection. This training process enhances both tracking and estimation performance. Furthermore, the proposed method is compared with optimal PID, conventional SMC, and H∞ control in the presence of external disturbances and parameter variation. MATLAB/Simulink simulations confirm that, overall, SMCESO provides robust performance, especially with parameter variations, where other controllers struggle to converge the tracking error to zero. Full article
(This article belongs to the Special Issue Kinematics and Robot Design VI, KaRD2023)
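To make the SMC-plus-ESO idea concrete, here is a toy regulation example on a double integrator with an unknown sinusoidal disturbance: a third-order linear ESO estimates the lumped perturbation, and the sliding mode control law compensates for it so the switching gain can stay small. All gains, the plant, and the disturbance are made up; the paper's multibody dynamics and DDPG tuning are not reproduced.

```python
# Toy regulation example (illustrative; not the paper's controller or robot):
# sliding mode control of a double integrator, with a third-order linear
# extended state observer (ESO) estimating the lumped disturbance so that the
# switching gain can remain small. All gains and dynamics are made up.
import numpy as np

dt, T = 1e-3, 5.0
lam, k, omega = 5.0, 2.0, 100.0            # surface slope, switching gain, ESO bandwidth
x, xdot = 1.0, 0.0                          # plant state, to be regulated to zero
z1, z2, z3 = x, 0.0, 0.0                    # ESO estimates of x, xdot, and the disturbance

for step in range(int(T / dt)):
    t = step * dt
    d = 2.0 * np.sin(np.pi * t)             # unknown disturbance acting on the plant

    # Sliding surface on the estimated states; smooth switching term plus
    # ESO-based disturbance compensation.
    s = z2 + lam * z1
    u = -lam * z2 - z3 - k * np.tanh(s / 0.05)

    # ESO update (observer poles placed at -omega, Euler-discretized).
    e = x - z1
    z1 += dt * (z2 + 3 * omega * e)
    z2 += dt * (z3 + u + 3 * omega**2 * e)
    z3 += dt * (omega**3 * e)

    # Plant update: xddot = u + d (Euler integration).
    xdot += dt * (u + d)
    x += dt * xdot

print(f"final position: {x:.4f}, disturbance estimate: {z3:.3f}, true: {d:.3f}")
```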

27 pages, 2606 KiB  
Review
Industrial Robots in Mechanical Machining: Perspectives and Limitations
by Mantas Makulavičius, Sigitas Petkevičius, Justė Rožėnė, Andrius Dzedzickis and Vytautas Bučinskas
Robotics 2023, 12(6), 160; https://doi.org/10.3390/robotics12060160 - 24 Nov 2023
Cited by 1 | Viewed by 2637
Abstract
Recently, the need to produce from soft materials or components in extra-large sizes has appeared, requiring special solutions that are affordable using industrial robots. Industrial robots are suitable for such tasks due to their flexibility, accuracy, and consistency in machining operations. However, robot implementation faces some limitations, such as a huge variety of materials and tools, low adaptability to environmental changes, flexibility issues, a complicated tool path preparation process, and challenges in quality control. Industrial robotics applications include cutting, milling, drilling, and grinding procedures on various materials, including metal, plastics, and wood. Advanced robotics technologies involve the latest advances in robotics, including integrating sophisticated control systems, sensors, data fusion techniques, and machine learning algorithms. These innovations enable robots to adapt better and interact with their environment, ultimately increasing their accuracy. The main focus of this study is to cover the most common industrial robotic machining processes and to identify how specific advanced technologies can improve their performance. In most of the studied literature, the primary research objective across all operations is to enhance the stiffness of the robotic arm’s structure. Some publications propose approaches for planning the robot’s posture or tool orientation. In contrast, others focus on optimizing machining parameters through the utilization of advanced control and computation, including machine learning methods with the integration of collected sensor data. Full article
(This article belongs to the Special Issue The State-of-the-Art of Robotics in Europe)

15 pages, 5905 KiB  
Article
Minimum Energy Utilization Strategy for Fleet of Autonomous Robots in Urban Waste Management
by Valeria Bladinieres Justo, Abhishek Gupta, Tobias Fritz Umland and Dietmar Göhlich
Robotics 2023, 12(6), 159; https://doi.org/10.3390/robotics12060159 - 23 Nov 2023
Viewed by 1514
Abstract
Many service robots have to operate in a variety of different Service Event Areas (SEAs). In the case of the waste collection robot MARBLE (Mobile Autonomous Robot for Litter Emptying), every SEA has characteristics like varying area and number of litter bins, with different distances between litter bins and uncertain filling levels of litter bins. Global positions of litter bins and the garbage drop-off positions used by MARBLEs after reaching their maximum capacity are defined as task-performing waypoints. We provide boundary delimitation for the characteristics that describe an SEA. The boundaries interpolate synergy between individual SEAs and the developed algorithms. This helps in determining which algorithm best suits an SEA, dependent on its characteristics. The developed route-planning methodologies are based on vehicle routing with simulated annealing (VRPSA) and knapsack problems (KSPs). VRPSA uses specific weighting based on route permutation operators, initial temperature, and the nearest neighbor approach. The KSP optimizes a route’s given capacity, in this case using smart litter bin (SLB) information. The game-theory KSP algorithm with SLB information and the KSP algorithm without SLB information perform better in SEAs smaller than 0.5 km² with fewer than 50 litter bins. When the standard deviation of the fill rate of litter bins is ≈10%, the KSP without SLB information is preferred, and if the standard deviation is between 25 and 40%, then the game-theory KSP is selected. Finally, the vehicle routing approach performs best in SEAs with an area of 0.55 km², 50–450 litter bins, and a fill rate of 10–40%. Full article
(This article belongs to the Special Issue Multi-robot Systems: State of the Art and Future Progress)
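The simulated-annealing ingredient of VRPSA can be illustrated with a small tour-improvement sketch over randomly placed litter bins; the 2-opt style move, cooling schedule, and instance size are arbitrary choices and do not reproduce the paper's permutation operators, capacity handling, or nearest-neighbor seeding.

```python
# Simulated-annealing tour sketch over randomly placed litter bins (illustrative;
# the paper's VRPSA additionally handles capacity, nearest-neighbour seeding,
# and its own permutation operators). Coordinates are random placeholders.
import numpy as np

rng = np.random.default_rng(1)
bins = rng.random((30, 2)) * 700            # 30 bins scattered over ~0.5 km^2 [m]

def tour_length(order):
    pts = bins[order]
    return np.linalg.norm(np.diff(pts, axis=0), axis=1).sum()

order = rng.permutation(len(bins))
best_len = tour_length(order)
temperature = 1000.0

while temperature > 1.0:
    i, j = sorted(rng.integers(0, len(bins), size=2))
    candidate = order.copy()
    candidate[i:j + 1] = order[i:j + 1][::-1]          # 2-opt style segment reversal
    delta = tour_length(candidate) - tour_length(order)
    if delta < 0 or rng.random() < np.exp(-delta / temperature):
        order = candidate                               # accept (always if better)
        best_len = min(best_len, tour_length(order))
    temperature *= 0.999                                # geometric cooling

print(f"route length after annealing: {best_len:.0f} m")
```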

33 pages, 82129 KiB  
Article
Implicit Shape Model Trees: Recognition of 3-D Indoor Scenes and Prediction of Object Poses for Mobile Robots
by Pascal Meißner and Rüdiger Dillmann
Robotics 2023, 12(6), 158; https://doi.org/10.3390/robotics12060158 - 23 Nov 2023
Viewed by 1561
Abstract
This article describes an approach for mobile robots to identify scenes in configurations of objects spread across dense environments. This identification is enabled by intertwining the robotic object search and the scene recognition on already detected objects. We propose “Implicit Shape Model (ISM) trees” as a scene model to solve these two tasks together. This article presents novel algorithms for ISM trees to recognize scenes and predict object poses. For us, scenes are sets of objects, some of which are interrelated by 3D spatial relations. Yet, many false positives may occur when using single ISMs to recognize scenes. To remedy this, we developed ISM trees, a hierarchical model of multiple interconnected ISMs. In this article, we contribute a recognition algorithm that allows the use of these trees for recognizing scenes. ISM trees should be generated from human demonstrations of object configurations. Since a suitable algorithm was unavailable, we created an algorithm for generating ISM trees. In previous work, we integrated the object search and scene recognition into an active vision approach that we called “Active Scene Recognition”. However, an efficient algorithm for making this integration effective through predicted object poses was unavailable. Physical experiments in this article show that the new algorithm we have contributed overcomes this problem. Full article
(This article belongs to the Special Issue Active Methods in Autonomous Navigation)

28 pages, 66441 KiB  
Article
Real-Time 3D Map Building in a Mobile Robot System with Low-Bandwidth Communication
by Alfin Junaedy, Hiroyuki Masuta, Kei Sawai, Tatsuo Motoyoshi and Noboru Takagi
Robotics 2023, 12(6), 157; https://doi.org/10.3390/robotics12060157 - 22 Nov 2023
Cited by 2 | Viewed by 1734
Abstract
This paper presents a new 3D map building technique using a combination of 2D SLAM and 3D objects that can be implemented on relatively low-cost hardware in real time. Recently, 3D visualization of the real world has become increasingly important. In robotics, it is not only required for intelligent control but also necessary to provide operators with intuitive visualization. SLAM is generally applied for this purpose, as it is considered a basic ability for truly autonomous robots. However, due to the increase in the amount of data, real-time processing is becoming a challenge. Therefore, in order to address this problem, we combine 2D data and 3D objects to create a new 3D map. The combination is simple yet robust, based on rotation, translation, and clustering techniques. The proposed method was applied to a mobile robot system for indoor observation. The results show that real-time performance can be achieved by the system. Furthermore, we also combine high- and low-bandwidth networks to deal with network problems that usually occur in wireless communication. Thus, robust wireless communication can be established, ensuring that missions can continue even if the system loses the main network. Full article
(This article belongs to the Special Issue Active Methods in Autonomous Navigation)

14 pages, 26990 KiB  
Article
Surgical Staplers in Laparoscopic Colectomy: A New Innovative Flexible Design Perspective
by Dhruva Khanzode, Ranjan Jha, Alexandra Thomieres, Emilie Duchalais and Damien Chablat
Robotics 2023, 12(6), 156; https://doi.org/10.3390/robotics12060156 - 21 Nov 2023
Viewed by 1962
Abstract
This article describes the development of a flexible surgical stapler mechanism, which serves as a fundamental tool for laparoscopic rectal cancer surgery, addressing the challenges posed by difficult types of accessibility using conventional instruments. The design of this mechanism involves the incorporation of a stacked tensegrity structure, in which a flexible beam serves as the central spine. To assess the stapler’s range of operation, an analysis of the workspace was conducted by examining collaborative Computed Tomography (CT) scan data obtained from different perspectives (Axial, Coronal, and Sagittal planes) at various intervals. By synthesizing kinematic equations, Hooke’s law was employed, taking into account rotational springs and bending moments. This allowed for precise control of the mechanism’s movements during surgical procedures in the rectal region. Additionally, the study examined the singularities and simulations of the tensegrity mechanism, considering the influential eyelet friction parameter. Notably, the research revealed that this friction parameter can alter the mechanism’s curvature, underscoring the importance of accurate analysis. To establish a correlation between the virtual and physical models, a preliminary design was presented, facilitating the identification of the friction parameter. Full article
(This article belongs to the Special Issue Robotics and Parallel Kinematic Machines)

26 pages, 12281 KiB  
Article
MonoGhost: Lightweight Monocular GhostNet 3D Object Properties Estimation for Autonomous Driving
by Ahmed El-Dawy, Amr El-Zawawi and Mohamed El-Habrouk
Robotics 2023, 12(6), 155; https://doi.org/10.3390/robotics12060155 - 17 Nov 2023
Viewed by 1817
Abstract
Effective environmental perception is critical for autonomous driving; thus, the perception system requires collecting 3D information on the surrounding objects, such as their dimensions, locations, and orientation in space. Recently, deep learning has been widely used in perception systems that convert image features from a camera into semantic information. This paper presents the MonoGhost network, a lightweight Monocular GhostNet deep learning technique for full 3D object property estimation from a single-frame monocular image. Unlike other techniques, the proposed MonoGhost network first estimates relatively reliable 3D object properties by relying on an efficient feature extractor. The proposed MonoGhost network estimates the orientation of the 3D object as well as the 3D dimensions of that object, resulting in reasonably small errors in the dimension estimations compared to other networks. These estimations, combined with the translation projection constraints imposed by the 2D detection coordinates, allow for the prediction of a robust and dependable Bird’s Eye View bounding box. The experimental outcomes prove that the proposed MonoGhost network performs better than other state-of-the-art networks in the Bird’s Eye View of the KITTI dataset benchmark by scoring 16.73% on the moderate class and 15.01% on the hard class while preserving real-time requirements. Full article
(This article belongs to the Special Issue Autonomous Navigation of Mobile Robots in Unstructured Environments)

20 pages, 8568 KiB  
Article
Applying Screw Theory to Design the Turmell-Bot: A Cable-Driven, Reconfigurable Ankle Rehabilitation Parallel Robot
by Julio Vargas-Riaño, Óscar Agudelo-Varela and Ángel Valera
Robotics 2023, 12(6), 154; https://doi.org/10.3390/robotics12060154 - 14 Nov 2023
Cited by 1 | Viewed by 1800
Abstract
The ankle is a complex joint with a high injury incidence. Rehabilitation Robotics applied to the ankle is a very active research field. We present the kinematics and statics of a cable-driven reconfigurable ankle rehabilitation robot. First, we studied how the tendons pull mid-foot bones around the talocrural and subtalar axes. We proposed a hybrid serial-parallel mechanism analogous to the ankle. Then, using screw theory, we synthesized a cable-driven robot with the human ankle in the closed-loop kinematics. We incorporated a draw-wire sensor to measure the axes’ pose and compute the product of exponentials. We also reconfigured the cables to balance the tension and pressure forces using the axis projection on the base and platform planes. Furthermore, we computed the workspace to show that the reconfigurable design fits several sizes. The data used are from anthropometry and statistics. Finally, we validated the robot’s statics with MuJoCo for various cable length groups corresponding to the axes’ range of motion. We suggested a platform adjusting system and an alignment method. The design is lightweight, and the cable-driven robot has advantages over rigid parallel robots, such as Stewart platforms. We will use compliant actuators for enhancing human–robot interaction. Full article
(This article belongs to the Special Issue Kinematics and Robot Design VI, KaRD2023)
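The product-of-exponentials formula mentioned in the abstract can be sketched as follows for two revolute screw axes in series; the axis directions, points, and joint angles are made-up stand-ins for the talocrural and subtalar axes.

```python
# Product-of-exponentials sketch (illustrative): chaining the screw motions of
# two revolute axes, as used when modelling the talocrural and subtalar joints.
# The axis directions, points, and angles below are made-up numbers.
import numpy as np

def hat(w):
    return np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])

def twist_exp(axis, point, theta):
    """Homogeneous transform exp([S] theta) for a revolute screw axis."""
    w = axis / np.linalg.norm(axis)
    v = -np.cross(w, point)                       # linear part of the screw
    K = hat(w)
    R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
    p = (np.eye(3) * theta + (1 - np.cos(theta)) * K
         + (theta - np.sin(theta)) * (K @ K)) @ v
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, p
    return T

# Two skewed anatomical-like axes (hypothetical) and joint angles in radians.
T = (twist_exp(np.array([0.97, 0.17, 0.0]), np.array([0.0, 0.0, 0.08]), 0.3) @
     twist_exp(np.array([0.6, 0.2, 0.77]), np.array([0.02, 0.0, 0.04]), 0.15))
platform_point = np.array([0.05, 0.0, 0.0, 1.0])  # a point on the foot platform
print(T @ platform_point)
```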

22 pages, 6561 KiB  
Article
Dual-Quaternion-Based SLERP MPC Local Controller for Safe Self-Driving of Robotic Wheelchairs
by Daifeng Wang, Wenjing Cao and Atsuo Takanishi
Robotics 2023, 12(6), 153; https://doi.org/10.3390/robotics12060153 - 13 Nov 2023
Viewed by 2073
Abstract
In this work, the motion control of a robotic wheelchair to achieve safe and intelligent movement in an unknown scenario is proposed. The primary objective is to develop a comprehensive framework for a robotic wheelchair that combines a global path planner and a model predictive control (MPC) local controller. The A* algorithm is employed to generate a global path. To ensure safe and directional motion for the wheelchair user, an MPC local controller is implemented, taking into account the via points generated by an approach that combines dual quaternions and spherical linear interpolation (SLERP). Dual quaternions are utilized for their simultaneous handling of rotation and translation, while SLERP enables smooth and continuous rotation interpolation by generating intermediate orientations between two specified orientations. The integration of these two methods optimizes navigation performance. The system is built on the Robot Operating System (ROS), with an electric wheelchair equipped with 3D-LiDAR serving as the hardware foundation. The experimental results reveal the effectiveness of the proposed method and demonstrate the ability of the robotic wheelchair to move safely from the initial position to the destination. This work contributes to the development of effective motion control for robotic wheelchairs, focusing on safety and improving the user experience when navigating in unknown environments. Full article
(This article belongs to the Special Issue Motion Trajectory Prediction for Mobile Robots)
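The SLERP component can be illustrated in isolation: given a start and a goal orientation, spherical linear interpolation produces the intermediate orientations used as via points. The sketch below works on plain unit quaternions for planar headings; the paper additionally packs translation into dual quaternions, which is not shown here.

```python
# SLERP sketch (illustrative): generating smooth intermediate orientations
# between a start and a goal heading, as used to produce via points.
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions q0 and q1."""
    q0, q1 = q0 / np.linalg.norm(q0), q1 / np.linalg.norm(q1)
    dot = np.dot(q0, q1)
    if dot < 0.0:                 # take the short way around
        q1, dot = -q1, -dot
    if dot > 0.9995:              # nearly parallel: fall back to linear interpolation
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(dot)
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

def yaw_quaternion(yaw):
    """Unit quaternion (w, x, y, z) for a rotation of `yaw` about the z axis."""
    return np.array([np.cos(yaw / 2), 0.0, 0.0, np.sin(yaw / 2)])

start, goal = yaw_quaternion(0.0), yaw_quaternion(np.pi / 2)
via_points = [slerp(start, goal, t) for t in np.linspace(0, 1, 5)]
for q in via_points:
    print(f"yaw = {2 * np.arctan2(q[3], q[0]):.3f} rad")
```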

17 pages, 3772 KiB  
Article
A Semiautonomous Control Strategy Based on Computer Vision for a Hand–Wrist Prosthesis
by Gianmarco Cirelli, Christian Tamantini, Luigi Pietro Cordella and Francesca Cordella
Robotics 2023, 12(6), 152; https://doi.org/10.3390/robotics12060152 - 13 Nov 2023
Cited by 1 | Viewed by 1767
Abstract
Alleviating the burden on amputees in terms of high-level control of their prosthetic devices is an open research challenge. EMG-based intention detection presents some limitations due to movement artifacts, fatigue, and stability. The integration of exteroceptive sensing can provide a valuable solution to overcome such limitations. In this paper, a novel semiautonomous control system (SCS) for wrist–hand prostheses using a computer vision system (CVS) is proposed and validated. The SCS integrates object detection, grasp selection, and wrist orientation estimation algorithms. By combining CVS with a simulated EMG-based intention detection module, the SCS guarantees reliable prosthesis control. Results show high accuracy in grasping and object classification (≥97%) at a fast frame analysis frequency (2.07 FPS). The SCS achieves an average angular estimation error ≤18° and stability ≤0.8° for the proposed application. Operative tests demonstrate the capabilities of the proposed approach to handle complex real-world scenarios and pave the way for future implementation on a real prosthetic device. Full article
(This article belongs to the Special Issue AI for Robotic Exoskeletons and Prostheses)
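A possible (entirely hypothetical) way to combine the exteroceptive and myoelectric channels described above is sketched below: detected object classes are mapped to grasp primitives, and a grasp is only triggered when a simulated EMG intention signal confirms it. The class-to-grasp table and thresholds are illustrative, not the paper's.

```python
# Grasp-selection sketch (illustrative): a detected object class is mapped to a
# grasp primitive, and the grasp is only executed once a (here simulated)
# EMG-based intention signal confirms it. The mapping below is hypothetical.
GRASP_FOR_CLASS = {
    "bottle": "cylindrical",
    "credit_card": "lateral",
    "apple": "spherical",
    "pen": "tripod",
}

def select_grasp(detections):
    """Pick the grasp for the most confident detection above a threshold."""
    usable = [d for d in detections if d["confidence"] >= 0.7]
    if not usable:
        return None
    best = max(usable, key=lambda d: d["confidence"])
    return GRASP_FOR_CLASS.get(best["label"], "power")   # default grasp

def control_step(detections, emg_intention_detected):
    grasp = select_grasp(detections)
    if grasp and emg_intention_detected:
        return f"pre-shape hand for {grasp} grasp"
    return "hold current posture"

print(control_step([{"label": "bottle", "confidence": 0.93}], emg_intention_detected=True))
```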

17 pages, 26506 KiB  
Article
Remote Instantaneous Power Consumption Estimation of Electric Vehicles from Satellite Information
by Franco Jorquera, Juan Estrada and Fernando Auat
Robotics 2023, 12(6), 151; https://doi.org/10.3390/robotics12060151 - 8 Nov 2023
Viewed by 1598
Abstract
Instantaneous Power Consumption (IPC) is relevant for understanding the autonomy and efficient energy usage of electric vehicles (EVs). However, effective vehicle management requires prior knowledge of whether a vehicle can complete a trajectory, necessitating an estimation of the IPC along it. This paper proposes an IPC estimation method for an EV based on satellite information. The methodology involves geolocation and georeferencing of the study area, trajectory planning, extracting altitude characteristics from the map to create an altitude profile, collecting terrain features, and ultimately calculating the IPC. The most accurate estimation was achieved on clay terrain, with a 5.43% error compared to measurements. For pavement and gravel terrains, 19.19% and 102.02% errors were obtained, respectively. This methodology provides IPC estimation on three different terrains using satellite information, which is corroborated with field experiments. This showcases its potential for EV management in industrial contexts. Full article
(This article belongs to the Section Agricultural and Field Robotics)
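A typical longitudinal power model of the kind such an estimation can build on is sketched below: grade resistance from the satellite-derived altitude profile, terrain-dependent rolling resistance, and aerodynamic drag, scaled by drivetrain efficiency. All coefficients and the altitude samples are made-up values, not the paper's.

```python
# Power-model sketch (illustrative, with made-up coefficients): instantaneous
# power from grade resistance (altitude profile), rolling resistance (terrain
# dependent), and aerodynamic drag, divided by the drivetrain efficiency.
import numpy as np

mass, g, rho = 1600.0, 9.81, 1.225        # vehicle mass [kg], gravity, air density
cd_area, efficiency = 0.75, 0.85          # drag coefficient * frontal area, drivetrain
rolling = {"pavement": 0.012, "gravel": 0.025, "clay": 0.04}   # hypothetical Cr values

def instantaneous_power(speed, grade_angle, terrain):
    """Traction power [W] at constant speed on a given grade and terrain."""
    grade_force = mass * g * np.sin(grade_angle)
    rolling_force = rolling[terrain] * mass * g * np.cos(grade_angle)
    drag_force = 0.5 * rho * cd_area * speed**2
    return (grade_force + rolling_force + drag_force) * speed / efficiency

# Altitude profile sampled along the planned trajectory (placeholder values).
distance = np.array([0.0, 50.0, 100.0, 150.0])       # [m]
altitude = np.array([120.0, 121.5, 121.0, 123.0])    # [m], e.g. from satellite data
grades = np.arctan(np.diff(altitude) / np.diff(distance))
ipc = [instantaneous_power(8.0, a, "clay") for a in grades]   # 8 m/s ~ 29 km/h
print([f"{p/1000:.1f} kW" for p in ipc])
```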

10 pages, 10751 KiB  
Article
Fabrication of Origami Soft Gripper Using On-Fabric 3D Printing
by Hana Choi, Tongil Park, Gyomin Hwang, Youngji Ko, Dohun Lee, Taeksu Lee, Jong-Oh Park and Doyeon Bang
Robotics 2023, 12(6), 150; https://doi.org/10.3390/robotics12060150 - 8 Nov 2023
Cited by 1 | Viewed by 2458
Abstract
In this work, we have presented a soft encapsulating gripper for gentle grasps. This was enabled by a series of soft origami patterns, such as the Yoshimura pattern, which was directly printed on fabric. The proposed gripper features a deformable body that enables safe interaction with its surroundings, gentle grasps of delicate and fragile objects, and encapsulated structures allowing for noninvasive enclosing. The gripper was fabricated by a direct 3D printing of soft materials on fabric. This allowed for the stiffness adjustment of gripper components and a simple fabrication process. We evaluated the grasping performance of the proposed gripper with several delicate and ultra-gentle objects. It was concluded that the proposed gripper could manipulate delicate objects from fruits to silicone jellyfishes and, therefore, have considerable potential for use as improved soft encapsulating grippers in agriculture and engineering fields. Full article
(This article belongs to the Special Issue Soft Robotics: Fusing Function with Structure)

25 pages, 7094 KiB  
Article
Design and Characterization of a Self-Aligning End-Effector Robot for Single-Joint Arm Movement Rehabilitation
by Prem Kumar Mathavan Jeyabalan, Aravind Nehrujee, Samuel Elias, M. Magesh Kumar, S. Sujatha and Sivakumar Balasubramanian
Robotics 2023, 12(6), 149; https://doi.org/10.3390/robotics12060149 - 7 Nov 2023
Cited by 1 | Viewed by 1738
Abstract
Traditional end-effector robots for arm rehabilitation are usually attached at the hand, primarily focusing on coordinated multi-joint training. Therapy at an individual joint level of the arm for severely impaired stroke survivors is not always possible with existing end-effector robots. The Arm Rehabilitation Robot (AREBO)—an end-effector robot—was designed to provide both single and multi-joint assisted training while retaining the advantages of traditional end-effector robots, such as ease of use, compactness and portability, and potential cost-effectiveness (compared to exoskeletons). This work presents the design, optimization, and characterization of AREBO for training single-joint movements of the arm. AREBO has three actuated and three unactuated degrees of freedom, allowing it to apply forces in any arbitrary direction at its endpoint and self-align to arbitrary orientations within its workspace. AREBO’s link lengths were optimized to maximize its workspace and manipulability. AREBO provides single-joint training in both unassisted and adaptive weight support modes using a human arm model to estimate the human arm’s kinematics and dynamics without using additional sensors. The characterization of the robot’s controller and the algorithm for estimating the human arm parameters were performed using a two degrees of freedom mechatronic model of the human shoulder joint. The results demonstrate that (a) the movements of the human arm can be estimated using a model of the human arm and robot’s kinematics, (b) AREBO has similar transparency to that of existing arm therapy robots in the literature, and (c) the adaptive weight support mode control can adapt to different levels of impairment in the arm. This work demonstrates how an appropriately designed end-effector robot can be used for single-joint training, which can be easily extended to multi-joint training. Future work will focus on the evaluation of the system on patients with any neurological condition requiring arm training. Full article
(This article belongs to the Section Neurorobotics)

17 pages, 3012 KiB  
Article
Improving the Grasping Force Behavior of a Robotic Gripper: Model, Simulations, and Experiments
by Giuseppe Vitrani, Simone Cortinovis, Luca Fiorio, Marco Maggiali and Rocco Antonio Romeo
Robotics 2023, 12(6), 148; https://doi.org/10.3390/robotics12060148 - 31 Oct 2023
Viewed by 1981
Abstract
Robotic grippers allow industrial robots to interact with the surrounding environment. However, grasping-force control architectures are still rare in common industrial grippers. In this context, one or more sensors (e.g., force or torque sensors) are necessary. However, the incorporation of such sensors might heavily affect the cost of the gripper, regardless of its type (e.g., pneumatic or electric). An alternative approach could be open-loop force control strategies. Hence, this work proposes an approach for optimizing the open-loop grasping force behavior of a robotic gripper. For this purpose, a specialized robotic gripper was built, as well as its mathematical model. The model was employed to predict the gripper performance during both static and dynamic force characterization, simulating grasping tasks under different experimental conditions. Both simulated and experimental results showed that by managing the mechanical properties of the finger–object contact interface (e.g., stiffness), the steady-state force variability could be greatly reduced, as well as undesired effects such as finger bouncing. Further, unlike most grasping approaches for industrial rigid grippers, which often involve high finger velocities, knowledge of the object’s size is not required. These results may pave the way toward conceiving cheaper and more reliable open-loop force control techniques for use in robotic grippers. Full article
(This article belongs to the Special Issue Advanced Grasping and Motion Control Solutions)

21 pages, 6754 KiB  
Article
Cooperative Grape Harvesting Using Heterogeneous Autonomous Robots
by Chris Lytridis, Christos Bazinas, Ioannis Kalathas, George Siavalas, Christos Tsakmakis, Theodoros Spirantis, Eftichia Badeka, Theodore Pachidis and Vassilis G. Kaburlasos
Robotics 2023, 12(6), 147; https://doi.org/10.3390/robotics12060147 - 28 Oct 2023
Cited by 3 | Viewed by 2236
Abstract
The development of agricultural robots is an increasingly popular research field aiming at addressing the widespread labor shortages in the farming industry and the ever-increasing food production demands. In many cases, multiple cooperating robots can be deployed in order to reduce task duration, perform an operation not possible with a single robot, or perform an operation more effectively. Building on previous results, this application paper deals with a cooperation strategy that allows two heterogeneous robots to cooperatively carry out grape harvesting, and its implementation is demonstrated. More specifically, the cooperative grape harvesting task involves two heterogeneous robots, where one robot (i.e., the expert) is assigned the grape harvesting task, whereas the second robot (i.e., the helper) is tasked with supporting the harvesting task by carrying the harvested grapes. The proposed cooperative harvesting methodology ensures safe and effective interactions between the robots. Field experiments have been conducted in order firstly to validate the effectiveness of the coordinated navigation algorithm and secondly to demonstrate the proposed cooperative harvesting method. The paper reports on the conclusions drawn from the field experiments, and recommendations for future enhancements are made. The potential of sophisticated as well as explainable decision-making based on logic for enhancing the cooperation of autonomous robots in agricultural applications is discussed in the context of mathematical lattice theory. Full article
(This article belongs to the Special Issue Robotics and AI for Precision Agriculture)

20 pages, 22238 KiB  
Article
An Autonomous Navigation Framework for Holonomic Mobile Robots in Confined Agricultural Environments
by Kosmas Tsiakas, Alexios Papadimitriou, Eleftheria Maria Pechlivani, Dimitrios Giakoumis, Nikolaos Frangakis, Antonios Gasteratos and Dimitrios Tzovaras
Robotics 2023, 12(6), 146; https://doi.org/10.3390/robotics12060146 - 28 Oct 2023
Cited by 3 | Viewed by 1943
Abstract
Due to the accelerated growth of the world’s population, food security and sustainable agricultural practices have become essential. The incorporation of Artificial Intelligence (AI)-enabled robotic systems in cultivation, especially in greenhouse environments, represents a promising solution, where the utilization of the confined infrastructure improves the efficacy and accuracy of numerous agricultural duties. In this paper, we present a comprehensive autonomous navigation architecture for holonomic mobile robots in greenhouses. Our approach utilizes the heating system rails to navigate through the crop rows using a single stereo camera for perception and a LiDAR sensor for accurate distance measurements. A finite state machine orchestrates the sequence of required actions, enabling fully automated task execution, while semantic segmentation provides essential cognition to the robot. Our approach has been evaluated in a real-world greenhouse using a custom-made robotic platform, showing its overall efficacy for automated inspection tasks in greenhouses. Full article
(This article belongs to the Special Issue Collection in Honor of Women's Contribution in Robotics)
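The finite state machine orchestrating the navigation sequence can be pictured with a minimal table-driven sketch; the states and transition events below are invented stand-ins for the perception-driven triggers used on the real platform.

```python
# Finite-state-machine sketch (illustrative): sequencing the actions of a
# rail-guided greenhouse robot. The states and transition events are made up
# and stand in for the perception-driven triggers described in the paper.
STATES = {
    "DRIVE_TO_ROW":   {"row_reached": "MOUNT_RAILS"},
    "MOUNT_RAILS":    {"on_rails": "INSPECT_ROW"},
    "INSPECT_ROW":    {"row_end_reached": "DISMOUNT_RAILS"},
    "DISMOUNT_RAILS": {"off_rails": "DRIVE_TO_ROW", "all_rows_done": "IDLE"},
    "IDLE":           {},
}

def run(events, state="DRIVE_TO_ROW"):
    """Consume a stream of events and return the visited states."""
    visited = [state]
    for event in events:
        state = STATES[state].get(event, state)   # ignore events that don't apply
        visited.append(state)
    return visited

events = ["row_reached", "on_rails", "row_end_reached", "off_rails",
          "row_reached", "on_rails", "row_end_reached", "all_rows_done"]
print(" -> ".join(run(events)))
```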

20 pages, 2092 KiB  
Article
Low-Cost Computer-Vision-Based Embedded Systems for UAVs
by Luis D. Ortega, Erick S. Loyaga, Patricio J. Cruz, Henry P. Lema, Jackeline Abad and Esteban A. Valencia
Robotics 2023, 12(6), 145; https://doi.org/10.3390/robotics12060145 - 27 Oct 2023
Cited by 2 | Viewed by 2313
Abstract
Unmanned Aerial Vehicles (UAVs) are versatile platforms whose hardware and software can be adapted for research. They are vital for remote monitoring, especially in challenging settings such as volcano observation with limited access. In response, economical computer vision systems provide a remedy by processing data, boosting UAV autonomy, and assisting in maneuvering. Through the application of these technologies, researchers can effectively monitor remote areas, thus improving surveillance capabilities. Moreover, flight controllers employ onboard tools to gather data, further enhancing UAV navigation during surveillance tasks. For energy efficiency and comprehensive coverage, this paper introduces a budget-friendly prototype aiding UAV navigation while minimizing effects on endurance. The prototype prioritizes improved maneuvering via the integrated landing and obstacle avoidance system (LOAS). Employing open-source software and MAVLink communication, these systems underwent testing on a Pixhawk-equipped quadcopter. Programmed on a Raspberry Pi onboard computer, the prototype includes a distance sensor and a basic camera to meet low computational and weight demands. Tests occurred in controlled environments, with the systems performing well in 90% of cases. The Pixhawk and Raspberry Pi documented the quadcopter’s actions during evasive and landing maneuvers. Results prove the prototype’s efficacy in refining UAV navigation. Integrating this cost-effective, energy-efficient model holds promise for long-term mission enhancement: cutting costs, expanding terrain coverage, and boosting surveillance capabilities. Full article
(This article belongs to the Special Issue UAV Systems and Swarm Robotics)
