Search Results (58)

Search Parameters:
Keywords = autonomous mobile manipulators

25 pages, 1849 KB  
Review
Key Technologies of Robotic Arms in Unmanned Greenhouse
by Songchao Zhang, Tianhong Liu, Xiang Li, Chen Cai, Chun Chang and Xinyu Xue
Agronomy 2025, 15(11), 2498; https://doi.org/10.3390/agronomy15112498 - 28 Oct 2025
Viewed by 360
Abstract
As a pioneering solution for precision agriculture, unmanned, robotics-centred greenhouse farms have become a key technological pathway for intelligent upgrades. The robotic arm is the core unit responsible for achieving full automation, and the level of technological development of this unit directly affects the productivity and intelligence of these farms. This review aims to systematically analyze the current applications, challenges, and future trends of robotic arms and their key technologies within unmanned greenhouses. The paper systematically classifies and compares the common types of robotic arms and their mobile platforms used in greenhouses. It provides an in-depth exploration of the core technologies that support efficient manipulator operation, focusing on the design evolution of end-effectors and the perception algorithms for plants and fruit. Furthermore, it elaborates on the framework for integrating individual robots into collaborative systems, analyzing typical application cases in areas such as plant protection and fruit and vegetable harvesting. The review concludes that greenhouse robotic arm technology is undergoing a profound transformation, evolving from single-function automation towards system-level intelligent integration. Finally, it discusses future development directions, highlighting the importance of multi-robot systems, swarm intelligence, and air-ground collaborative frameworks incorporating unmanned aerial vehicles (UAVs) in overcoming current limitations and achieving fully autonomous greenhouses.
(This article belongs to the Section Precision and Digital Agriculture)

21 pages, 6219 KB  
Article
Model-Free Transformer Framework for 6-DoF Pose Estimation of Textureless Tableware Objects
by Jungwoo Lee, Hyogon Kim, Ji-Wook Kwon, Sung-Jo Yun, Na-Hyun Lee, Young-Ho Choi, Goobong Chung and Jinho Suh
Sensors 2025, 25(19), 6167; https://doi.org/10.3390/s25196167 - 5 Oct 2025
Viewed by 499
Abstract
Tableware objects such as plates, bowls, and cups are usually textureless and uniform in color, and they vary widely in shape, making it difficult to apply conventional pose estimation methods that rely on texture cues or object-specific CAD models. These limitations present a significant obstacle to robotic manipulation in restaurant environments, where reliable six-degree-of-freedom (6-DoF) pose estimation is essential for autonomous grasping and collection. To address this problem, we propose a model-free and texture-free 6-DoF pose estimation framework based on a transformer encoder architecture. This method uses only geometry-based features extracted from depth images, including surface vertices and rim normals, which provide strong structural priors. The pipeline begins with object detection and segmentation using a pretrained video foundation model, followed by the generation of uniformly partitioned grids from depth data. For each grid cell, centroid positions and surface normals are computed and processed by a transformer-based model that jointly predicts object rotation and translation. Experiments with ten types of tableware demonstrate that the method achieves an average rotational error of 3.53 degrees and a translational error of 13.56 mm. Real-world deployment on a mobile robot platform with a manipulator further validated its ability to autonomously recognize and collect tableware, highlighting the practicality of the proposed geometry-driven approach for service robotics.
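A minimal sketch of the kind of geometry-only feature extraction this abstract describes: partitioning a segmented depth point cloud into a uniform grid and computing per-cell centroids and PCA-based normals. The grid size, the x-y partitioning, and the normal estimate are illustrative assumptions, not the authors' implementation (which uses surface vertices and rim normals).

```python
# Illustrative sketch (not the paper's code): per-cell centroids and normals
# from an object point cloud derived from a segmented depth image.
import numpy as np

def grid_features(points: np.ndarray, grid: int = 8):
    """points: (N, 3) object points; returns per-cell centroids and normals."""
    mins, maxs = points.min(axis=0), points.max(axis=0)
    # Assign every point to a cell of a grid x grid partition in the x-y plane.
    cell_xy = np.floor((points[:, :2] - mins[:2]) / (maxs[:2] - mins[:2] + 1e-9) * grid)
    cell_xy = np.clip(cell_xy, 0, grid - 1).astype(int)
    cell_id = cell_xy[:, 0] * grid + cell_xy[:, 1]

    centroids, normals = [], []
    for cid in np.unique(cell_id):
        cell_pts = points[cell_id == cid]
        if len(cell_pts) < 3:
            continue
        centroids.append(cell_pts.mean(axis=0))
        # Cell normal: eigenvector of the covariance with the smallest eigenvalue.
        cov = np.cov((cell_pts - cell_pts.mean(axis=0)).T)
        eigvals, eigvecs = np.linalg.eigh(cov)
        normals.append(eigvecs[:, 0])
    return np.array(centroids), np.array(normals)
```

Features of this form (centroid position plus normal per cell) could then be stacked into the token sequence a transformer encoder consumes.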

20 pages, 74841 KB  
Article
Autonomous Concrete Crack Monitoring Using a Mobile Robot with a 2-DoF Manipulator and Stereo Vision Sensors
by Seola Yang, Daeik Jang, Jonghyeok Kim and Haemin Jeon
Sensors 2025, 25(19), 6121; https://doi.org/10.3390/s25196121 - 3 Oct 2025
Cited by 1 | Viewed by 770
Abstract
Crack monitoring in concrete structures is essential to maintaining structural integrity. Therefore, this paper proposes a mobile ground robot equipped with a 2-DoF manipulator and stereo vision sensors for autonomous crack monitoring and mapping. To facilitate crack detection over large areas, a 2-DoF motorized manipulator providing linear and rotational motions, with a stereo vision sensor mounted on the end effector, was deployed. In combination with a manual rotation plate, this configuration enhances accessibility and expands the field of view for crack monitoring. Another stereo vision sensor, mounted at the front of the robot, was used to acquire point cloud data of the surrounding environment, enabling tasks such as SLAM (simultaneous localization and mapping), path planning and following, and obstacle avoidance. Cracks are detected and segmented using the deep learning algorithms YOLO (You Only Look Once) v6-s and SFNet (Semantic Flow Network), respectively. To enhance the performance of crack segmentation, synthetic image generation and preprocessing techniques, including cropping and scaling, were applied. The dimensions of cracks are calculated using point clouds filtered with the median absolute deviation method. To validate the performance of the proposed crack-monitoring and mapping method with the robot system, indoor experimental tests were performed. The experimental results confirmed that, in cases of divided imaging, the crack propagation direction was predicted, enabling robotic manipulation and division-point calculation. Subsequently, total crack length and width were calculated by combining reconstructed 3D point clouds from multiple frames, with a maximum relative error of 1%.
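A minimal sketch of median-absolute-deviation (MAD) outlier filtering of a crack point cloud, the robust filtering step named in this abstract, followed by a coarse length estimate along the cloud's principal axis. The threshold k, the depth-along-z assumption, and the length heuristic are illustrative choices, not the paper's method.

```python
# Illustrative sketch (not the paper's code): MAD filtering before measurement.
import numpy as np

def mad_filter(points: np.ndarray, k: float = 3.0) -> np.ndarray:
    """Keep points whose depth lies within k robust standard deviations of the
    median; points: (N, 3), depth assumed along the z axis."""
    z = points[:, 2]
    med = np.median(z)
    mad = np.median(np.abs(z - med))
    sigma = 1.4826 * mad  # consistency constant for Gaussian-distributed data
    keep = np.abs(z - med) <= k * sigma
    return points[keep]

def crack_length(points: np.ndarray) -> float:
    """Coarse length estimate: extent along the principal axis of the cloud."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    proj = centered @ vt[0]
    return float(proj.max() - proj.min())
```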

27 pages, 448 KB  
Review
A Review of Mathematical Models in Robotics
by Pubudu Suranga Dasanayake, Virginijus Baranauskas, Gintaras Dervinis and Leonas Balasevicius
Appl. Sci. 2025, 15(14), 8093; https://doi.org/10.3390/app15148093 - 21 Jul 2025
Viewed by 2237
Abstract
In robotics, much emphasis is placed on mathematical modeling, as the creation, control, and optimization of robots for a wide field of work must be achieved precisely and adaptively. The aim of this paper is to present a systematic and structured review of the literature on mathematical models in robotics, critically considering the mathematical frameworks that influence and shape robotics in light of current and prevailing trends. The paper underlines the complexities of maintaining accurate dynamic representations in robotic systems, revealing the challenges that arise from numerical simplifications. The study outlines the development of efficient remote-control systems that consider the dynamic relationships among the components comprising the robot. Findings from recent simulations show that the developed mathematical model effectively supports the design of an adaptive control system with artificial intelligence features, especially for autonomous mobile robots with manipulators, which are inherently complex and networked systems. If models are to accelerate progress toward increasingly intelligent, adaptive, and efficient robotic systems, they must overcome computational challenges while leveraging cross-disciplinary synergies.

12 pages, 17214 KB  
Technical Note
A Prototype Crop Management Platform for Low-Tunnel-Covered Strawberries Using Overhead Power Cables
by Omeed Mirbod and Marvin Pritts
AgriEngineering 2025, 7(7), 210; https://doi.org/10.3390/agriengineering7070210 - 2 Jul 2025
Viewed by 760
Abstract
The continuous and reliable operation of autonomous systems is important for farm management decision making, whether such systems perform crop monitoring using imaging systems or crop handling in pruning and harvesting applications using robotic manipulators. Autonomous systems, including robotic ground vehicles, drones, and tractors, are the focus of major research efforts in precision crop management. However, these systems may be less effective or require specific customizations for planting systems in low tunnels, high tunnels, or other environmentally controlled enclosures. In this work, a compact and lightweight crop management platform is developed that uses overhead power cables for continuous operation over row crops, requiring less human intervention and operating independently of ground terrain conditions. The platform does not carry batteries onboard for its operation, but rather pulls power from overhead cables, which it also uses to navigate over crop rows. It is developed to be modular, with the top section providing mobility and power delivery and the bottom section addressing a custom task, such as incorporating additional sensors for crop monitoring or manipulators for crop handling. This prototype illustrates the infrastructure, locomotive mechanism, and sample usage of the system (crop imaging) in the application of low-tunnel-covered strawberries; however, there is potential for other row crop systems with regularly spaced support structures to adopt this platform as well.

23 pages, 3907 KB  
Article
Woodot: An AI-Driven Mobile Robotic System for Sustainable Defect Repair in Custom Glulam Beams
by Pierpaolo Ruttico, Federico Bordoni and Matteo Deval
Sustainability 2025, 17(12), 5574; https://doi.org/10.3390/su17125574 - 17 Jun 2025
Viewed by 930
Abstract
Defect repair on custom-curved glulam beams is still performed manually because knots are irregular, numerous, and located on elements that cannot pass through linear production lines, limiting the scalability of timber-based architecture. This study presents Woodot, an autonomous mobile robotic platform that combines an omnidirectional rover, a six-dof collaborative arm, and a fine-tuned Segment Anything computer vision pipeline to identify, mill, and plug surface knots on geometrically variable beams. The perception model was trained on a purpose-built micro-dataset and reached an F1 score of 0.69 on independent test images, while the integrated system located defects with a 4.3 mm mean positional error. Full repair cycles averaged 74 s per knot, reducing processing time by more than 60% compared with skilled manual operations, and achieved flush plug placement in 87% of trials. These outcomes demonstrate that a lightweight AI model coupled with mobile manipulation can deliver reliable, shop-floor automation for low-volume, high-variation timber production. By shortening cycle times and lowering worker exposure to repetitive tasks, Woodot offers a viable pathway to enhance the environmental, economic, and social sustainability of digital timber construction. Nevertheless, some limitations remain, such as dependency on stable lighting conditions for optimal vision performance and the need for tool calibration checks.

11 pages, 5251 KB  
Proceeding Paper
Soft Robotics: Engineering Flexible Automation for Complex Environments
by Wai Yie Leong
Eng. Proc. 2025, 92(1), 65; https://doi.org/10.3390/engproc2025092065 - 13 May 2025
Cited by 3 | Viewed by 1819
Abstract
Soft robotics represents a transformative approach to automation, focusing on the development of robots constructed from flexible, compliant materials that mimic biological systems. Unlike traditional rigid robots, soft robots are engineered to adapt and operate efficiently in complex, unstructured environments, making them well suited to applications that require delicate manipulation, safe human–robot interaction, and mobility on unstable terrain. The key principles, materials, and fabrication techniques of soft robotics are explored in this study, highlighting their versatility in industries such as healthcare, agriculture, and search-and-rescue operations. The essence of soft robotic systems lies in their ability to deform and respond to environmental stimuli. This ability enables new paradigms in automation for tasks that demand flexibility, such as handling fragile objects, navigating narrow spaces, or interacting with humans. Emerging materials, such as elastomers, hydrogels, and shape-memory alloys, are driving innovations in actuation and sensing mechanisms, expanding the capabilities of soft robots across applications. We also examine the challenges associated with the control and energy efficiency of soft robots, as well as opportunities for integrating artificial intelligence and advanced sensing to enhance autonomous decision-making. Through case studies and experimental data, we review the potential of soft robotics to revolutionize sectors requiring adaptive automation, ultimately contributing to safer, more efficient, and more sustainable technological advances than present robots allow.
(This article belongs to the Proceedings of 2024 IEEE 6th Eurasia Conference on IoT, Communication and Engineering)

20 pages, 29832 KB  
Article
Human-Centric Robotic Solution for Motor and Gearbox Assembly: An Industry 5.0 Pilot Study
by Aitor Ibarguren, Sotiris Aivaliotis, Javier González Huarte, Arkaitz Urquiza, Panagiotis Baris, Apostolis Papavasileiou, George Michalos and Sotiris Makris
Robotics 2025, 14(5), 56; https://doi.org/10.3390/robotics14050056 - 26 Apr 2025
Cited by 1 | Viewed by 1493
Abstract
The automotive industry is one of the most automated industries, employing more than one million robots worldwide. Although several steps in car production are completely automated, many steps are still carried out by operators, especially in tasks requiring high dexterity. Additionally, customization and deployability are still pending issues in this industry, where real collaboration between robots and operators would increase the reconfigurability of assembly lines. This paper presents an innovative robotic cell focused on motor and gearbox assembly, including collaborative industrial robots and autonomous mobile manipulators along the different assembly stations. The design also incorporates a human-centered approach, with an enhanced human interface that facilitates interaction between operators and the complete robotic cell. The proposed approach has been deployed and validated in a real automotive industrial scenario, obtaining promising metrics and results.
(This article belongs to the Special Issue Integrating Robotics into High-Accuracy Industrial Operations)

19 pages, 8698 KB  
Article
The Design of a Vision-Assisted Dynamic Antenna Positioning Radio Frequency Identification-Based Inventory Robot Utilizing a 3-Degree-of-Freedom Manipulator
by Abdussalam A. Alajami and Rafael Pous
Sensors 2025, 25(8), 2418; https://doi.org/10.3390/s25082418 - 11 Apr 2025
Cited by 1 | Viewed by 1254
Abstract
In contemporary warehouse logistics, the demand for efficient and precise inventory management is increasingly critical, yet traditional Radio Frequency Identification (RFID)-based systems often falter due to static antenna configurations that limit tag detection efficacy in complex environments with diverse object arrangements. Addressing this challenge, we introduce an advanced RFID-based inventory robot that integrates a 3-degree-of-freedom (3DOF) manipulator with vision-assisted dynamic antenna positioning to optimize tag detection performance. This autonomous system leverages a pretrained You Only Look Once (YOLO) model to detect objects in real time, employing forward and inverse kinematics to dynamically orient the RFID antenna toward identified items. The manipulator subsequently executes a tailored circular scanning motion, ensuring comprehensive coverage of each object’s surface and maximizing RFID tag readability. To evaluate the system’s efficacy, we conducted a comparative analysis of three scanning strategies: (1) a conventional fixed antenna approach, (2) a predefined path strategy with preprogrammed manipulator movements, and (3) our proposed vision-assisted dynamic positioning method. Experimental results, derived from controlled laboratory tests and Gazebo-based simulations, unequivocally demonstrate the superiority of the dynamic positioning approach. This method achieved detection rates of up to 98.0% across varied shelf heights and spatial distributions, significantly outperforming the fixed antenna (21.6%) and predefined path (88.5%) strategies, particularly in multitiered and cluttered settings. Furthermore, the approach balances energy efficiency, consuming 22.1 Wh per mission—marginally higher than the fixed antenna (18.2 Wh) but 9.8% less than predefined paths (24.5 Wh). By overcoming the limitations of static and preprogrammed systems, our robot offers a scalable, adaptable solution poised to elevate warehouse automation in the era of Industry 4.0.
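A minimal sketch of the aiming step this abstract describes: computing joint angles that orient an antenna toward a detected object's 3D position, plus a small circular scan around the aim point. The pure pan-tilt joint layout and the scan parameters are simplifying assumptions, not the paper's 3DOF kinematics.

```python
# Illustrative sketch (not the paper's code): point an antenna at a target and
# trace a small circular scan around the aim direction.
import numpy as np

def aim_antenna(target_xyz: np.ndarray, antenna_xyz: np.ndarray):
    """Return (pan, tilt) in radians that point the antenna boresight
    (assumed along +x at zero joint angles) at target_xyz."""
    d = target_xyz - antenna_xyz
    pan = np.arctan2(d[1], d[0])                    # rotation about z
    tilt = np.arctan2(d[2], np.hypot(d[0], d[1]))   # elevation above the x-y plane
    return pan, tilt

def circular_scan(pan: float, tilt: float, radius: float = 0.15, steps: int = 12):
    """Yield pan/tilt offsets tracing a circle around the aim point, mimicking
    the tailored circular scanning motion described in the abstract."""
    for t in np.linspace(0, 2 * np.pi, steps, endpoint=False):
        yield pan + radius * np.cos(t), tilt + radius * np.sin(t)
```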

20 pages, 2772 KB  
Article
Activities of Daily Living Object Dataset: Advancing Assistive Robotic Manipulation with a Tailored Dataset
by Md Tanzil Shahria and Mohammad H. Rahman
Sensors 2024, 24(23), 7566; https://doi.org/10.3390/s24237566 - 27 Nov 2024
Cited by 4 | Viewed by 2296
Abstract
The increasing number of individuals with disabilities—over 61 million adults in the United States alone—underscores the urgent need for technologies that enhance autonomy and independence. Among these individuals, millions rely on wheelchairs and often require assistance from another person with activities of daily living (ADLs), such as eating, grooming, and dressing. Wheelchair-mounted assistive robotic arms offer a promising solution to enhance independence, but their complex control interfaces can be challenging for users. Automating control through deep learning-based object detection models presents a viable pathway to simplify operation, yet progress is impeded by the absence of specialized datasets tailored for ADL objects suitable for robotic manipulation in home environments. To bridge this gap, we present a novel ADL object dataset explicitly designed for training deep learning models in assistive robotic applications. We curated over 112,000 high-quality images from four major open-source datasets—COCO, Open Images, LVIS, and Roboflow Universe—focusing on objects pertinent to daily living tasks. Annotations were standardized to the YOLO Darknet format, and data quality was enhanced through a rigorous filtering process involving a pre-trained YOLOv5x model and manual validation. Our dataset provides a valuable resource that facilitates the development of more effective and user-friendly semi-autonomous control systems for assistive robots. By offering a focused collection of ADL-related objects, we aim to advance assistive technologies that empower individuals with mobility impairments, addressing a pressing societal need and laying the foundation for future innovations in human–robot interaction within home settings.
(This article belongs to the Special Issue Vision Sensors for Object Detection and Tracking)
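A minimal sketch of the annotation standardization this abstract mentions: converting a COCO-style pixel bounding box into a normalized YOLO Darknet line. The class id and image size in the example are placeholders, not values from the dataset.

```python
# Illustrative sketch (not the dataset's tooling): COCO box -> YOLO Darknet line.
def coco_to_yolo(box, img_w, img_h, class_id):
    """box: (top-left x, top-left y, width, height) in pixels."""
    x, y, w, h = box
    cx = (x + w / 2) / img_w   # box centre, normalized to [0, 1]
    cy = (y + h / 2) / img_h
    return f"{class_id} {cx:.6f} {cy:.6f} {w / img_w:.6f} {h / img_h:.6f}"

# Example: a cup annotated at (120, 80) with size 60x90 in a 640x480 image.
print(coco_to_yolo((120, 80, 60, 90), 640, 480, class_id=41))
```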

18 pages, 7392 KB  
Article
Assistance in Picking Up and Delivering Objects for Individuals with Reduced Mobility Using the TIAGo Robot
by Francisco J. Naranjo-Campos, Ainhoa De Matías-Martínez, Juan G. Victores, José Antonio Gutiérrez Dueñas, Almudena Alcaide and Carlos Balaguer
Appl. Sci. 2024, 14(17), 7536; https://doi.org/10.3390/app14177536 - 26 Aug 2024
Cited by 1 | Viewed by 2557
Abstract
Individuals with reduced mobility, including the growing elderly demographic and those with spinal cord injuries, often face significant challenges in daily activities, leading to a dependence on assistance. To enhance their independence, we propose a robotic system that facilitates greater autonomy. Our approach involves a functional assistive robotic implementation for picking, placing, and delivering containers using the TIAGo mobile manipulator robot. We developed software and routines for detecting containers marked with an ArUco code and manipulating them using the MoveIt library. Subsequently, the robot navigates to specific points of interest within a room to deliver the container to the user or another designated location. This assistance task is commanded through a user interface based on a web application that can be accessed from the personal phones of patients. The functionality of the system was validated through testing. Additionally, a series of user trials were conducted, yielding positive feedback on the performance and the demonstration. Insights gained from user feedback will be incorporated into future improvements to the system.
(This article belongs to the Special Issue Intelligent Rehabilitation and Assistive Robotics)
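A minimal sketch of ArUco-based container localization of the kind this abstract describes, written against OpenCV's legacy aruco interface (pre-4.7); the marker dictionary, marker size, and camera intrinsics are placeholder assumptions, and the recovered pose would still need to be turned into a MoveIt grasp goal.

```python
# Illustrative sketch (not the paper's code): detect an ArUco-tagged container
# and estimate its pose with OpenCV's legacy aruco API (pre-4.7 interface).
import cv2
import numpy as np

K = np.array([[600.0, 0, 320], [0, 600.0, 240], [0, 0, 1]])  # example intrinsics
dist = np.zeros(5)                                            # assumed no distortion

def locate_container(bgr_image, marker_length_m=0.05):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    dictionary = cv2.aruco.Dictionary_get(cv2.aruco.DICT_4X4_50)
    params = cv2.aruco.DetectorParameters_create()
    corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary, parameters=params)
    if ids is None:
        return None
    rvecs, tvecs = cv2.aruco.estimatePoseSingleMarkers(corners, marker_length_m, K, dist)[:2]
    # tvecs[0] gives the marker position in the camera frame; after a transform to
    # the robot base frame, it could seed a MoveIt pick goal.
    return rvecs[0], tvecs[0]
```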

25 pages, 26740 KB  
Article
Task-Dependent Comfort Zone, a Base Placement Strategy for Mobile Manipulators Based on Manipulability Measures
by Martin Sereinig, Peter Manzl and Johannes Gerstmayr
Robotics 2024, 13(8), 122; https://doi.org/10.3390/robotics13080122 - 16 Aug 2024
Cited by 1 | Viewed by 1981
Abstract
The present contribution introduces the task-dependent comfort zone as a base placement strategy for mobile manipulators using different manipulability measures. Four different manipulability measures depending on end-effector velocities, forces, stiffness, and accelerations are considered. By evaluating a discrete subspace of the manipulator workspace with these manipulability measures and using image-processing algorithms, a suitable goal position for the autonomous mobile manipulator was defined within the comfort zone. This ensures that the manipulator's manipulability always stays above a lower limit defined with respect to the maximum possible manipulability in the discrete subspace. Results are shown for three different mobile manipulators using the velocity-dependent manipulability measure in a simulation.
(This article belongs to the Topic Advances in Mobile Robotics Navigation, 2nd Volume)
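A minimal sketch of the velocity-dependent (Yoshikawa) manipulability measure w = sqrt(det(J J^T)) evaluated over a sampled joint space, with a lower-limit filter echoing the comfort-zone idea; the 2-link planar Jacobian and the 0.8 threshold are illustrative assumptions, not the paper's mobile-manipulator models.

```python
# Illustrative sketch (not the paper's code): velocity manipulability over a
# sampled joint space, keeping configurations above a fraction of the maximum.
import numpy as np

def planar_2link_jacobian(q, l1=0.4, l2=0.3):
    q1, q2 = q
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def manipulability(J):
    # Yoshikawa measure; the max() guards against tiny negative round-off.
    return np.sqrt(max(np.linalg.det(J @ J.T), 0.0))

qs = [(q1, q2) for q1 in np.linspace(-np.pi, np.pi, 36)
               for q2 in np.linspace(-np.pi, np.pi, 36)]
w = np.array([manipulability(planar_2link_jacobian(q)) for q in qs])
comfort = [q for q, wi in zip(qs, w) if wi >= 0.8 * w.max()]  # "comfort zone" analogue
```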

33 pages, 60175 KB  
Article
Exploring Saliency for Learning Sensory-Motor Contingencies in Loco-Manipulation Tasks
by Elisa Stefanini, Gianluca Lentini, Giorgio Grioli, Manuel Giuseppe Catalano and Antonio Bicchi
Robotics 2024, 13(4), 58; https://doi.org/10.3390/robotics13040058 - 1 Apr 2024
Viewed by 2697
Abstract
The objective of this paper is to propose a framework for a robot to learn multiple Sensory-Motor Contingencies from human demonstrations and reproduce them. Sensory-Motor Contingencies are a concept that describes the intelligent behavior of animals and humans in relation to their environment. They have been used to design control and planning algorithms for robots capable of interacting and adapting autonomously. However, enabling a robot to autonomously develop Sensory-Motor Contingencies is challenging due to the complexity of action and perception signals. This framework leverages tools from Learning from Demonstrations to have the robot memorize various sensory phases and corresponding motor actions through an attention mechanism. This generates a metric in the perception space, used by the robot to determine which sensory-motor memory is contingent to the current context. The robot generalizes the memorized actions to adapt them to the present perception. This process creates a discrete lattice of continuous Sensory-Motor Contingencies that can control a robot in loco-manipulation tasks. Experiments on a 7-dof collaborative robotic arm with a gripper and on a mobile manipulator demonstrate the functionality and versatility of the framework.
(This article belongs to the Section Sensors and Control in Robotics)

34 pages, 5690 KB  
Review
Mobile Robot for Security Applications in Remotely Operated Advanced Reactors
by Ujwal Sharma, Uma Shankar Medasetti, Taher Deemyad, Mustafa Mashal and Vaibhav Yadav
Appl. Sci. 2024, 14(6), 2552; https://doi.org/10.3390/app14062552 - 18 Mar 2024
Cited by 12 | Viewed by 4510
Abstract
This review paper addresses the escalating operation and maintenance costs of nuclear power plants, primarily attributed to rising labor costs and intensified competition from renewable energy sources. The paper proposes a paradigm shift towards a technology-centric approach, leveraging mobile and automated robots for physical security, aiming to replace labor-intensive methods. Focusing on the human–robot interaction principle, the review conducts a state-of-the-art analysis of dog robots’ potential in infrastructure security and remote inspection within human–robot shared environments. Additionally, this paper surveys research on the capabilities of mobile robots, exploring their applications in various industries, including disaster response, exploration, surveillance, and environmental conservation. This study emphasizes the crucial role of autonomous mobility and manipulation in robots for diverse tasks, and discusses the formalization of problems, performance assessment criteria, and operational capabilities. It provides a comprehensive comparison of three prominent robotic platforms (SPOT, Ghost Robotics, and ANYmal Robotics) across various parameters, shedding light on their suitability for different applications. This review culminates in a research roadmap, delineating experiments and parameters for assessing dog robots’ performance in safeguarding nuclear power plants, offering a structured approach for future research endeavors.

18 pages, 6727 KB  
Article
Autonomous Fever Detection, Medicine Delivery, and Environmental Disinfection for Pandemic Prevention
by Chien-Yu Su and Kuu-Young Young
Appl. Sci. 2023, 13(24), 13316; https://doi.org/10.3390/app132413316 - 17 Dec 2023
Cited by 3 | Viewed by 2130
Abstract
In facing the outbreak of a pandemic, robots are highly appealing for their non-contact nature. Among them, we have selected the mobile robot manipulator to develop an autonomous system for pandemic prevention, as it possesses both mobility and manipulability. The robot was used as a platform for an autonomous fever detection, medicine delivery, and environmental disinfection system for the fever station and isolation ward, which are the two primary hospital units that deal with a pandemic. The proposed novel algorithms aim to ensure both human safety and comfort by automating fever detection and recognizing medicine taking. Additionally, they address environmental disinfection by effectively covering blind spots. We conducted a series of experiments to evaluate their performance in a hospital-like setting, which was designed specifically for the testing of intelligent medical systems developed at our university. A quantitative assessment was conducted to analyze how the introduction of the proposed autonomous system reduced the risk of infection, and feedback was also collected from participants through questionnaires.
(This article belongs to the Special Issue Medical Robotics: Advances, Applications, and Challenges)