
Special Issue "Assistance Robotics and Sensors"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensors and Robotics".

Deadline for manuscript submissions: closed (30 September 2021).

Special Issue Editors

Prof. Dr. Santiago T. Puente
Guest Editor
Automatics, Robotics and Computer Vision Group, Computer Science Research Institute, University of Alicante, Alicante 03690, Spain
Interests: dexterous grasping; outdoor manipulation; neuro-robotics; myoelectric control; marine robotics; deep learning; production automation and automatic disassembly
Prof. Dr. Fernando Torres
Guest Editor
Automatics, Robotics and Computer Vision Group, Computer Science Research Institute, University of Alicante, Alicante 03690, Spain
Interests: intelligent robotic manipulation; visual control of robots; robot perception systems; field robots and advanced automation for industry 4.0; artificial vision engineering and e-learning

Special Issue Information

Dear Colleagues,

In recent years, the use of assistance robotics has grown significantly, driven largely by advances in sensor and processing technologies and by increasing interest in enabling robots to interact with humans in a more natural way. Robots are required to assist humans in industry, in the manufacturing workspace, in the rehabilitation process, and in the medical environment. Robots are used to achieve ambient assisted living and to help older adults. Furthermore, assistive robots are used in security, search, or rescue operations, and to interact with humans with infectious diseases.

This Special Issue is focused on breakthrough developments in the field of assistive robotics, including current scientific progress in machine learning, deep learning, reinforcement learning, and imitation learning to enable assistive robots to help humans in any environment, as well as any supportive sensorial system that facilitates interaction between humans and robots at home or in the industrial environment. In addition to the aforementioned environments, methods and algorithms that combine sensors to enable assistive robots can be considered. Papers should address innovative solutions in these fields. Both review articles and original research papers are solicited.

Prof. Dr. Santiago T. Puente
Prof. Dr. Fernando Torres
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2200 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Assistive robotics for human operators in industry and in manufacturing workspaces
  • Assistive robotics in the rehabilitation process and the medical environment
  • Robots to achieve ambient assisted living
  • Assistive robots in operations of security, search, or rescue
  • Robotics to interact with humans with infectious diseases
  • Machine learning, deep learning, reinforcement learning, and imitation learning to enable assistive robots to help humans in any environment
  • Methods and algorithms that combine sensors to enable assistive robots
  • Any supportive sensorial system that facilitates interaction between humans and robots

Published Papers (6 papers)


Research

Article
Visual Sensor Fusion Based Autonomous Robotic System for Assistive Drinking
Sensors 2021, 21(16), 5419; https://doi.org/10.3390/s21165419 - 11 Aug 2021
Abstract
People with severe motor impairments like tetraplegia are restricted in activities of daily living (ADL) and are dependent on continuous human assistance. Assistive robots perform physical tasks in the context of ADLs to support people in need of assistance. In this work, a sensor fusion algorithm and a robot control algorithm for localizing the user’s mouth and autonomously navigating a robot arm are proposed for the assistive drinking task. The sensor fusion algorithm is implemented in a visual tracking system which consists of a 2-D camera and a single-point time-of-flight distance sensor. The sensor fusion algorithm utilizes computer vision to combine camera images and distance measurements to achieve reliable localization of the user’s mouth. The robot control algorithm uses visual servoing to navigate a robot-handled drinking cup to the mouth and establish physical contact with the lips. The system features an abort command triggered by turning the head, as well as unambiguous tracking of multiple faces, which enables safe human–robot interaction. A study with nine able-bodied test subjects shows that the proposed system reliably localizes the mouth and is able to autonomously navigate the cup to establish physical contact with the mouth.
(This article belongs to the Special Issue Assistance Robotics and Sensors)
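To give a rough idea of how a single-point range measurement can be fused with a 2-D camera detection, the sketch below back-projects a detected mouth pixel through a pinhole camera model and scales the ray by the time-of-flight range. The intrinsics, pixel values, and function name here are invented for illustration; this is not the paper's actual algorithm.

```python
import math

def localize_mouth(u, v, tof_range_m, fx, fy, cx, cy):
    """Back-project a mouth pixel (u, v) through a pinhole camera model
    and scale the ray so its length equals the time-of-flight range,
    yielding a 3-D point in the camera frame (metres)."""
    # Direction of the ray through the pixel, on the z = 1 plane.
    rx = (u - cx) / fx
    ry = (v - cy) / fy
    norm = math.sqrt(rx * rx + ry * ry + 1.0)
    # Scale the unit ray so its Euclidean length equals the range.
    scale = tof_range_m / norm
    return (rx * scale, ry * scale, scale)

# A pixel at the optical centre maps straight down the optical axis:
print(localize_mouth(320.0, 240.0, 0.5, 600.0, 600.0, 320.0, 240.0))
# → (0.0, 0.0, 0.5)
```

Note that for off-centre pixels the depth component is smaller than the measured range, since the range is taken along the slanted ray rather than the optical axis.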

Article
Design, Development, and Testing of an Intelligent Wearable Robotic Exoskeleton Prototype for Upper Limb Rehabilitation
Sensors 2021, 21(16), 5411; https://doi.org/10.3390/s21165411 - 10 Aug 2021
Abstract
Neuromotor rehabilitation and recovery of upper limb functions are essential to improve the life quality of patients who have suffered injuries or have pathological sequelae, where it is desirable to enhance the development of activities of daily living (ADLs). Modern approaches such as robotic-assisted rehabilitation provide decisive factors for effective motor recovery, such as objective assessment of the progress of the patient and the potential for the implementation of personalized training plans. This paper focuses on the design, development, and preliminary testing of a wearable robotic exoskeleton prototype with autonomous Artificial Intelligence-based control, processing, and safety algorithms that are fully embedded in the device. The proposed exoskeleton is a 1-DoF system that allows flexion-extension at the elbow joint, where the chosen materials render it compact. Different operation modes are supported by a hierarchical control strategy, allowing operation in autonomous mode, remote control mode, or in a leader-follower mode. Laboratory tests validate the proper operation of the integrated technologies, highlighting low latency and reasonable accuracy. The experimental results show that the device can be suitable for use in providing support for diagnostic and rehabilitation processes of neuromotor functions, although optimizations and rigorous clinical validation are required beforehand.
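The hierarchical operation modes the abstract describes can be pictured as a simple dispatcher that routes a joint setpoint according to the active mode. This is only an illustrative sketch with invented names and values, not the device's actual firmware.

```python
from enum import Enum, auto

class Mode(Enum):
    AUTONOMOUS = auto()       # on-board AI generates the trajectory
    REMOTE = auto()           # a clinician sends setpoints over a link
    LEADER_FOLLOWER = auto()  # the device mirrors a leader exoskeleton

def next_setpoint(mode, ai_plan, remote_cmd, leader_angle):
    """Return the elbow-angle setpoint (degrees) for the active mode."""
    if mode is Mode.AUTONOMOUS:
        return ai_plan
    if mode is Mode.REMOTE:
        return remote_cmd
    return leader_angle  # leader-follower: track the leader's joint angle

print(next_setpoint(Mode.REMOTE, 30.0, 45.0, 60.0))  # → 45.0
```

A real controller would of course layer safety limits (joint range, velocity, torque) below this dispatch level, which is what makes the strategy hierarchical.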

Communication
Design of a Payload Adjustment Device for an Unpowered Lower-Limb Exoskeleton
Sensors 2021, 21(12), 4037; https://doi.org/10.3390/s21124037 - 11 Jun 2021
Abstract
This paper proposes a device that can change the payload of an unpowered lower-limb exoskeleton supporting the weights of humans and loads. Our previous exoskeletons used a cam–follower structure with a spring applied to the hip joint. This exoskeleton showed satisfying performance within the payload; however, the performance decreased when the payload was exceeded. Therefore, a payload adjustment device that can adjust the wearer’s required torque by easily applying it to the cam–follower structure was developed. An exoskeleton dynamic equation that can calculate a person’s required joint torque given the required payload and the wearer’s posture was derived. This dynamic equation provides a guideline for designing a device that can adjust the allowable joint torque range of an unpowered exoskeleton. In the Adams simulation environment, the payload adjustment device is applied to the cam–follower structure to show that the payload of the exoskeleton can be changed. User convenience and mass production were taken into account in the design of this device. Because it can quickly change the payload of the exoskeleton, this payload adjustment device can flexibly adjust the payload to the level desired by the wearer.
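The kind of posture-dependent torque demand that the derived dynamic equation captures can be illustrated with a simplified static gravity-torque model: a payload at a given moment arm from the joint loads the joint in proportion to the sine of the posture angle. All symbols and numbers below are illustrative assumptions, not the paper's actual equation.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def required_hip_torque(payload_kg, moment_arm_m, posture_deg):
    """Simplified static gravity torque (N*m) a joint must resist for a
    payload at a given moment arm; theta = 0 means the limb hangs vertical."""
    return payload_kg * G * moment_arm_m * math.sin(math.radians(posture_deg))

# The torque demand peaks when the limb is horizontal (theta = 90 deg):
print(round(required_hip_torque(20.0, 0.4, 90.0), 2))  # → 78.48
```

A cam–follower profile with a spring can be shaped so that the spring's restoring torque approximately tracks such a curve over the joint's range of motion, which is what the adjustment device tunes for different payloads.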

Article
Towards Robust Robot Control in Cartesian Space Using an Infrastructureless Head- and Eye-Gaze Interface
Sensors 2021, 21(5), 1798; https://doi.org/10.3390/s21051798 - 5 Mar 2021
Cited by 1
Abstract
This paper presents a lightweight, infrastructureless head-worn interface for robust and real-time robot control in Cartesian space using head- and eye-gaze. The interface comes at a total weight of just 162 g. It combines a state-of-the-art visual simultaneous localization and mapping algorithm (ORB-SLAM 2) for RGB-D cameras with a Magnetic, Angular Rate, and Gravity (MARG) sensor filter. The data fusion process is designed to dynamically switch between magnetic, inertial, and visual heading sources to enable robust orientation estimation under various disturbances, e.g., magnetic disturbances or degraded visual sensor data. The interface furthermore delivers accurate eye- and head-gaze vectors to enable precise robot end effector (EFF) positioning and employs a head motion mapping technique to effectively control the robot’s end effector orientation. An experimental proof of concept demonstrates that the proposed interface and its data fusion process generate reliable and robust pose estimation. The three-dimensional head- and eye-gaze position estimation pipeline delivers a mean Euclidean error of 19.0±15.7 mm for head-gaze and 27.4±21.8 mm for eye-gaze at a distance of 0.3–1.1 m to the user. This indicates that the proposed interface offers a precise control mechanism for hands-free and full six degree of freedom (DoF) robot teleoperation in Cartesian space by head- or eye-gaze and head motion.
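The dynamic switching between heading sources that the data fusion process performs can be sketched as a priority cascade: prefer the visual heading while SLAM tracking is healthy, fall back to the magnetometer only when the measured field magnitude looks undisturbed, and otherwise dead-reckon on the gyroscope. The thresholds, names, and return format below are assumptions for illustration, not the paper's implementation.

```python
EARTH_FIELD_UT = 50.0  # nominal geomagnetic field magnitude, microtesla
FIELD_TOL_UT = 10.0    # accept the magnetometer if |B| is within this band

def select_heading(slam_tracking_ok, visual_hdg,
                   mag_field_norm_ut, mag_hdg,
                   gyro_hdg):
    """Pick one heading estimate (degrees) from three candidate sources,
    returning the chosen heading and a label for the source used."""
    if slam_tracking_ok:
        return visual_hdg, "visual"
    if abs(mag_field_norm_ut - EARTH_FIELD_UT) <= FIELD_TOL_UT:
        return mag_hdg, "magnetic"
    return gyro_hdg, "inertial"  # last resort: integrate the gyro

# SLAM lost and the field magnitude is disturbed, so fall back to the gyro:
print(select_heading(False, 90.0, 85.0, 92.0, 91.0))  # → (91.0, 'inertial')
```

A full filter would blend these sources smoothly (e.g., in a complementary or Kalman filter) rather than switch hard, but the cascade captures the fallback logic the abstract describes.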

Article
Suboptimal Omnidirectional Wheel Design and Implementation
Sensors 2021, 21(3), 865; https://doi.org/10.3390/s21030865 - 28 Jan 2021
Cited by 2
Abstract
The optimal design of an omnidirectional wheel usually focuses on minimizing the gap between the free rollers of the wheel, in order to reduce contact discontinuities with the floor and, with them, the generation of vibrations. However, in practice, a fast, tall, and heavy mobile robot using optimal omnidirectional wheels may also need a suspension system to reduce vibrations and oscillations in its upper part. This paper empirically evaluates whether a heavy omnidirectional mobile robot can take advantage of its passive suspension system to also use non-optimal or suboptimal omnidirectional wheels with a non-optimized inner gap. The main comparative advantages of the proposed suboptimal omnidirectional wheel are its low manufacturing cost and the possibility of taking advantage of the gap to operate outdoors. The experimental part of this paper compares the vibrations generated by the motion system of a versatile mobile robot using optimal and suboptimal omnidirectional wheels. The final conclusion is that a suboptimal wheel with a large gap produces comparable on-board vibration patterns while maintaining traction and increasing grip on non-perfectly planar surfaces.

Article
Impact of Acoustic and Interactive Disruptive Factors during Robot-Assisted Surgery—A Virtual Surgical Training Model
Sensors 2020, 20(20), 5891; https://doi.org/10.3390/s20205891 - 17 Oct 2020
Cited by 1
Abstract
The use of virtual reality trainers for teaching minimally invasive surgical techniques has long been established in conventional laparoscopy as well as robotic surgery. The aim of the present study was to evaluate the impact of reproducible disruptive factors on the surgeon’s work. In a cross-sectional investigation, surgeons were tested with regard to the impact of different disruptive factors when doing exercises on a robotic-surgery simulator (Mimic Flex VR™). Additionally, we collected data about the participants’ professional experience, gender, age, expertise in playing an instrument, and expertise in playing video games. The data were collected during DRUS 2019 (Symposium of the German Society for Robot-assisted Urology). Forty-two surgeons attending DRUS 2019 were asked to participate in a virtual robotic stress training unit. The surgeons worked in various specialties (visceral surgery, gynecology, and urology) and had different levels of expertise. The time taken to complete the exercise (TTCE), the final score (FSC), and blood loss (BL) were measured. In the basic exercise with an interactive disruption, TTCE was significantly longer (p < 0.01) and FSC significantly lower (p < 0.05). No significant difference in TTCE, FSC, or BL was noted in the advanced exercise with acoustic disruption. Performance during disruption was not dependent on the level of surgical experience, gender, age, expertise in playing an instrument, or playing video games. A positive correlation was registered between self-estimation and surgical experience. Interactive disruptions have a greater impact on the performance of a surgeon than acoustic ones. Disruption affects the performance of experienced as well as inexperienced surgeons. Disruption in daily surgery should be evaluated and minimized in the interest of the patient’s safety.
