Advanced Sensing and Control Technologies for Mobile and Collaborative Robotic Systems

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (25 November 2022) | Viewed by 28375

Special Issue Editors


Guest Editor
Department of Electronics Information and Bioengineering - DEIB, Politecnico di Milano, 20133 Milano, Italy
Interests: computer assisted surgery; robot assisted surgery; rehabilitation and assistive robotics; trajectory planning in surgery; spatial localisation

Special Issue Information

Dear Colleagues,

Recent technological developments have led to the widespread use of mobile and collaborative robotic systems across a wide range of applications, including transportation, manufacturing, self-driving automobiles, commercial and domestic services, surveillance, medical care, and planetary surface and subsurface exploration, creating an emerging market with great potential and a significant impact on society. Research on mobile and collaborative robotic systems is multidisciplinary, spanning control engineering, computer science, mechatronic engineering, and biomedical engineering, and covering many technical aspects, such as perception, control, computer vision, artificial intelligence, human–robot interfaces, and sensor technologies. While existing technologies are developing rapidly, significant challenges remain in mobile robotic systems research: (1) mechanism and actuation design, (2) system modeling, (3) robust perception, (4) planning and control, and (5) human–robot interaction. These may involve further technical aspects such as terramechanics modeling, computer vision, sensor fusion, mapping, task and motion planning, and robust/adaptive control. To address these challenges, more advanced sensing and actuation technologies are essential; these will enable future mobile robots to operate effectively, collaboratively, and autonomously in a greater range of real-world environments.

This Special Issue intends to provide a platform that gathers recent developments in mobile and collaborative robotic systems and the associated research, and to advance studies on the fundamental problems observed in collaborative and mobile robots, including in healthcare applications. We welcome state-of-the-art research papers on mobile and collaborative robotic systems from both research and application perspectives. Multidisciplinary or integrative contributions, including sensing, perception, motion control, navigation (including inside the human body), learning and adaptation, fault tolerance, filtering, teleoperation, and bio-inspired robots, are also welcome in this Special Issue.

Prof. Dr. Charlie Yang
Prof. Dr. Giancarlo Ferrigno
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • healthcare collaborative and mobile robots
  • search, exploration, and rescue mobile robots
  • education and service collaborative and mobile robots
  • industrial and agricultural mobile robots
  • self-driving vehicles
  • telepresence with mobile robots
  • motion planning and control
  • navigation and mapping
  • perception and decision-making for collaborative and mobile robots
  • computer vision and data processing for collaborative and mobile robots
  • learning and adaptation for collaborative and mobile robots
  • human–robot interaction and collaboration

Published Papers (11 papers)


Research

15 pages, 16649 KiB  
Communication
Manipulation Tasks in Hazardous Environments Using a Teleoperated Robot: A Case Study at CERN
by Cosimo Gentile, Giacomo Lunghi, Luca Rosario Buonocore, Francesca Cordella, Mario Di Castro, Alessandro Masi and Loredana Zollo
Sensors 2023, 23(4), 1979; https://doi.org/10.3390/s23041979 - 10 Feb 2023
Cited by 2 | Viewed by 2065
Abstract
Remote robotic systems are employed in the CERN accelerator complex to perform different tasks, such as the safe handling of cables and their connectors. Without dedicated control, these kinds of actions are difficult and require the operators’ intervention, which exposes them to dangerous external agents. In this paper, two novel modules of the CERNTAURO framework are presented to provide a safe and usable solution for managing optical fibres and their connectors. The first module is used to detect touch and slippage, while the second one regulates the grasping force and counteracts slippage. The force reference was obtained with a combination of object recognition and a look-up table method. The proposed strategy was validated with tests in the CERN laboratory, and the preliminary experimental results demonstrated statistically significant increases in time-based efficiency and in the overall relative efficiency of the tasks.

13 pages, 1759 KiB  
Article
Efficient Stereo Depth Estimation for Pseudo-LiDAR: A Self-Supervised Approach Based on Multi-Input ResNet Encoder
by Sabir Hossain and Xianke Lin
Sensors 2023, 23(3), 1650; https://doi.org/10.3390/s23031650 - 02 Feb 2023
Cited by 5 | Viewed by 2013
Abstract
Perception and localization are essential for autonomous delivery vehicles and are mostly estimated from 3D LiDAR sensors due to their precise distance measurement capability. This paper presents a strategy to obtain a real-time pseudo point cloud from image sensors (cameras) instead of laser-based sensors (LiDARs). Previous studies (such as PSMNet-based point cloud generation) optimized their algorithms for accuracy but failed to operate in real time like LiDAR. We propose an approach that uses different depth estimators to obtain pseudo point clouds similar to LiDAR while achieving better performance. Moreover, the depth estimator uses stereo imagery to achieve more accurate depth estimation and point cloud results. Our approach to generating depth maps outperforms other existing approaches on KITTI depth prediction while yielding point clouds significantly faster. Additionally, the proposed approach is evaluated on the KITTI stereo benchmark, where it shows effectiveness in runtime.
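As context for how an estimated depth map becomes a "pseudo point cloud", the underlying back-projection step can be sketched with the standard pinhole camera model (a minimal pure-Python illustration with toy intrinsics; the function name and parameters are ours, not taken from the paper):

```python
def backproject(depth, fx, fy, cx, cy):
    """Convert a dense depth map (rows of metres) into a pseudo point cloud.

    Each pixel (u, v) with depth z maps to camera coordinates via the
    pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy.
    Pixels with non-positive depth are treated as invalid and skipped.
    """
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:  # invalid or missing depth estimate
                continue
            points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points
```

A real pipeline would vectorize this over the whole image and transform the points into the LiDAR frame before feeding a 3D detector.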

20 pages, 10407 KiB  
Article
SmartCrawler: A Size-Adaptable In-Pipe Wireless Robotic System with Two-Phase Motion Control Algorithm in Water Distribution Systems
by Saber Kazeminasab and M. Katherine Banks
Sensors 2022, 22(24), 9666; https://doi.org/10.3390/s22249666 - 09 Dec 2022
Cited by 5 | Viewed by 1269
Abstract
Pipe incidents cause damage in water distribution systems (WDS), and access to all parts of a WDS is a challenging task. In this paper, we propose an integrated wireless robotic system for in-pipe missions that includes an agile, maneuverable, and size-adaptable (9-in to 22-in) in-pipe robot, “SmartCrawler”, with a 1.56 m/s maximum speed. We develop a two-phase motion control algorithm that enables reliable motion in straight configurations and rotation in non-straight configurations of in-service WDS. We also propose a bi-directional wireless sensor module based on active radio frequency identification (RFID), operating at a 434 MHz carrier frequency and 120 kbps for up to 5 sensor measurements, to enable wireless underground communication at a burial depth of 1.5 m. The integration of the proposed wireless sensor module and the two-phase motion controller demonstrates promising results for wireless control of the in-pipe robot and multi-parameter sensor transmission for in-pipe missions.

11 pages, 5326 KiB  
Communication
Research of Online Hand–Eye Calibration Method Based on ChArUco Board
by Wenwei Lin, Peidong Liang, Guantai Luo, Ziyang Zhao and Chentao Zhang
Sensors 2022, 22(10), 3805; https://doi.org/10.3390/s22103805 - 17 May 2022
Cited by 8 | Viewed by 3457
Abstract
To solve the inflexibility of offline hand–eye calibration in “eye-in-hand” modes, an online hand–eye calibration method based on the ChArUco board is proposed in this paper. Firstly, a hand–eye calibration model based on the ChArUco board is established by analyzing the mathematical model of hand–eye calibration and the image features of the ChArUco board. Exploiting the advantages of the ChArUco board, which combines a checkerboard with ArUco markers, an online hand–eye calibration algorithm is designed and used to dynamically adjust the hand–eye position relationship. Finally, hand–eye calibration experiments verify the robustness and accuracy of the proposed method. The experimental results show that the accuracy of the online method is between 0.4 mm and 0.6 mm, almost the same as offline hand–eye calibration accuracy. By exploiting the ChArUco board to realize online hand–eye calibration, the method improves the flexibility and robustness of hand–eye calibration.

25 pages, 1495 KiB  
Article
Design of a Gough–Stewart Platform Based on Visual Servoing Controller
by Minglei Zhu, Cong Huang, Shijie Song and Dawei Gong
Sensors 2022, 22(7), 2523; https://doi.org/10.3390/s22072523 - 25 Mar 2022
Cited by 9 | Viewed by 2829
Abstract
Designing a robot with the best accuracy is always an attractive research direction in the robotics community. In order to create a Gough–Stewart platform with guaranteed accuracy performance for a dedicated controller, this paper describes a novel optimal design methodology: control-based design. This method takes the controller's positioning accuracy into account in the design process to obtain the optimal geometric parameters of the robot. In this paper, three types of visual servoing controllers are applied to control the motions of the Gough–Stewart platform: leg-direction-based visual servoing, line-based visual servoing, and image moment visual servoing. For these controllers, positioning error models accounting for the camera observation error and the controller singularities are analyzed. In the next step, optimization problems are formulated to obtain the optimal geometric parameters of the robot and the placement of the camera for each type of controller. Then, we perform co-simulations on the three optimized Gough–Stewart platforms to test positioning accuracy and robustness with respect to manufacturing errors. It turns out that the control-based design methodology yields both the optimal design parameters of the robot and the best performance of the combined {robot + dedicated controller} pair.

17 pages, 2205 KiB  
Article
Spectral Diagnostic Model for Agricultural Robot System Based on Binary Wavelet Algorithm
by Weibin Wu, Ting Tang, Ting Gao, Chongyang Han, Jie Li, Ying Zhang, Xiaoyi Wang, Jianwu Wang and Yuanjiao Feng
Sensors 2022, 22(5), 1822; https://doi.org/10.3390/s22051822 - 25 Feb 2022
Cited by 3 | Viewed by 1858
Abstract
The application of agricultural robots can free up labor. Improving a robot's sensing system is a prerequisite for making it work. At present, more research is conducted on the weeding and harvesting systems of field robots, and less on crop disease and insect pest perception, nutritional element diagnosis, and precision fertilizer spraying systems. In this study, the effects of the nitrogen application rate on the absorption and accumulation of nitrogen, phosphorus, and potassium in sweet maize were determined. Firstly, linear, parabolic, exponential, and logarithmic diagnostic models of nitrogen, phosphorus, and potassium contents were constructed from spectral characteristic variables. Secondly, partial least squares regression and neural network nonlinear diagnosis models of nitrogen, phosphorus, and potassium contents were constructed from the high-frequency wavelet sensitivity coefficients of binary wavelet decomposition. The results show that the neural network nonlinear diagnosis model based on these coefficients performs best. The R2, MRE, and NRMSE of the neural network models were 0.974, 1.65%, and 0.0198 for nitrogen; 0.969, 9.02%, and 0.1041 for phosphorus; and 0.821, 2.16%, and 0.0301 for potassium, respectively. The model can provide growth monitoring for sweet corn and a perception model for the nutrient element perception system of an agricultural robot, while making preliminary preparations for intelligent and accurate field fertilization.

22 pages, 10756 KiB  
Article
Soft Array Surface-Changing Compound Eye
by Yu Wu, Chuanshuai Hu, Yingming Dai, Wenkai Huang, Hongquan Li and Yuming Lan
Sensors 2021, 21(24), 8298; https://doi.org/10.3390/s21248298 - 11 Dec 2021
Viewed by 2040
Abstract
The field-of-view (FOV) of compound eyes is an important index for performance evaluation. Most artificial compound eyes are optical, fabricated by imitating insect compound eyes, and have a fixed FOV that is difficult to adjust over a wide range. The compound eye is of great significance for tracking high-speed moving objects. However, the tracking ability of a compound eye is often limited by its FOV size and by the reaction speed of the rudder unit matched with it, so the compound eye cannot adapt well to tracking high-speed moving objects. Inspired by the eyes of many organisms, we propose a soft-array, surface-changing compound eye (SASCE). Taking soft aerodynamic models (SAM) as the carrier and infrared sensors as the load, the basic model of the variable structure infrared compound eye (VSICE) is established using an array of infrared sensors on the carrier. The VSICE model is driven by air pressure to change the array surface of the infrared sensors. The spatial position of each sensor and its viewing area are thereby changed and, finally, the FOV of the compound eye is changed. To validate the theory, we measured the air pressure, the spatial sensor positions, and the FOV of the compound eye. Compared with current compound eyes, the proposed one has a wider adjustable FOV.

20 pages, 7494 KiB  
Article
The Impact of Attention Mechanisms on Speech Emotion Recognition
by Shouyan Chen, Mingyan Zhang, Xiaofen Yang, Zhijia Zhao, Tao Zou and Xinqi Sun
Sensors 2021, 21(22), 7530; https://doi.org/10.3390/s21227530 - 12 Nov 2021
Cited by 17 | Viewed by 2294
Abstract
Speech emotion recognition (SER) plays an important role in real-time applications of human–machine interaction. Attention mechanisms are widely used to improve the performance of SER, but the rules governing their applicability are rarely discussed in depth. This paper discusses the difference between Global-Attention and Self-Attention and explores their applicable rules for SER classifier construction. The experimental results show that Global-Attention can improve the accuracy of the sequential model, while Self-Attention can improve the accuracy of the parallel model, when constructing the model with a CNN and an LSTM. With this knowledge, a classifier (CNN-LSTM×2+Global-Attention model) for SER is proposed. Experiments show that it achieves an accuracy of 85.427% on the EMO-DB dataset.
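To make the Global-Attention idea concrete: over a recurrent encoder's output sequence, it scores each time step against a single query vector, softmaxes the scores, and pools the states with the resulting weights. A minimal pure-Python sketch of that pooling (our own dot-product-scoring illustration, not the paper's exact layer):

```python
import math

def global_attention_pool(states, query):
    """Softmax-weighted pooling of a sequence of state vectors.

    `states` is a list of equal-length vectors (one per time step);
    `query` is a global query vector of the same length. Each state is
    scored by its dot product with the query, the scores are softmaxed
    (with max-subtraction for numerical stability), and the weighted
    sum of the states is returned as the context vector.
    """
    scores = [sum(s_i * q_i for s_i, q_i in zip(s, query)) for s in states]
    m = max(scores)
    exps = [math.exp(sc - m) for sc in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(states[0])
    return [sum(w * s[d] for w, s in zip(weights, states)) for d in range(dim)]
```

Self-Attention differs in that every time step acts as its own query against all the others, producing one context vector per step rather than a single pooled vector.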

14 pages, 972 KiB  
Communication
CACLA-Based Trajectory Tracking Guidance for RLV in Terminal Area Energy Management Phase
by Xuejing Lan, Zhifeng Tan, Tao Zou and Wenbiao Xu
Sensors 2021, 21(15), 5062; https://doi.org/10.3390/s21155062 - 26 Jul 2021
Viewed by 2115
Abstract
This paper focuses on the trajectory tracking guidance problem for the Terminal Area Energy Management (TAEM) phase of a Reusable Launch Vehicle (RLV). Considering the continuous state and action space of this guidance problem, Continuous Actor–Critic Learning Automata (CACLA) is applied to construct the guidance strategy of the RLV. Two three-layer neural networks model the critic and the actor of CACLA, respectively. The weight vector of the critic is updated by the model-free Temporal Difference (TD) learning algorithm, improved with an eligibility trace and a momentum factor. The weight vector of the actor is updated based on the sign of the TD error, and Gaussian exploration is carried out in the actor. Finally, a Monte Carlo simulation and a comparison simulation demonstrate the effectiveness of the CACLA-based guidance strategy.
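The defining CACLA rule, moving the actor toward the explored action only when the TD error is positive, can be sketched with a tabular critic and actor (a toy illustration under our own simplifications; the paper uses neural network function approximators plus an eligibility trace and momentum factor):

```python
def cacla_step(critic, actor, s, s_next, a, r, alpha=0.1, beta=0.1, gamma=0.9):
    """One CACLA update with tabular critic (state -> value) and
    actor (state -> mean action).

    The critic moves along the TD error as in ordinary TD learning.
    The actor's mean action moves toward the explored action `a` only
    when the TD error is positive, i.e. only when the exploration
    outperformed the current value estimate.
    """
    td = r + gamma * critic[s_next] - critic[s]
    critic[s] += alpha * td
    if td > 0:  # CACLA: actor update gated on the sign of the TD error
        actor[s] += beta * (a - actor[s])
    return td
```

In the paper's setting the state is the RLV flight condition, the action is the guidance command, and Gaussian noise around the actor's output supplies the exploration.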

18 pages, 12094 KiB  
Article
A Quadruped Robot with Three-Dimensional Flexible Legs
by Wenkai Huang, Junlong Xiao, Feilong Zeng, Puwei Lu, Guojian Lin, Wei Hu, Xuyu Lin and Yu Wu
Sensors 2021, 21(14), 4907; https://doi.org/10.3390/s21144907 - 19 Jul 2021
Cited by 14 | Viewed by 4145
Abstract
As an important part of a quadruped robot, the leg determines its performance. Flexible legs or flexible joints aid the buffering and adaptability of robots. At present, most flexible quadruped robots only have two-dimensional flexibility or use complex parallel structures to achieve three-dimensional flexibility. This research proposes a new type of three-dimensional flexible structure. Its passive compliance provides three-dimensional flexibility while reducing the weight and structural complexity of the robot. The anti-impact performance of the robot is verified by a side-impact experiment. Simulations and experiments show that the robot maintains good stability even under a simple algorithm and that the flexible leg can reduce impacts on the quadruped robot and improve its environmental adaptability.

18 pages, 4812 KiB  
Communication
Dual-Motor Synchronization Control Design Based on Adaptive Neural Networks Considering Full-State Constraints and Partial Asymmetric Dead-Zone
by Chunhong Jin, Mingjie Cai and Zhihao Xu
Sensors 2021, 21(13), 4261; https://doi.org/10.3390/s21134261 - 22 Jun 2021
Cited by 2 | Viewed by 1882
Abstract
This paper proposes a command filtering backstepping (CFB) scheme with full-state constraints, introducing time-varying barrier Lyapunov functions (T-BLFs), for a dual-motor servo system with a partial asymmetric dead-zone. Firstly, for the convenience of the controller design, the conventional partial asymmetric dead-zone model, being non-smooth, was replaced with a new smooth differentiable model. Secondly, neural networks (NNs) were utilized to approximate the nonlinearity in the dead-zone model, improving the control performance. In addition, CFB was utilized to deal with the inherent computational explosion problem of the traditional backstepping method, and an error compensation mechanism was introduced to further reduce the filtering errors. Then, by applying the T-BLF to the CFB process, the states of the system never violate the prescribed constraints, and all signals in the dual-motor servo system are bounded. The tracking error and synchronization error converge to a small desired neighborhood of the origin. In the end, the effectiveness of the proposed control scheme was verified through simulations.
