Mobile Robot Perception: A Themed Issue in Honor of Professor Roland Siegwart

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensors and Robotics".

Deadline for manuscript submissions: closed (15 March 2023) | Viewed by 15166

Special Issue Editor


Prof. Dr. Cédric Pradalier
Guest Editor
International Research Lab. Georgia Tech-CNRS UMI 2958, Metz, France
Interests: mobile robots; field robotics; environmental monitoring

Special Issue Information

Dear colleagues,

Roland Siegwart is Professor of Autonomous Mobile Robots at ETH Zurich, founding co-director of the technology transfer center Wyss Zurich, and co-founder and board member of multiple high-tech companies. He is a well-known expert in the field of autonomous robot design, perception, and navigation. His career as a scientist and promoter of startup companies spans more than 35 years.

While studying mechanical engineering at a time when the first personal computers were just beginning to evolve, he became strongly attracted to the opportunities offered by fast-evolving microprocessors, digital control systems, and novel sensors. After gaining experience in the digital control of active magnetic bearings during his PhD, he caught the robotics virus and entered this fast-evolving field at the interface of mechanical engineering, electrical engineering, and computer science. During his career as a professor at EPFL (1996–2006) and ETH Zurich (since 2006), he has been at the forefront of the development of flying robots (multicopters, solar airplanes, hybrids) and walking robots. Furthermore, he has established strong leadership in the field of autonomous navigation with lidars and cameras.

Roland Siegwart is a recipient of the IEEE RAS Pioneer Award and the IEEE RAS Inaba Technical Award, an IEEE Fellow, and an Officer of the International Federation of Robotics Research (IFRR). With an h-index of 112 (Google Scholar), he is among the most cited and influential scientists in robotics worldwide. He serves on the editorial boards of multiple robotics journals and was a general chair of several robotics conferences, including IROS 2002, AIM 2007, FSR 2007, ISRR 2009, and CoRL 2018. He is also a strong promoter of innovation and entrepreneurship and a co-founder of more than half a dozen spin-off companies.

This Special Issue is dedicated to celebrating the career of Prof. Roland Siegwart in honor of his contributions in the field of autonomous mobile robots. It will cover a selection of recent research and review articles related to the science and technology of mobile robot perception and sensor-based localization and mapping.

You may choose our Joint Special Issue in Robotics.

Prof. Dr. Cédric Pradalier
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Visual-inertial localization and SLAM
  • Event-based cameras for robot navigation
  • Feature-based navigation
  • LiDAR–vision fusion
  • 3D environment perception and understanding
  • Semantic SLAM
  • Field robotics
  • Walking robots
  • Miniature robots
  • Visual navigation for UAVs

Published Papers (7 papers)


Research

17 pages, 2436 KiB  
Article
Laser-Based Door Localization for Autonomous Mobile Service Robots
by Steffen Müller, Tristan Müller, Aamir Ahmed and Horst-Michael Gross
Sensors 2023, 23(11), 5247; https://doi.org/10.3390/s23115247 - 31 May 2023
Cited by 1 | Viewed by 1313
Abstract
For autonomous mobile service robots, closed doors that are in their way are restricting obstacles. In order to open doors with on-board manipulation skills, a robot needs to be able to localize the door’s key features, such as the hinge and handle, as well as the current opening angle. While there are vision-based approaches for detecting doors and handles in images, we concentrate on analyzing 2D laser range scans. This requires less computational effort, and laser-scan sensors are available on most mobile robot platforms. Therefore, we developed three different machine learning approaches and a heuristic method based on line fitting, all able to extract the required position data. The algorithms are compared with respect to localization accuracy with the help of a dataset containing laser range scans of doors. Our LaserDoors dataset is publicly available for academic use. Pros and cons of the individual methods are discussed; in essence, the machine learning methods can outperform the heuristic method but require dedicated training data when applied in a real application. Full article
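The line-fitting heuristic itself is not spelled out in the abstract. As a rough illustration of the underlying idea, the minimal Python sketch below looks for a door-sized gap between consecutive returns in a 2D laser scan; the function names, field of view, and assumed door-width range are illustrative choices, not the paper's implementation.

```python
import numpy as np

def scan_to_points(ranges, fov=np.pi, max_range=5.0):
    """Convert a 2D laser scan (polar ranges over a field of view) to
    Cartesian points, dropping out-of-range returns."""
    angles = np.linspace(-fov / 2, fov / 2, len(ranges))
    valid = ranges < max_range
    return np.stack([ranges[valid] * np.cos(angles[valid]),
                     ranges[valid] * np.sin(angles[valid])], axis=1)

def find_door_gap(points, min_width=0.7, max_width=1.1):
    """Scan consecutive wall points for a jump whose width matches a typical
    door; returns the two edge points (hinge- and handle-side candidates)."""
    for p, q in zip(points[:-1], points[1:]):
        width = np.linalg.norm(q - p)
        if min_width <= width <= max_width:
            return p, q
    return None
```

A real detector would first fit wall lines (e.g., with RANSAC) and reason about the opening angle; this sketch only conveys the geometric core.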

16 pages, 18328 KiB  
Article
Evaluating Autonomous Urban Perception and Planning in a 1/10th Scale MiniCity
by Noam Buckman, Alex Hansen, Sertac Karaman and Daniela Rus
Sensors 2022, 22(18), 6793; https://doi.org/10.3390/s22186793 - 08 Sep 2022
Cited by 5 | Viewed by 1975
Abstract
We present the MiniCity, a multi-vehicle evaluation platform for testing perception hardware and software for autonomous vehicles. The MiniCity is a 1/10th scale city consisting of realistic urban scenery, intersections, and multiple fully autonomous 1/10th scale vehicles with state-of-the-art sensors and algorithms. The MiniCity is used to evaluate and test perception algorithms both upstream and downstream in the autonomy stack, in urban driving scenarios such as occluded intersections and avoiding multiple vehicles. We demonstrate the MiniCity’s ability to evaluate different sensor and algorithm configurations for perception tasks such as object detection and localization. For both tasks, the MiniCity platform is used to evaluate the task itself (accuracy in estimating obstacle pose and ego pose in the map) as well as the downstream performance in collision avoidance and lane following, respectively. Full article
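To make the evaluation concrete, here is a minimal sketch of scoring estimated ego poses against ground truth; the array layout ([x, y, yaw] rows) and function name are assumptions for illustration, not part of the MiniCity codebase.

```python
import numpy as np

def pose_errors(est, gt):
    """Position and heading errors between estimated and ground-truth 2D poses,
    each given as an array of [x, y, yaw] rows."""
    pos_err = np.linalg.norm(est[:, :2] - gt[:, :2], axis=1)
    # Wrap heading differences to [-pi, pi] before averaging.
    yaw_err = np.abs(np.angle(np.exp(1j * (est[:, 2] - gt[:, 2]))))
    return pos_err.mean(), pos_err.max(), yaw_err.mean()
```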

29 pages, 35358 KiB  
Article
Detection of Household Furniture Storage Space in Depth Images
by Mateja Hržica, Petra Pejić, Ivana Hartmann Tolić and Robert Cupec
Sensors 2022, 22(18), 6774; https://doi.org/10.3390/s22186774 - 07 Sep 2022
Viewed by 2572
Abstract
Autonomous service robots assisting in homes and institutions should be able to store and retrieve items in household furniture. This paper presents a neural network-based computer vision method for the detection of storage space within storage furniture. The method consists of automatic storage volume detection and annotation within 3D models of furniture, and the automatic generation of a large number of depth images of storage furniture with assigned bounding boxes representing the storage space above the furniture shelves. These scenes are used to train a neural network. The proposed method enables storage space detection in depth images acquired by a real 3D camera. Depth images with annotations of storage space bounding boxes are also a contribution of this paper and are available for further research. The proposed approach represents a novel research topic, and the results show that it is possible to adapt a network originally developed for object detection to detect empty or cluttered storage volumes. Full article
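The abstract reports repurposing an object detection network for storage-volume detection in depth images. One plausible setup is sketched below under stated assumptions (torchvision's Faster R-CNN, depth maps replicated to three channels, two classes, arbitrary hyperparameters); it is not necessarily the network or training regime used in the paper.

```python
import torch
import torchvision

# Two classes: background + "storage space". Training from scratch (weights=None)
# is an assumption, since depth images differ from RGB pretraining data.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=None, num_classes=2)
optimizer = torch.optim.SGD(model.parameters(), lr=5e-3, momentum=0.9)

def depth_to_tensor(depth):
    """Normalize an HxW depth map and repeat it across the 3 channels
    the RGB backbone expects."""
    d = torch.as_tensor(depth, dtype=torch.float32)
    d = (d - d.min()) / (d.max() - d.min() + 1e-6)
    return d.unsqueeze(0).repeat(3, 1, 1)

def train_step(depth_maps, boxes_per_image):
    """One detection-training step on depth scenes with storage-space boxes."""
    images = [depth_to_tensor(d) for d in depth_maps]
    targets = [{"boxes": b, "labels": torch.ones(len(b), dtype=torch.int64)}
               for b in boxes_per_image]
    losses = model(images, targets)  # torchvision returns a dict of losses
    loss = sum(losses.values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```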

17 pages, 6256 KiB  
Article
Recovery Strategy for Overturned Wheeled Vehicle Using a Mobile Robot and Experimental Validation
by Hidetoshi Ikeda, Shinya Atoji, Manami Amemiya, Shingo Tajima, Takayoshi Kitada, Kotaro Fukai and Keisuke Sato
Sensors 2022, 22(16), 5952; https://doi.org/10.3390/s22165952 - 09 Aug 2022
Cited by 1 | Viewed by 1332
Abstract
This paper describes mobile robot tactics for recovering a wheeled vehicle that has overturned. If such a vehicle were to tip over backward off its wheels and be unable to recover itself, especially in areas where it is difficult for humans to enter and work, overall work efficiency could decline significantly, not only because the vehicle is unable to perform its job but also because it becomes an obstacle to other work. Herein, the authors propose a robot-based recovery method that can be used to recover such overturned vehicles, and the authors evaluate its effectiveness. The recovery robot, which uses a mounted manipulator and hand to recover the overturned vehicle, is also equipped with a camera and a personal computer (PC). The ARToolKit software package installed on the PC detects AR markers attached to the overturned vehicle and uses the information they provide to orient the robot for the recovery operations. A statics analysis indicates the feasibility of the proposed method. To facilitate these operations, it is also necessary to know the distance between the robotic hand and the target position for grasping the vehicle. Therefore, a theoretical analysis is conducted, and a control system based on the results is implemented. The experimental results obtained in this study demonstrate the effectiveness of the proposed system. Full article
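ARToolKit is a C/C++ library, so no attempt is made here to reproduce the authors' code; as a rough Python analogue, the sketch below uses OpenCV's ArUco module (OpenCV ≥ 4.7 API) as a stand-in to recover a marker's pose relative to the camera. The marker size, dictionary, and calibration inputs are assumed values.

```python
import cv2
import numpy as np

MARKER_SIZE = 0.10  # marker edge length in metres (assumed)
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

def marker_pose(gray, camera_matrix, dist_coeffs):
    """Detect the first visible marker in a grayscale image and return its
    pose (rotation and translation vectors) relative to the camera."""
    corners, ids, _ = detector.detectMarkers(gray)
    if ids is None:
        return None
    s = MARKER_SIZE / 2
    # Marker corners in marker-local coordinates (top-left, top-right,
    # bottom-right, bottom-left), matching ArUco's detection order.
    obj_pts = np.array([[-s, s, 0], [s, s, 0], [s, -s, 0], [-s, -s, 0]],
                       dtype=np.float32)
    ok, rvec, tvec = cv2.solvePnP(obj_pts, corners[0].reshape(4, 2),
                                  camera_matrix, dist_coeffs)
    return (rvec, tvec) if ok else None
```

The translation vector directly yields the hand-to-target distance that the control system needs.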

14 pages, 3874 KiB  
Article
A Magnetic Crawler System for Autonomous Long-Range Inspection and Maintenance on Large Structures
by Georges Chahine, Pete Schroepfer, Othmane-Latif Ouabi and Cédric Pradalier
Sensors 2022, 22(9), 3235; https://doi.org/10.3390/s22093235 - 22 Apr 2022
Cited by 1 | Viewed by 2999
Abstract
The inspection and maintenance of large-scale industrial structures are major challenges that require time-efficient and reliable solutions to ensure the healthy condition of structures during operation. Autonomous robots may provide a promising solution for this purpose. In particular, they could lead to faster and more reliable inspection and maintenance without direct intervention from human operators. In this paper, we present a custom magnetic crawler system, together with the sensor suite and sensing modalities that enable such robotic operation. We first describe a localization framework based on a mesh created from a point cloud, coupled with Inertial Measurement Unit (IMU) and Ultra-Wide Band (UWB) readings. Next, we introduce a mapping framework that relies on a 3D laser and explain how autonomous navigation and obstacle avoidance can be developed on top of it. Lastly, we present how ultrasonic guided waves (UGWs) are integrated into the system to provide accurate robot localization and structural feature mapping by relying on acoustic reflections in combination with the other systems. It is envisioned that long-range inspection capabilities that are not yet available in current industrial mobile platforms could emerge from the designed robotic system. Full article
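The abstract does not detail the estimator; as a minimal, hedged sketch of IMU/UWB fusion along one axis, the following constant-velocity Kalman filter lets IMU acceleration drive the prediction and a UWB-derived position drive the update. The noise parameters are assumed, and the paper's mesh-based framework is certainly richer than this.

```python
import numpy as np

class UwbImuFuser:
    """1D Kalman filter with state [position, velocity]: IMU acceleration
    propagates the state, UWB positions correct it. Run one instance per axis."""

    def __init__(self, q=0.1, r=0.05):
        self.x = np.zeros(2)
        self.P = np.eye(2)
        self.q, self.r = q, r  # process / measurement noise (assumed values)

    def predict(self, accel, dt):
        F = np.array([[1.0, dt], [0.0, 1.0]])
        self.x = F @ self.x + np.array([0.5 * dt**2, dt]) * accel
        self.P = F @ self.P @ F.T + self.q * np.eye(2)

    def update(self, uwb_pos):
        H = np.array([[1.0, 0.0]])
        S = H @ self.P @ H.T + self.r      # innovation covariance (1x1)
        K = (self.P @ H.T) / S             # Kalman gain (2x1)
        self.x = self.x + (K * (uwb_pos - self.x[0])).ravel()
        self.P = (np.eye(2) - K @ H) @ self.P
```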

19 pages, 3171 KiB  
Article
Contrastive Learning for Image Registration in Visual Teach and Repeat Navigation
by Zdeněk Rozsypálek, George Broughton, Pavel Linder, Tomáš Rouček, Jan Blaha, Leonard Mentzl, Keerthy Kusumam and Tomáš Krajník
Sensors 2022, 22(8), 2975; https://doi.org/10.3390/s22082975 - 13 Apr 2022
Cited by 7 | Viewed by 2008
Abstract
Visual teach and repeat navigation (VT&R) is popular in robotics thanks to its simplicity and versatility. It enables mobile robots equipped with a camera to traverse learned paths without the need to create globally consistent metric maps. Although teach and repeat frameworks have been reported to be relatively robust to changing environments, they still struggle with day-to-night and seasonal changes. This paper aims to find the horizontal displacement between prerecorded and currently perceived images required to steer a robot towards the previously traversed path. We employ a fully convolutional neural network to obtain dense representations of the images that are robust to changes in the environment and variations in illumination. The proposed model achieves state-of-the-art performance on multiple datasets with seasonal and day/night variations. In addition, our experiments show that it is possible to use the model to generate additional training examples that can be used to further improve the original model’s robustness. We also conducted a real-world experiment on a mobile robot to demonstrate the suitability of our method for VT&R. Full article
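To make the displacement estimation concrete: given dense feature maps from the prerecorded and current images, the horizontal shift can be found by cross-correlating column descriptors. The sketch below illustrates that step only; the paper's trained network head and any normalization it uses are not reproduced here.

```python
import numpy as np

def horizontal_shift(feat_a, feat_b):
    """Estimate the horizontal displacement between two dense feature maps of
    shape (C, H, W) by cross-correlating their column descriptors."""
    a = feat_a.mean(axis=1)                    # (C, W) column descriptors
    b = feat_b.mean(axis=1)
    a = (a - a.mean()) / (a.std() + 1e-6)      # crude normalization
    b = (b - b.mean()) / (b.std() + 1e-6)
    W = a.shape[1]
    shifts = list(range(-W // 2, W // 2 + 1))
    scores = []
    for s in shifts:
        a_part = a[:, max(0, s):W + min(0, s)]    # overlapping columns of a
        b_part = b[:, max(0, -s):W - max(0, s)]   # aligned columns of b
        scores.append((a_part * b_part).sum() / a_part.shape[1])
    return shifts[int(np.argmax(scores))]
```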

17 pages, 6889 KiB  
Article
Self-Supervised Robust Feature Matching Pipeline for Teach and Repeat Navigation
by Tomáš Rouček, Arash Sadeghi Amjadi, Zdeněk Rozsypálek, George Broughton, Jan Blaha, Keerthy Kusumam and Tomáš Krajník
Sensors 2022, 22(8), 2836; https://doi.org/10.3390/s22082836 - 07 Apr 2022
Cited by 2 | Viewed by 1842
Abstract
The performance of deep neural networks and the low cost of computational hardware have made computer vision a popular choice in many robotic systems. An attractive feature of deep-learned methods is their ability to cope with appearance changes caused by day–night cycles and seasonal variations. However, training deep neural networks typically relies on large numbers of hand-annotated images, which requires significant effort for data collection and annotation. We present a method that allows autonomous, self-supervised training of a neural network in visual teach-and-repeat (VT&R) tasks, where a mobile robot has to traverse a previously taught path repeatedly. Our method is based on a fusion of two image registration schemes: one based on a Siamese neural network and another on point-feature matching. As the robot traverses the taught paths, it uses the results of feature-based matching to train the neural network, which, in turn, provides coarse registration estimates to the feature matcher. We show that as the neural network gets trained, the accuracy and robustness of the navigation increase, making the robot capable of dealing with significant changes in the environment. This method can significantly reduce the data annotation effort when designing new robotic systems or introducing robots into new environments. Moreover, the method provides annotated datasets that can be deployed in other navigation systems. To promote the reproducibility of the research presented herein, we provide our datasets, code, and trained models online. Full article
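As an illustration of the feature-based half of the fusion, the short sketch below uses OpenCV's ORB features to estimate the horizontal offset between a teach image and a repeat image; the median offset of matched keypoints can then serve as a self-supervision target for the network. The feature type and parameters are assumptions, not necessarily those of the paper's pipeline.

```python
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def displacement_label(teach_img, repeat_img):
    """Median horizontal offset (in pixels) of matched ORB keypoints between
    a teach image and a repeat image, or None if matching fails."""
    kp1, des1 = orb.detectAndCompute(teach_img, None)
    kp2, des2 = orb.detectAndCompute(repeat_img, None)
    if des1 is None or des2 is None:
        return None
    matches = matcher.match(des1, des2)
    if not matches:
        return None
    shifts = [kp2[m.trainIdx].pt[0] - kp1[m.queryIdx].pt[0] for m in matches]
    return float(np.median(shifts))
```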
