Robotic Sensing for Smart Cities

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensor Networks".

Deadline for manuscript submissions: closed (31 December 2020) | Viewed by 50165

Special Issue Editors


Prof. Dr. Hyun Myung
Guest Editor
Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
Interests: autonomous robot navigation; SLAM (Simultaneous Localization and Mapping); structural inspection robots; machine learning; AI

Dr. Yang Wang
Guest Editor
Associate Professor, School of Civil and Environmental Engineering & School of Electrical and Computer Engineering (Adjunct), Georgia Institute of Technology, 790 Atlantic Dr NW, Atlanta, GA 30332-0355, USA
Interests: structural health monitoring and damage detection; decentralized structural control; wireless and mobile sensors; structural dynamics

Special Issue Information

Dear Colleague,

For several decades, various sensors and sensing systems have been developed for smart cities or civil infrastructure systems. This Special Issue focuses on state-of-the-art robotics and automation technologies, including construction automation, robotics, instrumentation, monitoring, inspection, control, and rehabilitation for smart cities. It also covers construction informatics supporting the sensing, analysis, and design activities needed to construct and operate a smart and sustainable built environment.

Examples include robotic systems applied to smart cities and equipped with various sensing technologies, such as vision-based sensors, laser sensors, wireless sensors, and multi-sensor fusion. Service robots and robotized devices that increase the usability and safety of the built environment are also good examples.

A Special Issue in these areas will be published in Sensors to disseminate current advances in robotics and automation technologies for smart cities. Papers should contain theoretical and/or experimental results and will be subject to formal review procedures.

The particular topics of interest include but are not limited to:

  • Robotic sensing systems for smart cities or civil infrastructure;
  • Bio-inspired sensing for smart cities or civil infrastructure;
  • IT and robotics for construction automation or construction management;
  • Structural health monitoring or inspection using robotics technologies;
  • Mobile sensor networks for smart cities or civil infrastructure;
  • Damage repair and emergency handling control for smart cities or civil infrastructure;
  • Service robots and robotized devices for the built environment.

Prof. Dr. Hyun Myung
Dr. Yang Wang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Robotic sensing
  • Smart cities
  • Civil infrastructure
  • Structural health monitoring
  • Inspection robots
  • Mobile sensor network
  • Service robots

Published Papers (9 papers)

Editorial

3 pages, 172 KiB  
Editorial
Robotic Sensing and Systems for Smart Cities
by Hyun Myung and Yang Wang
Sensors 2021, 21(9), 2963; https://doi.org/10.3390/s21092963 - 23 Apr 2021
Cited by 3 | Viewed by 1736
Abstract
For several decades, various sensors and sensing systems have been developed for smart cities and civil infrastructure systems [...]
(This article belongs to the Special Issue Robotic Sensing for Smart Cities)

Research

19 pages, 47009 KiB  
Article
Learning-Based Autonomous UAV System for Electrical and Mechanical (E&M) Device Inspection
by Yurong Feng, Kwaiwa Tse, Shengyang Chen, Chih-Yung Wen and Boyang Li
Sensors 2021, 21(4), 1385; https://doi.org/10.3390/s21041385 - 16 Feb 2021
Cited by 15 | Viewed by 3206
Abstract
The inspection of electrical and mechanical (E&M) devices using unmanned aerial vehicles (UAVs) has become an increasingly popular choice in the last decade due to their flexibility and mobility. UAVs have the potential to reduce human involvement in visual inspection tasks, which could increase efficiency and reduce risks. This paper presents a UAV system for autonomously performing E&M device inspection. The proposed system relies on learning-based detection for perception, multi-sensor fusion for localization, and path planning for fully autonomous inspection. The perception method utilizes semantic and spatial information generated by a 2-D object detector; this information is then fused with depth measurements for object state estimation. No prior knowledge about the location or category of the target device is needed. The system design is validated by flight experiments on a quadrotor platform, which show that the proposed system performs the inspection mission autonomously and maintains stable, collision-free flight.
(This article belongs to the Special Issue Robotic Sensing for Smart Cities)
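
To make the fusion step concrete, here is a minimal sketch, not the authors' implementation, of combining a 2-D detection with a depth measurement to estimate an object's position in the camera frame; the function name and the pinhole intrinsics are hypothetical placeholders.

```python
import numpy as np

# Hypothetical pinhole intrinsics (fx, fy, cx, cy); real values come from calibration.
FX, FY, CX, CY = 615.0, 615.0, 320.0, 240.0

def box_to_camera_point(box, depth_m):
    """Back-project the center of a 2-D detection box (u1, v1, u2, v2, pixels)
    into a 3-D point in the camera frame, given a fused depth in meters."""
    u = 0.5 * (box[0] + box[2])   # box center, horizontal pixel coordinate
    v = 0.5 * (box[1] + box[3])   # box center, vertical pixel coordinate
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return np.array([x, y, depth_m])

# Example: a detection centered near the image middle, measured 3.2 m away.
print(box_to_camera_point((300, 220, 340, 260), 3.2))
```

In a pipeline like the one the abstract describes, an estimate of this kind would feed the object state estimator and, through the UAV's localization, the inspection path planner.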

23 pages, 1339 KiB  
Article
Robot Localization in Water Pipes Using Acoustic Signals and Pose Graph Optimization
by Rob Worley, Ke Ma, Gavin Sailor, Michele M. Schirru, Rob Dwyer-Joyce, Joby Boxall, Tony Dodd, Richard Collins and Sean Anderson
Sensors 2020, 20(19), 5584; https://doi.org/10.3390/s20195584 - 29 Sep 2020
Cited by 13 | Viewed by 3571
Abstract
One of the most fundamental tasks for robots inspecting water distribution pipes is localization, which allows for autonomous navigation, for faults to be communicated, and for interventions to be instigated. Pose-graph optimization using spatially varying information is used to enable localization within a feature-sparse length of pipe. We present a novel method for improving estimation of a robot’s trajectory using the measured acoustic field, which is applicable to other measurements such as magnetic field sensing. Experimental results show that the use of acoustic information in pose-graph optimization reduces errors by 39% compared to the use of typical pose-graph optimization using landmark features only. High location accuracy is essential to efficiently and effectively target investment to maximise the use of our aging pipe infrastructure.
(This article belongs to the Special Issue Robotic Sensing for Smart Cities)
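
As a rough illustration of folding spatially varying measurements into pose-graph optimization, the 1-D toy below augments odometry factors with factors that compare acoustic field readings against a known field map and solves the joint least-squares problem with SciPy. All values are synthetic and the formulation is deliberately simplified, not the paper's.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy spatially varying field along the pipe (e.g., acoustic amplitude vs. position).
field = lambda x: np.exp(-0.1 * x) * np.cos(0.8 * x)

odom = np.array([1.0, 1.1, 0.9, 1.0])                 # noisy relative moves (m)
meas = field(np.array([0.0, 1.0, 2.0, 3.0, 4.0]))     # readings taken at the true poses

def residuals(poses):
    r_anchor = [poses[0]]                              # pin the first pose at the origin
    r_odom = (poses[1:] - poses[:-1]) - odom           # odometry factors
    r_field = 5.0 * (field(poses) - meas)              # weighted acoustic-field factors
    return np.concatenate([r_anchor, r_odom, r_field])

x0 = np.concatenate([[0.0], np.cumsum(odom)])          # dead-reckoning initial guess
sol = least_squares(residuals, x0)
print(sol.x)                                           # corrected pose estimates
```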

16 pages, 8942 KiB  
Article
Drivers’ Visual Perception Quantification Using 3D Mobile Sensor Data for Road Safety
by Kanghee Choi, Giyoung Byun, Ayoung Kim and Youngchul Kim
Sensors 2020, 20(10), 2763; https://doi.org/10.3390/s20102763 - 12 May 2020
Cited by 5 | Viewed by 3321
Abstract
To prevent driver accidents in cities, local governments have established policies to limit city speeds and create child protection zones near schools. However, if the same policy is applied throughout a city, it can be difficult to maintain smooth traffic flows. A driver generally obtains visual information while driving, and this information is directly related to traffic safety. In this study, we propose a novel geometric visual model to measure drivers' visual perception and analyze the corresponding information using the line-of-sight method. Three-dimensional point cloud data are used to analyze on-site three-dimensional elements in a city, such as roadside trees and overpasses, which are normally neglected in urban spatial analyses. To investigate drivers' visual perception of roads, we developed an analytic model of three types of visual perception. Using this method, we create a risk-level map according to the degree of drivers' visual perception in Pangyo, South Korea. With the point cloud data from Pangyo, it is possible to analyze actual urban forms such as roadside trees, building shapes, and overpasses, which are normally excluded from spatial analyses that use a reconstructed virtual space.
(This article belongs to the Special Issue Robotic Sensing for Smart Cities)
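
The line-of-sight test at the core of such a model can be sketched as below, assuming a KD-tree built over the point cloud; the clearance and step values are illustrative, not the study's parameters. The sight line is sampled, and it is declared blocked if any cloud point falls within a clearance radius of it.

```python
import numpy as np
from scipy.spatial import cKDTree

def line_of_sight(eye, target, cloud_tree, clearance=0.3, step=0.5):
    """True if no point-cloud obstacle lies within `clearance` meters of the
    sight line from `eye` to `target` (3-D points in meters)."""
    eye, target = np.asarray(eye, float), np.asarray(target, float)
    n = max(int(np.linalg.norm(target - eye) / step), 2)
    # Sample the segment, skipping its ends so the eye/target themselves don't block.
    samples = eye + np.linspace(0.05, 0.95, n)[:, None] * (target - eye)
    dists, _ = cloud_tree.query(samples)
    return bool(np.all(dists > clearance))

cloud = np.random.rand(10000, 3) * 50.0    # stand-in for a real LiDAR point cloud
tree = cKDTree(cloud)
print(line_of_sight([0.0, 0.0, 1.2], [30.0, 5.0, 1.0], tree))
```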

19 pages, 7743 KiB  
Article
Deep-Learning-Based Indoor Human Following of Mobile Robot Using Color Feature
by Redhwan Algabri and Mun-Taek Choi
Sensors 2020, 20(9), 2699; https://doi.org/10.3390/s20092699 - 9 May 2020
Cited by 51 | Viewed by 16867
Abstract
Human following is one of the fundamental functions in human–robot interaction for mobile robots. This paper presents a novel framework with state-machine control in which the robot tracks the target person under occlusion and illumination changes and navigates with obstacle avoidance while following the target to the destination. People are detected and tracked using a deep learning algorithm, the Single Shot MultiBox Detector, and the target person is identified by extracting a color feature using the hue-saturation-value histogram. The robot follows the target safely to the destination using a simultaneous localization and mapping algorithm with a LIDAR sensor for obstacle avoidance. We performed extensive experiments on our human-following approach in an indoor environment with multiple people and moderate illumination changes. Experimental results indicate that the robot followed the target well to the destination, showing the effectiveness and practicability of the proposed system in the given environment.
(This article belongs to the Special Issue Robotic Sensing for Smart Cities)
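
A minimal sketch of the color-feature idea in OpenCV, not the authors' code, with illustrative bin counts and threshold: compute a hue-saturation histogram for each detected person crop and correlate it against the enrolled target's histogram.

```python
import cv2

def hsv_signature(bgr_crop):
    """Hue-saturation histogram of a person crop (e.g., an SSD detection box)."""
    hsv = cv2.cvtColor(bgr_crop, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
    return cv2.normalize(hist, hist)   # scale so crop size does not dominate

def is_target(bgr_crop, target_hist, threshold=0.6):
    """True if the crop's color signature correlates with the enrolled target's."""
    score = cv2.compareHist(hsv_signature(bgr_crop), target_hist, cv2.HISTCMP_CORREL)
    return score > threshold
```

The target would be enrolled once (target_hist = hsv_signature(first_crop)) and re-identified in later frames, which is what lets a tracker of this kind survive brief occlusions.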

20 pages, 10943 KiB  
Article
Mobile Robot Self-Localization with 2D Push-Broom LIDAR in a 2D Map
by Jordi Palacín, David Martínez, Elena Rubies and Eduard Clotet
Sensors 2020, 20(9), 2500; https://doi.org/10.3390/s20092500 - 28 Apr 2020
Cited by 24 | Viewed by 4467
Abstract
This paper proposes mobile robot self-localization based on an onboard 2D push-broom (tilted-down) LIDAR using a reference 2D map previously obtained with a 2D horizontal LIDAR. The hypothesis is that a 2D reference map created with a 2D horizontal LIDAR mounted on a mobile robot or another mobile device can be used by another mobile robot to estimate its location with the same 2D LIDAR tilted down. The motivation for tilting down a 2D LIDAR is the direct detection of holes or small objects on the ground that remain undetected by a fixed horizontal 2D LIDAR. The experimental evaluation of this hypothesis demonstrates that self-localization with a 2D push-broom LIDAR is possible by detecting and deleting the ground and ceiling points from the scan data and projecting the remaining scan points onto the horizontal plane of the 2D reference map before applying a 2D self-localization algorithm. An onboard 2D push-broom LIDAR therefore offers self-localization and accurate ground supervision without requiring an additional motorized device to change the tilt of the LIDAR to combine these two characteristics in a mobile robot.
(This article belongs to the Special Issue Robotic Sensing for Smart Cities)
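
The filtering-and-projection step described in the abstract can be sketched in a few lines; the height thresholds are illustrative, and a real pipeline would first rotate the tilted scan into the robot frame using the known LIDAR mounting angle.

```python
import numpy as np

def pushbroom_to_2d_scan(points_xyz, ground_z=0.05, ceiling_z=2.5):
    """Drop ground and ceiling returns from a tilted-down LIDAR scan (points
    already expressed in the robot frame, meters), then project the remaining
    points onto the horizontal plane for matching against a 2D reference map."""
    z = points_xyz[:, 2]
    keep = (z > ground_z) & (z < ceiling_z)     # discard ground and ceiling hits
    return points_xyz[keep, :2]                 # (x, y) for 2D scan matching

scan = np.random.randn(1000, 3)                 # stand-in for one tilted scan
print(pushbroom_to_2d_scan(scan).shape)
```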

16 pages, 1964 KiB  
Article
Human Interaction Smart Subsystem—Extending Speech-Based Human-Robot Interaction Systems with an Implementation of External Smart Sensors
by Michal Podpora, Arkadiusz Gardecki, Ryszard Beniak, Bartlomiej Klin, Jose Lopez Vicario and Aleksandra Kawala-Sterniuk
Sensors 2020, 20(8), 2376; https://doi.org/10.3390/s20082376 - 22 Apr 2020
Cited by 34 | Viewed by 4865
Abstract
This paper presents a more detailed concept of a Human-Robot Interaction system architecture. One of the main differences between the proposed architecture and others is the methodology of acquiring information about the robot's interlocutor. To obtain as much information as possible before the actual interaction takes place, a custom Internet-of-Things-based sensor subsystem connected to Smart Infrastructure was designed and implemented to support interlocutor identification and the acquisition of initial interaction parameters. The Artificial Intelligence interaction framework of the developed robotic system (including the humanoid Pepper with its sensors and actuators, plus local, remote, and cloud computing services) is extended with custom external subsystems for additional knowledge acquisition: device-based human identification, visual identification, and audio-based interlocutor localization subsystems. These subsystems are introduced in depth and evaluated in this paper, presenting the benefits of integrating them into the robotic interaction system. The paper also includes a more detailed analysis of one of the external subsystems, the Bluetooth Human Identification Smart Subsystem. The idea, use case, and prototype implementation, integrating elements of Smart Infrastructure systems, were evaluated in a small front office of the Weegree company as a test-bed application area.
(This article belongs to the Special Issue Robotic Sensing for Smart Cities)
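
As a hedged sketch of the device-based identification idea, the snippet below matches advertising Bluetooth LE devices against a registry of known interlocutors using the bleak library; the paper's subsystem is not reproduced here and may rely on classic Bluetooth instead, and the registry, RSSI gate, and names are hypothetical.

```python
import asyncio
from bleak import BleakScanner

# Hypothetical registry mapping known device addresses to interlocutor profiles.
KNOWN_DEVICES = {"AA:BB:CC:DD:EE:FF": "visitor_042"}

async def identify_nearby(rssi_floor=-70):
    """Scan for BLE advertisements and return profiles of recognized devices,
    using RSSI as a crude proximity gate before the robot opens the dialogue."""
    found = await BleakScanner.discover(return_adv=True)
    return [KNOWN_DEVICES[dev.address]
            for dev, adv in found.values()
            if dev.address in KNOWN_DEVICES and adv.rssi >= rssi_floor]

print(asyncio.run(identify_nearby()))
```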

18 pages, 44307 KiB  
Article
Accessible Real-Time Surveillance Radar System for Object Detection
by Seongha Park, Yongho Kim, Kyuhwan Lee, Anthony H. Smith, James E. Dietz and Eric T. Matson
Sensors 2020, 20(8), 2215; https://doi.org/10.3390/s20082215 - 14 Apr 2020
Cited by 6 | Viewed by 5860
Abstract
As unmanned ground and aerial vehicles become more accessible and their range of applications widens, including threatening uses that can cause catastrophic harm, surveillance systems for public places are increasingly seen as essential for responding to such threats. We propose a radar system that is less expensive, lighter, safer, and smaller than military-grade radar systems while keeping reasonable capability for monitoring public places. The paper details the iterative process of system design and improvement, with experiments, to realize a system usable for surveillance. The experiments demonstrate the practical use and configuration of the system. Cyber-physical systems for outdoor environments can benefit from the system as a sensor for object detection as well as monitoring.
(This article belongs to the Special Issue Robotic Sensing for Smart Cities)

Other

12 pages, 6436 KiB  
Technical Note
Westdrive X LoopAR: An Open-Access Virtual Reality Project in Unity for Evaluating User Interaction Methods during Takeover Requests
by Farbod N. Nezami, Maximilian A. Wächter, Nora Maleki, Philipp Spaniol, Lea M. Kühne, Anke Haas, Johannes M. Pingel, Linus Tiemann, Frederik Nienhaus, Lynn Keller, Sabine U. König, Peter König and Gordon Pipa
Sensors 2021, 21(5), 1879; https://doi.org/10.3390/s21051879 - 8 Mar 2021
Cited by 8 | Viewed by 5240
Abstract
With the further development of highly automated vehicles, drivers will engage in non-driving-related tasks while being driven. Still, drivers have to take over control when requested by the car. Here, the question arises of how potentially distracted drivers can get back into the control loop quickly and safely when the car requests a takeover. To investigate effective human–machine interactions, a mobile, versatile, and cost-efficient setup is needed. Here, we describe a virtual reality toolkit for the Unity 3D game engine containing all the necessary code and assets to enable fast adaptations to various human–machine interaction experiments, including closely monitoring the subject. The presented project contains all the functionality needed for realistic traffic behavior, cars, pedestrians, and a large, open-source, scriptable, and modular VR environment. The environment covers roughly 25 km² and includes a package of 125 animated pedestrians and numerous vehicles, including motorbikes, trucks, and cars, as well as all the nature assets needed to make it both highly dynamic and realistic. The repository contains a C++ library made for LoopAR that enables force feedback for gaming steering wheels as a fully supported component, and it includes all necessary scripts for eye tracking with the devices used. All the main functions are integrated into the graphical user interface of the Unity® editor or are available as prefab variants to ease the use of the embedded functionalities. The project's primary purpose is to serve as an open-access, cost-efficient toolkit that enables interested researchers to conduct realistic virtual reality studies without costly and immobile simulators. To ensure the accessibility and usability of the toolkit, we performed a user experience report, also included in this paper.
(This article belongs to the Special Issue Robotic Sensing for Smart Cities)
