Artificial Intelligence for Sensing and Robotic Systems

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensors and Robotics".

Deadline for manuscript submissions: closed (17 March 2023) | Viewed by 15328

Special Issue Editors


Prof. Dr. Homayoun Najjaran
Guest Editor
Faculty of Engineering and Computer Science, University of Victoria, Victoria, BC V8P 5C2, Canada
Interests: artificial intelligence; sensor fusion; machine learning; computer vision with applications in unmanned vehicles, robotics, and industrial automation

Dr. Kashish Gupta
Guest Editor
Department of Mechanical Engineering, University of Victoria, Victoria, BC V8P 5C2, Canada
Interests: reinforcement learning; unsupervised learning; machine learning; computer vision; robotics; machine perception; unmanned aerial vehicles; deep learning

Dr. Zengjie Zhang
Guest Editor
School of Engineering, The University of British Columbia, Kelowna, BC V1V 1V7, Canada
Interests: robust control; system filtering; fault detection and isolation; reinforcement learning; supervised learning; robotic manipulation and planning; networked control; risk evaluation

Special Issue Information

Dear Colleagues,

This Special Issue, “Artificial Intelligence for Sensing and Robotic Systems”, aims to collect original research contributions on the application of artificial intelligence (AI) to alleviate challenges in the domain of robotics and automation. Powered by the latest developments in machine learning, data science, natural language processing, and intelligent robotics, AI has demonstrated its potential to liberate humans from tedious work and hazardous environments through powerful self-reasoning mechanisms.

The Special Issue invites novel papers on efficient AI concepts, methods, and prototypes dedicated to enhancing practical applications of AI. Its broad scope covers promising topics and potential technologies intended to solve challenging problems for sensory devices in mechatronics, robotics, and automation systems, including, but not limited to:

  1. Computer vision, signal processing, environment modeling and reconstruction, fault detection and isolation, natural language processing, and other sensing technologies using machine learning methods;
  2. Perception, planning, control, navigation, and manipulation of robots and uncrewed systems using learning-based and data-driven methods.

Prof. Dr. Homayoun Najjaran
Dr. Kashish Gupta
Dr. Zengjie Zhang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deep learning
  • reinforcement learning
  • machine learning
  • artificial intelligence
  • computer vision
  • natural language processing
  • quality assurance
  • simultaneous localization and mapping (SLAM)
  • motion planning
  • robotics
  • intelligent systems
  • uncrewed systems

Published Papers (6 papers)


Research

27 pages, 20456 KiB  
Article
(MARGOT) Monocular Camera-Based Robot Grasping Strategy for Metallic Objects
by Carlos Veiga Almagro, Renato Andrés Muñoz Orrego, Álvaro García González, Eloise Matheson, Raúl Marín Prades, Mario Di Castro and Manuel Ferre Pérez
Sensors 2023, 23(11), 5344; https://doi.org/10.3390/s23115344 - 5 Jun 2023
Cited by 1 | Viewed by 1307
Abstract
Robotic handling of objects is not always a trivial assignment, even in teleoperation, where in most cases it can lead to stressful labor for operators. To reduce the task difficulty, supervised motions could be performed in safe scenarios to lower the workload in these non-critical steps using machine learning and computer vision techniques. This paper describes a novel grasping strategy based on a geometrical analysis that extracts diametrically opposite points while taking surface smoothing into account (even for target objects with highly complex shapes) to guarantee uniform grasping. It uses a monocular camera (space restrictions often require laparoscopic cameras integrated into the tools) to recognize and isolate targets from the background, estimate their spatial coordinates, and provide the most stable grasping points for both featured and featureless objects. It copes with the reflections and shadows produced by light sources, which require extra effort to extract geometrical properties, on scientific equipment in unstructured facilities such as nuclear power plants or particle accelerators. Based on the experimental results, a specialized dataset improved the detection of metallic objects in low-contrast environments, and the algorithm achieved error rates on the scale of millimeters in the majority of repeatability and accuracy tests.
(This article belongs to the Special Issue Artificial Intelligence for Sensing and Robotic Systems)
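As a rough illustration of the grasping idea summarized above — extracting diametrically opposite contour points after surface smoothing — the following Python sketch operates on a binary object mask with OpenCV. It is our reconstruction of the concept, not the authors' MARGOT code; the smoothing window and the chord-scoring weight are illustrative assumptions.

```python
# Minimal sketch: antipodal grasp-point extraction from a binary object mask.
# Our illustration of the idea, not the authors' MARGOT implementation.
import cv2
import numpy as np

def antipodal_grasp_points(mask: np.ndarray):
    """Pick two roughly diametrically opposite points on the object contour."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea).squeeze(1).astype(float)  # (N, 2)

    # Surface smoothing: a moving average along the contour suppresses the
    # jagged edges caused by reflections and shadows on metallic surfaces.
    k = np.ones(11) / 11.0
    smooth = np.stack([np.convolve(contour[:, i], k, mode="same") for i in (0, 1)], axis=1)

    centroid = smooth.mean(axis=0)
    # Score every chord (O(N^2), fine for a sketch): long chords whose midpoint
    # lies near the centroid approximate a diametrically opposite pair.
    length = np.linalg.norm(smooth[:, None, :] - smooth[None, :, :], axis=2)
    mid_err = np.linalg.norm((smooth[:, None, :] + smooth[None, :, :]) / 2 - centroid, axis=2)
    i, j = np.unravel_index(np.argmax(length - 4.0 * mid_err), length.shape)
    return smooth[i], smooth[j]
```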

18 pages, 6958 KiB  
Article
Target Detection-Based Control Method for Archive Management Robot
by Cheng Yan, Jieqi Ren, Rui Wang, Yaowei Chen and Jie Zhang
Sensors 2023, 23(11), 5343; https://doi.org/10.3390/s23115343 - 5 Jun 2023
Viewed by 1228
Abstract
With the increasing demand for efficient archive management, robots have been employed in paper-based archive management for large, unmanned archives. However, the reliability requirements of such systems are high due to their unmanned nature. To address this, this study proposes a servo-controlled robotic arm system with adaptive recognition for handling complex archive box access scenarios. The vision component employs the YOLOv5 algorithm to identify feature regions, sort and filter the data, and estimate the target center position, while the servo control component uses closed-loop control to adjust the arm's posture. The proposed feature region-based sorting and matching algorithm enhances accuracy and reduces the probability of shaking by 1.27% in restricted viewing scenarios. The system is a reliable and cost-effective solution for paper archive access in complex scenarios, and its integration with a lifting device enables the effective storage and retrieval of archive boxes of varying heights. The experimental results demonstrate the effectiveness of the proposed adaptive box access system for unmanned archival storage, with a higher storage success rate than existing commercial archival management robotic systems. Further research should focus on evaluating the system's performance, scalability, and generalizability.
(This article belongs to the Special Issue Artificial Intelligence for Sensing and Robotic Systems)
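The detect-sort-center-servo pipeline in the abstract can be pictured as a minimal proportional visual-servo loop. The sketch below is an assumption-laden illustration, not the authors' system: `detect_boxes` is a hypothetical wrapper around a YOLOv5-style detector returning `(x1, y1, x2, y2, confidence)` tuples, and the gain, image size, and `send_joint_velocity` callback are placeholders.

```python
# Hedged sketch of a detection-driven closed-loop visual servo step.
import numpy as np

K_P = 0.002            # proportional gain, rad/s per pixel (assumed)
IMG_W, IMG_H = 640, 480
TOL_PX = 5             # convergence tolerance in pixels (assumed)

def servo_step(frame, detect_boxes, send_joint_velocity) -> bool:
    """One visual-servo iteration; returns True once the target is centered."""
    boxes = detect_boxes(frame)
    if not boxes:
        send_joint_velocity(np.zeros(2))   # no detection: hold position
        return False
    # Sort/filter: keep the highest-confidence feature region.
    x1, y1, x2, y2, _ = max(boxes, key=lambda b: b[4])
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    # Closed loop: drive the pixel error between the estimated target center
    # and the image center to zero with a proportional correction.
    err = np.array([cx - IMG_W / 2.0, cy - IMG_H / 2.0])
    send_joint_velocity(-K_P * err)
    return bool(np.linalg.norm(err) < TOL_PX)
```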

21 pages, 10361 KiB  
Article
Ball Detection Using Deep Learning Implemented on an Educational Robot Based on Raspberry Pi
by Dominik Keča, Ivan Kunović, Jakov Matić and Ana Sovic Krzic
Sensors 2023, 23(8), 4071; https://doi.org/10.3390/s23084071 - 18 Apr 2023
Cited by 2 | Viewed by 3274
Abstract
RoboCupJunior is a project-oriented competition for primary and secondary school students that promotes robotics, computer science, and programming. Through real-life scenarios, students are encouraged to engage in robotics in order to help people. One of the popular categories is Rescue Line, in which an autonomous robot has to find and rescue victims. The victim is in the shape of a silver ball that reflects light and is electrically conductive. The robot should find the victim and place it in the evacuation zone. Teams mostly detect victims (balls) using random walk or distance sensors. In this preliminary study, we explored the possibility of using a camera, the Hough transform (HT), and deep learning methods for finding and locating balls with the educational mobile robot Fischertechnik with Raspberry Pi (RPi). We trained, tested, and validated the performance of different algorithms (convolutional neural networks for object detection and the U-Net architecture for semantic segmentation) on a handmade dataset of images of balls in different lighting conditions and surroundings. RESNET50 was the most accurate and MOBILENET_V3_LARGE_320 the fastest object detection method, while EFFICIENTNET-B0 was the most accurate and MOBILENET_V2 the fastest semantic segmentation method on the RPi. HT was by far the fastest method but produced significantly worse results. These methods were then implemented on a robot and tested in a simplified environment (one silver ball with white surroundings under different lighting conditions), where HT had the best ratio of speed and accuracy (4.71 s, DICE 0.7989, IoU 0.6651). The results show that microcomputers without GPUs are still too weak for complicated deep learning algorithms in real-time situations, although these algorithms show much higher accuracy in complicated environments.
(This article belongs to the Special Issue Artificial Intelligence for Sensing and Robotic Systems)
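For reference, the Hough-transform baseline that proved fastest on the Raspberry Pi can be sketched in a few lines with OpenCV's `HoughCircles`. The parameter values below are illustrative guesses, not the ones used in the paper.

```python
# Minimal sketch of a Hough-transform circle detector for a reflective ball.
import cv2

def detect_ball(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)  # suppress specular noise from the silver ball
    circles = cv2.HoughCircles(
        gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=50,
        param1=100, param2=30, minRadius=10, maxRadius=120)  # illustrative values
    if circles is None:
        return None
    x, y, r = circles[0][0]  # strongest circle: center (x, y) and radius r
    return int(x), int(y), int(r)
```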

14 pages, 4054 KiB  
Article
Anomaly Detection and Concept Drift Adaptation for Dynamic Systems: A General Method with Practical Implementation Using an Industrial Collaborative Robot
by Renat Kermenov, Giacomo Nabissi, Sauro Longhi and Andrea Bonci
Sensors 2023, 23(6), 3260; https://doi.org/10.3390/s23063260 - 20 Mar 2023
Cited by 1 | Viewed by 2419
Abstract
Industrial collaborative robots (cobots) are known for their ability to operate in dynamic environments and perform many different tasks, since they can be easily reprogrammed. Due to these features, they are widely used in flexible manufacturing processes. Since fault diagnosis methods are generally applied to systems whose working conditions are bounded, problems arise when defining a condition monitoring architecture: absolute criteria for fault analysis are hard to set, and the meaning of detected values is hard to interpret, because the working conditions may vary. The same cobot can easily be programmed to accomplish more than three or four tasks in a single working day. This extreme versatility complicates the definition of strategies for detecting abnormal behavior, because any variation in working conditions can result in a different distribution of the acquired data stream. This phenomenon can be viewed as concept drift (CD), defined as the change in data distribution that occurs in dynamically changing and nonstationary systems. Therefore, in this work, we propose an unsupervised anomaly detection (UAD) method that is capable of operating under CD. The solution identifies whether data changes come from different working conditions (concept drift) or from system degradation (failure) and, at the same time, distinguishes between the two cases. Once a concept drift is detected, the model is adapted to the new conditions, thereby avoiding misinterpretation of the data. The paper concludes with a proof of concept (POC) that tests the proposed method on an industrial collaborative robot.
(This article belongs to the Special Issue Artificial Intelligence for Sensing and Robotic Systems)
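To make the drift-versus-failure distinction concrete, here is a deliberately simplified Python sketch that compares a reference window against a current window of a one-dimensional signal using a two-sample Kolmogorov–Smirnov test and adapts on stable shifts. It illustrates the general idea only: the window sizes, significance level, and the variance-based heuristic for separating drift from degradation are our assumptions, not the paper's method.

```python
# Toy drift-aware detector for a 1-D data stream (illustrative only).
from collections import deque
import numpy as np
from scipy.stats import ks_2samp

class DriftAwareDetector:
    def __init__(self, window=500, alpha=0.01):
        self.ref = deque(maxlen=window)   # distribution under the current concept
        self.cur = deque(maxlen=window)   # most recent observations
        self.alpha = alpha

    def update(self, x: float) -> str:
        if len(self.ref) < self.ref.maxlen:
            self.ref.append(x)            # still collecting the reference window
            return "learning"
        self.cur.append(x)
        if len(self.cur) < self.cur.maxlen:
            return "ok"
        _, p = ks_2samp(np.array(self.ref), np.array(self.cur))
        if p >= self.alpha:
            return "ok"                   # same distribution as the reference
        # Distribution changed: a stable shift suggests a new working condition
        # (concept drift) -> adapt; an erratic one suggests degradation (failure).
        if np.std(self.cur) <= 2 * np.std(self.ref):
            self.ref = deque(self.cur, maxlen=self.ref.maxlen)  # adapt to new concept
            self.cur.clear()
            return "drift-adapted"
        return "anomaly"
```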

23 pages, 12385 KiB  
Article
Integrating Sparse Learning-Based Feature Detectors into Simultaneous Localization and Mapping—A Benchmark Study
by Giuseppe Mollica, Marco Legittimo, Alberto Dionigi, Gabriele Costante and Paolo Valigi
Sensors 2023, 23(4), 2286; https://doi.org/10.3390/s23042286 - 18 Feb 2023
Cited by 3 | Viewed by 1696
Abstract
Simultaneous localization and mapping (SLAM) is one of the cornerstones of autonomous navigation systems in robotics and the automotive industry. Visual SLAM (V-SLAM), which relies on image features such as keypoints and descriptors to estimate the pose transformation between consecutive frames, is a highly efficient and effective approach for gathering environmental information. With the rise of representation learning, feature detectors based on deep neural networks (DNNs) have emerged as an alternative to handcrafted solutions. This work examines the integration of sparse learned features into a state-of-the-art SLAM framework and benchmarks handcrafted and learning-based approaches through in-depth experiments. Specifically, we replace the ORB detector and BRIEF descriptor of the ORB-SLAM3 pipeline with those provided by SuperPoint, a DNN model that jointly computes keypoints and descriptors. Experiments on three publicly available datasets from different application domains were conducted to evaluate the pose estimation performance and resource usage of both solutions.
(This article belongs to the Special Issue Artificial Intelligence for Sensing and Robotic Systems)
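At the API level, the swap the benchmark performs can be pictured as replacing a handcrafted detect-and-describe call with a learned one while keeping the downstream matcher. The Python sketch below is an illustration only — the actual benchmark modifies the ORB-SLAM3 C++ pipeline, and `superpoint_model.infer` is a hypothetical wrapper around a pretrained SuperPoint network.

```python
# Illustration of the handcrafted-vs-learned front-end swap (not the benchmark code).
import cv2

def orb_features(gray):
    # Handcrafted front-end: ORB keypoints with binary descriptors.
    orb = cv2.ORB_create(nfeatures=1000)
    return orb.detectAndCompute(gray, None)

def match(des1, des2, binary=True):
    # ORB descriptors are binary, so Hamming distance applies; SuperPoint
    # descriptors are float vectors and would use L2 distance instead.
    norm = cv2.NORM_HAMMING if binary else cv2.NORM_L2
    bf = cv2.BFMatcher(norm, crossCheck=True)
    return sorted(bf.match(des1, des2), key=lambda m: m.distance)

# A learned front-end replaces only the first step, e.g. (assumed interface):
#   keypoints, descriptors = superpoint_model.infer(gray)
# Matching and pose estimation downstream are unchanged, which is what
# makes the one-for-one swap feasible.
```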

22 pages, 9762 KiB  
Article
YOLO-S: A Lightweight and Accurate YOLO-like Network for Small Target Detection in Aerial Imagery
by Alessandro Betti and Mauro Tucci
Sensors 2023, 23(4), 1865; https://doi.org/10.3390/s23041865 - 7 Feb 2023
Cited by 15 | Viewed by 4549
Abstract
Small target detection is still a challenging task, especially when looking for fast and accurate solutions for mobile or edge applications. In this work, we present YOLO-S, a simple, fast, and efficient network. It exploits a small feature extractor, skip connections (via both bypass and concatenation), and a reshape-passthrough layer to promote feature reuse across the network and to combine low-level positional information with more meaningful high-level information. Performance is evaluated on AIRES, a novel dataset acquired in Europe, and on VEDAI, benchmarking the proposed YOLO-S architecture against four baselines. We also demonstrate that a transitional learning task over a combined dataset based on DOTAv2 and VEDAI can enhance the overall accuracy relative to the more general features transferred from COCO data. YOLO-S is 25% to 50% faster than YOLOv3 and only 15–25% slower than Tiny-YOLOv3, while outperforming YOLOv3 by 15% in accuracy (mAP) on the VEDAI dataset. Simulations on the SARD dataset also prove its suitability for search-and-rescue operations. In addition, YOLO-S has roughly 90% of Tiny-YOLOv3's parameters and half the FLOPs of YOLOv3, making deployment possible for low-power industrial applications.
(This article belongs to the Special Issue Artificial Intelligence for Sensing and Robotic Systems)
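The reshape-passthrough layer mentioned in the abstract is commonly realized as a space-to-depth rearrangement that halves spatial resolution while quadrupling channels, so fine positional detail can be concatenated with deeper features. Below is a minimal PyTorch sketch of that generic operation — our illustration of the technique, not the YOLO-S code, and the tensor shapes are arbitrary examples.

```python
# Generic reshape-passthrough (space-to-depth) layer for feature fusion.
import torch
import torch.nn as nn

class Passthrough(nn.Module):
    """Rearrange a (B, C, H, W) map into (B, 4C, H/2, W/2) without losing pixels."""
    def forward(self, x):
        b, c, h, w = x.shape
        x = x.view(b, c, h // 2, 2, w // 2, 2)
        x = x.permute(0, 1, 3, 5, 2, 4).contiguous()  # gather each 2x2 block into channels
        return x.view(b, c * 4, h // 2, w // 2)

# Fine features reshaped this way can be concatenated with a coarser map:
fine = torch.randn(1, 64, 52, 52)      # low-level, high-resolution features
coarse = torch.randn(1, 256, 26, 26)   # high-level, low-resolution features
fused = torch.cat([Passthrough()(fine), coarse], dim=1)  # -> (1, 512, 26, 26)
```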
