Special Issue "Intelligent Recognition and Detection for Unmanned Systems"

A special issue of Drones (ISSN 2504-446X). This special issue belongs to the section "Drone Design and Development".

Deadline for manuscript submissions: 30 April 2023 | Viewed by 6360

Special Issue Editors

School of Electronics and Information, Northwestern Polytechnical University, Xi’an 710072, China
Interests: intelligent decision and control of UAVs; deep reinforcement learning; uncertain information processing; image processing
Special Issues, Collections and Topics in MDPI journals
School of Software, Northwestern Polytechnical University, Xi'an 710072, China
Interests: deep learning; image restoration; video restoration; computer vision
School of Engineering, London South Bank University, London SE1 0AA, UK
Interests: neural networks and artificial intelligence; machine learning; data science
School of Information and Communications Engineering, Communication University of China, Beijing 100024, China
Interests: computer vision; convolutional neural nets; learning (artificial intelligence); object detection; 5G mobile communication; cache storage; feature extraction; mobile computing; object recognition; Markov processes

Special Issue Information

Dear Colleagues,

Unmanned systems (e.g., drones, robots, and other intelligent platforms) play important roles in many fields, such as disaster relief, intelligent transportation, intelligent medical services, and space exploration. Object recognition and detection have extensive applications in these tasks. However, due to complex application environments, artificial intelligence techniques face challenges in terms of robustness and flexibility. Designing efficient and stable CNNs and other AI algorithms for object recognition and detection in unmanned systems is therefore critical.

Inspired by this, we are hosting this Special Issue to bring together research accomplishments from academia and industry. A further goal is to present the latest results in deep learning for object recognition and detection and to understand how governance strategies can influence the field. We encourage prospective authors to submit distinguished research papers on the subject, covering both theoretical approaches and practical case reviews.

Dr. Bo Li
Dr. Chunwei Tian
Dr. Daqing Chen
Dr. Ming Yan
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Drones is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • object recognition (e.g., image recognition and speech recognition)
  • object detection
  • flexible CNNs
  • deep learning
  • NLP
  • drone
  • smart robot

Published Papers (4 papers)


Research

Article
A Real-Time UAV Target Detection Algorithm Based on Edge Computing
Drones 2023, 7(2), 95; https://doi.org/10.3390/drones7020095 - 30 Jan 2023
Cited by 1 | Viewed by 1057
Abstract
Small UAV target detection plays an important role in maintaining the security of cities and citizens. UAV targets have the characteristics of low-altitude flights, slow speeds, and miniaturization. Taking these characteristics into account, we present a real-time UAV target detection algorithm called Fast-YOLOv4 based on edge computing. By adopting Fast-YOLOv4 in the edge computing platform NVIDIA Jetson Nano, intelligent analysis can be performed on the video to realize the fast detection of UAV targets. However, the current iteration of the edge-embedded detection algorithm has low accuracy and poor real-time performance. To solve these problems, this paper introduces the lightweight networks MobileNetV3, Multiscale-PANet, and soft-merge to improve YOLOv4, thus obtaining the Fast-YOLOv4 model. The backbone of the model uses depth-wise separable convolution and an inverse residual structure to simplify the network's structure and to improve its detection speed. The neck of the model adds a scale fusion branch to improve the feature extraction ability and strengthen small-scale target detection. Then, the predicted-box filtering algorithm uses the soft-merge function to replace the traditionally used NMS (non-maximum suppression). Soft-merge can improve the model's detection accuracy by fusing the information of predicted boxes. Finally, the experimental results show that the mAP (mean average precision) and FPS (frames per second) of Fast-YOLOv4 reach 90.62% and 54 f/s, respectively, in the workstation. On the NVIDIA Jetson Nano platform, the FPS of Fast-YOLOv4 is 2.5 times that of YOLOv4. This improved model performance meets the requirements for real-time detection and thus has theoretical significance and application value.
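The box-fusion idea behind soft-merge can be illustrated with a small sketch. The abstract does not give the authors' exact fusion function, so the weighting scheme below (confidence-weighted coordinate averaging of overlapping boxes) and all names such as `soft_merge` are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box [x1, y1, x2, y2] and an array of boxes."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def soft_merge(boxes, scores, iou_thresh=0.5):
    """Fuse overlapping predicted boxes by confidence-weighted averaging,
    instead of discarding all but the top-scoring box as classical NMS does."""
    boxes = np.asarray(boxes, float)
    scores = np.asarray(scores, float)
    order = scores.argsort()[::-1]           # process highest-confidence first
    boxes, scores = boxes[order], scores[order]
    merged = []
    while len(boxes):
        group = iou(boxes[0], boxes) >= iou_thresh   # includes the box itself
        w = scores[group] / scores[group].sum()      # confidence weights
        merged.append((w[:, None] * boxes[group]).sum(axis=0))
        boxes, scores = boxes[~group], scores[~group]
    return np.array(merged)
```

Whereas NMS would simply drop the lower-scoring overlapping box, this variant lets it pull the surviving box slightly toward its own coordinates, which is the information-fusion effect the abstract describes.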
(This article belongs to the Special Issue Intelligent Recognition and Detection for Unmanned Systems)

Article
Multidomain Joint Learning of Pedestrian Detection for Application to Quadrotors
Drones 2022, 6(12), 430; https://doi.org/10.3390/drones6120430 - 19 Dec 2022
Cited by 1 | Viewed by 1392
Abstract
Pedestrian detection and tracking are critical functions in the application of computer vision for autonomous driving in terms of accident avoidance and safety. Extending the application to drones expands the monitoring space from 2D to 3D but complicates the task. Images captured from various angles pose a great challenge for pedestrian detection, because image features from different angles vary tremendously and the detection performance of deep neural networks deteriorates. In this paper, this multiple-angle issue is treated as a multiple-domain problem, and a novel multidomain joint learning (MDJL) method is proposed to train a deep neural network using drone data from multiple domains. Domain-guided dropout, a critical mechanism in MDJL, is developed to self-organize domain-specific features according to neuron impact scores. After training and fine-tuning the network, the accuracy of the obtained model improved in all the domains. In addition, we also combined the MDJL with Markov decision-process trackers to create a multiobject tracking system for flying drones. Experiments were conducted on many benchmarks, and the proposed method was compared with several state-of-the-art methods. Experimental results show that the MDJL effectively tackles many scenarios and significantly improves tracking performance.
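The domain-guided dropout mechanism can be sketched roughly as follows. The abstract does not define the neuron impact score, so this sketch assumes a precomputed impact matrix (one score per domain and neuron, e.g., the loss increase when a neuron is muted); the function names and the deterministic masking at inference time are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def domain_masks(impact, threshold=0.0):
    """Binary feature masks per domain: keep a neuron for a domain only if
    its impact score exceeds the threshold (impact has shape
    [n_domains, n_neurons])."""
    impact = np.asarray(impact, float)
    masks = (impact > threshold).astype(float)
    # never mute an entire layer: fall back to keeping all neurons
    empty = masks.sum(axis=1) == 0
    masks[empty] = 1.0
    return masks

def forward_with_domain(features, domain_id, masks):
    """Apply the domain-specific mask to a feature vector, so each domain
    uses only the neurons that matter for it."""
    return features * masks[domain_id]
```

The intended effect is that features self-organize into domain-specific subsets: neurons that help one camera angle but not another are silenced only for the domains where they contribute nothing.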
(This article belongs to the Special Issue Intelligent Recognition and Detection for Unmanned Systems)

Article
FEC: Fast Euclidean Clustering for Point Cloud Segmentation
Drones 2022, 6(11), 325; https://doi.org/10.3390/drones6110325 - 27 Oct 2022
Viewed by 1265
Abstract
Segmentation from point cloud data is essential in many applications, such as remote sensing, mobile robots, or autonomous cars. However, the point clouds captured by the 3D range sensor are commonly sparse and unstructured, challenging efficient segmentation. A fast solution for point cloud instance segmentation with small computational demands is lacking. To this end, we propose a novel fast Euclidean clustering (FEC) algorithm which applies a point-wise scheme over the cluster-wise scheme used in existing works. The proposed method avoids traversing every point constantly in each nested loop, which is time- and memory-consuming. Our approach is conceptually simple, easy to implement (40 lines in C++), and runs two orders of magnitude faster than classical segmentation methods while producing high-quality results.
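The point-wise scheme can be sketched in Python: each point is visited once and unioned with its in-tolerance neighbours, rather than repeatedly re-traversing a growing cluster. This is an illustrative sketch of the general idea, not the authors' 40-line C++ implementation, and it uses a brute-force distance matrix where a real implementation would use a spatial index such as a KD-tree:

```python
import numpy as np

def euclidean_cluster(points, tol):
    """Label points so that any two points within `tol` of each other
    (directly or through a chain of neighbours) share a cluster label."""
    points = np.asarray(points, float)
    n = len(points)
    parent = list(range(n))  # union-find forest over point indices

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    # point-wise scan: one pass, each point unions with its neighbours once
    dist = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    for i in range(n):
        for j in np.nonzero(dist[i] <= tol)[0]:
            parent[find(j)] = find(i)

    roots = [find(i) for i in range(n)]
    _, labels = np.unique(roots, return_inverse=True)
    return labels
```

With a KD-tree for the neighbour query, the same union-find pass avoids the nested cluster-growing loops of classical Euclidean cluster extraction.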
(This article belongs to the Special Issue Intelligent Recognition and Detection for Unmanned Systems)

Article
Weld Seam Identification and Tracking of Inspection Robot Based on Deep Learning Network
Drones 2022, 6(8), 216; https://doi.org/10.3390/drones6080216 - 20 Aug 2022
Cited by 1 | Viewed by 1123
Abstract
The weld seams of large spherical tank equipment should be regularly inspected. Autonomous inspection robots can greatly enhance inspection efficiency and save costs. However, the accurate identification and tracking of weld seams by inspection robots remains a challenge. Based on the designed wall-climbing robot, an intelligent inspection robotic system based on deep learning is proposed to achieve weld seam identification and tracking in this study. The inspection robot used mecanum wheels and permanent magnets to adhere to metal walls. In the weld seam identification, Mask R-CNN was used to segment the instance of weld seams. Through image processing combined with the Hough transform, weld paths were extracted with high accuracy. The robotic system efficiently completed the weld seam instance segmentation through training and learning with 2281 weld seam images. Experimental results indicated that the robotic system based on deep learning was faster and more accurate than previous methods: the average time for identifying and calculating weld paths was about 180 ms, and the mask average precision (AP) was about 67.6%. The inspection robot could automatically track seam paths, and the maximum drift angle and offset distance were 3° and 10 mm, respectively. This intelligent weld seam identification system will greatly promote the application of inspection robots.
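The Hough-transform step for turning a segmented seam mask into a line path can be sketched as follows. This is a generic (rho, theta) vote over a binary mask, not the authors' pipeline; the function name and parameters are illustrative:

```python
import numpy as np

def hough_line(mask, n_theta=180):
    """Locate the dominant straight line in a binary mask via a standard
    Hough vote, with lines parameterized as rho = x*cos(theta) + y*sin(theta)."""
    ys, xs = np.nonzero(mask)                 # pixel coordinates of the seam
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    # each mask pixel votes for one rho bin per candidate angle
    rhos = xs[:, None] * np.cos(thetas) + ys[:, None] * np.sin(thetas)
    diag = int(np.ceil(np.hypot(*mask.shape)))  # bound on |rho|
    acc = np.zeros((2 * diag, n_theta), int)
    idx = np.round(rhos).astype(int) + diag     # shift so indices are >= 0
    for t in range(n_theta):
        np.add.at(acc[:, t], idx[:, t], 1)
    r, t = np.unravel_index(acc.argmax(), acc.shape)
    return r - diag, thetas[t]                  # (rho, theta) of strongest line
```

In a full pipeline, the mask would come from the Mask R-CNN segmentation, and the recovered (rho, theta) line would serve as the path the tracking controller follows.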
(This article belongs to the Special Issue Intelligent Recognition and Detection for Unmanned Systems)
