Search Results (38)

Search Parameters:
Keywords = nano aerial vehicles

22 pages, 6496 KiB  
Article
Real-Time Search and Rescue with Drones: A Deep Learning Approach for Small-Object Detection Based on YOLO
by Francesco Ciccone and Alessandro Ceruti
Drones 2025, 9(8), 514; https://doi.org/10.3390/drones9080514 - 22 Jul 2025
Viewed by 669
Abstract
Unmanned aerial vehicles are increasingly used in civil Search and Rescue operations due to their rapid deployment and wide-area coverage capabilities. However, detecting missing persons from aerial imagery remains challenging due to small object sizes, cluttered backgrounds, and limited onboard computational resources, especially when managed by civil agencies. In this work, we present a comprehensive methodology for optimizing YOLO-based object detection models for real-time Search and Rescue scenarios. A two-stage transfer learning strategy was employed using VisDrone for general aerial object detection and Heridal for Search and Rescue-specific fine-tuning. We explored various architectural modifications, including enhanced feature fusion (FPN, BiFPN, PB-FPN), additional detection heads (P2), and modules such as CBAM, Transformers, and deconvolution, analyzing their impact on performance and computational efficiency. The best-performing configuration (YOLOv5s-PBfpn-Deconv) achieved a mAP@50 of 0.802 on the Heridal dataset while maintaining real-time inference on embedded hardware (Jetson Nano). Further tests at different flight altitudes and explainability analyses using EigenCAM confirmed the robustness and interpretability of the model in real-world conditions. The proposed solution offers a viable framework for deploying lightweight, interpretable AI systems for UAV-based Search and Rescue operations managed by civil protection authorities. Limitations and future directions include the integration of multimodal sensors and adaptation to broader environmental conditions.
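
The two-stage transfer-learning strategy described in this abstract can be reproduced in outline with off-the-shelf tooling. The sketch below is a minimal illustration using the ultralytics Python package and a stock checkpoint, not the authors' modified YOLOv5s-PBfpn-Deconv model; the dataset YAML files and hyperparameters are assumptions.

```python
# Minimal sketch of a two-stage transfer-learning workflow (not the authors' exact code):
# stage 1 pre-trains on a general aerial dataset, stage 2 fine-tunes on SAR imagery.
from ultralytics import YOLO

# Stage 1: general aerial object detection. "visdrone.yaml" is a placeholder dataset
# config pointing at locally prepared VisDrone images/labels.
model = YOLO("yolov8n.pt")  # stock small model; the paper uses modified YOLOv5s variants
model.train(data="visdrone.yaml", epochs=50, imgsz=640)

# Stage 2: SAR-specific fine-tuning. "heridal.yaml" is likewise a placeholder;
# a lower learning rate helps preserve the stage-1 features.
model.train(data="heridal.yaml", epochs=100, imgsz=1280, lr0=0.001)

# Export for embedded deployment (e.g., TensorRT on a Jetson-class board).
model.export(format="engine")  # requires a device with TensorRT installed
```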

21 pages, 9976 KiB  
Article
RLRD-YOLO: An Improved YOLOv8 Algorithm for Small Object Detection from an Unmanned Aerial Vehicle (UAV) Perspective
by Hanyun Li, Yi Li, Linsong Xiao, Yunfeng Zhang, Lihua Cao and Di Wu
Drones 2025, 9(4), 293; https://doi.org/10.3390/drones9040293 - 10 Apr 2025
Cited by 1 | Viewed by 2717
Abstract
In Unmanned Aerial Vehicle (UAV) target detection tasks, issues such as missing and erroneous detections frequently occur owing to the small size of the targets and the complexity of the image background. To address these issues, an improved target detection algorithm named RLRD-YOLO, based on You Only Look Once version 8 (YOLOv8), is proposed. First, the backbone network integrates the Receptive Field Attention Convolution (RFCBAMConv) Module, which combines the Convolutional Block Attention Module (CBAM) and Receptive Field Attention Convolution (RFAConv). This integration improves the issue of shared attention weights in receptive field features. It also combines attention mechanisms across both channel and spatial dimensions, enhancing the capability of feature extraction. Subsequently, Large-Scale Kernel Attention (LSKA) is integrated to further optimize the Spatial Pyramid Pooling Fast (SPPF) layer. This enhancement employs a large-scale convolutional kernel to improve the capture of intricate small-target features and minimize background interference. To enhance feature fusion and effectively integrate low-level details with high-level semantic information, the Reparameterized Generalized Feature Pyramid Network (RepGFPN) replaces the original architecture in the neck network. Additionally, a small-target detection layer is added to enhance the model’s ability to perceive small targets. Finally, the detection head is replaced with the Dynamic Head, designed to improve the localization accuracy of small targets in complex scenarios by optimizing for Scale Awareness, Spatial Awareness, and Task Awareness. The experimental results showed that RLRD-YOLO outperformed YOLOv8 on the VisDrone2019 dataset, achieving improvements of 12.2% in mAP@0.5 and 8.4% in mAP@0.5:0.95. It also surpassed other widely used object detection methods. Furthermore, experimental results on the HIT-HAV dataset demonstrate that RLRD-YOLO sustains excellent precision in infrared UAV imagery, validating its generalizability across diverse scenarios. Finally, RLRD-YOLO was deployed and validated on a typical airborne platform, the Jetson Nano, providing reliable technical support for the improvement of detection algorithms in aerial scenarios and their practical applications.
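
For reference, the CBAM attention that RFCBAMConv builds on applies a channel-attention step followed by a spatial-attention step. The PyTorch sketch below is a generic CBAM implementation for illustration only; it is not the RFCBAMConv module from the paper.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Generic Convolutional Block Attention Module: channel attention, then spatial attention."""
    def __init__(self, channels: int, reduction: int = 16, kernel_size: int = 7):
        super().__init__()
        # Channel attention: shared MLP over global average- and max-pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        # Spatial attention: 7x7 conv over channel-wise average and max maps.
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        ca = torch.sigmoid(self.mlp(x.mean(dim=(2, 3))) + self.mlp(x.amax(dim=(2, 3))))
        x = x * ca.view(b, c, 1, 1)
        sa = torch.sigmoid(self.spatial(torch.cat(
            [x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)))
        return x * sa

# Example: attach CBAM after a backbone stage with 256 channels.
feat = torch.randn(1, 256, 80, 80)
print(CBAM(256)(feat).shape)  # torch.Size([1, 256, 80, 80])
```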

8 pages, 2885 KiB  
Proceeding Paper
Resilient Time Dissemination Fusion Framework for UAVs for Smart Cities
by Sorin Andrei Negru, Triyan Pal Arora, Ivan Petrunin, Weisi Guo, Antonios Tsourdos, David Sweet and George Dunlop
Eng. Proc. 2025, 88(1), 5; https://doi.org/10.3390/engproc2025088005 - 17 Mar 2025
Viewed by 438
Abstract
Future smart cities will consist of a heterogeneous environment, including UGVs (Unmanned Ground Vehicles) and UAVs (Unmanned Aerial Vehicles), used for different applications such as last-mile delivery. Considering the vulnerabilities of GNSS (Global Navigation Satellite System) in urban environments, a resilient PNT (Position, Navigation, Timing) solution is needed. A key research question within the PNT community is the capability to deliver a robust and resilient time solution to multiple devices simultaneously. This paper proposes an innovative time dissemination framework that distributes time to multiple users, based on IQuila’s SDN (Software-Defined Network) and quantum random key encryption from Quantum Dice. The time signal is disseminated over a wireless IEEE 802.11ax link through a wireless AP (Access Point) and is received by each user, where a KF (Kalman Filter) enhances the timing resilience of each client in the framework. Each user is equipped with a Jetson Nano board as CC (Companion Computer), a GNSS receiver, an IEEE 802.11ax wireless card, an embedded RTC (Real-Time Clock) system, and a Pixhawk 2.1 as FCU (Flight Control Unit). The paper presents the performance of the fusion framework at Cranfield University’s MUEAVI (Multi-user Environment for Autonomous Vehicle Innovation) facility. Results showed that an alternative timing source can be delivered securely, fulfilling last-mile delivery requirements for aerial platforms and achieving sub-millisecond offset.
(This article belongs to the Proceedings of European Navigation Conference 2024)
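
A Kalman filter for client-side timing resilience of the kind mentioned above typically tracks clock offset and drift from noisy offset measurements. The sketch below is a generic two-state filter for illustration; the update period, noise covariances, and sample values are placeholders, not the authors' settings.

```python
import numpy as np

# Illustrative 2-state Kalman filter (clock offset and drift) used to smooth a disseminated
# time reference; all noise levels here are assumed values, not the paper's.
dt = 1.0                                   # update period [s]
F = np.array([[1.0, dt], [0.0, 1.0]])      # offset_{k+1} = offset_k + drift_k * dt
H = np.array([[1.0, 0.0]])                 # only the offset is measured
Q = np.diag([1e-9, 1e-12])                 # process noise (clock wander) -- assumed
R = np.array([[1e-6]])                     # measurement noise (network jitter) -- assumed

x = np.zeros((2, 1))                       # initial state: zero offset, zero drift
P = np.eye(2)

def kf_step(z: float) -> float:
    """One predict/update cycle given a measured offset z (seconds); returns filtered offset."""
    global x, P
    x = F @ x                              # predict
    P = F @ P @ F.T + Q
    y = np.array([[z]]) - H @ x            # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y                          # update
    P = (np.eye(2) - K @ H) @ P
    return float(x[0, 0])

for z in [2.1e-4, 1.8e-4, 2.4e-4]:         # simulated noisy offset measurements
    print(kf_step(z))
```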

13 pages, 3458 KiB  
Article
Smart Glove: A Cost-Effective and Intuitive Interface for Advanced Drone Control
by Cristian Randieri, Andrea Pollina, Adriano Puglisi and Christian Napoli
Drones 2025, 9(2), 109; https://doi.org/10.3390/drones9020109 - 1 Feb 2025
Cited by 7 | Viewed by 2059
Abstract
Recent years have witnessed the development of human–unmanned aerial vehicle (UAV) interfaces to meet the growing demand for intuitive and efficient solutions in UAV piloting. In this paper, we propose a novel Smart Glove v1.0 prototype for advanced drone gesture control, leveraging key low-cost components such as an Arduino Nano to process data, an MPU6050 to detect hand movements, flexible sensors for easy throttle control, and the nRF24L01 module for wireless communication. The proposed research presents the design methodology and reports flight tests alongside simulation findings to demonstrate the characteristics of Smart Glove v1.0 as an intuitive, responsive, and hands-free gesture-based piloting interface. We aim to make the drone piloting experience more enjoyable and improve ergonomics by adapting to the pilot’s preferred hand position. The overall research project points to a seedbed for future solutions, eventually extending its applications to medicine, space, and the metaverse.
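
The underlying mapping in a glove interface of this kind converts inertial readings into tilt angles and the flex-sensor value into throttle. The Python sketch below is purely illustrative of that mapping (the actual Smart Glove firmware runs on an Arduino Nano), and the full-deflection tilt threshold is a hypothetical value.

```python
import math

def gesture_to_commands(ax: float, ay: float, az: float, flex: float):
    """Map raw accelerometer readings (in g) and a normalized flex-sensor reading (0..1)
    to roll/pitch commands in [-1, 1] and a throttle command in [0, 1].
    Illustrative only; thresholds are hypothetical, not taken from the paper."""
    roll = math.atan2(ay, az)                        # hand tilt left/right [rad]
    pitch = math.atan2(-ax, math.hypot(ay, az))      # hand tilt forward/back [rad]
    max_tilt = math.radians(30.0)                    # assumed full-deflection tilt
    roll_cmd = max(-1.0, min(1.0, roll / max_tilt))
    pitch_cmd = max(-1.0, min(1.0, pitch / max_tilt))
    throttle_cmd = max(0.0, min(1.0, flex))          # flex sensor drives throttle directly
    return roll_cmd, pitch_cmd, throttle_cmd

print(gesture_to_commands(0.1, -0.3, 0.95, 0.6))
```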

20 pages, 9894 KiB  
Article
Estimation of Strawberry Canopy Volume in Unmanned Aerial Vehicle RGB Imagery Using an Object Detection-Based Convolutional Neural Network
by Min-Seok Gang, Thanyachanok Sutthanonkul, Won Suk Lee, Shiyu Liu and Hak-Jin Kim
Sensors 2024, 24(21), 6920; https://doi.org/10.3390/s24216920 - 28 Oct 2024
Cited by 1 | Viewed by 1314
Abstract
Estimating canopy volumes of strawberry plants can be useful for predicting yields and establishing advanced management plans. Therefore, this study evaluated the spatial variability of strawberry canopy volumes using a ResNet50V2-based convolutional neural network (CNN) model trained with RGB images acquired through manual flights of an unmanned aerial vehicle (UAV) equipped with a digital color camera. A preprocessing method based on the You Only Look Once v8 Nano (YOLOv8n) object detection model was applied to correct image distortions caused by fluctuating flight altitude under manual maneuvering. The CNN model was trained using actual canopy volumes measured using a cylindrical case and small expanded polystyrene (EPS) balls to account for internal plant spaces. Estimated canopy volumes using the CNN with flight altitude compensation closely matched the canopy volumes measured with EPS balls (nearly a 1:1 relationship). The model achieved a slope, coefficient of determination (R²), and root mean squared error (RMSE) of 0.98, 0.98, and 74.3 cm³, respectively, corresponding to an 84% improvement over the conventional paraboloid shape approximation. In the application tests, a canopy volume map of the entire strawberry field was generated, highlighting the spatial variability of the plants’ canopy volumes, which is crucial for implementing site-specific management of strawberry crops.
(This article belongs to the Special Issue Feature Papers in Smart Agriculture 2024)
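
A regression model of the kind described above can be sketched with a ResNet50V2 backbone and a single continuous output. The Keras snippet below is a minimal illustration under assumed settings (224×224 crops, Adam optimizer, MSE loss); it is not the authors' training pipeline.

```python
import tensorflow as tf

# Minimal sketch of a ResNet50V2-based regressor predicting one value (canopy volume, cm^3)
# per cropped plant image; input size and optimizer settings are assumptions.
base = tf.keras.applications.ResNet50V2(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3), pooling="avg")

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1),                 # single continuous output: canopy volume
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="mse",
              metrics=[tf.keras.metrics.RootMeanSquaredError()])

# x_train: (N, 224, 224, 3) plant crops (e.g., from a YOLOv8n detector);
# y_train: measured volumes. Hypothetical arrays, shown for usage only:
# model.fit(x_train, y_train, epochs=50, batch_size=16, validation_split=0.2)
```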

24 pages, 3579 KiB  
Article
Prototype for Multi-UAV Monitoring–Control System Using WebRTC
by Fatih Kilic, Mainul Hassan and Wolfram Hardt
Drones 2024, 8(10), 551; https://doi.org/10.3390/drones8100551 - 5 Oct 2024
Cited by 3 | Viewed by 3109
Abstract
Most unmanned aerial vehicle (UAV) ground control station (GCS) solutions today are either web-based or native applications, primarily designed to support a single UAV. In this paper, our research aims to provide an open, universal framework intended for rapid prototyping, addressing these objectives by developing a Web Real-Time Communication (WebRTC)-based multi-UAV monitoring and control system for applications such as automated power line inspection (APOLI). The APOLI project focuses on identifying damage and faults in power line insulators through real-time image processing, video streaming, and flight data monitoring. The implementation is divided into three main parts. First, we configure UAVs for hardware-accelerated streaming using the GStreamer framework on the NVIDIA Jetson Nano companion board. Second, we develop the server-side application to receive hardware-encoded video feeds from the UAVs by utilizing a WebRTC media server. Lastly, we develop a web application that facilitates communication between clients and the server, allowing users with different authorization levels to access video feeds and control the UAVs. The system supports three user types: pilot/admin, inspector, and customer. Our research aims to leverage the WebRTC media server framework to develop a web-based GCS solution capable of managing multiple UAVs with low latency. The proposed solution enables real-time video streaming and flight data collection from multiple UAVs to a server, which is displayed in a web application interface hosted on the GCS. This approach ensures efficient inspection for applications like APOLI while prioritizing UAV safety during critical scenarios. Another advantage of the solution is its integration compatibility with platforms such as cloud services and native applications, as well as the modularity of the plugin-based architecture offered by the Janus WebRTC server for future development.
(This article belongs to the Special Issue Conceptual Design, Modeling, and Control Strategies of Drones-II)
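
Hardware-accelerated streaming on a Jetson Nano is typically expressed as a GStreamer pipeline using NVIDIA's JetPack plugins. The sketch below shows one plausible camera-to-RTP pipeline under stated assumptions: the host address, port, and bitrate are placeholders, and the authors' actual pipeline feeding the Janus WebRTC server may differ.

```python
import shlex
import subprocess

# One plausible hardware-accelerated pipeline on a Jetson Nano: CSI camera -> NVENC H.264 -> RTP/UDP.
# Requires a Jetson with NVIDIA's GStreamer plugins; server endpoint and bitrate are placeholders.
pipeline = (
    "gst-launch-1.0 nvarguscamerasrc ! "
    "'video/x-raw(memory:NVMM),width=1280,height=720,framerate=30/1' ! "
    "nvv4l2h264enc bitrate=4000000 insert-sps-pps=true ! "
    "h264parse ! rtph264pay config-interval=1 pt=96 ! "
    "udpsink host=192.0.2.10 port=8004"   # placeholder media-server endpoint
)
subprocess.run(shlex.split(pipeline), check=True)
```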

23 pages, 30652 KiB  
Article
EUAVDet: An Efficient and Lightweight Object Detector for UAV Aerial Images with an Edge-Based Computing Platform
by Wanneng Wu, Ao Liu, Jianwen Hu, Yan Mo, Shao Xiang, Puhong Duan and Qiaokang Liang
Drones 2024, 8(6), 261; https://doi.org/10.3390/drones8060261 - 13 Jun 2024
Cited by 11 | Viewed by 3036
Abstract
Crafting an edge-based real-time object detector for unmanned aerial vehicle (UAV) aerial images is challenging because of the limited computational resources and the small size of detected objects. Existing lightweight object detectors often prioritize speed over detecting extremely small targets. To better balance this trade-off, this paper proposes an efficient and low-complexity object detector for edge computing platforms deployed on UAVs, termed EUAVDet (Edge-based UAV Object Detector). Specifically, an efficient feature downsampling module and a novel multi-kernel aggregation block are first introduced into the backbone network to retain more feature details and capture richer spatial information. Subsequently, an improved feature pyramid network with a faster ghost module is incorporated into the neck network to fuse multi-scale features with fewer parameters. Experimental evaluations on the VisDrone, SeaDronesSeeV2, and UAVDT datasets demonstrate the effectiveness and plug-and-play capability of our proposed modules. Compared with the state-of-the-art YOLOv8 detector, the proposed EUAVDet achieves better performance in nearly all the metrics, including parameters, FLOPs, mAP, and FPS. The smallest version of EUAVDet (EUAVDet-n) contains only 1.34 M parameters and achieves over 20 fps on the Jetson Nano. Our algorithm strikes a better balance between detection accuracy and inference speed, making it suitable for edge-based UAV applications.
(This article belongs to the Special Issue Advances in Perception, Communications, and Control for Drones)
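
The "ghost module" idea referenced in the neck network generates part of the feature maps with cheap depthwise operations instead of full convolutions. The PyTorch sketch below is a generic GhostNet-style module for illustration, not the EUAVDet variant.

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Generic GhostNet-style module: a primary 1x1 conv produces half the output channels,
    and a cheap depthwise 3x3 conv generates the remaining ("ghost") features."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        init_ch = out_ch // 2
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, init_ch, 1, bias=False),
            nn.BatchNorm2d(init_ch), nn.ReLU(inplace=True))
        self.cheap = nn.Sequential(
            nn.Conv2d(init_ch, out_ch - init_ch, 3, padding=1, groups=init_ch, bias=False),
            nn.BatchNorm2d(out_ch - init_ch), nn.ReLU(inplace=True))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)

x = torch.randn(1, 64, 40, 40)
print(GhostModule(64, 128)(x).shape)  # torch.Size([1, 128, 40, 40])
```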

18 pages, 2112 KiB  
Data Descriptor
CrazyPAD: A Dataset for Assessing the Impact of Structural Defects on Nano-Quadcopter Performance
by Kamil Masalimov, Tagir Muslimov, Evgeny Kozlov and Rustem Munasypov
Data 2024, 9(6), 79; https://doi.org/10.3390/data9060079 - 13 Jun 2024
Cited by 1 | Viewed by 1829
Abstract
This article presents a novel dataset focused on structural damage in quadcopters, addressing a significant gap in unmanned aerial vehicle (UAV or drone) research. The dataset is called CrazyPAD (Crazyflie Propeller Anomaly Data) after the Crazyflie 2.1 nano-quadcopter used to collect the data. Despite the existence of datasets on UAV anomalies and behavior, none of them covers structural damage specifically in nano-quadcopters. Our dataset, therefore, provides critical data for developing predictive models for defect detection in nano-quadcopters. This work details the data collection methodology, involving rigorous simulations of structural damage and its effects on UAV performance. The ultimate goal is to enhance UAV safety by enabling accurate defect diagnosis and predictive maintenance, contributing substantially to the field of UAV technology and its practical applications.

15 pages, 5909 KiB  
Article
An Unmanned Aerial Vehicle Indoor Low-Computation Navigation Method Based on Vision and Deep Learning
by Tzu-Ling Hsieh, Zih-Syuan Jhan, Nai-Jui Yeh, Chang-Yu Chen and Cheng-Ta Chuang
Sensors 2024, 24(1), 190; https://doi.org/10.3390/s24010190 - 28 Dec 2023
Cited by 4 | Viewed by 1911
Abstract
Recently, unmanned aerial vehicles (UAVs) have found extensive indoor applications. In numerous indoor UAV scenarios, navigation paths remain consistent. While many indoor positioning methods offer excellent precision, they often demand significant costs and computational resources. Furthermore, such high functionality can be superfluous for these applications. To address this issue, we present a cost-effective, computationally efficient solution for path following and obstacle avoidance. The UAV employs a down-looking camera for path following and a front-looking camera for obstacle avoidance. This paper refines the carrot-chasing algorithm for line tracking and introduces our novel line-fitting path-following algorithm (LFPF). Both algorithms competently manage indoor path-following tasks within a constrained field of view. However, the LFPF is superior at adapting to light variations and maintaining a consistent flight speed, keeping its error margin within ±40 cm in real flight scenarios. For obstacle avoidance, we utilize depth images and YOLOv4-tiny to detect obstacles, subsequently implementing suitable avoidance strategies based on the type and proximity of these obstacles. Real-world tests indicated minimal computational demands, enabling the Nvidia Jetson Nano, an entry-level computing platform, to operate at 23 FPS.
(This article belongs to the Special Issue Advances in CMOS-MEMS Devices and Sensors)
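
The general line-fitting idea behind a down-looking path follower is to fit a line to the path pixels in each frame and derive a lateral offset and heading error for the controller. The sketch below illustrates that idea with OpenCV; it is not the authors' LFPF implementation, and the threshold and pixel-count values are placeholders.

```python
import cv2
import numpy as np

def line_following_errors(frame_bgr: np.ndarray):
    """Fit a line to dark path pixels in a down-looking camera frame and return
    (lateral_offset_px, heading_error_rad). Illustrative only; thresholds are placeholders."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 80, 255, cv2.THRESH_BINARY_INV)   # dark line on light floor
    ys, xs = np.nonzero(mask)
    if len(xs) < 50:                                                 # not enough line pixels
        return None
    pts = np.column_stack([xs, ys]).astype(np.float32)
    vx, vy, x0, y0 = cv2.fitLine(pts, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
    if vy > 0:                                # make the direction vector point "up" the image
        vx, vy = -vx, -vy
    h, w = gray.shape
    # Lateral offset: horizontal distance from the image center to the fitted line at mid-height.
    t = (h / 2 - y0) / vy if abs(vy) > 1e-6 else 0.0
    lateral_offset = (x0 + t * vx) - w / 2
    heading_error = np.arctan2(vx, -vy)       # 0 when the line runs straight up the image
    return float(lateral_offset), float(heading_error)

# Usage: the two errors would feed simple proportional controllers for lateral velocity and yaw rate.
```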

18 pages, 2479 KiB  
Article
Implementation of an Edge-Computing Vision System on Reduced-Board Computers Embedded in UAVs for Intelligent Traffic Management
by Sergio Bemposta Rosende, Sergio Ghisler, Javier Fernández-Andrés and Javier Sánchez-Soriano
Drones 2023, 7(11), 682; https://doi.org/10.3390/drones7110682 - 20 Nov 2023
Cited by 15 | Viewed by 4414
Abstract
Advancements in autonomous driving have seen unprecedented improvement in recent years. This work addresses the challenge of enhancing the navigation of autonomous vehicles in complex urban environments such as intersections and roundabouts through the integration of computer vision and unmanned aerial vehicles (UAVs). UAVs, owing to their aerial perspective, offer a more effective means of detecting vehicles involved in these maneuvers. The primary objective is to develop, evaluate, and compare different computer vision models and reduced-board (and low-power) hardware for optimizing traffic management in these scenarios. A dataset was constructed using two sources, several models (YOLOv5 and YOLOv8, DETR, and EfficientDet-Lite) were selected and trained, four reduced-board computers were chosen (Raspberry Pi 3B+ and 4, Jetson Nano, and Google Coral), and the models were tested on these boards for edge computing in UAVs. The experiments considered training times (with the dataset and its optimized version), model metrics were obtained, inference frames per second (FPS) were measured, and energy consumption was quantified. After the experiments, it was observed that the combination that best suits our use case is the YOLOv8 model with the Jetson Nano. On the other hand, a combination with much higher inference speed but lower accuracy involves the EfficientDet-Lite models with the Google Coral board.
(This article belongs to the Special Issue Edge Computing and IoT Technologies for Drones)
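
Throughput comparisons of this kind usually reduce to timing repeated inference over a fixed set of frames. The sketch below is a minimal FPS measurement loop; `model` and `frames` are placeholders for whichever detector and board combination is being benchmarked, not objects from the paper.

```python
import time

def measure_fps(model, frames, warmup: int = 10) -> float:
    """Average inference FPS of a callable detector over pre-loaded frames.
    `model` and `frames` are hypothetical placeholders for any detector/board under test."""
    for f in frames[:warmup]:          # warm-up runs are excluded from timing
        model(f)
    start = time.perf_counter()
    for f in frames:
        model(f)
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed

# Example (hypothetical objects): fps = measure_fps(yolo_model, test_frames); print(f"{fps:.1f} FPS")
```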

23 pages, 141288 KiB  
Article
Feasibility of Detecting Sweet Potato (Ipomoea batatas) Virus Disease from High-Resolution Imagery in the Field Using a Deep Learning Framework
by Fanguo Zeng, Ziyu Ding, Qingkui Song, Jiayi Xiao, Jianyu Zheng, Haifeng Li, Zhongxia Luo, Zhangying Wang, Xuejun Yue and Lifei Huang
Agronomy 2023, 13(11), 2801; https://doi.org/10.3390/agronomy13112801 - 13 Nov 2023
Cited by 2 | Viewed by 4368
Abstract
The sweet potato is an essential food and economic crop that is often threatened by the devastating sweet potato virus disease (SPVD), especially in developing countries. Traditional laboratory-based direct detection methods and field scouting are commonly used to rapidly detect SPVD. However, these molecular-based methods are costly and disruptive, while field scouting is subjective, labor-intensive, and time-consuming. In this study, we propose a deep learning-based object detection framework to assess the feasibility of detecting SPVD from ground and aerial high-resolution images. We proposed a novel object detector called SPVDet, as well as a lightweight version called SPVDet-Nano, using a single-level feature. These detectors were prototyped based on a small-scale publicly available benchmark dataset (PASCAL VOC 2012) and compared to mainstream feature pyramid object detectors using a leading large-scale publicly available benchmark dataset (MS COCO 2017). The learned model weights from this dataset were then transferred to fine-tune the detectors and directly analyze our self-made SPVD dataset encompassing one category and 1074 objects, incorporating the slicing aided hyper inference (SAHI) technology. The results showed that SPVDet outperformed both its single-level counterparts and several mainstream feature pyramid detectors. Furthermore, the introduction of SAHI techniques significantly improved the detection accuracy of SPVDet by 14% in terms of mean average precision (mAP) in both ground and aerial images, and yielded the best detection accuracy of 78.1% from close-up perspectives. These findings demonstrate the feasibility of detecting SPVD from ground and unmanned aerial vehicle (UAV) high-resolution images using the deep learning-based SPVDet object detector proposed here. They also have great implications for broader applications in high-throughput phenotyping of sweet potatoes under biotic stresses, which could accelerate the screening process for genetic resistance against SPVD in plant breeding and provide timely decision support for production management.
(This article belongs to the Section Precision and Digital Agriculture)
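
The slicing aided hyper inference step can be reproduced in outline with the open-source sahi package, which tiles a high-resolution image, runs the detector on each slice, and merges the predictions. In the sketch below, the model type, weight path, slice size, and overlap are placeholders (the paper's SPVDet is a custom detector, not a stock model).

```python
from sahi import AutoDetectionModel
from sahi.predict import get_sliced_prediction

# Sliced inference with the sahi package; a stock YOLOv8 checkpoint stands in for the paper's
# SPVDet, and the weight path, slice size, and overlap ratios are illustrative placeholders.
detection_model = AutoDetectionModel.from_pretrained(
    model_type="yolov8", model_path="spvd_detector.pt",   # hypothetical fine-tuned weights
    confidence_threshold=0.25, device="cuda:0")

result = get_sliced_prediction(
    "field_image.jpg",                 # high-resolution ground or UAV image
    detection_model,
    slice_height=512, slice_width=512,
    overlap_height_ratio=0.2, overlap_width_ratio=0.2)

for pred in result.object_prediction_list:
    print(pred.category.name, pred.score.value, pred.bbox.to_xyxy())
```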

18 pages, 6706 KiB  
Article
Detection of Volatile Organic Compounds (VOCs) in Indoor Environments Using Nano Quadcopter
by Aline Mara Oliveira, Aniel Silva Morais, Gabriela Vieira Lima, Rafael Monteiro Jorge Alves Souza and Luis Cláudio Oliveira-Lopes
Drones 2023, 7(11), 660; https://doi.org/10.3390/drones7110660 - 6 Nov 2023
Cited by 3 | Viewed by 2671
Abstract
The dispersion of chemical gases poses a threat to human health, animals, and the environment. Leaks or accidents during the handling of samples and laboratory materials can result in the uncontrolled release of hazardous or explosive substances. Therefore, it is crucial to monitor gas concentrations in environments where these substances are manipulated. Gas sensor technology has evolved rapidly in recent years, offering increasingly precise and reliable solutions. However, there are still challenges to be overcome, especially when sensors are deployed on unmanned aerial vehicles (UAVs). This article discusses the use of UAVs to locate gas sources and presents real test results using the SGP40 metal oxide semiconductor gas sensor onboard the Crazyflie 2.1 nano quadcopter. The solution proposed in this article uses an odor source identification strategy, employing a gas distribution mapping approach in a three-dimensional environment. The aim of the study was to investigate the feasibility and effectiveness of this approach for detecting gases in areas that are difficult to access or dangerous for humans. The results obtained show that the use of drones equipped with gas sensors is a promising alternative for the detection and monitoring of gas leaks in closed environments.
(This article belongs to the Special Issue Advances in Detection, Security, and Communication for UAV)
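
Gas distribution mapping in 3D generally turns point-wise readings along the flight path into a concentration estimate on a voxel grid via kernel-weighted averaging. The sketch below is a generic illustration of that idea; the grid resolution, kernel width, and sample values are placeholders, not the paper's parameters.

```python
import numpy as np

def gas_map_3d(positions, readings, grid_shape=(20, 20, 10), cell=0.25, sigma=0.4):
    """Kernel-weighted 3D gas distribution map from point samples.
    positions: (N, 3) sample locations [m]; readings: (N,) sensor values.
    Grid resolution and kernel width are illustrative placeholders."""
    centers = np.stack(np.meshgrid(
        *[(np.arange(n) + 0.5) * cell for n in grid_shape], indexing="ij"), axis=-1)
    flat = centers.reshape(-1, 3)
    conc = np.zeros(len(flat))
    weight = np.zeros(len(flat))
    for p, r in zip(np.asarray(positions, dtype=float), np.asarray(readings, dtype=float)):
        w = np.exp(-np.sum((flat - p) ** 2, axis=1) / (2 * sigma ** 2))  # Gaussian kernel
        conc += w * r
        weight += w
    return (conc / np.maximum(weight, 1e-9)).reshape(grid_shape)

# Synthetic samples along a short flight path (arbitrary raw sensor units).
pos = np.array([[1.0, 1.0, 0.5], [1.5, 1.2, 0.8], [2.0, 1.5, 1.0]])
val = np.array([120.0, 300.0, 180.0])
print(gas_map_3d(pos, val).shape)  # (20, 20, 10)
```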

22 pages, 7980 KiB  
Article
3D Gas Sensing with Multiple Nano Aerial Vehicles: Interference Analysis, Algorithms and Experimental Validation
by Chiara Ercolani, Wanting Jin and Alcherio Martinoli
Sensors 2023, 23(20), 8512; https://doi.org/10.3390/s23208512 - 17 Oct 2023
Cited by 4 | Viewed by 1910
Abstract
Within the scope of the ongoing efforts to fight climate change, the application of multi-robot systems to environmental mapping and monitoring missions is a prominent approach aimed at increasing exploration efficiency. However, the application of such systems to gas sensing missions has yet to be extensively explored and presents some unique challenges, mainly due to the hard-to-sense and expensive-to-model nature of gas dispersion. For this paper, we explored the application of a multi-robot system composed of rotary-winged nano aerial vehicles to a gas sensing mission. We qualitatively and quantitatively analyzed the interference between different robots and its effect on their sensing performance. We then assessed this effect by deploying several algorithms for 3D gas sensing, with increasing levels of coordination, in a state-of-the-art wind tunnel facility. The results show that multi-robot gas sensing missions can be robust against the documented interference and degradation in sensing performance. We additionally highlight the competitiveness of multi-robot strategies in gas source localization performance under tight mission time constraints.
(This article belongs to the Special Issue Robotics for Environment Sensing)

19 pages, 13632 KiB  
Article
An Approach to the Implementation of a Neural Network for Cryptographic Protection of Data Transmission at UAV
by Ivan Tsmots, Vasyl Teslyuk, Andrzej Łukaszewicz, Yurii Lukashchuk, Iryna Kazymyra, Andriy Holovatyy and Yurii Opotyak
Drones 2023, 7(8), 507; https://doi.org/10.3390/drones7080507 - 2 Aug 2023
Cited by 10 | Viewed by 2539
Abstract
An approach to the implementation of a neural network for real-time cryptographic data protection with symmetric keys, oriented toward embedded systems, is presented. This approach is valuable, especially for onboard communication systems in unmanned aerial vehicles (UAVs), because of its suitability for hardware implementation. In this study, we evaluate the possibility of building such a system as a hardware implementation on an FPGA. An information technology for real-time neuro-like cryptographic data protection with symmetric keys (masking codes, neural network architecture, and matrix of weighting coefficients), oriented toward onboard implementation, has been developed. Through the pre-calculation of the matrices of weighting coefficients and the tables of macro-partial products, the tabular-algorithmic implementation of neuro-like elements, and the dynamic change of keys, it provides increased cryptographic stability and allows hardware–software implementation on FPGA. The table-algorithmic method of calculating the scalar product has been improved. By bringing the weighting coefficients to the greatest common order, pre-computing the tables of macro-partial products, and using memory-read, fixed-point addition, and shift operations instead of floating-point multiplication and addition, it reduces both the hardware cost of the implementation and the calculation time. Using a processor core supplemented with specialized hardware modules for calculating the scalar product, a real-time neural network cryptographic data protection system has been developed, which, by combining universal and specialized approaches in software and hardware, ensures the effective implementation of neuro-like algorithms for real-time cryptographic encryption and decryption of data. The specialized hardware for neural network cryptographic data encryption was developed in VHDL in the Quartus II development environment ver. 13.1 with the appropriate libraries, was implemented on an FPGA of the Cyclone III family (EP3C16F484C6), and requires 3053 logic elements and 745 registers. The execution time of a purely software realization of the neural network cryptographic data encryption procedure on a NanoPi Duo microcomputer, based on the Allwinner H2+ SoC with Cortex-A7 cores, was about 20 ms. The hardware–software implementation of the encryption, taking into account the pre-calculations and settings, requires about 1 ms, including hardware encryption of four 2-bit inputs on the FPGA, which is performed in 160 nanoseconds.
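
The table-algorithmic scalar product follows distributed-arithmetic ideas: weights are brought to a common fixed-point scale and a table of partial sums is precomputed, so the dot product reduces to look-ups, shifts, and additions. The Python sketch below is a generic illustration for four 2-bit inputs (the input width mentioned above); the weights and scaling are illustrative, not the authors' FPGA design.

```python
# Generic distributed-arithmetic scalar product: precompute a table of weight subset sums so
# that y = sum_i w_i * x_i over four 2-bit inputs needs only look-ups, shifts, and adds.
# Weights and fixed-point scale are illustrative placeholders.

weights = [0.75, -0.5, 1.25, 0.25]
SCALE = 1 << 8                                   # common fixed-point scale (Q8) -- assumed
w_fixed = [round(w * SCALE) for w in weights]

# Table indexed by a 4-bit pattern: bit i set means input i contributes w_i at this bit plane.
table = [sum(w_fixed[i] for i in range(4) if (pattern >> i) & 1) for pattern in range(16)]

def scalar_product_2bit(xs):
    """Dot product of 4 weights with four 2-bit unsigned inputs (values 0..3),
    processed one input bit plane at a time."""
    acc = 0
    for bit in range(2):                         # inputs are 2 bits wide
        pattern = sum(((x >> bit) & 1) << i for i, x in enumerate(xs))
        acc += table[pattern] << bit             # shift instead of multiply
    return acc / SCALE                           # back to a real-valued result

xs = [3, 1, 2, 0]
print(scalar_product_2bit(xs), sum(w * x for w, x in zip(weights, xs)))  # both equal 4.25
```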

28 pages, 3005 KiB  
Article
Multi-UAV Mapping and Target Finding in Large, Complex, Partially Observable Environments
by Violet Walker, Fernando Vanegas and Felipe Gonzalez
Remote Sens. 2023, 15(15), 3802; https://doi.org/10.3390/rs15153802 - 30 Jul 2023
Cited by 2 | Viewed by 2393
Abstract
Coordinating multiple unmanned aerial vehicles (UAVs) for the purposes of target finding or surveying points of interest in large, complex, and partially observable environments remains an area of exploration. This work proposes a modeling approach and software framework for multi-UAV search and target finding within large, complex, and partially observable environments. Mapping and path-solving are carried out by an extended NanoMap library; the global planning problem is defined as a decentralized partially observable Markov decision process and solved using an online model-based solver, and the local control problem is defined as two separate partially observable Markov decision processes that are solved using deep reinforcement learning. Simulated testing demonstrates that the proposed framework enables multiple UAVs to search and target-find within large, complex, and partially observable environments.