Search Results (21)

Search Parameters:
Keywords = UAV-MBS deployment

17 pages, 3823 KB  
Article
Lightweight UAV-Based System for Early Fire-Risk Identification in Wild Forests
by Akmalbek Abdusalomov, Sabina Umirzakova, Alpamis Kutlimuratov, Dilshod Mirzaev, Adilbek Dauletov, Tulkin Botirov, Madina Zakirova, Mukhriddin Mukhiddinov and Young Im Cho
Fire 2025, 8(8), 288; https://doi.org/10.3390/fire8080288 - 23 Jul 2025
Cited by 1 | Viewed by 685
Abstract
The escalating occurrence and impacts of wildfires threaten the public, economies, and global ecosystems. Physiologically declining or dead trees account for a large share of ignitions because they have lower moisture content and are more prone to catch fire. Preventing wildfires therefore requires identifying and removing hazardous vegetation early. This work proposes a real-time fire-risk tree detection framework for UAV images based on lightweight object detection. The model uses a MobileNetV3-Small backbone, optimized for edge deployment, combined with an SSD head, yielding a highly optimized and fast UAV-based inference pipeline. The dataset used in this study comprises over 3000 annotated RGB UAV images of trees in healthy, partially dead, and fully dead conditions, collected from mixed real-world forest scenes and public drone imagery repositories. Thorough evaluation shows that the proposed model outperforms conventional SSD and recent YOLO variants on Precision (94.1%), Recall (93.7%), mAP (90.7%), and F1 (91.0%) while remaining lightweight (8.7 MB) and fast (62.5 FPS on a Jetson Xavier NX). These findings strongly support the model’s effectiveness for large-scale continuous forest monitoring to detect health degradation and mitigate wildfire risks proactively. The framework differentiates itself from other UAV-based environmental monitoring systems by treating the balance between detection accuracy, speed, and resource efficiency as a fundamental design principle. Full article
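Single-shot detectors like the SSD head described above produce overlapping candidate boxes that are pruned at inference with IoU-based non-maximum suppression. A minimal NumPy sketch of that generic post-processing step (not the authors' implementation):

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, format [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression; returns indices of kept boxes."""
    order = np.argsort(scores)[::-1]   # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        # drop remaining boxes that overlap the kept box too much
        order = rest[iou(boxes[i], boxes[rest]) < iou_thresh]
    return keep
```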

23 pages, 6358 KB  
Article
Optimization of Sorghum Spike Recognition Algorithm and Yield Estimation
by Mengyao Han, Jian Gao, Cuiqing Wu, Qingliang Cui, Xiangyang Yuan and Shujin Qiu
Agronomy 2025, 15(7), 1526; https://doi.org/10.3390/agronomy15071526 - 23 Jun 2025
Viewed by 459
Abstract
In the natural field environment, the high planting density of sorghum and severe occlusion among spikes substantially increase the difficulty of sorghum spike recognition, resulting in frequent false positives and false negatives. Detection models suited to this environment require high computational power, making real-time detection of sorghum spikes on mobile devices difficult. This study proposes a detection-tracking scheme based on an improved YOLOv8s-GOLD-LSKA model with optimized DeepSort, aiming to enhance yield estimation accuracy in complex agricultural field scenarios. By integrating the GOLD module’s dual-branch multi-scale feature fusion and the LSKA attention mechanism, a lightweight detection model is developed. The improved DeepSort algorithm enhances tracking robustness in occlusion scenarios by optimizing the confidence threshold filtering (0.46), frame-skipping count, and cascade matching strategy (n = 3, max_age = 40). Combined with the five-point sampling method, the average dry weight of sorghum spikes (0.12 kg) was used to enable rapid yield estimation. The results demonstrate that the improved model achieved a mAP of 85.86% (a 6.63% increase over the original YOLOv8), an F1 score of 81.19%, and a model size reduced to 7.48 MB, with a detection speed of 0.0168 s per frame. The optimized tracking system attained a MOTA of 67.96% and ran at 42 FPS. Image- and video-based yield estimation accuracies reached 89–96% and 75–93%, respectively, with single-frame latency as low as 0.047 s. By optimizing the full detection–tracking–yield pipeline, this solution overcomes challenges in missed detections of small objects, ID switches under occlusion, and real-time processing in complex scenarios. Its lightweight, high-efficiency design is well suited for deployment on UAVs and mobile terminals, providing robust technical support for intelligent sorghum monitoring and precision agriculture management, and thereby playing a crucial role in driving agricultural digital transformation. Full article
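The yield step in the pipeline above is simple arithmetic: the tracked spike count is multiplied by the field-sampled average dry weight per spike (0.12 kg, per the abstract). A sketch, where the accuracy metric is assumed to be relative error against a ground-truth weight:

```python
def estimate_yield_kg(spike_count: int, avg_spike_dry_weight_kg: float = 0.12) -> float:
    """Estimate plot yield from a tracked spike count.

    The average dry weight per spike (0.12 kg here, as reported in the
    abstract) would come from five-point field sampling.
    """
    return spike_count * avg_spike_dry_weight_kg

def estimation_accuracy(estimated: float, ground_truth: float) -> float:
    """Relative accuracy of a yield estimate against a ground-truth weight
    (an assumed definition; the paper does not spell out its formula)."""
    return 1.0 - abs(estimated - ground_truth) / ground_truth
```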

21 pages, 1560 KB  
Article
Energy-Efficient Deployment Simulator of UAV-Mounted Base Stations Under Dynamic Weather Conditions
by Gyeonghyeon Min and Jaewoo So
Sensors 2025, 25(12), 3648; https://doi.org/10.3390/s25123648 - 11 Jun 2025
Viewed by 516
Abstract
In unmanned aerial vehicle (UAV)-mounted base station (MBS) networks, user equipment (UE) experiences dynamic channel variations because of the mobility of the UAV and the changing weather conditions. In order to overcome the degradation in the quality of service (QoS) of the UE due to channel variations, it is important to appropriately determine the three-dimensional (3D) position and transmission power of the base station (BS) mounted on the UAV. Moreover, it is also important to account for both geographical and meteorological factors when deploying UAV-MBSs because they service ground UE in various regions and atmospheric environments. In this paper, we propose an energy-efficient UAV-MBS deployment scheme in multi-UAV-MBS networks using a hybrid improved simulated annealing–particle swarm optimization (ISA-PSO) algorithm to find the 3D position and transmission power of each UAV-MBS. Moreover, we developed a simulator for deploying UAV-MBSs, which took the dynamic weather conditions into consideration. The proposed scheme for deploying UAV-MBSs demonstrated superior performance, where it achieved faster convergence and higher stability compared with conventional approaches, making it well suited for practical deployment. The developed simulator integrates terrain data based on geolocation and real-time weather information to produce more practical results. Full article
(This article belongs to the Special Issue Energy-Efficient Communication Networks and Systems: 2nd Edition)
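The hybrid ISA-PSO search at the core of this scheme can be illustrated as a standard PSO update plus a Metropolis-style acceptance step borrowed from simulated annealing. This is a toy sketch with a stand-in objective; the paper's actual objective (energy efficiency over 3D positions and transmit powers under weather-dependent channels) and its specific ISA-PSO variant are not reproduced:

```python
import math
import random

def isa_pso(objective, dim, n_particles=20, iters=200,
            t0=1.0, cooling=0.97, bounds=(-1.0, 1.0), seed=0):
    """Toy hybrid of simulated annealing and particle swarm optimization.

    A plain PSO velocity/position update is kept, but a worse candidate may
    still replace a particle's personal best with Metropolis probability
    exp(-delta / T), which helps escape local optima. All coefficients here
    are common defaults, not the paper's settings.
    """
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    t = t0
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            delta = val - pbest_val[i]
            # SA step: accept a worse personal best with probability exp(-delta/T)
            if delta < 0 or rng.random() < math.exp(-delta / max(t, 1e-9)):
                pbest[i], pbest_val[i] = pos[i][:], val
            if pbest_val[i] < gbest_val:
                gbest, gbest_val = pbest[i][:], pbest_val[i]
        t *= cooling  # cool the acceptance temperature each iteration
    return gbest, gbest_val
```

On a convex test function (e.g. the sphere function) the global best converges quickly; the SA acceptance mainly matters on multimodal objectives like a 3D placement landscape.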

27 pages, 6553 KB  
Article
DEMNet: A Small Object Detection Method for Tea Leaf Blight in Slightly Blurry UAV Remote Sensing Images
by Yating Gu, Yuxin Jing, Hao-Dong Li, Juntao Shi and Haifeng Lin
Remote Sens. 2025, 17(12), 1967; https://doi.org/10.3390/rs17121967 - 6 Jun 2025
Viewed by 790
Abstract
Unmanned aerial vehicles are widely used in agricultural disease detection. Still, slight image blurring caused by lighting, wind, and flight instability often hampers the detection of dense small targets like tea leaf blight spots. In response to this problem, this paper proposes DEMNet, a model based on the YOLOv8n architecture. The goal is to enhance small, blurry object detection performance in UAV-based scenarios. DEMNet introduces a dynamic convolution mechanism into the HGNetV2 backbone to form DynamicHGNetV2, enabling adaptive convolutional weight generation and improving feature extraction for blurry objects. An efficient EMAFPN neck structure further facilitates deep–shallow feature interaction while reducing the computational cost. Additionally, a novel CMLAB module replaces the traditional C2f structure, employing multi-scale convolutions and local attention mechanisms to recover semantic information in blurry regions and better detect densely distributed small targets. Experimental results on a slightly blurry tea leaf blight dataset demonstrate that DEMNet surpasses the baseline by 5.7% in recall and 4.9% in mAP@0.5. Moreover, the model reduces parameters to 1.7 M, computation to 6.1 GFLOPs, and model size to 4.2 MB, demonstrating high accuracy and strong deployment potential. Full article
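The dynamic convolution idea in DynamicHGNetV2 (generating input-adaptive weights by mixing several candidate kernels) can be sketched for the 1×1 case as follows. This is an illustrative stand-in, not the authors' module:

```python
import numpy as np

def dynamic_conv1x1(x, kernels, attn_w):
    """Minimal dynamic 1x1 convolution: mix K candidate kernels with
    input-dependent attention, then apply the mixed kernel.

    x       : (C_in, H, W) feature map
    kernels : (K, C_out, C_in) candidate 1x1 kernels
    attn_w  : (K, C_in) projection producing one logit per candidate kernel
    """
    context = x.mean(axis=(1, 2))               # global average pool -> (C_in,)
    logits = attn_w @ context                   # (K,) one logit per kernel
    a = np.exp(logits - logits.max())
    a /= a.sum()                                # softmax over the K kernels
    w = np.tensordot(a, kernels, axes=1)        # (C_out, C_in) mixed kernel
    return np.einsum("oc,chw->ohw", w, x)       # 1x1 conv = channel mixing
```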

24 pages, 58810 KB  
Article
RML-YOLO: An Insulator Defect Detection Method for UAV Aerial Images
by Zhenrong Deng, Xiaoming Li and Rui Yang
Appl. Sci. 2025, 15(11), 6117; https://doi.org/10.3390/app15116117 - 29 May 2025
Cited by 1 | Viewed by 791
Abstract
The safety of power transmission lines is crucial to public well-being, with insulators being prone to failures such as self-detonation. However, images captured by unmanned aerial vehicles (UAVs) carrying optical sensors often face challenges, including uneven object scales, complex backgrounds, and difficulties in feature extraction due to distance, angles, and terrain. Additionally, conventional models are too large for UAV deployment. To address these issues, this paper proposes RML-YOLO, an improved insulator defect detection method based on YOLOv8. The approach introduces a tiered scale fusion feature (TSFF) module to enhance multi-scale detection accuracy by fusing features across network layers. Additionally, the multi-scale feature extraction network (MSFENet) is designed to prioritize large-scale features while adding an extra detection layer for small objects, improving multi-scale object detection precision. A lightweight multi-scale shared detection head (LMSHead) reduces model size and parameters by sharing features across layers, addressing scale distribution imbalances. Lastly, the receptive field attention channel attention convolution (RFCAConv) module aggregates features from various receptive fields to overcome the limitations of standard convolution. Experiments on the UID, SFID, and VisDrone2019 datasets show that RML-YOLO outperforms YOLOv8n, reducing model size by 0.8 MB and parameters by 500,000, while improving AP by 7.8%, 2.74%, and 3.9%, respectively. These results demonstrate the method’s lightweight design, high detection performance, and strong generalization capability, making it suitable for deployment on UAVs with limited resources. Full article
(This article belongs to the Special Issue Deep Learning in Object Detection)

21 pages, 11638 KB  
Article
YOLOv8-MFD: An Enhanced Detection Model for Pine Wilt Diseased Trees Using UAV Imagery
by Hua Shi, Yonghang Wang, Xiaozhou Feng, Yufen Xie, Zhenhui Zhu, Hui Guo and Guofeng Jin
Sensors 2025, 25(11), 3315; https://doi.org/10.3390/s25113315 - 24 May 2025
Viewed by 909
Abstract
Pine Wilt Disease (PWD) is a highly infectious and lethal disease that severely threatens global pine forest ecosystems and forestry economies. Early and accurate detection of infected trees is crucial to prevent large-scale outbreaks and support timely forest management. However, existing remote sensing-based detection models often struggle with performance degradation in complex environments, as well as a trade-off between detection accuracy and real-time efficiency. To address these challenges, we propose an improved object detection model, YOLOv8-MFD, designed for accurate and efficient detection of PWD-infected trees from UAV imagery. The model incorporates a MobileViT-based backbone that fuses convolutional neural networks with Transformer-based global modeling to enhance feature representation under complex forest backgrounds. To further improve robustness and precision, we integrate a Focal Modulation mechanism to suppress environmental interference and adopt a Dynamic Head to strengthen multi-scale object perception and adaptive feature fusion. Experimental results on a UAV-based forest dataset demonstrate that YOLOv8-MFD achieves a precision of 92.5%, a recall of 84.7%, an F1-score of 88.4%, and a mAP@0.5 of 88.2%. Compared to baseline models such as YOLOv8 and YOLOv10, our method achieves higher accuracy while maintaining acceptable computational cost (11.8 GFLOPs) and a compact model size (10.2 MB). Its inference speed is moderate and still suitable for real-time deployment. Overall, the proposed method offers a reliable solution for early-stage PWD monitoring across large forested areas, enabling more timely disease intervention and resource protection. Furthermore, its generalizable architecture holds promise for broader applications in forest health monitoring and agricultural disease detection. Full article
(This article belongs to the Special Issue Sensor-Fusion-Based Deep Interpretable Networks)

20 pages, 5649 KB  
Article
Edge-Deployed Band-Split Rotary Position Encoding Transformer for Ultra-Low-Signal-to-Noise-Ratio Unmanned Aerial Vehicle Speech Enhancement
by Feifan Liu, Muying Li, Luming Guo, Hao Guo, Jie Cao, Wei Zhao and Jun Wang
Drones 2025, 9(6), 386; https://doi.org/10.3390/drones9060386 - 22 May 2025
Cited by 1 | Viewed by 1211
Abstract
Addressing the significant challenge of speech enhancement in ultra-low-Signal-to-Noise-Ratio (SNR) scenarios for Unmanned Aerial Vehicle (UAV) voice communication, particularly under edge deployment constraints, this study proposes the Edge-Deployed Band-Split Rotary Position Encoding Transformer (Edge-BS-RoFormer), a novel, lightweight band-split rotary position encoding transformer. Existing deep learning methods face limitations in dynamic UAV noise suppression under such constraints, including insufficient harmonic modeling and high computational complexity. The proposed Edge-BS-RoFormer combines a band-split strategy for fine-grained spectral processing, a dual-dimension Rotary Position Encoding (RoPE) mechanism for superior joint time–frequency modeling, and FlashAttention to optimize computational efficiency, which is pivotal to its lightweight nature and robust ultra-low-SNR performance. Experiments on our self-constructed DroneNoise-LibriMix (DN-LM) dataset demonstrate Edge-BS-RoFormer’s superiority. Under a −15 dB SNR, it achieves Scale-Invariant Signal-to-Distortion Ratio (SI-SDR) improvements of 2.2 dB over Deep Complex U-Net (DCUNet), 25.0 dB over the Dual-Path Transformer Network (DPTNet), and 2.3 dB over HTDemucs. Correspondingly, the Perceptual Evaluation of Speech Quality (PESQ) is enhanced by 0.11, 0.18, and 0.15, respectively. Crucially, its efficacy for edge deployment is substantiated by a minimal model storage of 8.534 MB, 11.617 GFLOPs (an 89.6% reduction vs. DCUNet), a runtime memory footprint of under 500 MB, a Real-Time Factor (RTF) of 0.325 (latency: 330.830 ms), and a power consumption of 6.536 W on an NVIDIA Jetson AGX Xavier, fulfilling real-time processing demands. This study delivers a validated lightweight solution, exemplified by its minimal computational overhead and real-time edge inference capability, for effective speech enhancement in complex UAV acoustic scenarios, including dynamic noise conditions. Furthermore, the open-sourced dataset and model contribute to advancing research and establishing standardized evaluation frameworks in this domain. Full article
(This article belongs to the Section Drone Communications)
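Rotary position encoding, which Edge-BS-RoFormer applies along both the time and frequency axes, rotates channel pairs by position-dependent angles so that relative position shows up as a phase difference in attention scores. A single-axis NumPy sketch (the dual-dimension variant applies this twice):

```python
import numpy as np

def rope(x, base=10000.0):
    """Apply rotary position encoding to x of shape (seq_len, dim), dim even.

    Each pair of channels (i, i + dim/2) is rotated by angle pos * base**(-i/half),
    a norm-preserving transform. This is the standard RoPE formulation, not
    the authors' exact implementation.
    """
    seq, dim = x.shape
    half = dim // 2
    freqs = base ** (-np.arange(half) / half)          # per-pair frequencies
    angles = np.arange(seq)[:, None] * freqs[None, :]  # (seq, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=1)
```

Because the transform is a per-pair rotation, it preserves vector norms and leaves position 0 unchanged, which makes it cheap to sanity-check.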

24 pages, 6840 KB  
Article
A Tree Crown Segmentation Approach for Unmanned Aerial Vehicle Remote Sensing Images on Field Programmable Gate Array (FPGA) Neural Network Accelerator
by Jiayi Ma, Lingxiao Yan, Baozhe Chen and Li Zhang
Sensors 2025, 25(9), 2729; https://doi.org/10.3390/s25092729 - 25 Apr 2025
Viewed by 770
Abstract
Tree crown detection in high-resolution UAV forest remote sensing images using computer technology has been widely performed in the last ten years. In forest resource inventory management based on remote sensing data, crown detection is the most important and essential part. Deep learning technology has achieved good results in tree crown segmentation and species classification, but it has relied on high-performance computing platforms, so edge computing and real-time processing could not be realized. In this study, UAV images of coniferous Pinus tabuliformis and broad-leaved Salix matsudana collected at Jingyue Ecological Forest Farm in Changping District, Beijing, are used as datasets, and a lightweight neural network, U-Net-Light, based on U-Net and VGG16 is designed and trained. At the same time, the IP core and SoC architecture of the neural network accelerator are designed and implemented on the Xilinx ZYNQ 7100 SoC platform. The results show that U-Net-Light uses only 1.56 MB of parameters to classify and segment the crown images of the two tree species, reaching an accuracy of 85%. The designed SoC architecture and accelerator IP core achieved a 31× speedup over the ZYNQ hard core and a 1.3× speedup over a high-end CPU (Intel Core i9-10900K). The hardware resource overhead is less than 20% of the total deployment platform, and the total on-chip power consumption is 2.127 W. The shorter prediction time and higher energy efficiency demonstrate the effectiveness and rationality of the architecture design and IP development. This work departs from conventional canopy segmentation methods that rely heavily on ground-based high-performance computing. Instead, it proposes a lightweight neural network model deployed on an FPGA for real-time inference on unmanned aerial vehicles (UAVs), thereby significantly lowering both latency and system resource consumption. The proposed approach demonstrates a degree of innovation and provides meaningful references for the automation and intelligent development of forest resource monitoring and precision agriculture. Full article
(This article belongs to the Section Sensor Networks)

22 pages, 8528 KB  
Article
MSEA-Net: Multi-Scale and Edge-Aware Network for Weed Segmentation
by Akram Syed, Baifan Chen, Adeel Ahmed Abbasi, Sharjeel Abid Butt and Xiaoqing Fang
AgriEngineering 2025, 7(4), 103; https://doi.org/10.3390/agriengineering7040103 - 3 Apr 2025
Cited by 1 | Viewed by 1060
Abstract
Accurate weed segmentation in Unmanned Aerial Vehicle (UAV) imagery remains a significant challenge in precision agriculture due to environmental variability, weak contextual representation, and inaccurate boundary detection. To address these limitations, we propose the Multi-Scale and Edge-Aware Network (MSEA-Net), a lightweight and efficient deep learning framework designed to enhance segmentation accuracy while maintaining computational efficiency. Specifically, we introduce the Multi-Scale Spatial-Channel Attention (MSCA) module to recalibrate spatial and channel dependencies, improving local–global feature fusion while reducing redundant computations. Additionally, the Edge-Enhanced Bottleneck Attention (EEBA) module integrates Sobel-based edge detection to refine boundary delineation, ensuring sharper object separation in dense vegetation environments. Extensive evaluations on publicly available datasets demonstrate the effectiveness of MSEA-Net, achieving a mean Intersection over Union (IoU) of 87.42% on the Motion-Blurred UAV Images of Sorghum Fields dataset and 71.35% on the CoFly-WeedDB dataset, outperforming benchmark models. MSEA-Net also maintains a compact architecture with only 6.74 M parameters and a model size of 25.74 MB, making it suitable for UAV-based real-time weed segmentation. These results highlight the potential of MSEA-Net for improving automated weed detection in precision agriculture while ensuring computational efficiency for edge deployment. Full article
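The Sobel-based edge detection inside the EEBA module refers to the classical gradient operator; a direct NumPy version is below (the module itself wraps this in attention, which is not reproduced here):

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def sobel_magnitude(img):
    """Gradient magnitude via 3x3 Sobel filters ('valid' convolution).

    A sharp vertical or horizontal step edge of unit height yields a
    response of 4 (the kernel's column sum 1 + 2 + 1).
    """
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx = np.sum(patch * SOBEL_X)   # horizontal gradient
            gy = np.sum(patch * SOBEL_Y)   # vertical gradient
            out[i, j] = np.hypot(gx, gy)
    return out
```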

21 pages, 14918 KB  
Article
Real-Time Object Detection from UAV Inspection Videos by Combining YOLOv5s and DeepStream
by Shidun Xie, Guanghong Deng, Baihao Lin, Wenlong Jing, Yong Li and Xiaodan Zhao
Sensors 2024, 24(12), 3862; https://doi.org/10.3390/s24123862 - 14 Jun 2024
Cited by 4 | Viewed by 2919
Abstract
The high-altitude real-time inspection of unmanned aerial vehicles (UAVs) has always been a very challenging task: high-altitude inspections are susceptible to interference from different weather conditions and from communication signals, and the larger field of view results in a smaller object area to be identified. We adopted a method that combines a UAV system scheduling platform with artificial-intelligence object detection to implement automatic UAV inspection. We trained the YOLOv5s model on five categories of vehicle data, reaching mAP50 and mAP50-95 of 93.2% and 71.7%, respectively. The YOLOv5s model size is only 13.76 MB, and the detection time for a single inspection photo is 11.26 ms; it is a relatively lightweight model suitable for deployment on edge devices for real-time detection. In the original DeepStream framework, we set up an HTTP communication protocol for fast startup so that different users can call the service concurrently. In addition, an asynchronous alarm-frame capture function was added, and auxiliary services were set up to quickly resume the video stream after an interruption. We deployed the trained YOLOv5s model on the improved DeepStream framework to implement automatic UAV inspection. Full article
(This article belongs to the Special Issue New Methods and Applications for UAVs)

22 pages, 8871 KB  
Article
Early Drought Detection in Maize Using UAV Images and YOLOv8+
by Shanwei Niu, Zhigang Nie, Guang Li and Wenyu Zhu
Drones 2024, 8(5), 170; https://doi.org/10.3390/drones8050170 - 24 Apr 2024
Cited by 15 | Viewed by 3310
Abstract
The escalating global climate change significantly impacts the yield and quality of maize, a vital staple crop worldwide, especially during seedling-stage droughts. Traditional detection methods are limited by their single-scenario approach, require substantial human labor and time, and lack accuracy in real-time monitoring and precise assessment of drought severity. In this study, a novel early drought detection method for maize based on unmanned aerial vehicle (UAV) images and YOLOv8+ is proposed. In the Backbone section, the C2F-Conv module is adopted to reduce model parameters and deployment costs, while the CA attention mechanism module is incorporated to effectively capture tiny feature information in the images. The Neck section utilizes the BiFPN fusion architecture and a spatial attention mechanism to enhance the model’s ability to recognize small and occluded targets. The Head section introduces an additional 10 × 10 output and integrates the loss functions, enhancing accuracy by 1.46%, reducing training time by 30.2%, and improving robustness. The experimental results demonstrate that the improved YOLOv8+ model achieves precision and recall rates of approximately 90.6% and 88.7%, respectively. The mAP@50 and mAP@50:95 reach 89.16% and 71.14%, respectively, increases of 3.9% and 3.3% over the original YOLOv8. The model detects a UAV image in as little as 24.63 ms, with a model size of 13.76 MB, improvements of 31.6% and 28.8% over the original model, respectively. In comparison with the YOLOv8, YOLOv7, and YOLOv5s models, the proposed method exhibits varying degrees of superiority in mAP@50, mAP@50:95, and other metrics, using drone imagery and deep learning techniques to genuinely advance agricultural modernization. Full article

15 pages, 2522 KB  
Article
Resource Allocation of Multiple Base Stations for Throughput Enhancement in UAV Relay Networks
by Sang Ik Han
Electronics 2023, 12(19), 4053; https://doi.org/10.3390/electronics12194053 - 27 Sep 2023
Cited by 3 | Viewed by 1480
Abstract
An unmanned aerial vehicle (UAV), with the advantages of mobility and easy deployment, serves as a relay node in wireless networks, known as UAV relay networks (URNs), to support user equipment that is out of service range (Uo) or does not have a direct communication link from/to the base station (BS) due to severe blockage. Furthermore, URNs have become crucial for delivering temporary communication services in emergency states or in disaster areas where the infrastructure is destroyed. The literature has explored single transmissions from one BS to a UAV to establish a wireless backhaul link in the URN; however, there exists a possibility of Uo outages due to severe interference from an adjacent BS, causing an overall throughput degradation of user equipment (UE) in the URN. In this paper, to improve the signal-to-interference-plus-noise ratio (SINR) of the wireless backhaul link, avoid an outage of Uo, and guarantee reliable relay transmission, simultaneous transmissions from multiple BSs (e.g., macrocell BSs (mBSs) and small-cell BSs (sBSs)) are considered. An outage probability is analyzed, and an optimal transmit time allocation algorithm is proposed to maximize the throughput of the UE and guarantee reliable relay transmission. Simulation results demonstrate that simultaneous transmissions from multiple BSs in the URN lead to higher throughput and reliable transmission without a Uo outage compared to a single transmission from a single BS (e.g., mBS or sBS), and that the optimization of transmit time allocation is essential in the URN. Full article
(This article belongs to the Section Microwave and Wireless Communications)
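The abstract's core argument, that an adjacent BS is better used as a co-transmitter than left as an interferer, is visible in simple link-budget arithmetic. The received powers below are hypothetical, and coherent combining is idealized as a power sum; the paper's channel and outage model is far richer:

```python
import math

def sinr_db(signal_powers_w, interference_powers_w, noise_w=1e-9):
    """SINR in dB: useful signal powers sum in the numerator, interferers
    add to noise in the denominator. Illustrative arithmetic only."""
    s = sum(signal_powers_w)
    i = sum(interference_powers_w)
    return 10.0 * math.log10(s / (i + noise_w))

# Hypothetical received powers at the UAV relay from the two base stations
p_mbs, p_sbs = 2e-6, 1e-6

single = sinr_db([p_mbs], [p_sbs])    # sBS transmits other data: pure interference
joint = sinr_db([p_mbs + p_sbs], [])  # both BSs transmit the same backhaul signal
```

With these numbers the single-BS link sits near 3 dB (interference-limited), while the joint transmission becomes noise-limited and gains roughly 30 dB.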

16 pages, 7905 KB  
Article
A Lightweight Modified YOLOv5 Network Using a Swin Transformer for Transmission-Line Foreign Object Detection
by Dongsheng Zhang, Zhigang Zhang, Na Zhao and Zhihai Wang
Electronics 2023, 12(18), 3904; https://doi.org/10.3390/electronics12183904 - 15 Sep 2023
Cited by 7 | Viewed by 2396
Abstract
Transmission lines are often located in complex environments and are susceptible to the presence of foreign objects. Failure to promptly address these objects can result in accidents, including short circuits and fires. Existing foreign object detection networks face several challenges, such as high levels of memory consumption, slow detection speeds, and susceptibility to background interference. To address these issues, this paper proposes a lightweight detection network based on deep learning, namely YOLOv5 with an improved version of CSPDarknet and a Swin Transformer (YOLOv5-IC-ST). YOLOv5-IC-ST was developed by incorporating the Swin Transformer into YOLOv5, thereby reducing the impact of background information on the model. Furthermore, the improved CSPDarknet (IC) enhances the model’s feature-extraction capability while reducing the number of parameters. To evaluate the model’s performance, a dataset specific to foreign objects on transmission lines was constructed. The experimental results demonstrate that compared to other single-stage networks such as YOLOv4, YOLOv5, and YOLOv7, YOLOv5-IC-ST achieves superior detection results, with a mean average precision (mAP) of 98.4%, a detection speed of 92.8 frames per second (FPS), and a compact model size of 10.3 MB. These findings highlight that the proposed network is well suited for deployment on embedded devices such as UAVs. Full article

25 pages, 22129 KB  
Article
TLI-YOLOv5: A Lightweight Object Detection Framework for Transmission Line Inspection by Unmanned Aerial Vehicle
by Hanqiang Huang, Guiwen Lan, Jia Wei, Zhan Zhong, Zirui Xu, Dongbo Li and Fengfan Zou
Electronics 2023, 12(15), 3340; https://doi.org/10.3390/electronics12153340 - 4 Aug 2023
Cited by 9 | Viewed by 3727
Abstract
Unmanned aerial vehicles (UAVs) have become an important tool for transmission line inspection, and the inspection images taken by UAVs often contain complex backgrounds and many types of targets, which poses many challenges to object detection algorithms. In this paper, we propose a lightweight object detection framework, TLI-YOLOv5, for transmission line inspection tasks. Firstly, we incorporate the parameter-free attention module SimAM into the YOLOv5 network. This integration enhances the network’s feature extraction capabilities, without introducing additional parameters. Secondly, we introduce the Wise-IoU (WIoU) loss function to evaluate the quality of anchor boxes and allocate various gradient gains to them, aiming to improve network performance and generalization capabilities. Furthermore, we employ transfer learning and cosine learning rate decay to further enhance the model’s performance. The experimental evaluations performed on our UAV transmission line inspection dataset reveal that, in comparison to the original YOLOv5n, TLI-YOLOv5 increases precision by 0.40%, recall by 4.01%, F1 score by 1.69%, mean average precision at 50% IoU (mAP50) by 2.91%, and mean average precision from 50% to 95% IoU (mAP50-95) by 0.74%, while maintaining a recognition speed of 76.1 frames per second and model size of only 4.15 MB, exhibiting attributes such as small size, high speed, and ease of deployment. With these advantages, TLI-YOLOv5 proves more adept at meeting the requirements of modern, large-scale transmission line inspection operations, providing a reliable, efficient solution for such demanding tasks. Full article
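The cosine learning-rate decay named in the abstract is a standard fine-tuning schedule, often paired with transfer learning as here. A sketch with illustrative rates (the paper's actual values are not given):

```python
import math

def cosine_lr(step, total_steps, lr_max=1e-2, lr_min=1e-4):
    """Cosine learning-rate decay: starts at lr_max, ends at lr_min,
    following half a cosine period over total_steps."""
    t = min(step, total_steps) / total_steps
    return lr_min + 0.5 * (lr_max - lr_min) * (1.0 + math.cos(math.pi * t))
```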

14 pages, 5347 KB  
Article
Detection of Power Poles in Orchards Based on Improved Yolov5s Model
by Yali Zhang, Xiaoyang Lu, Wanjian Li, Kangting Yan, Zhenjie Mo, Yubin Lan and Linlin Wang
Agronomy 2023, 13(7), 1705; https://doi.org/10.3390/agronomy13071705 - 26 Jun 2023
Cited by 13 | Viewed by 2455
Abstract
During the operation of agricultural unmanned aerial vehicles (UAVs) in orchards, the presence of power poles and wires poses a serious threat to flight safety and can even lead to crashes. Because wires are difficult to detect directly, this research aimed to quickly and accurately detect power poles, and proposed an improved Yolov5s deep learning object detection algorithm named Yolov5s-Pole. The algorithm enhances the model’s generalization ability and robustness by applying the Mixup data augmentation technique, reduces the model’s parameters and computational complexity by replacing the C3 module with the GhostBottleneck module, and improves its focus on small targets by incorporating the Shuffle Attention (SA) module. The results show that when the improved Yolov5s-Pole model was used for detecting poles in orchards, its accuracy, recall, and mAP@50 were 0.803, 0.831, and 0.838, respectively, increases of 0.5%, 10%, and 9.2% over the original Yolov5s model. Additionally, the weights, parameters, and GFLOPs of the Yolov5s-Pole model were 7.86 MB, 3,974,310, and 9, respectively; compared to the original Yolov5s model, these represent compression rates of 42.2%, 43.4%, and 43.3%, respectively. The detection time for a single image using this model was 4.2 ms, and good robustness was demonstrated under different lighting conditions (dark, normal, and bright). The model is suitable for deployment on agricultural UAVs’ onboard equipment and is of great practical significance for ensuring the efficiency and flight safety of agricultural UAVs. Full article
(This article belongs to the Special Issue New Trends in Agricultural UAV Application)
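The Mixup augmentation credited above with improving generalization blends pairs of images and their labels with a Beta-distributed coefficient. A sketch (alpha = 0.2 is a common default, not necessarily the paper's setting):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Mixup augmentation: convex-combine two images and their one-hot
    labels with a coefficient lam ~ Beta(alpha, alpha)."""
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2
```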
