Search Results (862)

Search Parameters:
Keywords = vehicle heading

22 pages, 2053 KiB  
Article
Enhanced Real-Time Method Traffic Light Signal Color Recognition Using Advanced Convolutional Neural Network Techniques
by Fakhri Yagob and Jurek Z. Sasiadek
World Electr. Veh. J. 2025, 16(8), 441; https://doi.org/10.3390/wevj16080441 - 5 Aug 2025
Abstract
Real-time traffic light detection is essential for the safe navigation of autonomous vehicles, where timely and accurate recognition of signal states is critical. YOLOv8, a state-of-the-art object detection model, offers enhanced speed and precision, making it well-suited for real-time applications in complex driving environments. This study presents a modified YOLOv8 architecture optimized for traffic light detection by integrating Depth-Wise Separable Convolutions (DWSCs) throughout the backbone and head. The model was first pretrained on a public traffic light dataset to establish a strong baseline and then fine-tuned on a custom real-time dataset consisting of 480 images collected from video recordings under diverse road conditions. Experimental results demonstrate high detection performance, with precision scores of 0.992 for red, 0.995 for yellow, and 0.853 for green lights. The model achieved an average mAP@0.5 of 0.947, with stable F1 scores and low validation losses over 80 epochs, confirming effective learning and generalization. Compared to existing YOLO variants, the modified architecture showed superior performance, especially for red and yellow lights. Full article
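The architectural change is simple to picture: a depth-wise separable convolution factorizes a standard convolution into a per-channel 3x3 spatial filter followed by a 1x1 point-wise mix, trading a little accuracy for a large reduction in parameters and FLOPs. Below is a minimal PyTorch sketch of such a block; the SiLU/BatchNorm choices and layer names are illustrative assumptions, not the authors' exact YOLOv8 modules.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depth-wise separable convolution: a per-channel (depth-wise) 3x3
    convolution followed by a 1x1 point-wise convolution."""
    def __init__(self, c_in, c_out, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(c_in, c_in, kernel_size=3, stride=stride,
                                   padding=1, groups=c_in, bias=False)
        self.pointwise = nn.Conv2d(c_in, c_out, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# Quick shape check on a dummy feature map.
x = torch.randn(1, 64, 80, 80)
block = DepthwiseSeparableConv(64, 128, stride=2)
print(block(x).shape)  # torch.Size([1, 128, 40, 40])
```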

23 pages, 23638 KiB  
Article
Enhanced YOLO and Scanning Portal System for Vehicle Component Detection
by Feng Ye, Mingzhe Yuan, Chen Luo, Shuo Li, Duotao Pan, Wenhong Wang, Feidao Cao and Diwen Chen
Sensors 2025, 25(15), 4809; https://doi.org/10.3390/s25154809 - 5 Aug 2025
Abstract
In this paper, a novel online detection system is designed to enhance accuracy and operational efficiency in the outbound logistics of automotive components after production. The system consists of a scanning portal system and an improved YOLOv12-based detection algorithm which captures images of automotive parts passing through the scanning portal in real time. By integrating deep learning, the system enables real-time monitoring and identification, thereby preventing misdetections and missed detections of automotive parts, in this way promoting intelligent automotive part recognition and detection. Our system introduces the A2C2f-SA module, which achieves an efficient feature attention mechanism while maintaining a lightweight design. Additionally, Dynamic Space-to-Depth (Dynamic S2D) is employed to improve convolution and replace the stride convolution and pooling layers in the baseline network, helping to mitigate the loss of fine-grained information and enhancing the network’s feature extraction capability. To improve real-time performance, a GFL-MBConv lightweight detection head is proposed. Furthermore, adaptive frequency-aware feature fusion (Adpfreqfusion) is hybridized at the end of the neck network to effectively enhance high-frequency information lost during downsampling, thereby improving the model’s detection accuracy for target objects in complex backgrounds. On-site tests demonstrate that the system achieves a comprehensive accuracy of 97.3% and an average vehicle detection time of 7.59 s, exhibiting not only high precision but also high detection efficiency. These results can make the proposed system highly valuable for applications in the automotive industry. Full article
(This article belongs to the Topic Smart Production in Terms of Industry 4.0 and 5.0)
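The space-to-depth substitution is easiest to see in its plain, non-dynamic form: instead of a stride-2 convolution that discards spatial samples, a space-to-depth rearrangement moves each 2x2 block into the channel dimension losslessly, and a 1x1 convolution then mixes the channels. The PyTorch sketch below illustrates only that substitution; it is not the paper's Dynamic S2D module.

```python
import torch
import torch.nn as nn

class SpaceToDepthConv(nn.Module):
    """Illustrative space-to-depth downsampling: rearrange each 2x2 spatial
    block into channels (lossless), then mix channels with a 1x1 convolution,
    instead of discarding pixels with a stride-2 convolution."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.s2d = nn.PixelUnshuffle(downscale_factor=2)   # C -> 4C, H/2, W/2
        self.proj = nn.Conv2d(4 * c_in, c_out, kernel_size=1, bias=False)

    def forward(self, x):
        return self.proj(self.s2d(x))

x = torch.randn(1, 32, 64, 64)
print(SpaceToDepthConv(32, 64)(x).shape)  # torch.Size([1, 64, 32, 32])
```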

24 pages, 1593 KiB  
Article
Robust Adaptive Multiple Backtracking VBKF for In-Motion Alignment of Low-Cost SINS/GNSS
by Weiwei Lyu, Yingli Wang, Shuanggen Jin, Haocai Huang, Xiaojuan Tian and Jinling Wang
Remote Sens. 2025, 17(15), 2680; https://doi.org/10.3390/rs17152680 - 2 Aug 2025
Viewed by 149
Abstract
The low-cost Strapdown Inertial Navigation System (SINS)/Global Navigation Satellite System (GNSS) is widely used in autonomous vehicles for positioning and navigation. Initial alignment is a critical stage for SINS operations, and the alignment time and accuracy directly affect the SINS navigation performance. To address the issue that low-cost SINS/GNSS cannot effectively achieve rapid and high-accuracy alignment in complex environments that contain noise and external interference, an adaptive multiple backtracking robust alignment method is proposed. The sliding window that constructs observation and reference vectors is established, which effectively avoids the accumulation of sensor errors during the full integration process. A new observation vector based on the magnitude matching is then constructed to effectively reduce the effect of outliers on the alignment process. An adaptive multiple backtracking method is designed in which the window size can be dynamically adjusted based on the innovation gradient; thus, the alignment time can be significantly shortened. Furthermore, the modified variational Bayesian Kalman filter (VBKF) that accurately adjusts the measurement noise covariance matrix is proposed, and the Expectation–Maximization (EM) algorithm is employed to refine the prior parameter of the predicted error covariance matrix. Simulation and experimental results demonstrate that the proposed method significantly reduces alignment time and improves alignment accuracy. Taking heading error as the critical evaluation indicator, the proposed method achieves rapid alignment within 120 s and maintains a stable error below 1.2° after 80 s, yielding an improvement of over 63% compared to the backtracking-based Kalman filter (BKF) method and over 57% compared to the fuzzy adaptive KF (FAKF) method. Full article
(This article belongs to the Section Urban Remote Sensing)
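At the heart of the filter is a measurement update in which the measurement-noise covariance R is re-estimated from the data rather than fixed. The NumPy toy below sketches that idea with a simple fixed-point re-estimation of R from the post-fit residual; it is a simplified stand-in that omits the inverse-Wishart prior of a full VBKF, the EM refinement of the predicted covariance, and the backtracking logic of the actual method.

```python
import numpy as np

def adaptive_kf_update(x, P, z, H, R0, n_iter=3):
    """Simplified Kalman measurement update in which the measurement-noise
    covariance R is re-estimated from the current residual, loosely in the
    spirit of a variational Bayesian KF (fixed-point iteration)."""
    R = R0.copy()
    for _ in range(n_iter):
        S = H @ P @ H.T + R                      # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
        x_new = x + K @ (z - H @ x)              # state update
        P_new = (np.eye(len(x)) - K @ H) @ P     # covariance update
        resid = z - H @ x_new                    # post-fit residual
        # Re-estimate R from the residual plus projected state uncertainty.
        R = np.outer(resid, resid) + H @ P_new @ H.T
    return x_new, P_new, R

# Toy 1-state, 1-measurement example.
x = np.array([0.0]); P = np.array([[1.0]])
z = np.array([0.8]); H = np.array([[1.0]]); R0 = np.array([[0.5]])
print(adaptive_kf_update(x, P, z, H, R0))
```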
22 pages, 4629 KiB  
Article
Wind-Resistant UAV Landing Control Based on Drift Angle Control Strategy
by Haonan Chen, Zhengyou Wen, Yu Zhang, Guoqiang Su, Liaoni Wu and Kun Xie
Aerospace 2025, 12(8), 678; https://doi.org/10.3390/aerospace12080678 - 29 Jul 2025
Viewed by 133
Abstract
Addressing lateral-directional control challenges during unmanned aerial vehicle (UAV) landing in complex wind fields, this study proposes a drift angle control strategy that integrates coordinated heading and trajectory regulation. An adaptive radius optimization method for the Dubins approach path is designed using wind speed estimation. By developing a wind-coupled flight dynamics model, we establish a roll angle control loop combining the L1 nonlinear guidance law with Linear Active Disturbance Rejection Control (LADRC). Simulation tests against conventional sideslip approach and crab approach, along with flight tests, confirm that the proposed autonomous landing system achieves smoother attitude transitions during landing while meeting all touchdown performance requirements. This solution provides a theoretically rigorous and practically viable approach for safe UAV landings in challenging wind conditions. Full article
(This article belongs to the Section Aeronautics)
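The guidance side of the roll loop follows the classic L1 nonlinear guidance law: the lateral-acceleration command is a_cmd = 2*V^2/L1 * sin(eta), where eta is the angle between the ground-velocity vector and the line of sight to a reference point at distance L1, and the command maps to a coordinated-turn roll angle via atan(a_cmd/g). The sketch below shows only this textbook form; the paper's wind-adaptive radius selection and LADRC roll controller are not reproduced.

```python
import numpy as np

def l1_roll_command(pos, vel, ref_point, g=9.81):
    """Lateral acceleration from the classic L1 nonlinear guidance law,
    converted to a coordinated-turn roll-angle command.
    pos, ref_point: 2D positions [m]; vel: 2D ground velocity [m/s]."""
    l1_vec = np.asarray(ref_point, float) - np.asarray(pos, float)
    L1 = np.linalg.norm(l1_vec)
    V = np.linalg.norm(vel)
    # eta: angle from the velocity vector to the line-of-sight to the L1 point.
    cross = vel[0] * l1_vec[1] - vel[1] * l1_vec[0]
    dot = vel[0] * l1_vec[0] + vel[1] * l1_vec[1]
    eta = np.arctan2(cross, dot)
    a_cmd = 2.0 * V**2 / L1 * np.sin(eta)        # lateral acceleration [m/s^2]
    return np.degrees(np.arctan2(a_cmd, g))      # roll command [deg]

# UAV flying east at 25 m/s, L1 reference point ahead and 10 m to the left.
print(l1_roll_command(pos=[0, 0], vel=[25, 0], ref_point=[60, 10]))
```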

36 pages, 11747 KiB  
Article
Numerical Study on Interaction Between the Water-Exiting Vehicle and Ice Based on FEM-SPH-SALE Coupling Algorithm
by Zhenting Diao, Dengjian Fang and Jingwen Cao
Appl. Sci. 2025, 15(15), 8318; https://doi.org/10.3390/app15158318 - 26 Jul 2025
Viewed by 152
Abstract
The icebreaking process of water-exiting vehicles involves complex nonlinear interactions and multi-physical-field coupling effects among ice, solids, and fluids, posing enormous challenges for numerical calculation. To address the low solution accuracy of traditional grid methods in simulating large deformation and destruction of ice layers, a numerical model was established based on the FEM-SPH-SALE coupling algorithm to study the dynamic characteristics of the water-exiting vehicle during the icebreaking process. The FEM-SPH adaptive algorithm was used to simulate the damage behavior of ice, and its feasibility was verified through a four-point bending test and a vehicle icebreaking experiment. The S-ALE algorithm was used to simulate the fluid/structure interaction, and its accuracy was verified through a wedge-body water-entry test and simulation. On this basis, numerical simulations were performed for different ice thicknesses and initial vehicle velocities. The results show that the motion characteristics of the vehicle undergo a sudden change during icebreaking. The head and middle sections of the vehicle are subject to greater stress, which is related to the transmission of stress waves and to inertial effects. The velocity loss rate of the vehicle and the maximum stress increase with ice thickness. The higher the initial velocity of the vehicle, the larger the acceleration and the maximum stress during icebreaking. The acceleration peak is sensitive to variation in the vehicle's initial velocity but insensitive to ice thickness. Full article
(This article belongs to the Section Marine Science and Engineering)

28 pages, 4562 KiB  
Article
A Capacity-Constrained Weighted Clustering Algorithm for UAV Self-Organizing Networks Under Interference
by Siqi Li, Peng Gong, Weidong Wang, Jinyue Liu, Zhixuan Feng and Xiang Gao
Drones 2025, 9(8), 527; https://doi.org/10.3390/drones9080527 - 25 Jul 2025
Viewed by 206
Abstract
Compared to traditional ad hoc networks, self-organizing networks of unmanned aerial vehicles (UAVs) are characterized by high node mobility, vulnerability to interference, wide distribution range, and large network scale, which make network management and routing protocol operation more challenging. Cluster structures can be used to optimize network management and mitigate the impact of local topology changes on the entire network during collaborative task execution. To address the issue of cluster structure instability caused by the high mobility and vulnerability to interference in UAV networks, we propose a capacity-constrained weighted clustering algorithm for UAV self-organizing networks under interference. Specifically, a capacity-constrained partitioning algorithm based on K-means++ is developed to establish the initial node partitions. Then, a weighted cluster head (CH) and backup cluster head (BCH) selection algorithm is proposed, incorporating interference factors into the selection process. Additionally, a dynamic maintenance mechanism for the clustering network is introduced to enhance the stability and robustness of the network. Simulation results show that the algorithm achieves efficient node clustering under interference conditions, improving cluster load balancing, average cluster head maintenance time, and cluster head failure reconstruction time. Furthermore, the method demonstrates fast recovery capabilities in the event of node failures, making it more suitable for deployment in complex emergency rescue environments. Full article
(This article belongs to the Special Issue Unmanned Aerial Vehicles for Enhanced Emergency Response)
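Weighted cluster-head selection of this kind scores each node on several normalized metrics and picks the best scorer as cluster head (CH) and the runner-up as backup cluster head (BCH). The sketch below is purely illustrative: the metric set (energy, degree, mobility, interference) and the weights are hypothetical, and the capacity-constrained K-means++ partitioning and the dynamic maintenance mechanism are not reproduced.

```python
def cluster_head_scores(nodes, weights=(0.3, 0.25, 0.25, 0.2)):
    """Illustrative weighted CH scoring: rate each node by residual energy,
    link degree, low mobility, and low interference, combined with
    hypothetical weights. Higher score = better CH candidate; the runner-up
    serves as the backup cluster head (BCH)."""
    w_e, w_d, w_m, w_i = weights
    scores = {}
    for nid, n in nodes.items():
        scores[nid] = (w_e * n["energy"]                  # normalized residual energy
                       + w_d * n["degree"]                # normalized neighbor count
                       + w_m * (1 - n["mobility"])        # penalize fast movers
                       + w_i * (1 - n["interference"]))   # penalize interference
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[0], ranked[1], scores                   # CH, BCH, all scores

nodes = {
    "uav1": {"energy": 0.9, "degree": 0.6, "mobility": 0.2, "interference": 0.1},
    "uav2": {"energy": 0.7, "degree": 0.8, "mobility": 0.5, "interference": 0.4},
    "uav3": {"energy": 0.5, "degree": 0.4, "mobility": 0.1, "interference": 0.2},
}
print(cluster_head_scores(nodes))
```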

22 pages, 2705 KiB  
Article
Diff-Pre: A Diffusion Framework for Trajectory Prediction
by Yijie Liu, Chengjie Zhu, Xin Chang, Xinyu Xi, Che Liu and Yanli Xu
Sensors 2025, 25(15), 4603; https://doi.org/10.3390/s25154603 - 25 Jul 2025
Viewed by 341
Abstract
With the rapid development of intelligent transportation, accurately predicting vehicle trajectories is crucial for ensuring road safety and enhancing traffic efficiency. This paper proposes a trajectory prediction model that integrates a diffusion model framework with trajectory features of target and neighboring vehicles, as well as driving intentions. The model uses historical trajectories of the target and adjacent vehicles as input, employs Long Short-Term Memory (LSTM) networks to extract temporal features, and dynamically captures the interaction between the target and neighboring vehicles through a multi-head attention mechanism. An intention module regulates lateral offsets, and the diffusion framework selects the most probable trajectory from various possible predictions, thereby improving the model’s ability to handle complex scenarios. Experiments conducted on real traffic data demonstrate that the proposed method outperforms several representative models in terms of Average Displacement Error (ADE) and Final Displacement Error (FDE), without sacrificing efficiency. Notably, it exhibits higher robustness and predictive accuracy in high-interaction and uncertain scenarios, such as lane changes and overtaking. To the best of our knowledge, this is the first application of the diffusion framework in vehicle trajectory prediction. This study provides an efficient solution for vehicle trajectory prediction tasks. The average ADE within 1 to 5 s reached 0.199 m, while the average FDE within 1 to 5 s reached 0.437 m. Full article
(This article belongs to the Section Vehicular Sensing)
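The ADE and FDE figures quoted at the end are the standard trajectory-prediction metrics: the mean point-wise displacement over the prediction horizon and the displacement at the final step. A minimal NumPy definition, independent of the paper's code, is shown below for reference.

```python
import numpy as np

def ade_fde(pred, gt):
    """Average and Final Displacement Error for a batch of trajectories.
    pred, gt: arrays of shape (num_trajectories, num_steps, 2) in metres."""
    dists = np.linalg.norm(pred - gt, axis=-1)   # (N, T) point-wise errors
    ade = dists.mean()                           # mean over all steps
    fde = dists[:, -1].mean()                    # mean error at the last step
    return ade, fde

pred = np.array([[[0.0, 0.0], [1.1, 0.1], [2.2, 0.2]]])
gt = np.array([[[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]]])
print(ade_fde(pred, gt))  # (~0.14, ~0.28)
```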

22 pages, 6496 KiB  
Article
Real-Time Search and Rescue with Drones: A Deep Learning Approach for Small-Object Detection Based on YOLO
by Francesco Ciccone and Alessandro Ceruti
Drones 2025, 9(8), 514; https://doi.org/10.3390/drones9080514 - 22 Jul 2025
Viewed by 649
Abstract
Unmanned aerial vehicles are increasingly used in civil Search and Rescue operations due to their rapid deployment and wide-area coverage capabilities. However, detecting missing persons from aerial imagery remains challenging due to small object sizes, cluttered backgrounds, and limited onboard computational resources, especially when managed by civil agencies. In this work, we present a comprehensive methodology for optimizing YOLO-based object detection models for real-time Search and Rescue scenarios. A two-stage transfer learning strategy was employed using VisDrone for general aerial object detection and Heridal for Search and Rescue-specific fine-tuning. We explored various architectural modifications, including enhanced feature fusion (FPN, BiFPN, PB-FPN), additional detection heads (P2), and modules such as CBAM, Transformers, and deconvolution, analyzing their impact on performance and computational efficiency. The best-performing configuration (YOLOv5s-PBfpn-Deconv) achieved a mAP@50 of 0.802 on the Heridal dataset while maintaining real-time inference on embedded hardware (Jetson Nano). Further tests at different flight altitudes and explainability analyses using EigenCAM confirmed the robustness and interpretability of the model in real-world conditions. The proposed solution offers a viable framework for deploying lightweight, interpretable AI systems for UAV-based Search and Rescue operations managed by civil protection authorities. Limitations and future directions include the integration of multimodal sensors and adaptation to broader environmental conditions. Full article
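A two-stage transfer-learning schedule of the kind described (general aerial pretraining, then Search-and-Rescue fine-tuning) can be sketched generically as follows. The model interface (a `.backbone` attribute and a loss-returning forward pass), the optimizers, and the single-pass loops are assumptions for illustration and do not reflect the actual YOLOv5 training pipeline.

```python
import torch

def two_stage_finetune(model, general_loader, sar_loader, device="cuda"):
    """Generic two-stage transfer learning: (1) train everything on a large
    general aerial dataset, then (2) freeze the backbone and fine-tune the
    remaining layers on the small Search-and-Rescue dataset at a lower LR."""
    model.to(device)

    # Stage 1: general aerial pretraining (VisDrone-style data), single pass shown.
    opt = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)
    for images, targets in general_loader:
        loss = model(images.to(device), targets)   # assumed loss-returning forward
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Stage 2: SAR-specific fine-tuning with a frozen backbone.
    for p in model.backbone.parameters():          # assumed .backbone attribute
        p.requires_grad = False
    head_params = [p for p in model.parameters() if p.requires_grad]
    opt = torch.optim.SGD(head_params, lr=1e-3, momentum=0.9)
    for images, targets in sar_loader:
        loss = model(images.to(device), targets)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```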

21 pages, 16254 KiB  
Article
Prediction of Winter Wheat Yield and Interpretable Accuracy Under Different Water and Nitrogen Treatments Based on CNNResNet-50
by Donglin Wang, Yuhan Cheng, Longfei Shi, Huiqing Yin, Guangguang Yang, Shaobo Liu, Qinge Dong and Jiankun Ge
Agronomy 2025, 15(7), 1755; https://doi.org/10.3390/agronomy15071755 - 21 Jul 2025
Viewed by 427
Abstract
Winter wheat yield prediction is critical for optimizing field management plans and guiding agricultural production. To address the limitations of conventional manual yield estimation methods, including low efficiency and poor interpretability, this study innovatively proposes an intelligent yield estimation method based on a convolutional neural network (CNN). A comprehensive two-factor (fertilization × irrigation) controlled field experiment was designed to thoroughly validate the applicability and effectiveness of this method. The experimental design comprised two irrigation treatments, sufficient irrigation (C) at 750 m3 ha−1 and deficit irrigation (M) at 450 m3 ha−1, along with five fertilization treatments (at a rate of 180 kg N ha−1): (1) organic fertilizer alone, (2) organic–inorganic fertilizer blend at a 7:3 ratio, (3) organic–inorganic fertilizer blend at a 3:7 ratio, (4) inorganic fertilizer alone, and (5) no fertilizer control. The experimental protocol employed a DJI M300 RTK unmanned aerial vehicle (UAV) equipped with a multispectral sensor to systematically acquire high-resolution growth imagery of winter wheat across critical phenological stages, from heading to maturity. The acquired multispectral imagery was meticulously annotated using the Labelme professional annotation tool to construct a comprehensive experimental dataset comprising over 2000 labeled images. These annotated data were subsequently employed to train an enhanced CNN model based on ResNet50 architecture, which achieved automated generation of panicle density maps and precise panicle counting, thereby realizing yield prediction. Field experimental results demonstrated significant yield variations among fertilization treatments under sufficient irrigation, with the 3:7 organic–inorganic blend achieving the highest actual yield (9363.38 ± 468.17 kg ha−1) significantly outperforming other treatments (p < 0.05), confirming the synergistic effects of optimized nitrogen and water management. The enhanced CNN model exhibited superior performance, with an average accuracy of 89.0–92.1%, representing a 3.0% improvement over YOLOv8. Notably, model accuracy showed significant correlation with yield levels (p < 0.05), suggesting more distinct panicle morphological features in high-yield plots that facilitated model identification. The CNN’s yield predictions demonstrated strong agreement with the measured values, maintaining mean relative errors below 10%. Particularly outstanding performance was observed for the organic fertilizer with full irrigation (5.5% error) and the 7:3 organic-inorganic blend with sufficient irrigation (8.0% error), indicating that the CNN network is more suitable for these management regimes. These findings provide a robust technical foundation for precision farming applications in winter wheat production. Future research will focus on integrating this technology into smart agricultural management systems to enable real-time, data-driven decision making at the farm scale. Full article
(This article belongs to the Section Precision and Digital Agriculture)
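Two conventions in the evaluation are worth making concrete: a predicted panicle density map integrates (sums) to a panicle count, and prediction quality is reported as relative error against the measured yield. The toy NumPy lines below show only those conventions; the density map and the predicted yield value are made-up illustrative numbers, not results from the paper.

```python
import numpy as np

def count_from_density_map(density_map):
    """A predicted panicle density map integrates (sums) to the panicle count."""
    return float(np.sum(density_map))

def relative_error(predicted_yield, measured_yield):
    """Relative error used to judge yield predictions against field measurements."""
    return abs(predicted_yield - measured_yield) / measured_yield * 100.0

# Toy example: a coarse density map whose entries sum to ~42 panicles, and a
# plot where an illustrative CNN yield estimate is compared with the measured one.
dmap = np.full((8, 8), 42 / 64.0)
print(count_from_density_map(dmap))          # ~42.0
print(relative_error(8840.0, 9363.38))       # ~5.6 % (illustrative numbers)
```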

8 pages, 4837 KiB  
Case Report
Successful Rehabilitation and Release of a Korean Water Deer (Hydropotes inermis argyropus) After a Femoral Head Ostectomy (FHO)
by Sohwon Bae, Minjae Jo, Woojin Shin, Chea-Un Cho, Son-Il Pak and Sangjin Ahn
Animals 2025, 15(14), 2148; https://doi.org/10.3390/ani15142148 - 21 Jul 2025
Viewed by 275
Abstract
A water deer (Hydropotes inermis argyropus) was rescued following a vehicle collision and presented with suspected hip injury. Radiographic examination confirmed coxofemoral luxation, and a femoral head ostectomy (FHO) was performed to restore functional mobility. Postoperatively, the water deer underwent intensive rehabilitation, including controlled movement and physical therapy, to enhance limb function. Following successful recovery, the water deer was equipped with a GPS collar and released into its natural habitat. GPS tracking data were collected to evaluate the water deer’s post-release adaptation and movement patterns. The Minimum Convex Polygon (MCP) method was used to determine the home range, showing an overall home range (MCP 95%) of 8.03 km2 and a core habitat (MCP 50%) of 6.967 km2. These results indicate a successful post-surgery outcome, with the water deer demonstrating mobility comparable to healthy individuals. This case demonstrates the clinical feasibility of an FHO in managing hip luxation in water deer and underscores the critical role of post-release monitoring in evaluating functional rehabilitation success in wildlife medicine. This study underscores the importance of integrating surgical intervention, structured rehabilitation, and post-release monitoring to ensure the successful reintroduction of injured wildlife. GPS tracking provides valuable insights into long-term adaptation and mobility, contributing to evidence-based conservation medicine. Full article
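The MCP home-range figures can in principle be reproduced by taking the convex hull of a filtered subset of GPS fixes: MCP 95% keeps the 95% of fixes closest to the centroid, MCP 50% keeps the closest half. The sketch below shows one common way to compute this with SciPy on projected coordinates; dedicated home-range packages may filter slightly differently, and the GPS fixes here are synthetic.

```python
import numpy as np
from scipy.spatial import ConvexHull

def mcp_area_km2(points_m, percent=95):
    """Minimum Convex Polygon home range: keep the `percent` of GPS fixes
    closest to the centroid, then return the convex-hull area in km^2.
    points_m: (N, 2) array of projected coordinates in metres."""
    pts = np.asarray(points_m, float)
    d = np.linalg.norm(pts - pts.mean(axis=0), axis=1)
    keep = pts[np.argsort(d)[: int(len(pts) * percent / 100)]]
    hull = ConvexHull(keep)
    return hull.volume / 1e6   # for 2-D input, .volume is the polygon area

rng = np.random.default_rng(0)
fixes = rng.normal(0, 1500, size=(500, 2))        # synthetic GPS fixes in metres
print(mcp_area_km2(fixes, 95), mcp_area_km2(fixes, 50))
```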

17 pages, 9414 KiB  
Article
Influence of High-Speed Flow on Aerodynamic Lift of Pantograph at 400 km/h
by Zhao Xu, Hongwei Zhang, Wen Wang and Guobin Lin
Infrastructures 2025, 10(7), 188; https://doi.org/10.3390/infrastructures10070188 - 17 Jul 2025
Viewed by 275
Abstract
This study examines pantograph aerodynamic lift at 400 km/h, and uncovers the dynamic behaviors and mechanisms that influence pantograph–catenary performance. Using computational fluid dynamics (CFD) with a compressible fluid model and an SST k-ω turbulence model, aerodynamic characteristics were analyzed. Simulation data at 300, 350, and 400 km/h showed lift fluctuation amplitude increases with speed, peaking near 50 N at 400 km/h. Power spectral density (PSD) energy, dominated by low frequencies, peaked around 10 dB/Hz in the low-frequency band, highlighting exacerbated lift instability. Component analysis revealed the smallest lift-to-drag ratio and most significant fluctuations at the head, primarily due to boundary-layer separation and vortex shedding from its non-streamlined design. Turbulence energy analysis identified the head and base as main turbulence sources; however, base vibrations are absorbed by the vehicle body, while the head causes pantograph–catenary vibrations due to direct contact. These findings confirm that aerodynamic instability at the head is the main cause of contact force fluctuations. Optimizing head design is necessary to suppress fluctuations, ensuring safe operation at 400 km/h and above. Results provide a theoretical foundation for aerodynamic optimization and improved dynamic performance of high-speed pantographs. Full article
(This article belongs to the Special Issue The Resilience of Railway Networks: Enhancing Safety and Robustness)
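A power spectral density of the kind discussed (low-frequency-dominated lift fluctuation) is typically estimated with Welch's method. The snippet below shows that computation on a synthetic stand-in for the lift signal; the 200 Hz sampling rate, signal composition, and amplitudes are assumptions, not the paper's simulation output.

```python
import numpy as np
from scipy.signal import welch

# Synthetic stand-in for a sampled pantograph lift-force signal [N]:
# a mean lift plus a low-frequency fluctuation and broadband noise.
fs = 200.0                                   # sampling rate [Hz] (assumed)
t = np.arange(0, 60, 1 / fs)
lift = 120 + 25 * np.sin(2 * np.pi * 1.5 * t) + 5 * np.random.randn(t.size)

f, pxx = welch(lift, fs=fs, nperseg=2048)    # power spectral density [N^2/Hz]
psd_db = 10 * np.log10(pxx)                  # express in dB for comparison
print(f[np.argmax(pxx)])                     # dominant (low) frequency [Hz]
```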

20 pages, 33417 KiB  
Article
Enhancing UAV Object Detection in Low-Light Conditions with ELS-YOLO: A Lightweight Model Based on Improved YOLOv11
by Tianhang Weng and Xiaopeng Niu
Sensors 2025, 25(14), 4463; https://doi.org/10.3390/s25144463 - 17 Jul 2025
Viewed by 569
Abstract
Drone-view object detection models operating under low-light conditions face several challenges, such as object scale variations, high image noise, and limited computational resources. Existing models often struggle to balance accuracy and lightweight architecture. This paper introduces ELS-YOLO, a lightweight object detection model tailored for low-light environments, built upon the YOLOv11s framework. ELS-YOLO features a re-parameterized backbone (ER-HGNetV2) with integrated Re-parameterized Convolution and Efficient Channel Attention mechanisms, a Lightweight Feature Selection Pyramid Network (LFSPN) for multi-scale object detection, and a Shared Convolution Separate Batch Normalization Head (SCSHead) to reduce computational complexity. Layer-Adaptive Magnitude-Based Pruning (LAMP) is employed to compress the model size. Experiments on the ExDark and DroneVehicle datasets demonstrate that ELS-YOLO achieves high detection accuracy with a compact model. Here, we show that ELS-YOLO attains a mAP@0.5 of 74.3% and 68.7% on the ExDark and DroneVehicle datasets, respectively, while maintaining real-time inference capability. Full article
(This article belongs to the Special Issue Vision Sensors for Object Detection and Tracking)
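ER-HGNetV2 folds in Efficient Channel Attention, which gates channels using a 1-D convolution over globally pooled channel descriptors. Below is a standard ECA block as commonly implemented in PyTorch; it is not necessarily the authors' exact variant, and the kernel size is fixed at 3 for simplicity.

```python
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient Channel Attention: global average pooling followed by a 1-D
    convolution across channels and a sigmoid gate."""
    def __init__(self, kernel_size=3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)
        self.gate = nn.Sigmoid()

    def forward(self, x):                              # x: (B, C, H, W)
        y = self.pool(x)                               # (B, C, 1, 1)
        y = self.conv(y.squeeze(-1).transpose(1, 2))   # (B, 1, C)
        y = self.gate(y.transpose(1, 2).unsqueeze(-1)) # (B, C, 1, 1)
        return x * y                                   # channel-wise re-weighting

x = torch.randn(2, 64, 40, 40)
print(ECA()(x).shape)  # torch.Size([2, 64, 40, 40])
```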

25 pages, 85368 KiB  
Article
SMA-YOLO: An Improved YOLOv8 Algorithm Based on Parameter-Free Attention Mechanism and Multi-Scale Feature Fusion for Small Object Detection in UAV Images
by Shenming Qu, Chaoxu Dang, Wangyou Chen and Yanhong Liu
Remote Sens. 2025, 17(14), 2421; https://doi.org/10.3390/rs17142421 - 12 Jul 2025
Viewed by 760
Abstract
In unmanned aerial vehicle (UAV) imagery, complex scenes and densely distributed small objects frequently lead to serious false and missed detections in small object detection scenarios. Consequently, we propose a UAV-image small object detection algorithm, termed SMA-YOLO. Firstly, a parameter-free simple slicing convolution (SSC) module is integrated into the backbone network to slice and enhance the feature maps, effectively retaining the features of small objects. Subsequently, to enhance the information exchange between upper and lower layers, we design a multi-cross-scale feature pyramid network (M-FPN). The C2f-Hierarchical-Phantom Convolution (C2f-HPC) module in the network effectively reduces information loss through fine-grained multi-scale feature fusion. Ultimately, the adaptive spatial feature fusion detection head (ASFFDHead) introduces an additional P2 detection head to enhance the resolution of feature maps and better locate small objects. Moreover, the ASFF mechanism is employed to optimize the detection process by filtering out information conflicts during multi-scale feature fusion, thereby significantly improving small object detection capability. Using YOLOv8n as the baseline, SMA-YOLO is evaluated on the VisDrone2019 dataset, achieving a 7.4% improvement in mAP@0.5 and a 13.3% reduction in model parameters, and we also verified its generalization ability on the VAUDT and RSOD datasets, which demonstrates the effectiveness of our approach. Full article
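The ASFF idea in the detection head is to learn per-pixel fusion weights over the scale levels so that conflicting information is down-weighted rather than summed blindly. The PyTorch miniature below assumes the three inputs have already been resized to a common resolution and uses a single 1x1 convolution to predict softmax weights; it is a simplification of the paper's ASFFDHead, which also adds a P2 detection head.

```python
import torch
import torch.nn as nn

class ASFFFusion(nn.Module):
    """Adaptive spatial feature fusion in miniature: predict per-pixel softmax
    weights for three same-resolution feature maps and blend them, so that
    conflicting information from different scales can be suppressed."""
    def __init__(self, channels):
        super().__init__()
        self.weight_conv = nn.Conv2d(3 * channels, 3, kernel_size=1)

    def forward(self, f0, f1, f2):             # each: (B, C, H, W), same size
        w = torch.softmax(self.weight_conv(torch.cat([f0, f1, f2], dim=1)), dim=1)
        return w[:, 0:1] * f0 + w[:, 1:2] * f1 + w[:, 2:3] * f2

feats = [torch.randn(1, 128, 40, 40) for _ in range(3)]
print(ASFFFusion(128)(*feats).shape)  # torch.Size([1, 128, 40, 40])
```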

26 pages, 3701 KiB  
Article
Research on Path Tracking Technology for Tracked Unmanned Vehicles Based on DDPG-PP
by Yongjuan Zhao, Chaozhe Guo, Jiangyong Mi, Lijin Wang, Haidi Wang and Hailong Zhang
Machines 2025, 13(7), 603; https://doi.org/10.3390/machines13070603 - 12 Jul 2025
Viewed by 331
Abstract
Realizing path tracking is crucial for improving the accuracy and efficiency of unmanned vehicle operations. In this paper, a path tracking hierarchical control method based on DDPG-PP is proposed to improve the path tracking accuracy of tracked unmanned vehicles. Constrained by the objective of minimizing path tracking error, with the upper controller, we adopted the DDPG method to construct an adaptive look-ahead distance optimizer in which the look-ahead distance was dynamically adjusted in real-time using a reinforcement learning strategy. Meanwhile, reinforcement learning training was carried out with randomly generated paths to improve the model’s generalization ability. Based on the optimal look-ahead distance output from the upper layer, the lower layer realizes precise closed-loop control of torque, required for steering, based on the PP method. Simulation results show that the path tracking accuracy of the proposed method is better than that of the LQR and PP methods. The proposed method reduces the average tracking error by 94.0% and 79.2% and the average heading error by 80.4% and 65.0% under complex paths compared to the LQR and PP methods, respectively. Full article
(This article belongs to the Section Vehicle Engineering)
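The lower, Pure Pursuit (PP) layer is geometric: given the look-ahead point chosen by the upper DDPG layer, the curvature command is kappa = 2*sin(alpha)/L_d, which for a tracked (differential) vehicle maps to left/right track speeds. The sketch below shows only that geometry and a kinematic speed split, with hypothetical parameter values; the paper's torque-level steering control is not reproduced.

```python
import math

def pure_pursuit_track_speeds(pose, goal, look_ahead, v_ref, track_width):
    """Pure Pursuit (PP) lower layer in miniature: from the look-ahead point
    chosen by the upper layer, compute path curvature and convert it into
    left/right track speeds for a differential (tracked) vehicle.
    pose = (x, y, heading_rad); goal = look-ahead point on the path."""
    x, y, yaw = pose
    dx, dy = goal[0] - x, goal[1] - y
    # Angle between vehicle heading and the line to the look-ahead point.
    alpha = math.atan2(dy, dx) - yaw
    kappa = 2.0 * math.sin(alpha) / look_ahead      # PP curvature command
    omega = v_ref * kappa                           # yaw-rate command [rad/s]
    v_left = v_ref - omega * track_width / 2.0      # track speeds [m/s]
    v_right = v_ref + omega * track_width / 2.0
    return v_left, v_right

print(pure_pursuit_track_speeds((0, 0, 0), (4.0, 1.0), look_ahead=4.2,
                                v_ref=2.0, track_width=1.6))
```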

23 pages, 88853 KiB  
Article
RSW-YOLO: A Vehicle Detection Model for Urban UAV Remote Sensing Images
by Hao Wang, Jiapeng Shang, Xinbo Wang, Qingqi Zhang, Xiaoli Wang, Jie Li and Yan Wang
Sensors 2025, 25(14), 4335; https://doi.org/10.3390/s25144335 - 11 Jul 2025
Viewed by 579
Abstract
Vehicle detection in remote sensing images faces significant challenges due to small object sizes, scale variation, and cluttered backgrounds. To address these issues, we propose RSW-YOLO, an enhanced detection model built upon the YOLOv8n framework, designed to improve feature extraction and robustness against environmental noise. A Restormer module is incorporated into the backbone to model long-range dependencies via self-attention, enabling better handling of multi-scale features and complex scenes. A dedicated detection head is introduced for small objects, focusing on critical channels while suppressing irrelevant information. Additionally, the original CIoU loss is replaced with WIoU, which dynamically reweights predicted boxes based on their quality, enhancing localization accuracy and stability. Experimental results on the DJCAR dataset show mAP@0.5 and mAP@0.5:0.95 improvements of 5.4% and 6.2%, respectively, and corresponding gains of 4.3% and 2.6% on the VisDrone dataset. These results demonstrate that RSW-YOLO offers a robust and accurate solution for UAV-based vehicle detection, particularly in urban scenes with dense or small targets. Full article
(This article belongs to the Section Sensors and Robotics)
