Search Results (74)

Search Parameters:
Keywords = Average Positioning Error (APE)

35 pages, 6720 KB  
Article
Vision-Based Vehicle State and Behavior Analysis for Aircraft Stand Safety
by Ke Tang, Liang Zeng, Tianxiong Zhang, Di Zhu, Wenjie Liu and Xinping Zhu
Sensors 2026, 26(6), 1821; https://doi.org/10.3390/s26061821 - 13 Mar 2026
Viewed by 398
Abstract
With the continuous elevation of aviation safety standards, accurate monitoring of ground support vehicles in aircraft stand areas has become a critical task for enhancing overall aircraft stand operational safety. Given the limitations of existing surface movement radar and multi-camera surveillance systems in terms of cost, deployment complexity, and coverage, this paper proposes a lightweight vision-based framework for vehicle state perception and spatiotemporal behavior analysis oriented toward aircraft stand safety. Leveraging existing fixed monocular monitoring resources in the stand area, the framework first establishes a precise mapping from image pixel coordinates to the physical plane through self-calibration and homography transformation utilizing scene line features, thereby achieving unified spatial measurement of vehicle targets. Subsequently, it integrates an improved lightweight YOLO detector (incorporating Ghost modules and CBAM for noise suppression) with the ByteTrack tracking algorithm to enable stable extraction of vehicle trajectories under complex occlusion conditions. Finally, by combining functional zone division within the stand, a semantic map is constructed, and a behavior analysis method based on a spatiotemporal finite state machine is proposed. This method performs joint reasoning by fusing multi-dimensional constraints including position, zone, and time, enabling automatic detection of abnormal behaviors such as “intrusion into restricted areas” and “abnormal stop.” Quantitative evaluations demonstrate the framework’s efficacy: it achieves an average physical localization error (RMSE) of 0.32 m, and the improved detection model reaches an accuracy (mAP@50) of 90.4% for ground support vehicles. In tests simulating typical violation scenarios, the system achieved high recall (96.0%) and precision (95.8%) rates in detecting ‘area intrusion’ and ‘abnormal stop’ violations, respectively. 
These results, achieved using only existing surveillance cameras, validate its potential as a cost-effective and easily deployable tool to augment existing safety monitoring systems for airport ground operations. Full article
(This article belongs to the Special Issue Intelligent Sensing and Control Technology for Unmanned Vehicles)
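The pixel-to-ground mapping and the RMSE figure quoted in this abstract rest on standard machinery (homography transfer, then root-mean-square error against surveyed points). A minimal numpy sketch, under the assumption of a known 3×3 homography H and Nx2 point arrays, might look like:

```python
import numpy as np

def apply_homography(H, pts):
    """Map Nx2 pixel coordinates to the ground plane with a 3x3 homography."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])   # to homogeneous coordinates
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]              # divide out the scale factor

def localization_rmse(pred_xy, true_xy):
    """Root-mean-square error between mapped and surveyed ground positions."""
    return float(np.sqrt(np.mean(np.sum((pred_xy - true_xy) ** 2, axis=1))))
```

The paper estimates H from scene line features via self-calibration; in practice one could also fit it from point correspondences (e.g. OpenCV's `findHomography`).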

16 pages, 32370 KB  
Article
ATDIOU: Arctangent Differential Loss Function for Bounding Box Regression
by Qiang Tang, Hao Qiang, Yuan Tian, Xubin Feng, Wei Hao and Meilin Xie
Sensors 2026, 26(5), 1545; https://doi.org/10.3390/s26051545 - 1 Mar 2026
Viewed by 397
Abstract
Object detection is a fundamental task in computer vision. Bounding box regression (BBR) losses are critical to detector performance. However, evaluation measures that rely on the Intersection over Union (IoU) between the predicted and ground truth boxes are highly sensitive to positional deviations, which can hinder optimization. To alleviate this issue, we propose ATDIoU, a novel arctangent-differential loss for bounding-box regression. ATDIoU computes distance similarity between a predicted and a ground truth box by modeling the distances between their corresponding vertices as a two-dimensional arctangent differential distribution (ATD). This arctangent differential-based design mitigates bounding box drift and reduces sensitivity to localization errors. As a result, it guides the model to learn target positions more effectively. We evaluate ATDIoU by integrating it into YOLOv6 and conducting experiments on PASCAL VOC and VisDrone2019. The results demonstrate that ATDIoU yields improvements of 1.4% and 0.7% in mean average precision (mAP) relative to MPDIoU. Full article
(This article belongs to the Special Issue AI for Emerging Image-Based Sensor Applications)
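Every BBR loss compared in this abstract (IoU, MPDIoU, ATDIoU) builds on the plain intersection-over-union of axis-aligned boxes. A quick reference implementation, assuming the common `(x1, y1, x2, y2)` corner format:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)  # zero if boxes don't overlap
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

The sensitivity the authors target is visible here: small positional shifts of a small box change `inter` sharply, so a pure `1 - iou` loss gives noisy gradients; ATDIoU's vertex-distance term is their proposed remedy.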

22 pages, 10574 KB  
Article
A Method for Pedestrian Trajectory Prediction Using INS-GNSS Wearable Devices
by Shengli Pang, Zhe Wang, Shiji Xu, Weichen Long, Ruoyu Pan and Honggang Wang
Sensors 2026, 26(4), 1309; https://doi.org/10.3390/s26041309 - 18 Feb 2026
Viewed by 452
Abstract
Driven by advancements in artificial intelligence technology, pedestrian trajectory prediction is shifting from traditional machine learning methods toward autonomous decision-making frameworks based on neural networks. However, the spatiotemporal uncertainty of pedestrian movement results in low accuracy of existing prediction models. To address this issue, we propose a multi-source perception fusion system based on INS-GNSS wearable devices. By integrating high-precision inertial measurement units (IMUs) and multi-mode global navigation satellite systems (GNSS), we enhance localization and prediction accuracy. For localization, we introduce a Gait Adaptive UKF (Gait-AUKF) that identifies pedestrian gait patterns and motion states by fusing multi-sensor data. An adaptive algorithm effectively suppresses trajectory drift and improves tracking accuracy. For trajectory prediction, we propose a pedestrian trajectory prediction framework based on a multi-source fusion attention mechanism. A GRU encoder extracts pedestrian trajectory features from historical motion data. An attention mechanism assigns varying weights to trajectory features across different scales. An LSTM decoder and A* path planning algorithm constrain spatiotemporal paths to generate future pedestrian trajectories. Experimental results demonstrate that compared to UKF and AKF, the Gait-AUKF reduces eastward error by 30%, northward error by 26.27%, and vertical error by 49.08%. The complete prediction framework achieves a 68.54% reduction in average position error (APE) and a 70.42% reduction in direction error (DE) compared to LSTM and Transformer models. Ablation experiments demonstrate that the integrated Gait-AUKF algorithm and A* path planning algorithm enhance model decision performance. After incorporating these algorithms, the model’s ADE decreased by 68.49% and FDE by 71.86%. Full article
(This article belongs to the Section Wearables)
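The ADE and FDE figures reported in this abstract are the two standard trajectory-prediction metrics: mean displacement over all predicted timesteps, and displacement at the final timestep. A minimal sketch over Tx2 numpy arrays:

```python
import numpy as np

def ade(pred, gt):
    """Average Displacement Error: mean Euclidean distance over all timesteps."""
    return float(np.mean(np.linalg.norm(pred - gt, axis=-1)))

def fde(pred, gt):
    """Final Displacement Error: Euclidean distance at the last timestep."""
    return float(np.linalg.norm(pred[-1] - gt[-1]))
```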

23 pages, 6060 KB  
Article
YOLO-CSB: A Model for Real-Time and Accurate Detection and Localization of Occluded Apples in Complex Orchard Environments
by Yunxiao Pan, Yiwen Chen, Xing Tong, Mengfei Liu, Anxiang Huang, Meng Zhou and Yaohua Hu
Agronomy 2026, 16(3), 390; https://doi.org/10.3390/agronomy16030390 - 5 Feb 2026
Cited by 1 | Viewed by 676
Abstract
Apples are cultivated over a large global area with high yields, and efficient robotic harvesting requires accurate detection and localization, particularly in complex orchard environments where occlusion by leaves and fruits poses substantial challenges. To address this, we proposed a YOLO-CSB model-based method for apple detection and localization, designed to overcome occlusion and enhance the efficiency and accuracy of mechanized harvesting. Firstly, a comprehensive apple dataset was constructed, encompassing various lighting conditions and leaf obstructions, to train the model. Subsequently, the YOLO-CSB model, built upon YOLO11s, was developed with improvements including the integration of a lightweight CSFC Block to reconstruct the backbone, making the model more lightweight; the SEAM component is introduced to improve feature restoration in areas with occlusions, complemented by the efficient BiFPN approach to boost detection precision. Additionally, a 3D positioning technique integrating YOLO-CSB with an RGB-D camera is presented. Validation was conducted via ablation analyses, comparative tests, and 3D localization accuracy assessments in controlled laboratory and structured orchard settings. The YOLO-CSB model demonstrated effectiveness in apple target recognition and localization, with notable advantages under leaf and fruit occlusion conditions. Compared to the baseline YOLO11s model, YOLO-CSB improved mAP by 3.02% and reduced the parameter count by 3.19%. Against mainstream object detection models, YOLO-CSB exhibited significant advantages in detection accuracy and model size, achieving a mAP of 93.69%, precision of 88.82%, recall of 87.58%, and a parameter count of only 9.11 M. The detection accuracy in laboratory settings reached 100%, with average localization errors of 4.15 mm, 3.96 mm, and 4.02 mm in the X, Y, and Z directions, respectively.
This method effectively addresses complex occlusion environments, enabling efficient detection and precise localization of apples, providing reliable technical support for mechanized harvesting. Full article
(This article belongs to the Section Precision and Digital Agriculture)

23 pages, 5549 KB  
Article
A Precision Weeding System for Cabbage Seedling Stage
by Pei Wang, Weiyue Chen, Qi Niu, Chengsong Li, Yuheng Yang and Hui Li
Agriculture 2026, 16(3), 384; https://doi.org/10.3390/agriculture16030384 - 5 Feb 2026
Viewed by 441
Abstract
This study developed an integrated vision–actuation system for precision weeding in indoor soil bin environments, with cabbage as a case example. The system integrates lightweight object detection, 3D co-ordinate mapping, path planning, and a three-axis synchronized conveyor-type actuator to enable precise weed identification and automated removal. By integrating ECA and CBAM attention mechanisms into YOLO11, we developed the YOLO11-WeedNet model. This integration significantly enhanced the detection performance for small-scale weeds under complex lighting and cluttered backgrounds. Based on the optimal model performance achieved during experimental evaluation, the model achieved 96.25% precision, 86.49% recall, 91.10% F1-score, and a mean Average Precision (mAP@0.5) of 91.50% calculated across two categories (crop and weed). An RGB-D fusion localization method combined with a protected-area constraint enabled accurate mapping of weed spatial positions. Furthermore, an enhanced Artificial Hummingbird Algorithm (AHA+) was proposed to optimize the execution path and reduce the operating trajectory while maintaining real-time performance. Indoor soil bin tests showed positioning errors of less than 8 mm on the X/Y axes, depth control within ±1 mm on the Z-axis, and an average weeding rate of 88.14%. The system achieved zero contact with cabbage seedlings, with a processing time of 6.88 s per weed. These results demonstrate the feasibility of the proposed system for precise and automated weeding at the cabbage seedling stage. Full article
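The "protected-area constraint" mentioned in this abstract amounts to rejecting any weed target that falls inside an exclusion zone around a crop seedling. A minimal sketch; the 40 mm radius and function name are hypothetical placeholders, not values from the paper:

```python
import math

def is_safe_target(weed_xy, crop_xy_list, protect_radius_mm=40.0):
    """Reject a weed target inside a protected circle around any crop seedling.

    protect_radius_mm is a hypothetical placeholder, not a value from the paper.
    """
    return all(math.dist(weed_xy, crop) > protect_radius_mm for crop in crop_xy_list)
```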

23 pages, 3879 KB  
Article
Simultaneous Digital Twin: Chaining Climbing-Robot, Defect Segmentation, and Model Updating for Building Facade Inspection
by Changhao Song, Chang Lu, Yilong Shi, Aili He, Jiarui Lin and Zhiliang Ma
Buildings 2026, 16(3), 646; https://doi.org/10.3390/buildings16030646 - 4 Feb 2026
Viewed by 651
Abstract
The rapid deterioration of building facades presents substantial safety hazards in urban environments, necessitating advanced, automated inspection solutions. While computer vision (CV) and deep learning (DL) techniques have shown promise for defect analysis, critical gaps remain in achieving real-time, quantitative, and generalizable damage assessment suitable for robotic deployment. Current methods often lack precise metric quantification, struggle with diverse material appearances, and are computationally intensive for on-site processing. To address these limitations, this paper introduces a fully automated, end-to-end inspection framework integrating a wall-climbing robot, a real-time vision-based analysis system, and a digital twin management platform. The primary contributions are threefold: (1) a novel, fully integrated robotic framework for autonomous navigation, multi-sensor data collection, and real-time analysis; (2) a lightweight, synthetic data-augmented DL model for real-time defect segmentation and metric quantification, achieving a mean Average Precision (mAP) of 0.775 for segmentation, an average defect length error of 1.140 cm, and an average center position error of 0.826 cm; (3) a cloud-based digital twin platform enabling quantitative defect visualization, spatiotemporal traceability, and data-driven project management, with the on-site inspection cycle demonstrating a responsive latency of 2.8–4.8 s. Validated through laboratory tests and real building projects, the framework demonstrates significant improvements in inspection efficiency, quantitative accuracy, and decision support over conventional methods. Full article

22 pages, 1674 KB  
Article
Foggy Ship Detection with Multi-Scale Feature and Attention Fusion
by Xiangjin Zeng, Jie Li and Ruifeng Xiong
Appl. Sci. 2026, 16(3), 1475; https://doi.org/10.3390/app16031475 - 1 Feb 2026
Viewed by 325
Abstract
To address the problem of insufficient detection accuracy, high false negative rate of small targets, and large positioning errors of ships in complex marine environments and foggy conditions, an improved DBL-YOLO method based on YOLOv11 is proposed. This method customizes and optimizes modules according to the characteristics of foggy scenes—the C3k2-MDSC module is designed to efficiently extract and fuse multi-scale spatial features, and a dynamic weight allocation mechanism is adopted to balance the contributions of features at different scales in the foggy and blurred environment; a lightweight BiFPN structure is introduced to enhance the efficiency of cross-scale feature transmission and solve the problem of feature attenuation in foggy conditions; a novel fusion of the Deformable-LKA attention mechanism is innovated, which combines a large receptive field and spatial adaptive adjustment capabilities to focus on the key contour features of blurred ships in foggy conditions; an Inner-SIoU regression loss function is proposed, which optimizes the positioning accuracy of dense and small targets through an auxiliary bounding box dynamic scaling strategy. Experimental results show that in foggy scenes, the recall rate is increased by 3.4%, the F1 score is increased by 1%, and mAP@0.5 and mAP@0.5:0.95 are increased by 1.4% and 3.1%, respectively. The final average precision reaches 98.6%, demonstrating excellent detection accuracy and robustness. Full article
(This article belongs to the Section Computing and Artificial Intelligence)

21 pages, 5907 KB  
Article
Indoor Localization Algorithm Based on Information Gain Ratio and Affinity Propagation Clustering
by Rencheng Jin, Di Zhang, Xiao Tian and Jianping Ma
Sensors 2026, 26(2), 664; https://doi.org/10.3390/s26020664 - 19 Jan 2026
Viewed by 506
Abstract
In indoor positioning systems, it is common to use existing AP deployments within buildings to build a fingerprint database, providing positioning information during the online phase. However, AP layouts inside buildings often contain a large number of redundant APs, which causes improvements in positioning accuracy to level off as the number of redundant APs increases, while also increasing the computational load of indoor positioning services. To address this problem, this paper proposes a method for calculating the AP location discrimination capability and combines the location discrimination capability with coverage to eliminate redundant APs. Experiments conducted in real indoor scenarios, as well as on the Crowdsourced dataset and the SODIndoorLoc dataset, validate the results. The results show that the redundant AP removing strategy ensures that the average positioning accuracy fluctuates by no more than 5% compared to the unfiltered case, while significantly reducing the number of APs in the fingerprint database—by 64.43%, 72.78%, and 59.62%, respectively. In the position estimation phase, this paper uses affinity propagation clustering for coarse positioning and combines Bayesian methods for fine positioning. Compared with GMM, K-Means, and the pointwise algorithm, the average positioning error of the proposed method is reduced by 11% to 39%. Full article
(This article belongs to the Special Issue Indoor Localization Technologies and Applications)
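The coarse-positioning step this abstract describes can be illustrated with scikit-learn's `AffinityPropagation`: cluster the offline RSSI fingerprints, then assign an online reading to its nearest exemplar. The two-region synthetic RSSI data below is invented purely for the sketch:

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

# Hypothetical offline fingerprint database: rows are 5-AP RSSI vectors (dBm)
# sampled at reference points in two well-separated regions.
rng = np.random.default_rng(0)
fingerprints = np.vstack([
    rng.normal(-40, 2, size=(20, 5)),   # region A signatures
    rng.normal(-70, 2, size=(20, 5)),   # region B signatures
])

# Offline phase: affinity propagation picks exemplars without a preset cluster count.
ap = AffinityPropagation(random_state=0).fit(fingerprints)

# Online phase: a new RSSI reading is assigned to its nearest exemplar (coarse fix);
# the paper then refines within that cluster using Bayesian methods.
sample = rng.normal(-40, 2, size=(1, 5))
coarse_cluster = ap.predict(sample)[0]
```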

20 pages, 5733 KB  
Article
A Lightweight Segmentation Model Method for Marigold Picking Point Localization
by Baojian Ma, Zhenghao Wu, Yun Ge, Bangbang Chen, Jijing Lin, He Zhang and Hao Xia
Horticulturae 2026, 12(1), 97; https://doi.org/10.3390/horticulturae12010097 - 17 Jan 2026
Cited by 1 | Viewed by 368
Abstract
A key challenge in automated marigold harvesting lies in the accurate identification of picking points under complex environmental conditions, such as dense shading and intense illumination. To tackle this problem, this research proposes a lightweight instance segmentation model combined with a harvest position estimation method. Based on the YOLOv11n-seg segmentation framework, we develop a lightweight PDS-YOLO model through two key improvements: (1) structural pruning of the base model to reduce its parameter count, (2) incorporation of a Channel-wise Distillation (CWD)-based feature distillation method to compensate for the accuracy loss caused by pruning. The resulting lightweight segmentation model achieves a size of only 1.3 MB (22.8% of the base model) and a computational cost of 5 GFLOPs (49.02% of the base model). At the same time, it maintains high segmentation performance, with a precision of 93.6% and a mean average precision (mAP) of 96.7% for marigold segmentation. Furthermore, the proposed model demonstrates enhanced robustness under challenging scenarios including strong lighting, cloudy weather, and occlusion, improving the recall rate by 1.1% over the base model. Based on the segmentation results, a method for estimating marigold harvest positions using 3D point clouds is proposed. Fitting and deflection angle experiments confirm that the fitting errors are constrained within 3–12 mm, which lies within an acceptable range for automated harvesting. These results validate the capability of the proposed approach to accurately locate marigold harvest positions under top-down viewing conditions. The lightweight segmentation network and harvest position estimation method presented in this work offer effective technical support for selective harvesting of marigolds. Full article
(This article belongs to the Special Issue Orchard Intelligent Production: Technology and Equipment)

20 pages, 4373 KB  
Article
SO-YOLO11-CDP: An Instance Segmentation-Based Approach for Cross-Depth-of-Field Positioning Micro Image Sensor Modules in Precision Assembly
by Xi Lu, Juan Zhang, Yi Yang and Lie Bi
Electronics 2026, 15(2), 411; https://doi.org/10.3390/electronics15020411 - 16 Jan 2026
Viewed by 412
Abstract
During batch soldering and assembly of micro image sensor modules, initially random poses and partial occlusion of features in the target micro-component image lead to missed and erroneous detections, as well as low 3D spatial positioning accuracy caused by cross-depth-of-field detection errors in microscopic vision. This paper proposes Small object-YOLO11-Cross-Depth-of-field Positioning (SO-YOLO11-CDP), an instance segmentation-based approach for precise cross-depth-of-field positioning of micro-components. First, an improved Small object-YOLO11 (SO-YOLO11) image segmentation algorithm is designed. A coordinate attention (CA) mechanism is incorporated into the segmentation head to enhance localization of micro-targets, the backbone uses non-strided convolution to preserve fine-grained features, and target regression performance is boosted via an Efficient-IoU (EIoU) loss combined with the normalized Wasserstein distance (NWD). Subsequently, to further improve spatial position detection accuracy in cross-depth-of-field detection, a calibration error compensation model for the image Jacobian matrix is established based on pinhole imaging principles. Experimental results indicate that SO-YOLO11 achieves a 16.1% increase in precision, a 4.0% increase in recall, and a 9.9% increase in mean average precision (mAP0.5) over the baseline YOLO11. Furthermore, it achieves spatial detection accuracy better than 6.5 μm for target micro-components. The method presented in this paper holds significant engineering application value for high-precision spatial position detection of micro image sensor components. Full article
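The pinhole imaging principles this abstract invokes reduce, in the simplest case, to back-projecting a pixel and its depth into a camera-frame 3D point. A minimal sketch with hypothetical intrinsics (focal lengths fx, fy and principal point cx, cy in pixels); the paper's Jacobian-based error compensation would be applied on top of this:

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Pinhole model: recover a camera-frame 3D point from pixel (u, v) and depth."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)
```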

22 pages, 20100 KB  
Article
Real-Time Detection and Validation of a Target-Oriented Model for Spindle-Shaped Tree Trunks Leveraging Deep Learning
by Kang Zheng, Shuo Yang, Zhichong Wang, Hao Fu, Xiu Wang, Wei Zou, Changyuan Zhai and Liping Chen
Agronomy 2026, 16(2), 210; https://doi.org/10.3390/agronomy16020210 - 15 Jan 2026
Viewed by 505
Abstract
To enhance the automation and intelligence of trenching fertilization operations, this research proposes a real-time trunk detection model (Trunk-Seek) designed for spindle-shaped orchards. The model employs a customized data augmentation strategy and integrates the YOLO deep learning framework to effectively address visual challenges such as lighting variation, occlusion, and motion blur. Multiple object tracking algorithms were evaluated, and ByteTrack was selected for its superior performance in dynamic trunk tracking. In addition, a Positioning and Triggering Algorithm (PTA) was developed to enable precise localization and triggering for target-oriented fertilization. The system was deployed on an edge device, a test bench was established, and both laboratory and field experiments were conducted to validate its performance. Experimental results demonstrated that the detection model achieved an mAP50 of 98.9% and maintained a stable 32.53 FPS on the edge device, fulfilling real-time detection requirements. Test bench analysis revealed that variations in trunk diameter and operation speed affected triggering accuracy, with an average dynamic localization error of ±1.78 cm. An empirical model (T) was developed to describe the time-delay behavior associated with positioning errors. Field verification in orchards confirmed that Trunk-Seek achieved a triggering accuracy of 91.08%, representing a 24.08% improvement over conventional training methods. Combining high accuracy with robust real-time performance, Trunk-Seek and the proposed PTA provide essential technical support for the development of a visual target-oriented fertilization system in modern orchards. Full article
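The abstract's observation that operation speed affects triggering accuracy has a simple first-order reading: any fixed processing latency translates into a localization offset proportional to speed. The sketch below is a hypothetical simplification, not the paper's empirical model T:

```python
def dynamic_offset_cm(speed_cm_s, delay_s):
    """First-order time-delay model: localization offset grows with speed and latency.

    A hypothetical simplification of the paper's empirical time-delay model T.
    """
    return speed_cm_s * delay_s
```

At 50 cm/s with a 40 ms end-to-end delay this already yields a 2 cm offset, on the order of the ±1.78 cm average dynamic error the paper reports, which is why the authors compensate for the delay explicitly.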

27 pages, 7144 KB  
Article
A Time and Frequency Domain Based Dual-Attention Neural Network for Tropical Cyclone Track Prediction
by Fancheng Meng, Xiran Xiong and Liling Zhao
Appl. Sci. 2026, 16(1), 436; https://doi.org/10.3390/app16010436 - 31 Dec 2025
Cited by 1 | Viewed by 518
Abstract
Due to the influence of various dynamic meteorological factors, accurate Tropical Cyclone (TC) track prediction is a significant challenge. However, current deep learning based time series prediction models fail to simultaneously capture both short-term and long-term dependencies, while also neglecting the change in meteorological environment pattern associated with TC motion. This limitation becomes particularly pronounced during sudden turning in the TC track, resulting in significant deterioration of prediction accuracy. To overcome these limitations, we propose LFInformer, a hybrid deep learning framework that integrates an Informer backbone, a Frequency-Enhanced Channel Attention Mechanism (FECAM), and a Long Short-Term Memory (LSTM) network for TC track prediction. The Informer backbone is underpinned by ProbSparse Self-Attention in both the encoder and the causally masked decoder, prioritizing the most informative query–key interactions to deliver robust long-range modeling and sharper detection of turning signals. FECAM enhances meteorological inputs via discrete cosine transforms, band-wise weighting, and channel-wise reweighting, then projects the enhanced signals back into the time domain to produce frequency-aware representations. The LSTM branch captures short-term variations and localized temporal dynamics through its recurrent structure. Together, these components sustain high accuracy during both steady evolution and sudden turning. Experiments based on the JMA and IBTrACS 1951–2022 Northwest Pacific TC data show that the proposed model achieves an average absolute position error (APE) of 72.39 km, 117.72 km, 145.31 km and 168.64 km for the 6-h, 12-h, 24-h and 48-h forecasting tasks, respectively. The proposed model enhances the accuracy of TC track predictions, offering an innovative approach that optimally balances precision and efficiency in forecasting sudden turning points. Full article
(This article belongs to the Special Issue Advanced Methods for Time Series Forecasting)
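The kilometre-scale APE figures in this abstract are great-circle distances between forecast and best-track positions. A minimal sketch using the standard haversine formula (the spherical-Earth radius of 6371 km is the usual convention; the exact metric definition used by the paper may differ):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2, r=6371.0):
    """Great-circle distance in kilometres between two (lat, lon) points in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def mean_ape_km(pred_track, true_track):
    """Average absolute position error over corresponding forecast/best-track points."""
    return sum(haversine_km(*p, *t) for p, t in zip(pred_track, true_track)) / len(pred_track)
```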

28 pages, 6066 KB  
Article
Vision-Based System for Tree Species Recognition and DBH Estimation in Artificial Forests
by Zhiheng Lu, Yu Li, Chong Li, Tianyi Wang, Hao Lai, Wang Yang and Guanghui Wang
Forests 2026, 17(1), 17; https://doi.org/10.3390/f17010017 - 22 Dec 2025
Viewed by 679
Abstract
The species, quantity, and tree diameter at breast height (DBH) are important indicators for assessing species distribution, individual growth status, and overall health in the forest. The existing tree information collection mainly relies on manual labor, which results in low efficiency and high labor intensity. To address these issues, we propose a method for tree species identification and diameter estimation by combining deep learning algorithms with binocular vision. First, an image acquisition platform is designed and integrated with a weeding machine to capture images during weeding operation. Images of seven types of trees are captured to develop a dataset. Second, a tree species identification model is established based on the YOLOv8n network, achieving 98.5% accuracy, 99.0% recall, and 99.2% mAP. Then, an improved YOLOv8n-seg model is proposed. It simplifies the network by introducing VanillaBlock in the backbone. FasterNet with a CCFM structure is added at the neck to enhance the model’s multi-scale expression capability. The mIoU of the improved model is 93.7%. Finally, the improved YOLOv8n-seg model is combined with binocular vision. After obtaining the segmentation mask of the tree, the spatial position of the two measurement points is calculated, allowing for the measurement of tree diameter. Verification experiments show that the average error for tree diameter ranges from 4.40~6.40 mm, and the proposed error compensation method can reduce diameter errors. This study provides a theoretical foundation and technical support for intelligent collection of tree information. Full article
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
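The binocular measurement this abstract describes rests on two textbook steps: depth from stereo disparity (Z = f·B/d), then diameter as the distance between the two trunk-edge points. A minimal sketch; the focal length, baseline, and disparity values in the test are hypothetical, not the paper's calibration:

```python
import math

def disparity_to_depth(f_px, baseline_m, disparity_px):
    """Classic pinhole stereo relation: Z = f * B / d (focal length and disparity in px)."""
    return f_px * baseline_m / disparity_px

def diameter_m(p_left, p_right):
    """Diameter estimate: Euclidean distance between the two 3D measurement points."""
    return math.dist(p_left, p_right)
```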

21 pages, 7924 KB  
Article
Wood-YOLOv11: An Optimized YOLOv11-Based Model for Real-Time Pith Detection in Sawn Timber
by Shuke Jia, Fanxu Kong, Baolei Jin, Chenyang Jin and Zeli Que
Appl. Sci. 2025, 15(24), 13056; https://doi.org/10.3390/app152413056 - 11 Dec 2025
Viewed by 744
Abstract
The precise localization of the pith within sawn timber cross-sections is essential for improving downstream processing accuracy in modern wood manufacturing. Existing industrial workflows still rely heavily on manual interpretation, which is labor-intensive, error-prone, and unsuitable for real-time quality control. However, automatic pith detection is challenging due to the small size of the pith, its visual similarity to knots and cracks, and the dominance of negative samples (boards without visible pith) in practical scenarios. To address these challenges, this study develops Wood-YOLOv11, a task-adapted YOLOv11-based pith detection model optimized for real-time, high-precision operation in wood processing environments. The proposed approach incorporates: (1) a dedicated sawn-timber cross-section dataset including multiple species, mixed imaging sources, and clearly annotated pith positions; (2) a negative-sample-aware training strategy that explicitly leverages pithless boards and weighted binary cross-entropy to mitigate extreme class imbalance; (3) a high-resolution (840 × 840) input configuration and optimized loss weighting to improve small-target localization; and (4) a comprehensive evaluation protocol including false-positive analysis on pithless boards and comparison with mainstream detectors. Validated on a comprehensive, custom-annotated sawn timber dataset, the model demonstrates excellent performance: it achieves a mean Average Precision (mAP@0.5) of 92.1%, a precision of 95.18%, and a recall of 87.72%, demonstrating its ability to handle high-texture backgrounds and small target sizes. The proposed Wood-YOLOv11 model provides a robust, real-time, and efficient technical solution for the intelligent transformation of the wood processing industry.
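The weighted binary cross-entropy mentioned in point (2) can be sketched as below. This is a minimal illustration of the general technique, not the article's exact loss: the class weights `w_pos` and `w_neg` are hypothetical, chosen only to show how the rare pith-present class is up-weighted against the dominant pithless negatives.

```python
import math

def weighted_bce(p, y, w_pos=4.0, w_neg=1.0, eps=1e-7):
    """Weighted binary cross-entropy for one prediction.

    p: predicted probability of the positive (pith-present) class,
    y: ground-truth label (1 = pith present, 0 = pithless board).
    w_pos > w_neg up-weights the rare positive class.
    """
    p = min(max(p, eps), 1.0 - eps)        # clamp to avoid log(0)
    return -(w_pos * y * math.log(p) + w_neg * (1 - y) * math.log(1 - p))

# With these weights, a confident miss on a pith (p=0.1, y=1) costs more
# than an equally confident false alarm on a pithless board (p=0.9, y=0).
loss_miss  = weighted_bce(0.1, 1)
loss_alarm = weighted_bce(0.9, 0)
```

In a real training pipeline this per-sample term would be averaged over the batch; frameworks such as PyTorch expose the same idea via a positive-class weight on their BCE loss.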

26 pages, 10166 KB  
Article
ADG-YOLO: A Lightweight and Efficient Framework for Real-Time UAV Target Detection and Ranging
by Hongyu Wang, Zheng Dang, Mingzhu Cui, Hanqi Shi, Yifeng Qu, Hongyuan Ye, Jingtao Zhao and Duosheng Wu
Drones 2025, 9(10), 707; https://doi.org/10.3390/drones9100707 - 13 Oct 2025
Cited by 3 | Viewed by 3250
Abstract
The rapid evolution of UAV technology has increased the demand for lightweight airborne perception systems. This study introduces ADG-YOLO, an optimized model for real-time target detection and ranging on UAV platforms. Building on YOLOv11n, we integrate C3Ghost modules for efficient feature fusion and ADown layers for detail-preserving downsampling, reducing the model's parameters to 1.77 M and computation to 5.7 GFLOPs. Extended Kalman Filter (EKF) tracking improves positional stability in dynamic environments. Monocular ranging is achieved using similar-triangles theory with known target widths. Evaluations on a custom dataset, consisting of 5343 images of three drone types in complex environments, show that ADG-YOLO achieves 98.4% mAP@0.5 and 85.2% mAP@0.5:0.95 at 27 FPS when deployed on Lubancat4 edge devices. Distance measurement tests indicate an average error of 4.18% in the 0.5–5 m range for the DJI NEO model, and an average error of 2.40% in the 2–50 m range for the DJI 3TD model. These results suggest that the proposed model provides a practical trade-off between detection accuracy and computational efficiency for resource-constrained UAV applications.
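The similar-triangles ranging principle used above reduces to one line: for a target of known physical width W imaged w pixels wide by a camera with focal length f (in pixels), the distance is W·f/w. A minimal sketch, with hypothetical target width, focal length, and bounding-box width chosen purely for illustration:

```python
def monocular_distance(known_width_m, focal_px, bbox_width_px):
    """Similar-triangles ranging: distance = W * f / w_pixels.

    known_width_m: real-world target width in metres (assumed known),
    focal_px: camera focal length expressed in pixels,
    bbox_width_px: detected bounding-box width in pixels.
    """
    return known_width_m * focal_px / bbox_width_px

# Hypothetical example: a drone of known 0.3 m width detected 90 px wide
# by a camera whose focal length is 900 px.
dist = monocular_distance(0.3, 900.0, 90.0)   # -> 3.0 m
```

Note the inverse relationship: halving the detected pixel width doubles the estimated distance, which is why accurate bounding boxes from the detector directly determine ranging accuracy.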
