Search Results (198)

Search Parameters:
Keywords = drone LiDAR

15 pages, 2538 KiB  
Article
Dynamic Obstacle Perception Technology for UAVs Based on LiDAR
by Wei Xia, Feifei Song and Zimeng Peng
Drones 2025, 9(8), 540; https://doi.org/10.3390/drones9080540 - 31 Jul 2025
Abstract
With the widespread application of small quadcopter drones in military and civilian fields, the security challenges they face are becoming increasingly apparent. In dynamic environments especially, rapidly changing conditions make drone flight more complex. To address the computational limitations of small quadcopter drones and meet the demands of obstacle perception in dynamic environments, a LiDAR-based obstacle perception algorithm is proposed. First, accumulation, filtering, and clustering are carried out on the LiDAR point cloud data to segment and extract point cloud obstacles. Then, a dynamic/static obstacle discrimination algorithm based on three-dimensional point motion attributes is developed to classify dynamic and static point clouds. Finally, oriented bounding box (OBB) detection is employed to simplify the representation of the spatial position and shape of dynamic point cloud obstacles, and motion estimation is achieved by tracking the OBB parameters with a Kalman filter. Simulation experiments demonstrate that this method sustains a dynamic obstacle detection frequency of 10 Hz and successfully detects multiple dynamic obstacles in the environment.
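
As a rough illustration of the tracking stage described above, the sketch below implements a constant-velocity Kalman filter over an OBB centroid. It is a minimal sketch, not the authors' implementation: the state layout, noise covariances, and the 0.1 s step (matching the reported 10 Hz rate) are assumptions.

```python
# Minimal sketch (assumed parameters): constant-velocity Kalman filter
# tracking the center of an oriented bounding box fitted to a dynamic
# obstacle cluster. Observations are OBB centers; velocity is inferred.
import numpy as np

class OBBTracker:
    def __init__(self, center, dt=0.1):  # 10 Hz update, per the abstract
        self.x = np.hstack([center, np.zeros(3)])  # state: [px,py,pz,vx,vy,vz]
        self.P = np.eye(6)                          # state covariance
        self.F = np.eye(6)                          # constant-velocity transition
        self.F[:3, 3:] = dt * np.eye(3)
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])  # observe position only
        self.Q = 0.01 * np.eye(6)                   # process noise (assumed)
        self.R = 0.05 * np.eye(3)                   # measurement noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:3]                           # predicted OBB center

    def update(self, z):
        y = z - self.H @ self.x                     # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)    # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P
        return self.x[3:]                           # estimated velocity
```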

26 pages, 11912 KiB  
Article
Multi-Dimensional Estimation of Leaf Loss Rate from Larch Caterpillar Under Insect Pest Stress Using UAV-Based Multi-Source Remote Sensing
by He-Ya Sa, Xiaojun Huang, Li Ling, Debao Zhou, Junsheng Zhang, Gang Bao, Siqin Tong, Yuhai Bao, Dashzebeg Ganbat, Mungunkhuyag Ariunaa, Dorjsuren Altanchimeg and Davaadorj Enkhnasan
Drones 2025, 9(8), 529; https://doi.org/10.3390/drones9080529 - 28 Jul 2025
Viewed by 249
Abstract
Leaf loss caused by pest infestations poses a serious threat to forest health. The leaf loss rate (LLR) is the percentage of the overall tree-crown leaf loss per unit area and is an important indicator for evaluating forest health, so its rapid and accurate acquisition via remote sensing monitoring is crucial. This study draws on drone hyperspectral and LiDAR data as well as ground survey data to calculate hyperspectral indices (HSI), multispectral indices (MSI), and LiDAR indices (LI). It employs Savitzky–Golay (S–G) smoothing with different window sizes (W) and polynomial orders (P), combined with recursive feature elimination (RFE), to select sensitive features. Random Forest Regression (RFR) and Convolutional Neural Network Regression (CNNR) were used to construct multidimensional (horizontal and vertical) estimation models for the LLR, which, combined with LiDAR point cloud data, achieved a three-dimensional visualization of tree leaf loss. The results showed the following: (1) The optimal smoothing combination for the HSI and MSI was W11P3, and for the LI it was W5P2. (2) The numbers of sensitive features extracted by the RFE algorithm were 13 HSI, 16 MSI, and hierarchical LI (2 in layer I, 9 in layer II, and 11 in layer III). (3) For horizontal estimation of the defoliation rate, the model performance index of the CNNR-HSI model (MPI = 0.9383) was significantly better than that of RFR-MSI (MPI = 0.8817), indicating that the continuous hyperspectral bands better capture subtle changes in the LLR. (4) The I-CNNR-HSI+LI, II-CNNR-HSI+LI, and III-CNNR-HSI+LI vertical estimation models, constructed by combining the most accurate CNNR-HSI model with the LI sensitive to each vertical layer, all reached MPIs above 0.8, indicating high estimation accuracy at every vertical level. Using the model, the pixel-level LLR of sample trees was estimated and a three-dimensional display of the LLR for forest trees under larch caterpillar stress was generated, providing a high-precision research scheme for LLR estimation under pest stress.
(This article belongs to the Section Drones in Agriculture and Forestry)
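
The smoothing-plus-selection step reported above (W11P3 smoothing, RFE down to 13 sensitive HSI features) maps naturally onto standard tooling. The sketch below is a minimal illustration under assumed data shapes, not the authors' pipeline; `X` and `y` are hypothetical placeholders.

```python
# Minimal sketch: Savitzky-Golay smoothing of spectra followed by recursive
# feature elimination. W11P3 = window 11, polynomial order 3 (per the abstract);
# the data arrays are hypothetical stand-ins.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE

X = np.random.rand(60, 40)   # placeholder spectral indices (samples x bands)
y = np.random.rand(60)       # placeholder measured leaf loss rates

# Smooth each sample's spectrum along the band axis
X_smooth = savgol_filter(X, window_length=11, polyorder=3, axis=1)

# Recursively eliminate features down to the 13 sensitive HSI features
selector = RFE(RandomForestRegressor(n_estimators=200, random_state=0),
               n_features_to_select=13)
selector.fit(X_smooth, y)
sensitive = np.flatnonzero(selector.support_)  # indices of retained features
```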

17 pages, 6208 KiB  
Article
A Low-Cost Experimental Quadcopter Drone Design for Autonomous Search-and-Rescue Missions in GNSS-Denied Environments
by Shane Allan and Martin Barczyk
Drones 2025, 9(8), 523; https://doi.org/10.3390/drones9080523 - 25 Jul 2025
Viewed by 346
Abstract
Autonomous drones may be called on to perform search-and-rescue operations in environments without access to signals from the global navigation satellite system (GNSS), such as underground mines, subterranean caverns, or confined tunnels. While technology to perform such missions has been demonstrated at events such as DARPA’s Subterranean (Sub-T) Challenge, the hardware deployed for these missions relies on heavy and expensive sensors, such as LiDAR, carried by costly mobile platforms, such as legged robots and heavy-lift multicopters, creating barriers to deployment and training with this technology for all but the wealthiest search-and-rescue organizations. To address this issue, we have developed a custom four-rotor aerial drone platform built specifically around low-cost, low-weight sensors in order to minimize costs and maximize flight time for search-and-rescue operations in GNSS-denied environments. We document the various issues we encountered while building and testing the vehicle and how they were solved, for instance, a novel redesign of the airframe to handle the aggressive yaw maneuvers commanded by the FUEL exploration framework running onboard the drone. The resulting system is successfully validated through an autonomous hardware flight experiment performed in an underground environment without access to GNSS signals. The contribution of the article is to share our experiences with other groups interested in low-cost search-and-rescue drones to help them advance their own programs.

18 pages, 3178 KiB  
Article
Biomass Estimation of Apple and Citrus Trees Using Terrestrial Laser Scanning and Drone-Mounted RGB Sensor
by Min-Ki Lee, Yong-Ju Lee, Dong-Yong Lee, Jee-Su Park and Chang-Bae Lee
Remote Sens. 2025, 17(15), 2554; https://doi.org/10.3390/rs17152554 - 23 Jul 2025
Viewed by 247
Abstract
Developing accurate activity data on tree biomass using remote sensing tools such as LiDAR and drone-mounted sensors is essential for improving carbon accounting in the agricultural sector. However, direct biomass measurements of perennial fruit trees remain limited, especially for validating remote sensing estimates. This study evaluates the potential of terrestrial laser scanning (TLS) and drone-mounted RGB sensors (Drone_RGB) for estimating biomass in two major perennial crops in South Korea: apple (‘Fuji’/M.9) and citrus (‘Miyagawa-wase’). Trees of different ages were destructively sampled for biomass measurement, while volume, height, and crown area data were collected via TLS and Drone_RGB. Regression analyses were performed, and the model accuracy was assessed using R2, RMSE, and bias. The TLS-derived volume showed strong predictive power for biomass (R2 = 0.704 for apple, 0.865 for citrus), while the crown area obtained using both sensors showed poor fit (R2 ≤ 0.7). Aboveground biomass was reasonably estimated (R2 = 0.725–0.865), but belowground biomass showed very low predictability (R2 < 0.02). Although limited in scale, this study provides empirical evidence to support the development of remote sensing-based biomass estimation methods and may contribute to improving national greenhouse gas inventories by refining emission/removal factors for perennial fruit crops.
(This article belongs to the Special Issue Biomass Remote Sensing in Forest Landscapes II)
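
The regression-plus-metrics protocol described above is straightforward to reproduce in outline. The sketch below is one hedged reading of it, with hypothetical volume/biomass pairs standing in for the destructively sampled trees; only the metric names (R2, RMSE, bias) come from the abstract.

```python
# Minimal sketch: regress biomass on TLS-derived crown volume and report
# R2, RMSE, and mean signed bias. Data values are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_squared_error

volume = np.array([[0.8], [1.5], [2.1], [3.0], [4.2]])  # m^3, hypothetical
biomass = np.array([3.1, 6.0, 8.2, 11.9, 16.5])         # kg, hypothetical

model = LinearRegression().fit(volume, biomass)
pred = model.predict(volume)

r2 = r2_score(biomass, pred)
rmse = np.sqrt(mean_squared_error(biomass, pred))
bias = np.mean(pred - biomass)  # mean signed error
print(f"R2={r2:.3f}  RMSE={rmse:.3f} kg  bias={bias:.3f} kg")
```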

21 pages, 12122 KiB  
Article
RA3T: An Innovative Region-Aligned 3D Transformer for Self-Supervised Sim-to-Real Adaptation in Low-Altitude UAV Vision
by Xingrao Ma, Jie Xie, Di Shao, Aiting Yao and Chengzu Dong
Electronics 2025, 14(14), 2797; https://doi.org/10.3390/electronics14142797 - 11 Jul 2025
Viewed by 272
Abstract
Low-altitude unmanned aerial vehicle (UAV) vision is critically hindered by the Sim-to-Real Gap, where models trained exclusively on simulation data degrade under real-world variations in lighting, texture, and weather. To address this problem, we propose RA3T (Region-Aligned 3D Transformer), a novel self-supervised framework that enables robust Sim-to-Real adaptation. Specifically, we first develop a dual-branch strategy for self-supervised feature learning, integrating Masked Autoencoders and contrastive learning. This approach extracts domain-invariant representations from unlabeled simulated imagery to enhance robustness against occlusion while reducing annotation dependency. Leveraging these learned features, we then introduce a 3D Transformer fusion module that unifies multi-view RGB and LiDAR point clouds through cross-modal attention. By explicitly modeling spatial layouts and height differentials, this component significantly improves recognition of small and occluded targets in complex low-altitude environments. To address persistent fine-grained domain shifts, we finally design region-level adversarial calibration that deploys local discriminators on partitioned feature maps. This mechanism directly aligns texture, shadow, and illumination discrepancies which challenge conventional global alignment methods. Extensive experiments on UAV benchmarks VisDrone and DOTA demonstrate the effectiveness of RA3T. The framework achieves +5.1% mAP on VisDrone and +7.4% mAP on DOTA over the 2D adversarial baseline, particularly on small objects and sparse occlusions, while maintaining real-time performance of 17 FPS at 1024 × 1024 resolution on an RTX 4080 GPU. Visual analysis confirms that the synergistic integration of 3D geometric encoding and local adversarial alignment effectively mitigates domain gaps caused by uneven illumination and perspective variations, establishing an efficient pathway for simulation-to-reality UAV perception.
(This article belongs to the Special Issue Innovative Technologies and Services for Unmanned Aerial Vehicles)
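
To make the cross-modal fusion idea concrete, the sketch below shows RGB feature tokens attending to LiDAR point tokens with a residual connection. It is a generic cross-attention block, not the RA3T module: the token counts, embedding width, and head count are assumptions.

```python
# Minimal sketch of cross-modal attention in the spirit of the described
# fusion: image tokens query LiDAR tokens. Shapes are illustrative only.
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, rgb_tokens, lidar_tokens):
        # rgb_tokens: (B, N_img, dim); lidar_tokens: (B, N_pts, dim)
        fused, _ = self.attn(query=rgb_tokens, key=lidar_tokens,
                             value=lidar_tokens)
        return self.norm(rgb_tokens + fused)  # residual connection

fusion = CrossModalFusion()
out = fusion(torch.randn(2, 196, 256), torch.randn(2, 512, 256))
```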

18 pages, 4939 KiB  
Article
LiDAR-Based Detection of Field Hamster (Cricetus cricetus) Burrows in Agricultural Fields
by Florian Thürkow, Milena Mohri, Jonas Ramstetter and Philipp Alb
Sustainability 2025, 17(14), 6366; https://doi.org/10.3390/su17146366 - 11 Jul 2025
Viewed by 276
Abstract
Farmers face increasing pressure to maintain vital populations of the critically endangered field hamster (Cricetus cricetus) while managing crop damage caused by field mice. This challenge is linked to the UN Sustainable Development Goals (SDGs) 2 and 15, addressing food security and biodiversity. Consequently, the reliable detection of hamster activity in agricultural fields is essential. While remote sensing offers potential for wildlife monitoring, commonly used RGB imagery has limitations in detecting small burrow entrances in vegetated areas. This study investigates the potential of drone-based Light Detection and Ranging (LiDAR) data for identifying field hamster burrow entrances in agricultural landscapes. A geostatistical method was developed to detect local elevation minima as indicators of burrow openings. The analysis used four datasets captured at varying flight altitudes and spatial resolutions. The method successfully detected up to 20 out of 23 known burrow entrances and achieved an F1-score of 0.83 for the best-performing dataset. Detection was most accurate at flight altitudes of 30 m or lower, with performance decreasing at higher altitudes due to reduced point density. These findings demonstrate the potential of UAV-based LiDAR to support non-invasive species monitoring and habitat management in agricultural systems, contributing to sustainable conservation practices in line with the SDGs.
(This article belongs to the Special Issue Ecology, Biodiversity and Sustainable Conservation)
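
The local-elevation-minima idea lends itself to a compact sketch. The version below flags DEM cells that are both the minimum of their neighborhood and clearly below the local median surface; the window size, depth threshold, and the false-positive count used to reproduce the reported F1 of 0.83 are assumptions, not the paper's parameters.

```python
# Minimal sketch: candidate burrow entrances as local elevation minima.
# Window size and 5 cm depth threshold are assumed, not from the paper.
import numpy as np
from scipy.ndimage import minimum_filter, median_filter

def detect_burrows(dem, window=15, depth=0.05):
    """Boolean mask of cells that are local minima well below the surface."""
    local_min = dem == minimum_filter(dem, size=window)
    local_med = median_filter(dem, size=window)
    return local_min & (dem < local_med - depth)

def f1_score(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Reported best case: 20 of 23 burrows found. With an assumed 5 false
# detections, precision 0.80 and recall ~0.87 give F1 ~ 0.83.
print(f1_score(tp=20, fp=5, fn=3))
```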

16 pages, 60222 KiB  
Article
Evaluating the Potential of UAVs for Monitoring Fine-Scale Restoration Efforts in Hydroelectric Reservoirs
by Gillian Voss, Micah May, Nancy Shackelford, Jason Kelley, Roger Stephen and Christopher Bone
Drones 2025, 9(7), 488; https://doi.org/10.3390/drones9070488 - 10 Jul 2025
Viewed by 346
Abstract
The construction of hydroelectric dams leads to substantial land-cover alterations, particularly through the removal of vegetation in wetland and valley areas. This results in exposed sediment that is susceptible to erosion, potentially leading to dust storms. While the reintroduction of vegetation plays a crucial role in restoring these landscapes and mitigating erosion, such efforts incur substantial costs and require detailed information to help optimize vegetation densities that effectively reduce dust storm risk. This study evaluates the performance of drones for measuring the growth of introduced low-lying grasses on reservoir beaches. A set of test flights was conducted to compare LiDAR and photogrammetry data, assessing factors such as flight altitude, speed, and image side overlap. The results indicate that, for this specific vegetation type, photogrammetry at lower altitudes significantly enhanced the accuracy of vegetation classification, permitting effective quantitative assessments of vegetation densities for dust storm risk reduction.

30 pages, 4582 KiB  
Review
Review on Rail Damage Detection Technologies for High-Speed Trains
by Yu Wang, Bingrong Miao, Ying Zhang, Zhong Huang and Songyuan Xu
Appl. Sci. 2025, 15(14), 7725; https://doi.org/10.3390/app15147725 - 10 Jul 2025
Viewed by 529
Abstract
From the point of view of the intelligent operation and maintenance of high-speed train tracks, this paper examines recent research on rail damage detection technology for high-speed trains, summarizes the available detection methods, and compares and analyzes the different detection technologies and their application results. The analysis shows that detection of high-speed rail damage centers on non-destructive testing technologies and methods as well as on testing platforms and equipment. Detection platforms and equipment include new eddy current meters, integrated track recording vehicles, laser rangefinders, thermal sensors, laser vision systems, LiDAR, new ultrasonic detectors, rail detection vehicles, rail detection robots, laser on-board rail detection systems, track recorders, and self-moving trolleys. The main research and application methods include electromagnetic detection, optical detection, ultrasonic guided wave detection, acoustic emission detection, ray detection, eddy current detection, and vibration detection. In recent years, the most widely studied and applied methods have been rail detection based on LiDAR, ultrasonic detection, eddy current detection, and optical detection, with machine vision the most important optical approach. Ultrasonic detection can reveal internal damage of the rail; LiDAR detection can detect dirt around the rail and on its surface, but both the equipment and its application are very costly. Future work on high-speed railway rail damage detection must first follow the damage standards. For rail geometric parameters, the domestic standard (TB 10754-2018) requires a gauge deviation of ±1 mm, a track direction deviation of 0.3 mm/10 m, and a height deviation of 0.5 mm/10 m, with some indicators stricter than the European standard EN 13848. In damage detection, domestic flaw detection vehicles have achieved millimeter-level accuracy for cracks in rail heads, rail waists, and other parts, with a damage detection rate of over 85%; drone detection systems identify track components with 93.6% accuracy and potential safety hazards at an 81.8% rate. A gap with international standards remains: standards such as EN 13848 impose stricter requirements on testing cycles and data storage, so quantified damage detection requirements, real-time damage data, and safety will be the key research and development directions in the future.

32 pages, 2740 KiB  
Article
Vision-Based Navigation and Perception for Autonomous Robots: Sensors, SLAM, Control Strategies, and Cross-Domain Applications—A Review
by Eder A. Rodríguez-Martínez, Wendy Flores-Fuentes, Farouk Achakir, Oleg Sergiyenko and Fabian N. Murrieta-Rico
Eng 2025, 6(7), 153; https://doi.org/10.3390/eng6070153 - 7 Jul 2025
Viewed by 1171
Abstract
Camera-centric perception has matured into a cornerstone of modern autonomy, from self-driving cars and factory cobots to underwater and planetary exploration. This review synthesizes more than a decade of progress in vision-based robotic navigation through an engineering lens, charting the full pipeline from sensing to deployment. We first examine the expanding sensor palette—monocular and multi-camera rigs, stereo and RGB-D devices, LiDAR–camera hybrids, event cameras, and infrared systems—highlighting the complementary operating envelopes and the rise of learning-based depth inference. The advances in visual localization and mapping are then analyzed, contrasting sparse and dense SLAM approaches, as well as monocular, stereo, and visual–inertial formulations. Additional topics include loop closure, semantic mapping, and LiDAR–visual–inertial fusion, which enables drift-free operation in dynamic environments. Building on these foundations, we review the navigation and control strategies, spanning classical planning, reinforcement and imitation learning, hybrid topological–metric memories, and emerging visual language guidance. Application case studies—autonomous driving, industrial manipulation, autonomous underwater vehicles, planetary rovers, aerial drones, and humanoids—demonstrate how tailored sensor suites and algorithms meet domain-specific constraints. Finally, the future research trajectories are distilled: generative AI for synthetic training data and scene completion; high-density 3D perception with solid-state LiDAR and neural implicit representations; event-based vision for ultra-fast control; and human-centric autonomy in next-generation robots. By providing a unified taxonomy, a comparative analysis, and engineering guidelines, this review aims to inform researchers and practitioners designing robust, scalable, vision-driven robotic systems.
(This article belongs to the Special Issue Interdisciplinary Insights in Engineering Research)

18 pages, 13123 KiB  
Article
Field Study of UAV Variable-Rate Spraying Method for Orchards Based on Canopy Volume
by Pengchao Chen, Haoran Ma, Zongyin Cui, Zhihong Li, Jiapei Wu, Jianhong Liao, Hanbing Liu, Ying Wang and Yubin Lan
Agriculture 2025, 15(13), 1374; https://doi.org/10.3390/agriculture15131374 - 27 Jun 2025
Viewed by 456
Abstract
The use of unmanned aerial vehicle (UAV) pesticide spraying technology in precision agriculture is becoming increasingly important. However, traditional spraying methods struggle to address the precision application needs arising from the canopy differences of fruit trees in orchards. This study proposes a UAV orchard variable-rate spraying method based on canopy volume. A DJI M300 drone equipped with LiDAR was used to capture high-precision 3D point cloud data of tree canopies. An improved progressive TIN densification (IPTD) filtering algorithm and a region-growing algorithm were applied to segment the point cloud of fruit trees, construct a canopy volume-based classification model, and generate a differentiated prescription map for spraying. A distributed multi-point spraying strategy was employed to optimize droplet deposition performance. Field experiments were conducted in a citrus (Citrus reticulata Blanco) orchard (73 trees) and a litchi (Litchi chinensis Sonn.) orchard (82 trees). Data analysis showed that variable-rate treatment in the litchi area achieved a maximum canopy coverage of 14.47% for large canopies, reducing ground deposition by 90.4% compared to the continuous spraying treatment; variable-rate treatment in the citrus area reached a maximum coverage of 9.68%, with ground deposition reduced by approximately 64.1%. By matching spray volume to canopy demand, variable-rate spraying significantly improved droplet deposition targeting, validating the feasibility of the proposed method in reducing pesticide waste and environmental pollution and providing a scalable technical path for precision plant protection in orchards.
(This article belongs to the Special Issue Smart Spraying Technology in Orchards: Innovation and Application)
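
The prescription-map step, classifying trees by canopy volume and assigning each class a spray rate, can be sketched in a few lines. The class edges and per-tree rates below are illustrative assumptions; only the volume-to-rate mapping idea comes from the abstract.

```python
# Minimal sketch: bin segmented-canopy volumes into classes and map each
# class to a spray rate. All thresholds and rates are hypothetical.
canopy_volumes = {"tree_01": 4.2, "tree_02": 11.8, "tree_03": 25.6}  # m^3

def spray_rate(volume_m3):
    if volume_m3 < 8:      # small canopy
        return 0.5         # L per tree (assumed rate)
    elif volume_m3 < 20:   # medium canopy
        return 1.0
    return 1.5             # large canopy

# Prescription map: tree id -> spray volume
prescription = {tree: spray_rate(v) for tree, v in canopy_volumes.items()}
print(prescription)
```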

15 pages, 3092 KiB  
Article
Geostatistical Vegetation Filtering for Rapid UAV-RGB Mapping of Sudden Geomorphological Events in the Mediterranean Areas
by María Teresa González-Moreno and Jesús Rodrigo-Comino
Drones 2025, 9(6), 441; https://doi.org/10.3390/drones9060441 - 16 Jun 2025
Viewed by 566
Abstract
The use of UAVs for analyzing soil degradation processes, particularly erosion, has become a crucial tool in environmental monitoring. However, LiDAR (Light Detection and Ranging) and TLS (Terrestrial Laser Scanning) may not be affordable for many researchers because of their elevated costs and the difficulties of point cloud processing, so a low-cost option is needed for rapid landscape assessment following extreme events like Mediterranean storms. This study focuses on the application of drone-based remote sensing with only an RGB camera to geomorphological mapping. A key objective is the removal of vegetation from the imagery to enhance the analysis of erosion and sediment transport dynamics. The research was carried out over a cereal cultivation plot in Málaga Province, an area affected in the past year by high-intensity rainfall exceeding 100 mm in a single day, which triggered significant soil displacement. By processing the UAV-derived data, a Digital Elevation Model (DEM) was generated through geostatistical techniques, refining the Digital Surface Model (DSM) to improve topographical change detection. The ability to accurately remove vegetation from aerial imagery allows a more precise assessment of erosion patterns and sediment redistribution in geomorphological features undergoing rapid spatiotemporal change.
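
One common low-cost route from a photogrammetric DSM to a bare-earth DEM is to treat local minima as ground candidates and smooth the result; the sketch below shows that pattern. It is an assumption-laden stand-in for the paper's geostatistical refinement, with window and smoothing parameters chosen arbitrarily.

```python
# Minimal sketch: approximate a bare-earth DEM from a DSM by suppressing
# vegetation with a local-minimum filter, then smoothing. Parameters are
# assumptions, not the paper's geostatistical technique.
import numpy as np
from scipy.ndimage import minimum_filter, gaussian_filter

def dsm_to_dem(dsm, window=21, sigma=3.0):
    ground = minimum_filter(dsm, size=window)   # suppress vegetation peaks
    return gaussian_filter(ground, sigma=sigma) # smooth the ground estimate

def canopy_height(dsm, dem):
    return np.clip(dsm - dem, 0, None)          # vegetation height residual
```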

22 pages, 473 KiB  
Review
Monitoring Slope Stability: A Comprehensive Review of UAV Applications in Open-Pit Mining
by Stephanos Tsachouridis, Francis Pavloudakis, Constantinos Sachpazis and Vassilios Tsioukas
Land 2025, 14(6), 1193; https://doi.org/10.3390/land14061193 - 3 Jun 2025
Viewed by 992
Abstract
Unmanned aerial vehicles (UAVs) have increasingly proven to be flexible tools for mapping mine terrain, offering expedient and precise data compared to alternatives. Photogrammetric outputs are particularly beneficial in open pit operations and waste dump areas, since they enable cost-effective and reproducible digital terrain models. Meanwhile, UAV-based LiDAR has proven invaluable in situations where uniform ground surfaces, dense vegetation, or steep slopes challenge purely photogrammetric solutions. Recent advances in machine learning and deep learning have further enhanced the capacity to distinguish critical features, such as vegetation and fractured rock surfaces, thereby reducing the likelihood of accidents and ecological damage. Nevertheless, scientific gaps remain. Standardization around flight practices, sensor selection, and data verification remains elusive, and most mining sites still rely on limited, multi-temporal surveys that may not capture sudden changes in slope conditions. Complexity lies in devising strategies for rehabilitated dumps, where post-mining restoration efforts involve vegetation regrowth, erosion mitigation, and altered land use. Through expanded sensor integration and refined automated analysis, approaches could shift from information gathering to ongoing hazard assessment and environmental surveillance. This evolution would improve both safety and environmental stewardship, reflecting the emerging role of UAVs in advancing a more sustainable future for mining.
(This article belongs to the Section Land – Observation and Monitoring)

21 pages, 12474 KiB  
Article
Drone Height from Ground Determination Using GNSS-R Based on Dual-Frequency GPS/BDS Signals
by Li Zhang, Weiwei Qin, Fan Gao, Weijie Kang and Yue Zhu
Remote Sens. 2025, 17(10), 1722; https://doi.org/10.3390/rs17101722 - 14 May 2025
Viewed by 512
Abstract
Conventional techniques to measure drone heights from the ground, including global navigation satellite systems (GNSSs), barometers, acoustic sensors, and LiDAR, are limited by their measurement ranges, an inability to directly obtain the height from the ground, or poor concealment. To overcome these shortcomings, we propose the use of GNSS reflectometry (GNSS-R) to determine a drone’s height from the ground. We conducted experiments over farmland and an urban road using a drone that carried an upward-looking right-hand circularly polarized (RHCP) antenna, a downward-looking left-hand circularly polarized (LHCP) antenna, and an intermediate frequency (IF) data collector to test the performance. Three flights were conducted in a bare soil scenario, a sparse apple orchard scenario, and an urban road scenario. A software-defined receiver was used to process the IF signal data to compute the one-dimensional time-delay-dependent power peak positions of the direct and reflected GNSS signals. Based on these peak positions, the path delay measurements between the direct and reflected signals were derived per second based on the BDS B1C and B2a, GPS C/A, and L5 signals. The drone heights were then retrieved. The results showed that the drone height retrieval accuracy could reach approximately 0.5–2 m.
(This article belongs to the Special Issue SoOP-Reflectometry or GNSS-Reflectometry: Theory and Applications)
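
The geometric core of the retrieval, converting the measured direct/reflected path delay into a height, reduces to one relation for a low-altitude receiver over a planar surface: the reflected path is longer by roughly 2h·sin(elevation). The sketch below applies it with illustrative numbers; the delay value and elevation are assumptions, not results from the paper.

```python
# Minimal sketch: height above the reflecting surface from the measured
# delay between direct and reflected signals, assuming specular reflection
# off a planar surface below the drone.
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def height_from_delay(delay_s, elevation_deg):
    """Receiver height above the reflecting surface, in meters."""
    path_delay_m = C * delay_s
    return path_delay_m / (2.0 * np.sin(np.radians(elevation_deg)))

# Illustrative numbers: a 1e-7 s delay at 45 degrees elevation -> ~21.2 m
print(height_from_delay(1e-7, 45.0))
```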

27 pages, 658 KiB  
Systematic Review
Advances in the Automated Identification of Individual Tree Species: A Systematic Review of Drone- and AI-Based Methods in Forest Environments
by Ricardo Abreu-Dias, Juan M. Santos-Gago, Fernando Martín-Rodríguez and Luis M. Álvarez-Sabucedo
Technologies 2025, 13(5), 187; https://doi.org/10.3390/technologies13050187 - 6 May 2025
Viewed by 1041
Abstract
The classification and identification of individual tree species in forest environments are critical for biodiversity conservation, sustainable forestry management, and ecological monitoring. Recent advances in drone technology and artificial intelligence have enabled new methodologies for detecting and classifying trees at an individual level. However, significant challenges persist, particularly in heterogeneous forest environments with high species diversity and complex canopy structures. This systematic review explores the latest research on drone-based data collection and AI-driven classification techniques, focusing on studies that classify specific tree species rather than performing generic tree detection. Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, peer-reviewed studies from the last decade were analyzed to identify trends in data acquisition instruments (e.g., RGB, multispectral, hyperspectral, LiDAR), preprocessing techniques, segmentation approaches, and machine learning (ML) algorithms used for classification. The findings reveal that deep learning (DL) models, particularly convolutional neural networks (CNNs), are increasingly replacing traditional ML methods such as random forest (RF) and support vector machines (SVMs), because DL models require no separate feature extraction phase; it is implicit in the model. The integration of LiDAR with hyperspectral imaging further enhances classification accuracy but remains limited due to cost constraints. Additionally, we discuss the challenges of model generalization across different forest ecosystems and propose future research directions, including the development of standardized datasets and improved model architectures for robust tree species classification. This review provides a comprehensive synthesis of existing methodologies, highlighting both advancements and persistent gaps in AI-driven forest monitoring.
(This article belongs to the Collection Review Papers Collection for Advanced Technologies)

23 pages, 5424 KiB  
Review
Recent Developments and Future Prospects in the Integration of Machine Learning in Mechanised Systems for Autonomous Spraying: A Brief Review
by Francesco Toscano, Costanza Fiorentino, Lucas Santos Santana, Ricardo Rodrigues Magalhães, Daniel Albiero, Řezník Tomáš, Martina Klocová and Paola D’Antonio
AgriEngineering 2025, 7(5), 142; https://doi.org/10.3390/agriengineering7050142 - 6 May 2025
Viewed by 1127
Abstract
The integration of machine learning (ML) into self-governing spraying systems is one of the major developments in digital precision agriculture, significantly improving resource efficiency, sustainability, and production. This study examines current advances in machine learning applications for automated spraying in agricultural mechanisation, emphasising new innovations, difficulties, and prospects. It provides an in-depth analysis of the three main categories of autonomous sprayers that incorporate machine learning techniques: drones, ground-based robots, and tractor-mounted systems. A comprehensive review of research published between 2014 and 2024 was conducted using Web of Science and Scopus, selecting relevant studies on agricultural robotics, sensor integration, and ML-based spraying automation. The results indicate that supervised, unsupervised, and deep learning models increasingly contribute to improved real-time decision making, better pest and disease detection, and accurate application of agricultural plant protection. By utilising cutting-edge technology such as multispectral sensors, LiDAR, and sophisticated neural networks, these systems significantly increase the efficiency of spraying operations while cutting waste and minimising negative effects on the environment. Notwithstanding these advances, issues still exist, such as the requirement for high-quality datasets, system calibration, and flexibility across a range of field circumstances. This study highlights important gaps in the literature and suggests future areas of inquiry to develop ML-driven autonomous spraying further, assisting in the shift to more intelligent and environmentally friendly farming methods.
(This article belongs to the Section Agricultural Mechanization and Machinery)
