Search Results (1,798)

Search Parameters:
Keywords = LiDAR imaging

16 pages, 3103 KB  
Article
EdgeDenseCalib: Targetless Camera–LiDAR Calibration via Enhanced Edge Feature Densification
by Zhiyu He, Zhiwei Cao, Ning Xu, Zhipeng Zhao, Junyi Zhao, Zhao Sheng and Xiaoyu Zhao
Sensors 2026, 26(9), 2690; https://doi.org/10.3390/s26092690 - 26 Apr 2026
Abstract
Accurate camera–LiDAR calibration is a fundamental prerequisite for reliable perception in autonomous systems. However, traditional methods typically rely on manual intervention or specific calibration targets, which restrict their flexibility and scalability in dynamic, real-world environments. To address the challenge of targetless calibration, we propose EdgeDenseCalib, a novel approach driven by enhanced edge feature densification. A key innovation lies in a two-stage process designed to densify the inherently sparse edge features in LiDAR data, thereby making them highly comparable to the fine-grained edges present in images. Consequently, this facilitates more reliable feature matching between the two cross-modal data sources. An optimization algorithm is subsequently employed to refine the alignment and minimize the reprojection error. Experiments on the KITTI dataset show that our method achieves accurate calibration, with a mean rotation error of 0.105° and a mean translation error of 0.903 cm. Compared to state-of-the-art edge-based methods, our approach significantly improves rotation accuracy by 33.1% to 89.9%. This work provides a practical and automatic calibration solution, contributing to the development of more robust perception systems for autonomous applications.
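The reprojection objective this abstract describes can be sketched with a generic pinhole model. This is not the authors' implementation; the function names and the intrinsics (fx, fy, cx, cy) are illustrative assumptions.

```python
import math

def project_point(pt, rotation, translation, fx, fy, cx, cy):
    """Project a 3D LiDAR point into the image: transform into the
    camera frame with extrinsics (R, t), then apply a pinhole model.
    `rotation` is a 3x3 row-major matrix, `translation` a 3-vector."""
    cam = [sum(rotation[i][j] * pt[j] for j in range(3)) + translation[i]
           for i in range(3)]
    u = fx * cam[0] / cam[2] + cx
    v = fy * cam[1] / cam[2] + cy
    return u, v

def mean_reprojection_error(lidar_edges, image_edges, rotation, translation,
                            fx, fy, cx, cy):
    """Mean pixel distance between projected LiDAR edge points and their
    matched image edge pixels; calibration minimizes this over (R, t)."""
    total = 0.0
    for pt, (u_img, v_img) in zip(lidar_edges, image_edges):
        u, v = project_point(pt, rotation, translation, fx, fy, cx, cy)
        total += math.hypot(u - u_img, v - v_img)
    return total / len(lidar_edges)

# With identity extrinsics, a point on the optical axis projects to the
# principal point.
R_ID = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
print(project_point([0.0, 0.0, 10.0], R_ID, [0.0, 0.0, 0.0],
                    700.0, 700.0, 320.0, 240.0))  # -> (320.0, 240.0)
```

In a targetless pipeline like the one described, this error would be evaluated over densified LiDAR edges matched to image edges, with an optimizer refining (R, t).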

28 pages, 33073 KB  
Article
Pedestrian Localization Using Smartphone LiDAR in Indoor Environments
by Jaehun Kim and Kwangjae Sung
Electronics 2026, 15(9), 1810; https://doi.org/10.3390/electronics15091810 - 24 Apr 2026
Abstract
Many place recognition approaches, which identify previously visited places or locations by matching current sensory data, such as 2D RGB images and 3D point clouds, have been proposed to achieve accurate and robust localization and loop closure detection in global positioning system (GPS)-denied environments. Since visual place recognition (VPR) methods that rely on images captured by camera sensors are highly sensitive to variations in appearance, including changes in lighting, surface color, and shadows, they can lead to poor place recognition accuracy. In contrast, light detection and ranging (LiDAR)-based place recognition (LPR) approaches based on 3D point cloud data that captures the shape and geometric structure of the environment are robust to changes in place appearance and can therefore provide more reliable place recognition results than VPR methods. This work presents an indoor LPR method called PointNetVLAD-based indoor pedestrian localization (PIPL). PIPL is a deep network model that uses PointNetVLAD to learn to extract global descriptors from 3D LiDAR point cloud data. PIPL can recognize places previously visited by a pedestrian using point clouds captured by a low-cost LiDAR sensor on a smartphone in small-scale indoor environments, while PointNetVLAD performs place recognition for vehicles using high-cost LiDAR, GPS, and inertial measurement unit (IMU) sensors in large-scale outdoor areas. For place recognition on 3D point cloud reference maps generated from LiDAR scans, PointNetVLAD exploits the universal transverse mercator (UTM) coordinate system based on GPS and IMU measurements, whereas PIPL uses a virtual coordinate system designed in this study due to the unavailability of GPS indoors. In experiments conducted in campus buildings, PIPL shows significant advantages over NetVLAD (known as a convolutional neural network (CNN)-based VPR method). 
Particularly in indoor environments with repetitive scenes where geometric structures are preserved and image-based appearance features are sparse or unclear, PIPL achieved 39% higher top-1 accuracy and 10% higher top-3 accuracy compared to NetVLAD. Furthermore, PIPL achieved place recognition accuracy comparable to NetVLAD even with a small number of points in a 3D point cloud and outperformed NetVLAD even with a smaller model training dataset. The experimental results also indicate that PIPL requires over 76% less place retrieval time than NetVLAD while maintaining robust place classification performance. Full article
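The top-1 and top-3 accuracies quoted above reduce to nearest-neighbour retrieval over learned global descriptors. A minimal pure-Python sketch with illustrative 2-D descriptors (the paper's descriptors are high-dimensional PointNetVLAD outputs; the function name is ours):

```python
import math

def topk_places(query_desc, db_descs, k=3):
    """Rank database global descriptors by Euclidean distance to the
    query descriptor; return indices of the k closest stored places."""
    ranked = sorted(range(len(db_descs)),
                    key=lambda i: math.dist(query_desc, db_descs[i]))
    return ranked[:k]

# Illustrative descriptors: the query is closest to place 1, then 0.
db = [[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]]
print(topk_places([0.9, 0.1], db, k=2))  # -> [1, 0]
```

Top-k accuracy then asks whether the ground-truth place appears among the k returned indices.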
(This article belongs to the Special Issue Advanced Indoor Localization Technologies: From Theory to Application)
20 pages, 4123 KB  
Article
Surveying Techniques for Built Heritage Conservation: A Comparative Perspective of Workflows for Monument Restoration
by George Cristian, Sorin Herban, Clara-Beatrice Vîlceanu, Andreea-Diana Clepe and Carmen Grecea
Sustainability 2026, 18(9), 4237; https://doi.org/10.3390/su18094237 - 24 Apr 2026
Abstract
This study presents a comparative evaluation of three modern surveying techniques—UAV photogrammetry, static tripod-based LiDAR scanning, and handheld mobile LiDAR—applied in the context of historic monument restoration. The focus is on analysing workflow efficiency, data accuracy, and adaptability to complex architectural features, including interior wall paintings, which are integral to the monument’s heritage value. Particular attention is given to how each technique captures surface texture, color fidelity, and material deterioration. The study also examines performance around intricate architectural elements such as vaulted ceilings, apses, cornices, columns, and carved stone portals, where occlusions, tight clearances, and fine ornamentation challenge coverage and resolution. By evaluating the strengths and limitations of each approach, the research highlights methodological considerations relevant for conservation professionals. The results indicate that static TLS is the most demanding workflow, requiring complex total station integration for control and station points. It produced the highest data density, with acquisition rates of one million points per second, making it the most hardware-intensive and its datasets the most difficult to manipulate. UAV photogrammetry provided a balanced middle ground; it required minimal physical effort during acquisition and produced datasets that were significantly easier to manage. Handheld SLAM LiDAR emerged as the most productive solution for rapid coverage. While the handheld scanner’s image quality was lower than that of photogrammetry, it still provided enough detail for the required structural assessment and documentation. Although the point cloud lacked the extreme geometric detail provided by the TLS, the FARO Connect software made georeferencing and data manipulation significantly more efficient.
23 pages, 9832 KB  
Article
A Fine-Scale Urban Impervious Surface Extraction Method Based on UAV LiDAR and Visible Imagery
by Yanni Bao, Yu Zhao, Shirong Hu, Zhanwei Wang and Hui Deng
Remote Sens. 2026, 18(9), 1275; https://doi.org/10.3390/rs18091275 - 23 Apr 2026
Abstract
Accurate extraction of impervious surface areas (ISA) is essential for urban environmental monitoring, yet severe spectral confusion among complex urban land-cover types limits the performance of classifications based solely on optical imagery. To address this issue within a localized context, this study proposes a multi-source framework integrating UAV-based LiDAR (UAV-LiDAR) and high-resolution visible imagery for fine-scale ISA extraction. An improved segmentation optimization strategy, termed EGS-Optimizer, is developed to enhance boundary delineation within the object-based image analysis (OBIA) framework by coupling edge detection with global segmentation quality evaluation. A comprehensive feature set including spectral, index, texture, geometric, and terrain features is constructed, and Shapley Additive Explanations (SHAP) is applied to select the most informative variables while reducing dimensionality. The proposed framework is validated in a typical 1.45 km2 built-up area in Deyang City, Sichuan Province. Experimental results demonstrate that, within this specific study area, multi-source data fusion improves classification accuracy by 3.59–5.79% compared with single-source data, while feature selection reduces the feature dimension from 45 to 21. Among the evaluated classifiers, the random forest (RF) model achieves the highest performance, with an overall accuracy of 97.24% (Kappa = 0.96). While the high accuracy highlights the efficacy of synergizing spectral and structural information for micro-landscape mapping, these findings are constrained to the demonstrated fine-scale local environment. The results provide an effective, interpretable solution for detailed neighborhood-level ISA mapping, though further validation is required before the framework can be generalized to larger or more heterogeneous urban scenarios. Full article

25 pages, 2660 KB  
Article
Construction and Application of an Emergency Monitoring Indicator Evaluation Model Based on the Spatiotemporal Evolution of Forest Fires
by Jikun Liu, Chenghu Wang, Guiyun Gao and Yiyu Wang
Fire 2026, 9(5), 178; https://doi.org/10.3390/fire9050178 - 22 Apr 2026
Abstract
The lack of scientific methods for selecting monitoring indicators and equipment undermines the efficiency of forest fire emergency response. To address this gap, we developed a novel evaluation model for emergency monitoring indicators based on the spatiotemporal evolution of forest fires. The model, comprising four primary and eight secondary factors, leverages a hybrid TriFAHP and DBN approach to objectively determine factor weights based on survey data from 20 domain experts. The results indicate that the primary factor weights rank as follows: Monitorability (0.3807) > Timeliness (0.3353) > Sensitivity (0.1874) > Feasibility (0.0966). Four indicators (wind speed, temperature, flame, and gas) were identified as the most suitable for core monitoring. Furthermore, stage-specific monitoring strategies were proposed, prioritizing different core indicators across the ignition, spread, and fully developed fire stages. An association between indicators and equipment was established, recommending optimal configurations such as UAV-mounted thermal imagers and lidar anemometers. The practical applicability of the proposed framework was successfully validated through real-world case studies, including the 2019–2020 Australian bushfires. This study provides a standardized framework aligning indicators, equipment, and scenarios, offering theoretical and practical guidance for optimizing emergency monitoring systems.
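Once factor weights are fixed, ranking candidate indicators is a weighted sum. A minimal sketch using the primary-factor weights reported in the abstract; the 0-1 rating scale and function name are our assumptions, and the full TriFAHP/DBN procedure is far more involved:

```python
# Primary-factor weights as reported in the abstract (TriFAHP + DBN).
WEIGHTS = {"monitorability": 0.3807, "timeliness": 0.3353,
           "sensitivity": 0.1874, "feasibility": 0.0966}

def indicator_score(ratings):
    """Weighted-sum score of a candidate monitoring indicator, given a
    0-1 rating per primary factor (the rating scale is an assumption)."""
    return sum(WEIGHTS[factor] * ratings[factor] for factor in WEIGHTS)

# The four weights sum to 1, so a perfect indicator scores 1.0.
print(indicator_score({"monitorability": 1.0, "timeliness": 1.0,
                       "sensitivity": 1.0, "feasibility": 1.0}))
```

Candidates such as wind speed or flame would each be rated per factor and ranked by this score.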
(This article belongs to the Special Issue Buoyancy Controlled Fire Behaviors Under Special Environments)

19 pages, 14391 KB  
Article
Exploratory Analyses of Cross-Species Phenological–Structural Relationships in Urban Park Trees by Using Sentinel-2 Images and Handheld LiDAR Data
by Miao Jiang, Yi Lin and Minghua Cheng
Remote Sens. 2026, 18(8), 1192; https://doi.org/10.3390/rs18081192 - 16 Apr 2026
Abstract
Understanding the interplay between tree structure and seasonal dynamics, particularly across species, is crucial for managing urban forest ecosystems. However, balancing fine-scale inventory of trees with large-area mapping of forest ecosystems is a challenge. This study integrates multi-temporal Sentinel-2 satellite remote sensing (RS) imagery with high-density handheld light detection and ranging (LiDAR) point clouds to conduct exploratory analyses of cross-species phenological–structural relationships (CSPSRs) in urban park trees. We derived plot-level phenological metrics (e.g., start of growing season, SOS) from the imagery and quantified fine-scale three-dimensional (3D) tree structural attributes (e.g., tree height and trunk curvature) from the point clouds. Then, we investigated how the 3D structural attributes of urban park trees covary with their phenological traits. The results revealed the underlying CSPSRs, e.g., a weak but significant negative correlation between SOS and tree height in the study area. The derived CSPSRs demonstrate that tree structure is a key predictor of phenology, even across species. Overall, the integrated RS approach can provide a robust framework for associating the structure and phenology of trees, offering valuable insights for the ecological management of urban forests.
(This article belongs to the Special Issue Close-Range LiDAR for Forest Structure and Dynamics Monitoring)

9 pages, 1597 KB  
Communication
High-Gain AlInAsSb SACM Avalanche Photodiode for SWIR Detection at Room Temperature
by Ming Liu, Shupei Jin, Dongliang Zhang, Songlin Yu, Mingxin Yao, Xiaoning Guan, Feng Zhou and Pengfei Lu
Photonics 2026, 13(4), 374; https://doi.org/10.3390/photonics13040374 - 14 Apr 2026
Abstract
We report the design, epitaxial growth, and room-temperature operation of a high-gain AlInAsSb-based avalanche photodiode (APD) for short-wavelength infrared (SWIR) detection at 1.55 µm. The device employs a SAGCM structure to confine the electric field within the multiplication region while suppressing dark current. High-quality AlInAsSb layers were grown on GaSb substrates by molecular beam epitaxy using a digital alloy approach, achieving excellent surface morphology (Ra < 0.2 nm) and uniform superlattice periodicity. Electrical characterization reveals a well-defined breakdown voltage near −17 V and a peak internal multiplication gain of 200 at 300 K under 0.2 mW illumination at 1550 nm—among the highest gains reported to date for antimonide-based APDs operating at room temperature. Variable-temperature dark current analysis indicates a transition from tunneling-dominated to thermally generated dark current as temperature increases from 100 K to 300 K. These results demonstrate the strong potential of AlInAsSb SAGCM APDs for eye-safe, high-sensitivity applications in LIDAR, free-space optical communication, and low-light SWIR imaging.
(This article belongs to the Section Lasers, Light Sources and Sensors)

18 pages, 4334 KB  
Article
Multi-Source Remote Sensing-Constrained Evaluation of CMAQ Aerosol Optical Depth over Major Urban Clusters in China
by Zhaoyang Peng, Yikun Yang, Yuzhi Jin, Bin Wang, Zhouyang Zhang, Ting Pan and Zeyuan Tian
Remote Sens. 2026, 18(8), 1134; https://doi.org/10.3390/rs18081134 - 10 Apr 2026
Abstract
Aerosol optical depth (AOD) is a key indicator for quantifying aerosol radiative effects and evaluating air quality. However, atmospheric chemical transport models often exhibit systematic AOD biases, and model capability for column-integrated optical properties is not always consistent with that for near-surface particulate matter concentrations. Here, we evaluate AOD simulated by the Community Multiscale Air Quality (CMAQ) model over five major urban clusters in China, including the Beijing-Tianjin-Hebei (BTH) region, Fenwei Plain (FWP), Sichuan Basin (SCB), Yangtze River Delta (YRD), and Pearl River Delta (PRD), using satellite retrievals from the Moderate Resolution Imaging Spectroradiometer (MODIS), ground-based retrievals from the Aerosol Robotic Network (AERONET), and vertical extinction profiles from the Cloud–Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO). CMAQ reproduces the major spatial patterns and exhibits relatively small biases in near-surface PM2.5. However, it persistently underestimates AOD relative to MODIS, with the largest negative bias occurring in April (i.e., a typical spring month). This contrast indicates a pronounced inconsistency between column-integrated aerosol amount and surface mass density. Relative to AERONET, CMAQ shows a negative bias (NMB = −38%), whereas MODIS shows a positive bias (NMB = 56%), suggesting that both model and retrieval uncertainties contribute to the CMAQ–MODIS disagreements. CALIPSO-constrained vertical analysis further suggests that insufficient extinction above the planetary boundary layer (PBL) is an important contributor to the negative AOD bias, although the relative roles of boundary-layer and upper-layer contributions vary across regions, underscoring the importance of accurately representing aerosol vertical transport and optical processes. 
These results indicate that evaluations based solely on surface observations may fail to fully capture the overall structure of AOD errors, particularly given the clear differences between near-surface mass concentrations and column optical properties, which vary across regions. This also highlights the importance of improving the representation of aerosol vertical transport and optical processes in chemical transport models. Full article
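The normalized mean bias (NMB) figures quoted above (−38% against AERONET, +56% for MODIS) use the standard definition; a minimal sketch:

```python
def nmb(model, obs):
    """Normalized mean bias, NMB = sum(M_i - O_i) / sum(O_i).
    Negative when the model underestimates the observations, as in
    the CMAQ-versus-AERONET comparison above."""
    return sum(m - o for m, o in zip(model, obs)) / sum(obs)

# A model at half the observed values gives NMB = -0.5, i.e. -50%.
print(nmb([1.0, 1.0], [2.0, 2.0]))  # -> -0.5
```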
(This article belongs to the Section Atmospheric Remote Sensing)

47 pages, 3286 KB  
Review
LiDAR-Based Road Surface Damage Classification: A Survey
by Trevor Greene, Meisam Shayegh Moradi, Muhammad Umair, Nafiul Nawjis, Naima Kaabouch and Timothy Pasch
Sensors 2026, 26(8), 2338; https://doi.org/10.3390/s26082338 - 10 Apr 2026
Abstract
Unlike image-only systems that falter in shadows, glare, and low contrast, LiDAR directly records surface geometry and supports depth-aware quantification. This survey examines LiDAR-based road surface damage classification across the entire pipeline, encompassing acquisition with mobile and terrestrial laser scanning, preprocessing and representation choices, supervised, semi-supervised, and unsupervised learning techniques, as well as multisensor fusion at early, mid, and late stages. A consistent thread is measurement, not just detection: we describe how LiDAR damage classification maps to agency practices such as the Distress Identification Manual and the Pavement Condition Index. We summarize datasets and evaluation protocols for detection, segmentation, 3D reconstruction, and ride quality. We outline practical concerns for corridor-scale deployment: calibration and timing, intensity normalization, tiling/streaming, and runtime budgeting. The review concludes with open problems and outlines directions for robust, severity-aware, and scalable field systems. Full article
(This article belongs to the Section Remote Sensors)

23 pages, 3484 KB  
Article
IFA-ICP: A Low-Complexity and Image Feature-Assisted Iterative Closest Point (ICP) Scheme for Odometry Estimation in SLAM, and Its FPGA-Based Hardware Accelerator Design
by Jia-En Li and Yin-Tsung Hwang
Sensors 2026, 26(8), 2326; https://doi.org/10.3390/s26082326 - 9 Apr 2026
Abstract
Odometry estimation, which calculates the trajectory of a moving object across timeframes, is a critical and time-consuming function in SLAM (Simultaneous Localization and Mapping) systems. Although LiDAR-based sensing is most popular for outdoor and long-range applications because of its ranging accuracy, the sparsity of laser point clouds poses a significant challenge to feature extraction and matching in odometry estimation. In this paper, we investigate odometry estimation from two aspects, i.e., algorithm optimization and system design/implementation. In algorithm optimization, we present an image feature-assisted odometry estimation scheme that leverages the richness of image information captured by a companion camera to enhance the accuracy of laser point cloud matching. This also serves as a screening mechanism to reduce the matching size and lower the computing complexity for a higher estimation rate. In addition, various schemes, such as adaptive thresholding in image feature point selection, principal component analysis (PCA)-based plane fitting for laser point interpolation, and Gauss–Newton optimization for calculating the transform matrix, are also employed to improve the accuracy of odometry estimation. The performance of improved odometry estimation is verified using an existing FLOAM (Fast Lidar Odometry and Mapping) framework. The KITTI dataset for autonomous vehicles with ground truth was used as the test bench. Simulation results indicate that the translation error and rotation error can be reduced by 16.6% and 1.3%, respectively. Computing complexity, measured as the software execution time, was also reduced by 63%. In system implementation, a hardware/software (HW/SW) co-design strategy was adopted, where complexity profiling was first conducted to determine the task partitioning, and time-consuming tasks were offloaded to a hardware accelerator. 
This facilitates real-time execution on a resource-constrained embedded platform consisting of a microprocessor module (Raspberry Pi) and an attached FPGA board (Pynq Z2). Efficient hardware designs for customized DSP functions (adaptive thresholding and PCA) were developed in an FPGA capable of completing one data frame in 20 ms. The final system implementation met the target throughput of 10 estimations per second and can be scaled up further.
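The point-to-point ICP core that such schemes build on alternates nearest-neighbour matching with a closed-form rigid alignment. A minimal 2D pure-Python sketch; the paper's pipeline adds image-feature screening, PCA plane fitting, and Gauss–Newton refinement on 3D data, none of which is shown here:

```python
import math

def icp_2d(source, target, iterations=10):
    """Minimal point-to-point ICP in 2D. Each iteration matches every
    source point to its nearest target point, then solves the optimal
    rotation and translation in closed form from the matched pairs."""
    src = [list(p) for p in source]
    for _ in range(iterations):
        # 1. Nearest-neighbour correspondences (brute force).
        pairs = [(p, min(target, key=lambda q, p=p: (q[0] - p[0]) ** 2
                                                    + (q[1] - p[1]) ** 2))
                 for p in src]
        n = len(pairs)
        # 2. Centroids of the two matched point sets.
        cxs = sum(p[0] for p, _ in pairs) / n
        cys = sum(p[1] for p, _ in pairs) / n
        cxt = sum(q[0] for _, q in pairs) / n
        cyt = sum(q[1] for _, q in pairs) / n
        # 3. Optimal 2D rotation angle from cross/dot accumulators.
        s_cross = sum((p[0] - cxs) * (q[1] - cyt)
                      - (p[1] - cys) * (q[0] - cxt) for p, q in pairs)
        s_dot = sum((p[0] - cxs) * (q[0] - cxt)
                    + (p[1] - cys) * (q[1] - cyt) for p, q in pairs)
        theta = math.atan2(s_cross, s_dot)
        c, s = math.cos(theta), math.sin(theta)
        tx = cxt - (c * cxs - s * cys)
        ty = cyt - (s * cxs + c * cys)
        # 4. Apply the incremental transform to the source cloud.
        src = [[c * x - s * y + tx, s * x + c * y + ty] for x, y in src]
    return src
```

For a translated copy of the target cloud with correct correspondences, the first iteration already recovers the offset exactly; real scans need the robustness measures the paper describes.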
(This article belongs to the Topic Advances in Autonomous Vehicles, Automation, and Robotics)

30 pages, 28721 KB  
Article
Dual-Arm Robotic Textile Unfolding with Depth-Corrected Perception and Fold Resolution
by Tilla Egerhei Båserud, Joakim Johansen, Ajit Jha and Ilya Tyapin
Robotics 2026, 15(4), 78; https://doi.org/10.3390/robotics15040078 - 8 Apr 2026
Abstract
Reliable textile recycling requires automated unfolding to expose hidden hard components such as zippers, buttons, and metal fasteners, which otherwise risk damaging machinery and compromising downstream processes. This paper presents the design and implementation of an automated textile unfolding system based on a dual-arm robotic manipulation framework. The system uses two Interbotix WidowX 250s 6-DoF robotic arms and an Intel RealSense L515 LiDAR camera for visual perception. The unfolding process consists of three stages: initial dual-arm stretching to reduce major folds, refinement through a second stretch targeting the lower region, and a machine-learning stage that employs a YOLOv11 framework trained on depth-encoded textile images, followed by a depth-gradient-based estimator for fold direction. The system applies an extremity-based grasping strategy that selects leftmost and rightmost textile points from a custom error-corrected depth map, enabling robust grasp point selection, and a fold direction estimation method based on depth gradients around the detected fold. The most confident fold region is selected, an unfolding direction is determined using depth ranking, and the textile is manipulated until a flat state is confirmed through depth uniformity. Experiments show that depth correction significantly reduces spatial error in the robot frame, while segmentation and extremity detection achieve high accuracy across varied fold configurations, and the YOLOv11n-based model reaches 98.8% classification accuracy, while fold direction is estimated correctly in 87% of test cases. By enabling robust, largely autonomous textile unfolding, the system demonstrates a practical approach that could support safer and more efficient automated textile recycling workflows. Full article
(This article belongs to the Section Sensors and Control in Robotics)

17 pages, 4632 KB  
Article
Estimation of Nitrogen Status in Zanthoxylum armatum var. novemfolius Using Machine Learning Algorithms and UAV Hyperspectral and LiDAR Data Fusion
by Shangyuan Zhao, Yong Wei, Jinkun Zhao, Shuai Wang, Xin Ye, Xiaojun Shi and Jie Wang
Plants 2026, 15(7), 1119; https://doi.org/10.3390/plants15071119 - 6 Apr 2026
Abstract
Accurate monitoring of nitrogen (N) status is critical for precision N management and optimizing the yield and quality of Zanthoxylum armatum var. novemfolius (ZA). However, individual sensors often struggle to simultaneously capture the biochemical variations and complex canopy structural changes of ZA. Therefore, field experiments were conducted over two consecutive years, applying four N-application rates (0, 150, 300, and 450 kg N ha−1) to ZA. At each phenological stage, hyperspectral imagery and LiDAR point clouds were collected via three UAV flight altitudes (60 m, 80 m, and 100 m), and canopy nitrogen concentration (CNC) and aboveground nitrogen accumulation (AGNA) were measured. This study developed a framework by synergistically fusing UAV-derived hyperspectral imaging (HSI) and LiDAR data for CNC and AGNA monitoring. Results showed that the response of nitrogen status indicators to fertilization was phenology-specific: CNC showed no significant difference (p > 0.05) among treatments during the vigorous vegetative growth stage (VGS) but differed significantly (p < 0.05) during the fruit expansion stage (FES); AGNA differed significantly among treatments at VGS and FES (p < 0.05). The two-step screening yielded NDSI (732, 879) and NDSI (560, 690) as the optimal CNC indicators at VGS and FES, respectively (r = 0.83 and 0.93), whereas the NDSI (711, 986) and NDSI (515, 736) were identified as the optimal AGNA indicators at VGS and FES, respectively (r = 0.91 and 0.71). Across all phenological stages, Random Forest Regression consistently delivered the highest accuracy for CNC (R2 = 0.93–0.98, RMSE = 0.87–1.02 g kg−1) and AGNA (R2 = 0.95–0.97, RMSE = 1.92–2.55 g plant−1), outperforming MLR, PLSR, and SVR. This synergistic framework provides a high-precision, non-destructive methodology for the precision N monitoring of woody crops. Full article
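The NDSI band pairs reported above follow the standard normalized-difference form; a minimal sketch (the reflectance values are illustrative, and the paper's two-step band screening is not reproduced here):

```python
def ndsi(r_b1, r_b2):
    """Normalized Difference Spectral Index for reflectances at two
    bands: NDSI(b1, b2) = (R_b1 - R_b2) / (R_b1 + R_b2), in [-1, 1]."""
    return (r_b1 - r_b2) / (r_b1 + r_b2)

# Illustrative reflectances, e.g. 0.45 at 732 nm and 0.15 at 879 nm.
value = ndsi(0.45, 0.15)  # approximately 0.5
```

The index is antisymmetric in its band order, so NDSI(879, 732) is the negative of NDSI(732, 879).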
(This article belongs to the Special Issue Remote Sensing for Diagnosis of Plant Health)

33 pages, 10259 KB  
Article
Multimodal Remote Sensing Image Classification Based on Dynamic Group Convolution and Bidirectional Guided Cross-Attention Fusion
by Lu Zhang, Yaoguang Yang, Zhaoshuang He, Guolong Li, Feng Zhao, Wenqiang Hua, Gongwei Xiao and Jingyan Zhang
Remote Sens. 2026, 18(7), 1066; https://doi.org/10.3390/rs18071066 - 2 Apr 2026
Abstract
The synergistic integration of Hyperspectral Imaging (HSI) and Light Detection and Ranging (LiDAR) data has become a pivotal strategy in remote sensing for precise land-cover classification. However, existing multimodal deep learning frameworks frequently suffer from intrinsic limitations, including rigid feature extraction protocols, underutilization of LiDAR-derived textural information, and asymmetric fusion mechanisms that fail to balance the contribution of spectral and elevation features effectively. To address these challenges, this paper proposes a novel framework named DGC-BCAF, which integrates Dynamic Group Convolution and Bidirectional Guided Cross-Attention Fusion to achieve adaptive feature representation and robust cross-modal interaction. First, a Dynamic Group Convolution (DGConv) module embedded within a ResNet18 backbone is designed to function as the central spatial context extractor. Unlike traditional group convolution, this module learns a dynamic relationship matrix to automatically group input channels, thereby facilitating flexible and context-aware feature representation that adapts to complex spatial distributions. Second, to overcome the insufficient exploitation of elevation data, we introduce a dedicated LiDAR texture encoding branch. This branch innovatively fuses Gray-Level Co-occurrence Matrix (GLCM) statistical features with multi-scale convolutional representations, capturing both geometric height information and fine-grained surface textural details that are critical for distinguishing objects with similar elevations. Finally, central to our architecture is the Bidirectional Cross-Attention Fusion (BCAF) module. Unlike standard unidirectional fusion approaches, BCAF employs LiDAR geometry to guide the selection of salient spectral bands, while simultaneously utilizing spectral signatures to emphasize informative LiDAR channels. This mutual guidance ensures a balanced contribution from both modalities. 
Extensive experiments conducted on three benchmark datasets—Houston 2013, Trento, and MUUFL—demonstrate that DGC-BCAF consistently outperforms state-of-the-art methods in terms of overall accuracy, average accuracy, and Kappa coefficient. The results confirm that the proposed adaptive grouping and bidirectional guidance strategies significantly improve classification performance, particularly in distinguishing spectrally similar materials and delineating complex urban structures. Full article
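The BCAF module described in this abstract is specific to the paper, but the underlying idea of bidirectional cross-attention between two modalities can be illustrated in isolation. The following is a minimal NumPy sketch, not the authors' implementation: all function names, token counts, and dimensions are invented for illustration, and the per-modality refinement is reduced to a single scaled-dot-product attention pass in each direction.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query_feats, context_feats, d):
    # scaled dot-product attention: queries attend over the other modality
    scores = query_feats @ context_feats.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ context_feats

def bidirectional_fusion(hsi_tokens, lidar_tokens):
    d = hsi_tokens.shape[-1]
    # direction 1: LiDAR geometry guides selection of salient spectral tokens
    hsi_refined = cross_attention(lidar_tokens, hsi_tokens, d)
    # direction 2: spectral signatures emphasize informative LiDAR tokens
    lidar_refined = cross_attention(hsi_tokens, lidar_tokens, d)
    # symmetric pooling + concatenation keeps both modalities' contributions
    return np.concatenate([hsi_refined.mean(axis=0),
                           lidar_refined.mean(axis=0)])

rng = np.random.default_rng(0)
hsi = rng.standard_normal((16, 8))    # 16 spectral tokens, feature dim 8
lidar = rng.standard_normal((4, 8))   # 4 elevation/texture tokens, dim 8
fused = bidirectional_fusion(hsi, lidar)
print(fused.shape)  # (16,)
```

The symmetry of the two attention passes is the point: neither modality is fixed as the sole "query" side, which is what the abstract contrasts with standard unidirectional fusion.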
17 pages, 1889 KB  
Article
Integrating Multi-Sensor Data Fusion to Map Isohydric Responses and Maize Yield Variability in Tropical Oxisols
by Fábio Henrique Rojo Baio, Paulo Eduardo Teodoro, Job Teixeira de Oliveira, Ricardo Gava, Larissa Pereira Ribeiro Teodoro, Cid Naudi Silva Campos, Estêvão Vicari Mellis, Isabella Clerici de Maria, Marcos Eduardo Miranda Alves, Fernanda Ganassim, João Pablo Silva Weigert, Kelver Pupim Filho, Murilo Bittarello Nichele and João Lucas Gouveia de Oliveira
AgriEngineering 2026, 8(4), 131; https://doi.org/10.3390/agriengineering8040131 - 1 Apr 2026
Viewed by 308
Abstract
Maize cultivation in tropical Oxisols during the second growing season faces significant climatic risks, where spatial heterogeneity in soil water retention often dictates economic viability. This study integrated a trimodal sensing approach, combining multispectral, thermal, and LiDAR data, with proximal physiological measurements to map isohydric responses and yield variability. Conducted in the Brazilian Cerrado, the research monitored a one-hectare maize field using UAV-based sensors alongside ground truth evaluations of gas exchange, leaf water potential, and soil moisture. Results revealed high yield variability (6.6 to 13.4 Mg ha−1) primarily governed by clay content-mediated water availability. Maize exhibited strict isohydric behavior, maintaining homeostatic leaf water potential through preventive stomatal closure, which limited CO2 assimilation in zones with lower water retention. A significant statistical decoupling was observed between plant height and final grain yield, as water stress impacted reproductive stages more severely than vegetative growth. Furthermore, the Temperature Vegetation Dryness Index (TVDI) served as a robust proxy for biomass vigor rather than mere water deficit. These results confirm that yield variability in tropical Oxisols was not a product of hydraulic failure, but rather a consequence of carbon limitation necessitated by the crop’s conservative hydraulic management to maintain leaf water potential within safe thresholds. Full article
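The abstract uses the Temperature Vegetation Dryness Index (TVDI) as a proxy variable. A common formulation (not necessarily the exact one used in this study) places each pixel's land-surface temperature between a "wet edge" and a "dry edge" fitted as linear functions of NDVI: TVDI = (Ts − Ts_wet) / (Ts_dry − Ts_wet). A rough NumPy sketch under that assumption, with synthetic data standing in for UAV imagery:

```python
import numpy as np

def tvdi(surface_temp, ndvi, n_bins=20):
    """Temperature Vegetation Dryness Index per pixel.

    Dry and wet edges are fitted as linear functions of NDVI from
    per-bin max/min temperatures, then each pixel is scaled between
    them: TVDI = (Ts - Ts_wet) / (Ts_dry - Ts_wet).
    """
    bins = np.linspace(ndvi.min(), ndvi.max(), n_bins + 1)
    centers, t_max, t_min = [], [], []
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (ndvi >= lo) & (ndvi < hi)
        if mask.sum() < 3:          # skip sparsely populated bins
            continue
        centers.append((lo + hi) / 2)
        t_max.append(surface_temp[mask].max())
        t_min.append(surface_temp[mask].min())
    a_d, b_d = np.polyfit(centers, t_max, 1)   # dry edge
    a_w, b_w = np.polyfit(centers, t_min, 1)   # wet edge
    ts_dry = a_d * ndvi + b_d
    ts_wet = a_w * ndvi + b_w
    return (surface_temp - ts_wet) / (ts_dry - ts_wet)

# synthetic scene: temperature falls with vegetation cover, plus noise
rng = np.random.default_rng(1)
ndvi_vals = rng.uniform(0.1, 0.9, 2000)
temp = 320.0 - 25.0 * ndvi_vals + rng.uniform(0.0, 10.0, 2000)
dryness = tvdi(temp, ndvi_vals)
```

Values near 0 indicate pixels on the wet edge (unstressed), values near 1 pixels on the dry edge, which is why TVDI can track canopy vigor as well as water status.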
18 pages, 10448 KB  
Article
Forest Density Detection Using a Set of Remotely Sensed Vegetation Indices, Texture Parameters, and Spatial Clustering Metrics
by Stavros Kolios and Mariana Mandilara
Geomatics 2026, 6(2), 33; https://doi.org/10.3390/geomatics6020033 - 27 Mar 2026
Viewed by 383
Abstract
Monitoring forest density is essential for understanding ecosystem health, wildfire risk, and post-disturbance recovery. This study proposes a robust methodology to extract forest density classes exclusively using Sentinel-2 multispectral imagery combined with vegetation indices (VIs), textural parameters, and spatial clustering metrics. The approach was applied to the northern part of Euboea Island, Greece, as a pilot area severely affected by a wildfire in August 2021. Four cloud-free Sentinel-2 images (2017–2024) were selected to capture pre- and post-fire conditions. A set of nine VIs—representing vegetation vigor, chlorophyll content, soil exposure, and canopy moisture—were calculated and statistically assessed for independence. To enhance classification accuracy, texture measures (homogeneity, correlation, and entropy) and spatial autocorrelation metrics (Moran’s I, Getis-Ord Gi) were derived for selected VIs. Supervised classification was performed using the Maximum Likelihood algorithm, yielding overall accuracies up to 89.4% and kappa coefficients above 0.85 when combining VIs with texture and spatial metrics. Results revealed a dramatic 49.3% reduction in forest cover immediately after the wildfire, with partial recovery (to 77.9% of pre-fire levels) three years later, mainly as a low-density forest. Approximately 12.1% of forest cover failed to regenerate, indicating potential long-term ecosystem degradation. The proposed approach provides a computationally efficient, high-accuracy alternative to data-fusion methods involving Light Detection and Ranging (LiDAR) or Synthetic Aperture Radar (SAR) datasets, making it suitable for operational forest monitoring and fire-risk management. Full article
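Two of the ingredients named in this abstract, a vegetation index and global Moran's I, are standard enough to sketch without access to the paper's data. The snippet below is illustrative only: the band values are invented, the Moran's I uses simple rook (4-neighbour) contiguity with binary weights, and nothing here reproduces the study's actual processing chain.

```python
import numpy as np

def ndvi(nir, red):
    # Normalized Difference Vegetation Index (e.g. Sentinel-2 B8 and B4)
    return (nir - red) / (nir + red + 1e-9)

def morans_i(grid):
    """Global Moran's I on a raster, rook contiguity, binary weights."""
    z = grid - grid.mean()
    # each horizontal/vertical neighbour pair contributes twice (i->j, j->i)
    num = 2.0 * (z[:, :-1] * z[:, 1:]).sum() + 2.0 * (z[:-1, :] * z[1:, :]).sum()
    w_sum = 2.0 * z[:, :-1].size + 2.0 * z[:-1, :].size
    return (grid.size / w_sum) * (num / (z ** 2).sum())

# NDVI on a tiny invented reflectance patch
nir = np.array([[0.60, 0.70], [0.65, 0.72]])
red = np.array([[0.10, 0.12], [0.11, 0.10]])
veg = ndvi(nir, red)

# spatially smooth values cluster (I > 0); alternating values repel (I < 0)
grad = np.tile(np.arange(8.0), (8, 1))
checker = (np.indices((8, 8)).sum(axis=0) % 2) * 2.0 - 1.0
```

A positive Moran's I on an NDVI raster indicates that similar vegetation-density values cluster spatially, which is exactly the signal the study feeds into its supervised classification alongside the texture measures.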