Article

Developing a Novel Method for Vegetation Mapping in Temperate Forests Using Airborne LiDAR and Hyperspectral Imaging

1 Division of Ecological Assessment, National Institute of Ecology, 1210 Geumgang-ro, Maseo-myeon, Seocheon-gun 11186, Chungcheongnam-do, Republic of Korea
2 Department of Bio & Environmental Technology, Seoul Women’s University, 621 Hwarang-ro, Nowon-gu, Seoul 01797, Republic of Korea
* Author to whom correspondence should be addressed.
Forests 2025, 16(7), 1158; https://doi.org/10.3390/f16071158
Submission received: 11 June 2025 / Revised: 11 July 2025 / Accepted: 11 July 2025 / Published: 14 July 2025

Abstract

This study advances vegetation and forest mapping in temperate mixed forests by integrating airborne hyperspectral imagery (HSI) and light detection and ranging (LiDAR) data, overcoming the limitations of conventional multispectral imaging. Our approach integrates structural metrics from a LiDAR-derived Digital Canopy Height Model (DCHM) with spectral information extracted from the hyperspectral imagery. Through machine learning-based clustering of these structural and spectral features, we classified eight tree species, delineated community boundaries, identified dominant species, and quantified their abundance, enabling precise vegetation and forest type mapping based on predominant species and detailed attributes such as diameter at breast height, age, and canopy density. Field validation indicated high mapping precision, with overall accuracies of approximately 98.0% for individual species identification and 93.1% for community-level mapping. Demonstrating robust performance relative to conventional methods, this novel approach offers a valuable foundation for developing a National Forest Ecology Inventory and provides new insights for ecological research, forest management, and a range of forestry applications.

1. Introduction

Recent decades have witnessed rapid advancements in remote sensing for environmental monitoring, with the synergistic integration of hyperspectral imagery (HSI) and light detection and ranging (LiDAR) technologies, increasingly augmented by advanced machine learning and deep learning methodologies, marking a significant frontier in detailed vegetation mapping. This technological evolution has progressively shifted vegetation research from traditional direct survey methodologies towards more sophisticated, indirect approaches. HSI, capable of capturing continuous wavelength characteristics, combined with LiDAR’s precise three-dimensional structural measurements, offers unparalleled detail in the structural and compositional analysis of vegetation. This powerful data fusion addresses inherent limitations of multispectral imaging in feature extraction [1,2] and extends beyond studies primarily focused on forest physiognomy and biological seasons [3,4].
While deep learning represents a cutting-edge direction, foundational work demonstrating the robust capabilities of HSI-LiDAR fusion remains critical for understanding the baseline performance and challenges in complex forest environments, providing essential context for further algorithmic advancements [5,6]. Research leveraging these technologies is vital in distinguishing between conifers and broadleaf trees, analyzing vegetation communities through indices, and assessing the impacts of climate change on vegetation ecosystems; this is achieved by combining HSI with physiognomic units and tree points derived from LiDAR data, which enables the deduction of attributes such as stratified vegetation structure, physiognomy, tree density, and canopy height, as well as secondary information including age and diameter [7,8]. Notably, nations with significant forest resources, such as Canada, Norway, Finland, and Sweden, have successfully recognized and incorporated these advanced remote sensing technologies into their National Forest Inventory practices, demonstrating their utility in surpassing the limitations of traditional forest survey methods and highlighting their operational efficiency and management benefits in these high-latitude forest ecosystems [9,10].
However, despite these successes, the application of these technologies to mid-latitude mixed forests presents unique and considerable challenges. Unlike the relatively simpler, often coniferous-dominated, and uniform canopy structures of high-latitude forests, mid-latitude mixed forests are characterized by exceptionally high species diversity, leading to significant spectral overlap among morphologically and physiologically similar species, and complex, multi-layered canopy structures [11,12,13]. These inherent complexities, particularly the high mixing ratio of tree species and the scarcity of reference data for rare tree species, mean that species extraction using HSI often yields indistinct spectral values in high-density stands or areas densely populated with similar species, reducing classification accuracy [14,15,16]. Consequently, while considerable progress has been made in vegetation analysis within alpine and coniferous forest ecosystems, studies in temperate mixed forests have consistently reported lower classification accuracies under these challenging conditions [17,18]. Such limitations necessitate advanced approaches to mitigate spectral confusion and to improve the spatial extraction of individual tree crowns.
To address these issues, recent research has explored techniques such as optimal band selection, vegetation index applications (e.g., Normalized Difference Vegetation Index, Green Normalized Difference Vegetation Index), and increasingly, advanced machine learning, especially deep learning-based classification [5,6,19,20,21]. These cutting-edge methods, particularly Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs), have demonstrated significant improvements in species- or cluster-level mapping accuracy, often outperforming traditional machine learning algorithms and even improving individual tree crown segmentation in complex forest structures [22,23,24]. Nevertheless, a comprehensive methodology capable of fully leveraging the combined power of HSI and LiDAR data to overcome the inherent complexities of mid-latitude mixed forests remains crucial for precise vegetation and forest analysis.
We hypothesize that the synergistic integration of high-resolution HSI and 3D LiDAR data, combined with machine learning-based clustering, can significantly improve the accuracy and scalability of species- and community-level vegetation mapping in structurally complex temperate forests. Based on this hypothesis, this study aims to develop and validate a novel, efficient methodology for precise vegetation and forest mapping in temperate mixed forests by synergistically integrating airborne HSI and LiDAR data. To achieve this objective, this research addresses the following key questions:
(a) Can the synergistic integration of high-resolution HSI and LiDAR data effectively delineate and classify individual tree species within complex temperate mixed forests?
(b) To what extent can this integrated approach improve the accuracy of vegetation community mapping compared to traditional methods in temperate mixed forests?
(c) Can machine learning-based clustering reliably delineate community boundaries and identify dominant species for vegetation and forest type mapping in this complex ecosystem?

2. Materials and Methods

The progression of this study followed a structured sequence, involving the extraction of species-specific features from HSI-LiDAR data, clustering of point data, and the subsequent creation of physiognomic vegetation and forest maps (Figure 1). Point-level tree species data were required for physiognomic classification of the vegetation communities and forests. These data were extracted using HSIs and LiDAR point cloud data. Subsequently, as the features extracted from HSIs have continuous image patterns for physiognomic units, they were combined with tree location data obtained from LiDAR. Point data clustering was then performed using multivariate cluster analysis, considering variables such as species, altitude, slope aspect, slope inclination, and distance from the waterfront (for hygrophilous and halophytic species). This clustering was followed by the generation of Thiessen polygons for each cluster. Dominance, occupancy ratio, crown area, tree height, and diameter at breast height (DBH) for the tree species included in the Thiessen polygons were analysed to study vegetation communities and forest data for each physiognomy (Figure 2). The analysis integrated a suite of specialized software tools. LAStools 2.0.1 [25] was utilized for LiDAR point cloud processing (e.g., noise removal, ground classification, and normalization), and ENVI 5.6 [26] for hyperspectral data preprocessing and spectral analysis. Subsequent spatial analysis and visualization were performed using ArcGIS 10.8 [27] for initial data management and basic geoprocessing, complemented by ArcGIS Pro 2.8.2 [28] for advanced functionalities, particularly spatially constrained multivariate clustering critical for vegetation community mapping based on diverse environmental factors.

2.1. Study Site and Data Acquisition

South Korea’s forests are broadly classified into southern, central, and north-central vegetation zones, based on climate and floristic composition. This study focused on the central temperate region, which is characterized by mixed deciduous forests comprising Quercus acutissima, Quercus serrata, Salix koraiensis, Quercus mongolica, Pinus densiflora, and Robinia pseudoacacia. This region was selected due to its high species richness and structural complexity, providing a suitable environment for evaluating HSI-LiDAR integration for vegetation mapping.
The study site covers 25.4 km2 in total, of which forested areas account for 18.6 km2 (73.2%, Figure 3). This region features gentle to moderate slopes, heterogeneous canopy layers, and varied species compositions. These conditions make it ideal for testing species-level classification algorithms that require both structural and spectral diversity.
Because co-occurring species are difficult to separate spectrally during their peak photosynthetic activity, aerial image acquisition was scheduled for mid-autumn, when interspecific reflectance differences become more distinct due to senescence and canopy thinning. Accordingly, HSIs were acquired using the AisaFENIX 1k sensor (SPECIM, Oulu, Finland) from an altitude of 2800 m on 18 October 2020. This sensor captured 421 contiguous bands across the visible and near-infrared (VNIR, 380–970 nm, 174 bands) and short-wave infrared (SWIR, 970–2500 nm, 247 bands) regions, with spectral resolutions of 4.5 nm and 14 nm, respectively. Raw data were preprocessed through atmospheric, radiometric, and geometric corrections, and orthorectified using ground control points acquired via differential GPS (DGPS), yielding a horizontal RMSE of approximately 0.5 m.
LiDAR data were collected using the Leica TerrainMapper system (Leica Geosystems, Heerbrugg, Switzerland) on 20 June 2021 from an altitude of 2286 m. This airborne laser scanner supports a pulse rate of up to 2 MHz and a scan frequency of 300 Hz. Global Navigation Satellite System (GNSS) and Inertial Measurement Unit (IMU) data were integrated for trajectory correction, and point clouds were generated with error filtering and post-flight calibration. The resulting point density and vertical accuracy supported the construction of high-resolution digital surface and canopy models essential for crown segmentation and structural classification.
To validate the accuracy of the HSI–LiDAR vegetation and forest maps, two complementary field surveys were conducted during the HSI acquisition period in October 2020. First, 90 individual trees corresponding to dominant canopy species of candidate vegetation communities were identified in the field by plant ecology experts from the Korea National Institute of Ecology. Each tree’s species was confirmed on-site, and its precise location was recorded using a Trimble R4s GNSS unit (Trimble Inc., Sunnyvale, CA, USA), capable of real-time kinematic positioning with horizontal accuracy at the centimeter level (±10–15 mm). These georeferenced samples were used to assess species-level classification accuracy. Second, 174 ground-truth points were randomly selected across 170 different sites to verify community-level classification. At each point, field surveys measured the crown coverage of tree species in the overstory, and vegetation communities were classified based on crown dominance using the same criteria applied in the remote sensing analysis. Geographic coordinates were again precisely recorded using the Trimble R4s GNSS unit to ensure spatial comparability with the remote sensing data.

2.2. Digital Canopy Height Model Extraction

In subpolar regions with low tree densities, the appearance of conical conifers can be clearly delineated; however, in temperate mixed forests, trees rarely form conical crowns. In these environments, the canopy typically appears as a cluster of multiple tree trunks at the colony level. To effectively extract LiDAR tree point and canopy information in such complex environments, we created a digital canopy height model (DCHM) by exploiting the difference between a digital surface model (DSM) and digital terrain model (DTM) of a forested area. The DSM interpolated ground elevation and vegetation based on the classification code assigned to each LiDAR surface feature point, while the DTM focused only on ground elevation. However, given that LiDAR point cloud data records elevation values based on laser pulse reflections, these pulses can exhibit irregular reflections owing to the interaction of complex factors such as the presence of canopy stems, gaps in the canopy, and stratification. Such irregular returns can compromise the accuracy and shape of the extracted tree point and canopy data, leading to “pit” or “peak” values in the continuous height values displayed in the DCHM. To ensure data accuracy, these pit and peak values were subsequently removed, as they introduce inaccuracies in canopy data extraction or height estimation, potentially leading to over- or underestimation of height (Figure 4).
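For readers reproducing this step outside a GIS environment, the subtraction itself is straightforward; the sketch below illustrates it with the rasterio and NumPy libraries, assuming co-registered DSM and DTM rasters. The file names, the 60 m noise threshold, and the treatment of negative differences are illustrative assumptions, not part of the original workflow.

```python
import numpy as np
import rasterio

# Illustrative sketch: DCHM as the cell-wise difference between a DSM and a DTM.
# "dsm.tif" and "dtm.tif" are placeholder names for co-registered rasters.
with rasterio.open("dsm.tif") as dsm_src, rasterio.open("dtm.tif") as dtm_src:
    dsm = dsm_src.read(1).astype("float32")
    dtm = dtm_src.read(1).astype("float32")
    profile = dsm_src.profile

dchm = dsm - dtm
dchm[dchm < 0] = 0.0        # negative differences treated as ground
dchm[dchm > 60] = np.nan    # implausibly tall returns flagged as noise (assumed threshold)

profile.update(dtype="float32", count=1, nodata=np.nan)
with rasterio.open("dchm.tif", "w", **profile) as dst:
    dst.write(dchm, 1)
```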
The pits within the DCHM were eliminated using the pit-free algorithm provided by LAStools [29,30]. This process involved the rasterization of individual tree crowns into height intervals of 2, 5, 10, 15, 20, 25, and 30 m. Within these intervals, pits were addressed by filling in any missing values and subsequently applying interpolation techniques. This enhanced approach is referred to as the layer stacking digital canopy height model (LSDCHM). The pit-free DCHM was created using the following sequential process: rasterization of tree crowns, assignment of altitude values, development of the LSDCHM to progressively smoothen pits from the apex of the canopy down to its base, and finally, the merging of the processed layers (Figure 4, Supplementary Material Document S1 points 1–9).
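A greatly simplified raster approximation of the layer-stacking idea is sketched below with NumPy and SciPy; the actual LAStools pit-free algorithm works on TIN interpolations of the point cloud per height slice, so the window size and the maximum-filter gap filling used here are illustrative assumptions only.

```python
import numpy as np
from scipy import ndimage

def layer_stacked_dchm(dchm, thresholds=(2, 5, 10, 15, 20, 25, 30), window=3):
    """Toy layer-stacking sketch: for each height threshold, keep only cells at or
    above that height, fill the resulting gaps (pits) with a local maximum, and
    merge all partial layers by taking the cell-wise maximum."""
    base = np.nan_to_num(dchm, nan=0.0)
    layers = [base]
    for t in thresholds:
        layer = np.where(base >= t, base, np.nan)
        filled = ndimage.maximum_filter(np.nan_to_num(layer, nan=0.0), size=window)
        layers.append(np.where(np.isnan(layer), filled, layer))
    return np.maximum.reduce([np.nan_to_num(l, nan=0.0) for l in layers])
```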
In dense temperate mixed forests, the forest stratification is intricate, and the canopy is formed by overlapping crowns of trees. In cases where there is no discernible height difference between two crowns within a single canopy, the extracted crown often appears distorted, appearing as if it has been cleaved in half. As a result, the tree position tends to be biased toward the edge of the crown (Figure 5). This issue arises due to the presence of another tree point adjacent to the apex of the canopy, when, theoretically, it should be extracted as part of the same crown (Figure 5).
To address the challenges associated with canopy shape distortion and the biased positioning of trees, improvements were made by applying the canopy closure technique to the pit-free DCHM [31]. The canopy closure was determined by extracting the maximum value from a proximity analysis of circular radius buffer areas within each DCHM block. By implementing the canopy closure technique, the adjacent canopy and tree points merge into a single canopy, instead of creating a new canopy separately around the adjacent crown, thus enabling the identification of a new tree point located at the canopy’s centre (Supplementary Material Document S1 point 10).
To ensure that the DCHM peak values do not compromise the integrity of the canopy shape, a smoothing filter was introduced [31]. Both a Gaussian filter, which retains the 3D parabolic shape of the canopy while reducing its peak values, and a low-pass filter, which smoothens surface values to align with their neighbouring values, were utilised. The Gaussian filter was implemented by specifying a kernel file within the weight option of the focal statistics.
On occasion, during the canopy extraction process following the application of the Gaussian filter, canopies with irregular shapes may be extracted. To ensure the generation of a pit- and peak-free DCHM, a low-pass filter convolution kernel was employed to facilitate the extraction of a canopy formed by a cluster of crowns by the smoothing of overlapping or intermingled crowns (Supplementary Material Document S1 points 11–12).
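The two filters described above have direct open-source analogues; the sketch below uses SciPy’s Gaussian and uniform (mean) filters as stand-ins for the ArcGIS focal-statistics operations, with purely illustrative parameter values (the study specified its Gaussian kernel through a kernel file in the focal statistics weight option).

```python
import numpy as np
from scipy import ndimage

def smooth_dchm(dchm, gaussian_sigma=1.0, lowpass_window=3):
    """Two-stage smoothing sketch: a Gaussian filter suppresses spurious peak
    values while retaining the parabolic crown shape, and a low-pass (mean)
    filter then blends each cell with its neighbours so that overlapping or
    intermingled crowns form a single smooth canopy surface."""
    arr = np.nan_to_num(dchm, nan=0.0)
    arr = ndimage.gaussian_filter(arr, sigma=gaussian_sigma)
    return ndimage.uniform_filter(arr, size=lowpass_window)
```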

2.3. Crown Extraction with DCHM

One of the most utilised methods for canopy extraction is the watershed segmentation (WS) algorithm. This algorithm converts the peak within the DCHM area into an inverse peak, designating it as a watershed discharge point, and applies the principle of WS extraction [32]. However, the WS algorithm has the drawback of exhibiting low canopy delineation accuracy and the tendency to merge adjacent canopies into a single canopy.
To address the limitations of the WS algorithm and extract individual canopies and tree point locations more accurately, a DCHM proximity search was conducted using circular radii to locate the tree point with the maximum height value. The canopy was then generated from the intersections obtained by cross-tabulating the Thiessen polygons with DCHM cells whose curvature values were greater than 0 [31,33,34].
The minimum tree height in the DCHM that can be detected by LiDAR as a physiognomic community is 3 m. For the generation of tree points and a canopy model, DCHM cells of 1 m or higher were selected. A tree point was then created using the focal statistics technique, in which the maximum cell of the circular neighbouring raster was chosen and converted into a point. Using this tree point as a reference, Thiessen polygons were generated through the Voronoi function, and the result was clipped to the boundaries of the study site to complete the primary model (Figure 6; Supplementary Material Document S1 points 13–19).
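The focal-statistics/Voronoi sequence can be prototyped outside ArcGIS as shown below; the circular radius, the 3 m height threshold, and the pixel-coordinate handling are assumptions for illustration, and the study’s own parameters are documented in Supplementary Material Document S1.

```python
import numpy as np
from scipy import ndimage
from scipy.spatial import Voronoi

def detect_tree_points(dchm, radius_px=3, min_height=3.0):
    """Local-maximum tree points: a cell becomes a tree point when it equals the
    maximum of its circular neighbourhood and meets the minimum height."""
    arr = np.nan_to_num(dchm, nan=0.0)
    y, x = np.ogrid[-radius_px:radius_px + 1, -radius_px:radius_px + 1]
    footprint = x ** 2 + y ** 2 <= radius_px ** 2
    local_max = ndimage.maximum_filter(arr, footprint=footprint)
    rows, cols = np.where((arr == local_max) & (arr >= min_height))
    return np.column_stack([cols, rows])   # (x, y) in pixel coordinates

# Thiessen polygons are the Voronoi cells around the detected tree points:
# tree_xy = detect_tree_points(dchm)
# thiessen = Voronoi(tree_xy)              # clip to the study boundary afterwards
```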
Despite the implementation of canopy closure, Gaussian filtering, and low-pass filtering, certain challenges persisted. These included the presence of no data areas in the generated DCHM where there are no trees, scattered distributions of low-height tree points, a concentration of tree points within areas with mixed grasslands and shrubs, and the proximity between two tree points. Abnormal Thiessen polygons were formed, necessitating adjustments to the tree points and the reconfiguration of the Thiessen polygons. For the initially completed tree points, a 1.5 m buffer was applied to detect and eliminate tree points within a 3 m radius. Then, the centroid of the 3 m buffer area was extracted and added back to the tree point, with the height recalculated to create a draft of the secondary Thiessen polygon. For tree point generation, only points greater than zero were restored. Subsequently, polygons that overlapped between the tree points and Thiessen polygons were reselected to complete the secondary Thiessen polygons (Supplementary Material Document S1 points 20–30). Next, tree points and Thiessen polygons were finalised by clipping the secondary Thiessen polygons crafted as masking data, with the DCHM below 3 m processed as a Null value (Supplementary Material Document S1 points 31–47).
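As a rough illustration of the de-duplication step (the study used 1.5 m buffers and centroid extraction in ArcGIS), the sketch below groups tree points that fall within 3 m of each other using a k-d tree and replaces each group with its centroid; the use of the group maximum as the recalculated height is an assumption.

```python
import numpy as np
from scipy.spatial import cKDTree

def merge_close_tree_points(points_xy, heights, min_dist=3.0):
    """Group tree points closer than `min_dist` (via union-find over k-d tree
    pairs) and replace each group by its centroid and maximum height."""
    points_xy = np.asarray(points_xy, dtype=float)
    heights = np.asarray(heights, dtype=float)
    pairs = cKDTree(points_xy).query_pairs(r=min_dist)

    parent = list(range(len(points_xy)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in pairs:
        parent[find(i)] = find(j)

    groups = {}
    for idx in range(len(points_xy)):
        groups.setdefault(find(idx), []).append(idx)
    merged_xy = np.array([points_xy[g].mean(axis=0) for g in groups.values()])
    merged_h = np.array([heights[g].max() for g in groups.values()])
    return merged_xy, merged_h
```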
In the final step of crown generation, a curvature analysis was conducted on the final Thiessen polygons and DCHM exceeding 3 m. Values with curvature greater than zero were selected, vectorised, and subjected to a cross-tabulation analysis with the Thiessen polygons. Using the Thiessen polygon IDs from the vector intersections, the convex hull was calculated to complete the crown. The finalised crown was further refined using 2 m polygon smoothing (Supplementary Material Document S1 points 48–54).
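A rough open-source analogue of the curvature/convex-hull step is sketched below; positive ArcGIS curvature (an upwardly convex surface) is approximated here by a negative Laplacian, and the polygon mask, the 3 m height cut-off, and the pixel-based hull are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage
from scipy.spatial import ConvexHull

def crown_from_curvature(dchm, polygon_mask, min_height=3.0):
    """Crown sketch for one Thiessen polygon: select locally convex cells
    (negative Laplacian, approximating curvature > 0) above the height cut-off
    inside the polygon and return the convex hull of those cells."""
    arr = np.nan_to_num(dchm, nan=0.0)
    convex = ndimage.laplace(arr) < 0
    rows, cols = np.where(convex & polygon_mask & (arr >= min_height))
    if len(rows) < 3:
        return None            # too few cells to form a crown polygon
    return ConvexHull(np.column_stack([cols, rows]))
```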

2.4. Extracting Species Information

The extraction of species information from the HSI targeted eight dominant tree species at the study site: C. crenata, L. kaempferi, P. densiflora, P. koraiensis, P. rigida, P. occidentalis, P. sargentii, and Q. acutissima (Table 1). These species were selected based on their predominant presence within the study area, ecological significance, and the role they play in the forest’s biodiversity and structural complexity. Their selection was further justified by the need to cover a broad spectrum of canopy structures, leaf spectral signatures, and physiological traits, ensuring a comprehensive analysis of the forest ecosystem’s health and dynamics. DGPS coordinates were established at 36 species-specific community locations to gather data for these species, which served as sampling points. These field sample locations provided direct on-site access to positional and species data for the most dominant species within each target community. Subsequently, in the laboratory, the data collected on-site were superimposed onto the corresponding HSIs, allowing for a visual analysis to align each location’s species information with the image features (Supplementary Figure S1). Any mismatches or misidentifications observed during this alignment process prompted necessary location adjustments. From the HSIs, 21 noisy bands were removed from the original 421, retaining 400 bands for pixel analysis. Following this band reduction, species-specific spectral libraries were generated from these processed bands, and their separability was also analyzed to understand the distinctiveness of each target species’ spectral signature. Using these bands, features were extracted from 36 trees representing 18 species.
A supervised classification was performed utilising the spectral angle mapper (SAM) technique in ENVI 5.6. Features were extracted using SAM data with the maximum angle set at 0.3 radians (Formula (1)), a choice driven by its ability to balance classification accuracy and computational efficiency, effectively distinguishing between classes in complex data [35].
$$\alpha = \cos^{-1}\left[\frac{\sum_{i=1}^{nb} t_i r_i}{\left(\sum_{i=1}^{nb} t_i^{2}\right)^{1/2}\left(\sum_{i=1}^{nb} r_i^{2}\right)^{1/2}}\right] \quad (1)$$
In Equation (1), α and nb are the spectral angle between vectors and the number of spectral bands, respectively; t is the target pixel, and r is the reference pixel. During SAM extraction, additional specified values for supervised classification were added to enhance extraction accuracy, particularly for species that were widely scattered across the study site, such as Q. acutissima and P. rigida (Supplementary Material Document S1 points 55–60). The AisaFENIX 1K hyperspectral sensor utilized in this study captures 421 bands within the wavelength range of 0.396–2.4096 µm, with a spectral wavelength unit of 0.0047852 µm. As the number of continuous bands increases, the wavelength unit becomes shorter, enabling the differentiation of species, even among species within the same genus.
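Although the classification was run in ENVI, Equation (1) is simple to reproduce; the sketch below computes the spectral angle between a pixel and each reference spectrum and applies the 0.3-radian threshold used in this study. The spectral library is represented as a hypothetical dictionary of species names mapped to reference spectra.

```python
import numpy as np

def spectral_angle(target, reference):
    """Spectral angle (radians) between a target and a reference spectrum, per Equation (1)."""
    t, r = np.asarray(target, dtype=float), np.asarray(reference, dtype=float)
    cos_alpha = np.dot(t, r) / (np.linalg.norm(t) * np.linalg.norm(r))
    return np.arccos(np.clip(cos_alpha, -1.0, 1.0))

def sam_classify(pixel, library, max_angle=0.3):
    """Assign the pixel to the library spectrum with the smallest angle, or to
    no class when every angle exceeds the maximum-angle threshold."""
    angles = {name: spectral_angle(pixel, ref) for name, ref in library.items()}
    best = min(angles, key=angles.get)
    return best if angles[best] <= max_angle else None
```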

2.5. Physiognomic Community and Forest Type Mapping

Determining the major species within each community requires synthesising information on the number of species, number of individuals, tree height, canopy density, DBH, and age; dominance and abundance can also be used for this purpose. However, it is often impractical to measure each of these attributes for every tree individually. Therefore, we applied generalized empirical formulas drawn from previous research and suited to the characteristic vegetation of the corresponding climatic zone.
For the calculation of DBH, empirical formulas based on crown diameter were used for coniferous and deciduous trees [36].
$$\mathrm{DBH}_{\mathrm{coniferous}} = 0.83 + 4.42 \times \mathrm{crown\ diameter}$$
$$\mathrm{DBH}_{\mathrm{deciduous}} = 5.97 + 2.32 \times \mathrm{crown\ diameter}$$
The age of a tree was estimated using the relationship between DBH and species-specific growth factor [37].
$$\mathrm{Estimated\ tree\ age\ (years)} = \mathrm{DBH\ (inches)} \times \mathrm{growth\ factor\ (by\ species)}$$
where DBH represents the diameter at breast height in inches, and a mean growth factor of 4.5 is used as a generalized estimate [37], given that species-specific growth factors were impractical to determine in this study.
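For clarity, the two allometric steps can be combined as below; the units (crown diameter in metres, DBH in centimetres converted to inches before applying the growth factor) are assumptions made for illustration, as the source formulas do not state them explicitly.

```python
def estimate_dbh(crown_diameter_m, coniferous):
    """Empirical DBH from crown diameter using the coefficients above
    (units assumed: crown diameter in metres, DBH in centimetres)."""
    if coniferous:
        return 0.83 + 4.42 * crown_diameter_m
    return 5.97 + 2.32 * crown_diameter_m

def estimate_age(dbh_cm, growth_factor=4.5):
    """Tree age from DBH and a growth factor; DBH is converted to inches first
    (the cm-to-inch conversion is an assumption of this sketch)."""
    return (dbh_cm / 2.54) * growth_factor
```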
Community-level vegetation parameters were determined using the attribute information presented within the table (Supplementary Material Document S1 point 84). Dominance and abundance of a species were derived through matrix calculations of fields and records for the trees and species in the table, using the QGIS group stats plugin. Subsequent calculations were carried out in accordance with the specified group stats options, encompassing species count, height calculation, area calculation, age calculation, total individual count, and total species count by clustered community unit. The results of these calculations were saved as a CSV file.
The results were processed using an Excel 2021 [38] script to identify the top three dominant species within each community, thereby establishing an abundance ranking among the vegetation communities. A community was assigned a single name when the leading species had an abundance ratio of 70% or higher. If the primary species accounted for less than 70%, the community was assigned a composite name based on the two most dominant species, using the format ‘first dominant species name-second dominant species name’ [39]. The forest map was finalised by creating a forest field database in compliance with the Korean forest mapping and digitisation guidelines [36] (Supplementary Material Document S1 points 87–92).
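The dominance ranking and the 70% naming rule were implemented with the QGIS GroupStats plugin and an Excel script; an equivalent pandas sketch is given below, in which the column names and the use of crown-area share as the abundance measure are assumptions made for illustration.

```python
import pandas as pd

def name_communities(trees: pd.DataFrame) -> pd.Series:
    """Per-cluster sketch: compute each species' share of crown area and apply
    the single-name (>= 70%) versus composite-name rule."""
    area = trees.groupby(["cluster_id", "species"])["crown_area"].sum()
    share = area / area.groupby(level="cluster_id").transform("sum")

    names = {}
    for cid, grp in share.groupby(level="cluster_id"):
        ranked = grp.droplevel("cluster_id").sort_values(ascending=False)
        if ranked.iloc[0] >= 0.70 or len(ranked) == 1:
            names[cid] = ranked.index[0]
        else:
            names[cid] = f"{ranked.index[0]}-{ranked.index[1]}"
    return pd.Series(names, name="community_name")
```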

2.6. Species Distribution

To generate the DCHM, we addressed the limitations of the traditional DSM-DTM method, which exhibited low separability between adjacent canopies and inaccuracies in tree species locations. We extracted sub-classification codes 1 through 5 and recalibrated tree height values within the range of 0 to 30 m to overcome these limitations. To minimise errors in canopy and tree crown delineation, filtering techniques centred around the tree apex were employed. The comprehensive LiDAR processing procedure included the following steps: clipping the target areas, extracting and separating surface feature point codes, reclassifying elevation data, data smoothing, and converting point data to a grid format. This resulted in the establishment of 560,339 tree point locations and their corresponding canopies (Supplementary Figure S2).
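The LiDAR processing itself was performed with LAStools; purely as an illustration of the code-filtering and point-to-grid steps, the sketch below uses the laspy library, with the retained classification codes, the cell size, and the per-cell maximum rule chosen as assumptions.

```python
import laspy
import numpy as np

def grid_max_height(las_path, cell_size=1.0, keep_classes=(1, 2, 3, 4, 5)):
    """Keep points with the selected classification codes and rasterise the
    per-cell maximum elevation (a simple point-to-grid conversion)."""
    las = laspy.read(las_path)
    mask = np.isin(np.asarray(las.classification), keep_classes)
    x = np.asarray(las.x)[mask]
    y = np.asarray(las.y)[mask]
    z = np.asarray(las.z)[mask]

    cols = ((x - x.min()) / cell_size).astype(int)
    rows = ((y.max() - y) / cell_size).astype(int)
    grid = np.full((rows.max() + 1, cols.max() + 1), np.nan)
    for r, c, h in zip(rows, cols, z):
        if np.isnan(grid[r, c]) or h > grid[r, c]:
            grid[r, c] = h
    return grid
```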

3. Results

3.1. Accuracy Assessment of Supervised Classification in Vegetation Mapping

To ensure the reliability of our analysis, combining individual assessments and supervised classification, we conducted a two-step evaluation. The first step involved comparing the values from supervised classification with our pre-set training codes. The second step was a practical test, where we placed random tree points and checked if their species codes matched our expectations.
Our initial test used Spectral Angle Mapper (SAM) analysis on 36 individuals representing 18 species. After spatial joining, we found a perfect match (100% accuracy) between the features extracted from the imagery and the point data from the supervised classification (Table 2). This result is illustrated in Figure 7(left).
In a more extensive test, 90 tree points were examined, representing the same 18 species, randomly selected from 54 field sites. This yielded an overall accuracy of 98.4%. However, there were some discrepancies for specific species. For instance, Pinus densiflora showed one mismatch in 12 locations, leading to 92% accuracy. Similarly, Pinus rigida had one mismatch out of 22 locations, resulting in 95% accuracy (Table 3). These findings are visually summarised in Figure 7(right), showcasing both the consistency verification of community locations and the mismatches in randomly selected tree points.
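The point-wise accuracy figures reported here are standard confusion-matrix statistics; a minimal sketch with hypothetical labels (not the actual validation data) is shown below.

```python
from sklearn.metrics import accuracy_score, confusion_matrix

# Hypothetical field-confirmed labels versus map labels for a few validation points.
field_labels = ["P. densiflora", "P. densiflora", "P. rigida", "Q. acutissima"]
map_labels   = ["P. densiflora", "P. rigida",     "P. rigida", "Q. acutissima"]

print("Overall accuracy:", accuracy_score(field_labels, map_labels))   # 0.75 for this toy example
print(confusion_matrix(field_labels, map_labels))
```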

3.2. Species Mapping and Classification Accuracy Using Multi-Sensor Clustering Techniques

We addressed the lack of species information in tree point location data by linking it with features identified in HSI. This approach successfully integrated species information into our tree data (Supplementary Material Document S1 points 61–67).
We connected species location data with tree points by utilising feature raster values from supervised classification. The species location data, merged with image code data from this classification, yielded a comprehensive dataset, including species names and locations for all tree points. We further used species-specific point data in vector format to create physiognomic vegetation (Supplementary Figure S3) and forest maps (Supplementary Figure S4). These maps were produced by clustering the data based on several characteristics, including species codes, elevation, slope inclination, aspect, and proximity to water bodies. We applied multivariate clustering analysis, driven by machine learning, to these five variables (Supplementary Material Document S1 points 68, 69). Our target was to establish 3718 clusters over an area of 18,593,407 m2, using a minimum area threshold of 0.5 ha for each vegetation community. This approach resulted in 3252 Thiessen polygon clusters. Each cluster was then linked to species data, allowing us to analyse dominant species, density, and other key vegetation characteristics in each cluster.
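The clustering itself was run with ArcGIS Pro’s spatially constrained multivariate clustering tool; an analogous open-source sketch using scikit-learn is given below, in which the nearest-neighbour connectivity graph and Ward linkage are assumptions standing in for the ArcGIS algorithm rather than a re-implementation of it.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import kneighbors_graph
from sklearn.cluster import AgglomerativeClustering

def cluster_tree_points(xy, attributes, n_clusters=3718, n_neighbors=8):
    """Cluster tree points on standardized attributes (species code, elevation,
    slope inclination, aspect, distance to water) while constraining merges to
    spatially neighbouring points."""
    X = StandardScaler().fit_transform(attributes)
    connectivity = kneighbors_graph(xy, n_neighbors=n_neighbors, include_self=False)
    model = AgglomerativeClustering(n_clusters=n_clusters,
                                    connectivity=connectivity, linkage="ward")
    return model.fit_predict(X)
```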
Of the 3252 initially identified communities, 3199 qualified as physiognomic communities with dominant species. Communities with insufficient species diversity or area were excluded. These communities were then categorised into 55 vegetation types, including six single-species and 49 mixed-species communities. The forest physiognomy was classified into eight distinct categories (Table 4).

3.3. Field Validation and Accuracy Assessment of Vegetation and Forest Mapping

We conducted field validation at 174 points across 170 different vegetation and forest sites. This involved comparing the communities identified on our vegetation and forest maps, derived from HSI-LiDAR data, with actual observations from field surveys. Our evaluation criteria included species composition, layering, tree distribution, and dominant species within each community.
Out of these 174 points, 12 showed discrepancies. The most notable inconsistencies were found in Castanea crenata stands. Seven of these points did not match our field observations: two were from single-species communities and five were from mixed-species communities. In three of these mixed-species communities, C. crenata was incorrectly identified as the most dominant species, while the actual dominant species were correctly identified. The other two points were complete mismatches, identified as P. densifloraC. crenata and Metasequoia glyptostroboidesC. crenata communities.
The remaining five points of inconsistency involved two cases of single species being correctly identified within mixed communities (R. pseudoacaciaQ. acutissima and Q. acutissimaQ. serrata), and three cases of complete mismatch, identified as communities of P. rigida, P. densiflora, and L. kaempferi (Table 5 and Figure 8(left)).
In the accuracy verification phase for our forest map, we examined 174 points across 170 forest sites. We found discrepancies in four points, resulting in an on-site accuracy rate of 97.7% (Table 6 and Table 7, and Figure 8(right)).

4. Discussion

4.1. Methodological Contributions and Implementation Scope

The integration of HSI and LiDAR technology in our study signifies a notable progression from traditional vegetation mapping methods, enabling the capture of species-level spectral signatures and three-dimensional forest structure critical for representing complex ecosystems. Leveraging the combined strengths of these two modalities, our approach has demonstrated strong capabilities in rapidly and objectively identifying vegetation types across large and heterogeneous areas, including challenging terrains. This aligns with recent findings that HSI–LiDAR fusion significantly enhances classification accuracy over single-sensor methods, especially in complex natural secondary forests [5].
Building upon prior work in forest degradation and regeneration assessment [40], our method integrates the spectral richness of HSI with the precise structural data from LiDAR, offering a comprehensive framework for vegetation characterization. In particular, the use of unsupervised multivariate clustering, which incorporates species, topographic, and structural variables, proved instrumental in delineating physiognomic communities and identifying dominant species at scale. This directly addresses our objective of evaluating whether machine learning-based clustering techniques can reliably map vegetation communities in structurally complex forests, where traditional rule-based approaches often fail [17,18].
With high classification accuracy in identifying individual species (98%) and delineating community-level features (93.1% for communities and 97.7% for forest stands), our study underscores the critical role of combining spectral and structural data for ecological studies and conservation efforts. These results directly support our objective of developing a scalable and accurate methodology for vegetation mapping in temperate mixed forests by integrating HSI and LiDAR data. This approach not only aligns with the trajectory of recent advancements in the field but also enriches the existing body of knowledge, offering new avenues for accurately understanding and conserving the dynamic interactions within diverse ecosystems.
Comparing our approach with preceding work, such as the study by Dian et al. [41], which utilized HSI and LiDAR data for tree species classification in urban areas, our research extends these methodologies to complex forest ecosystems, demonstrating a broader application scope. Furthermore, the work by Hakkenberg et al. [10] on mapping vascular plant composition in forests using a similar data integration approach supports the robustness of our findings, highlighting the complementarity of hyperspectral and LiDAR technologies in ecological monitoring. Moreover, our methodology resonates with the exploration into habitat mapping [42], where the fusion of HSI and LiDAR data achieved high classification accuracy, showcasing the potential for detailed ecosystem analysis. Additionally, the successful mapping of tree species and dead trees in large forest areas by Krzystek et al. [43] using LiDAR and multispectral imagery demonstrates the feasibility of high-accuracy, large-scale forest mapping, further validating the applicability and effectiveness of our approach.

4.2. Methodological Limitations and Future Improvements

Despite the significant advancements our method brings to the field of vegetation mapping, it is important to acknowledge its limitations and areas for future improvement. One of the primary challenges lies in the processing and interpretation of the vast amounts of data generated by hyperspectral and LiDAR technologies. Developing more efficient data processing algorithms and machine learning models could further enhance the accuracy and speed of vegetation mapping. Additionally, while our method demonstrates high accuracy in temperate mixed forests, its application in other ecosystems, such as tropical rainforests or arid landscapes, may require further adaptation and testing.
A critical structural limitation encountered in complex forest environments is crown overlap, especially in multi-layered canopies typical of subtropical and tropical forests. The entanglement of overlapping crowns often results in ambiguous boundaries, leading to segmentation errors such as crown splitting or merging. These issues are exacerbated when co-occurring species share similar spectral features, increasing the likelihood of misclassification [44,45,46]. While our study applied a combination of pit-free DCHM and curvature-based segmentation to mitigate such challenges, recognizing its advancements over conventional methods like watershed algorithms [47], it is not immune to centroid displacement and dominance assignment errors, particularly in densely stratified forest structures. Recent advancements in individual tree crown delineation, particularly with deep learning models, have shown significant promise in overcoming these longstanding issues by accurately handling severe crown overlap and improving boundary delineation, often outperforming traditional algorithms [48]. Our method introduces a hybrid framework that integrates pit-free and peak-free CHM preprocessing with curvature-based boundary detection and Voronoi tessellation. This approach addresses major limitations of standard techniques, namely crown overlap and CHM distortion, by leveraging structural normalization and spatially consistent segmentation based on Thiessen polygons. Furthermore, the curvature model enables adaptive boundary refinement using topographic continuity, improving delineation accuracy for irregular or asymmetric crowns. Although computationally more intensive, this approach enhances robustness in structurally complex forests and provides species-level precision essential for ecological applications.
Platform-specific constraints also pose challenges. In low-altitude UAV applications, although spatial resolution improves, limitations such as restricted sensor payloads, narrow coverage areas, and increased shadow interference due to oblique illumination angles may arise. These factors can exacerbate spectral distortions and reduce classification accuracy, especially in dense forest environments [49]. To apply our workflow effectively in UAV-based contexts, further adaptations in sensor configuration, flight planning, and crown modeling algorithms would be necessary.
Given these structural and platform-related constraints, our methodology is best suited for temperate mixed forests where canopy density and stratification are moderate. Applications in ecosystems with more extreme canopy complexity or spectral ambiguity should be accompanied by rigorous calibration and validation. Integration of additional structural indices or multi-seasonal observations may further enhance performance across diverse ecological contexts. While multi-seasonal image acquisition is known to improve species discrimination, a mid-autumn schedule was prioritized in this study to maximize interspecific spectral contrast driven by senescence and canopy thinning. This timing was deemed optimal for our primary objective of structural–compositional integration in temperate mixed forests. Future studies may benefit from incorporating complementary seasonal acquisitions to capture species with more subtle phenological spectral variation. Additionally, the use of generalized growth estimates, while necessitated by data limitations, may introduce uncertainty in biomass estimation and forest structural inference, particularly in species-rich stands where allometric variation is significant.
To further refine our method and enhance its adaptability, future research should prioritize the integration of next-generation remote sensing technologies, such as advanced UAV-mounted HSI-LiDAR fusion systems, which have shown promise in highly detailed phenotyping and genetic trait differentiation [50]. Additionally, terrestrial laser scanning (TLS), capable of continuous 3D structural and phenological monitoring [51] and advanced wood-leaf classification [52], and full-waveform LiDAR, offering rich waveform returns for improved feature extraction and high-accuracy filtering of complex forest scenes [53], or dual-season HSIs are crucial [16]. These advanced platforms can resolve vertical canopy structure in multi-layered forests and enable finer-scale, high-frequency monitoring in otherwise inaccessible or rapidly changing environments. Moreover, consistent and ecosystem-specific ground-truthing efforts remain indispensable for validating model outputs and ensuring cross-regional applicability. These efforts will help address structural and spectral challenges identified in this study, while also enhancing the ecological relevance and policy utility of remote sensing-based vegetation mapping.

4.3. Policy Applications and Strategic Relevance

This study presents a significant advancement in ecological remote sensing by introducing a replicable, high-resolution mapping framework that fuses HSI and LiDAR data to capture structural and compositional detail at a species-relevant scale. Beyond biodiversity monitoring, the method supports national carbon accounting and ecosystem service valuation. These outcomes are particularly relevant to the implementation of international frameworks such as the System of Environmental-Economic Accounting—Ecosystem Accounting (SEEA EA) and the Kunming-Montreal Global Biodiversity Framework (GBF 2030), both of which emphasize the use of spatially explicit data on ecosystem condition and carbon stocks. Notably, our framework aligns with contemporary efforts in geospatial monitoring of forest carbon fluxes, contributing valuable fine-scale data for national greenhouse gas inventories and supporting IPCC guidelines, although achieving precise annual biomass estimates still presents a challenge [54,55]. It is important to note that the effectiveness of our approach was validated under temperate mixed forest conditions, where canopy structure and spectral variability remained within a manageable range. Application to structurally or phenologically extreme ecosystems may yield different performance outcomes.

In the Korean context, national efforts related to carbon sink estimation, forest ecological inventories, and restoration planning may benefit from the fine-resolution structural and compositional data provided by our method. Furthermore, the findings have meaningful implications for domestic land management and climate policy. High-resolution vegetation maps can support land-use planning by helping policymakers balance development with conservation objectives. For instance, in urban and peri-urban settings, our maps may guide the siting of green infrastructure to enhance biodiversity and human well-being. Given its ability to generate spatially continuous, fine-scale vegetation data in topographically complex mid-latitude forests, where conventional surveys face operational challenges, our framework is well positioned as a valuable input to spatial planning and climate vulnerability assessments.

By offering spatially detailed, reproducible, and ecologically grounded insights, our method supports decision-making at both national and local levels in alignment with sustainability and climate resilience goals. In particular, the vegetation structural data derived from this approach can inform flood mitigation strategies and catchment-scale ecosystem management, especially in landscapes where forest structure strongly influences hydrological dynamics. When integrated with climate models, these datasets may yield valuable insights into the ecological impacts of climate change and guide adaptive forest management. Furthermore, they can support national reporting on forest carbon stocks and sequestration targets under international climate agreements such as the United Nations Framework Convention on Climate Change (UNFCCC) and the Paris Agreement. This integration facilitates the design of localized, nature-based solutions for climate adaptation, particularly within the scope of national greenhouse gas reduction commitments and ecosystem-based resilience planning.

5. Conclusions

In this study, we introduced a high-resolution vegetation mapping framework that integrates HSI with 3D LiDAR technology, offering a replicable and scalable method for capturing structural and compositional vegetation patterns at a species-relevant scale. This approach addresses several longstanding limitations of traditional field surveys by enabling objective, efficient, and spatially continuous ecological assessments in complex forest ecosystems, particularly in mid-latitude mixed forests.
Our results demonstrate high classification accuracy at both species and community levels, underscoring the value of combining spectral richness with structural precision. This methodological advancement contributes not only to biodiversity monitoring and forest ecological studies but also to broader agendas such as national carbon accounting, ecosystem service valuation, and spatial planning under climate change. These findings underscore the utility of multi-sensor vegetation data in informing adaptive land management, climate resilience strategies, and cross-sectoral environmental governance.
Despite these achievements, challenges remain. The extensive volume and complexity of multi-sensor data require continued development of efficient preprocessing pipelines and machine learning algorithms to ensure wider applicability. Furthermore, while validated under temperate conditions, the transferability of this method to structurally extreme or phenologically variable ecosystems—such as tropical, arid, or highly urbanized landscapes—warrants further investigation and calibration.
Looking ahead, integrating this framework with next-generation platforms such as UAV-based HSI-LiDAR fusion, full-waveform LiDAR, and ecosystem-specific ground truthing will enhance monitoring capacity across spatial and temporal scales. The potential to link such detailed vegetation datasets with climate models and ecological forecasting tools opens new possibilities for predictive biodiversity assessment and dynamic conservation planning.
Ultimately, this study represents more than a technical contribution, as it provides a foundation for more informed environmental decision-making in an era of rapid ecological transformation. As we refine and expand this approach, it holds promise not only for advancing ecological research but also for supporting sustainability-focused governance and responsible stewardship of forest ecosystems.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/f16071158/s1, Figure S1: Hyperspectral image: R79, G52 B38; Figure S2: Enlarged example of tree points and crown; Figure S3: Vegetation community map; Figure S4: Forest type map; Document S1: Hyper-LiDAR modelling procedure.

Author Contributions

Conceptualization, N.S.K.; methodology, N.S.K. and C.H.L.; software, N.S.K.; validation, C.H.L.; investigation, C.H.L.; resources, N.S.K.; data curation, C.H.L.; writing—original draft preparation, N.S.K.; writing—review and editing, N.S.K. and C.H.L.; visualization, N.S.K. and C.H.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Institute of Ecology (NIE), Republic of Korea, under the research project titled “Development of Policy Decision Support System Based on Ecosystem Services Assessment” (Project No. NIE-B-2025-03).

Data Availability Statement

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Wang, B.; Liu, J.; Li, J.; Li, M. UAV LiDAR and Hyperspectral Data Synergy for Tree Species Classification in the Maoershan Forest Farm Region. Remote Sens. 2023, 15, 1000. [Google Scholar] [CrossRef]
  2. Wang, A.; Shi, S.; Yang, J.; Luo, Y.; Tang, X.; Du, J.; Bi, S.; Qu, F.; Gong, C.; Gong, W. Integration of LiDAR and Hyperspectral Imagery for Tree Species Identification at the Individual Tree Level. Photogramm. Rec. 2025, 40, e70007. [Google Scholar] [CrossRef]
  3. Lee, A.C.; Lucas, R.M. A LiDAR-derived canopy density model for tree stem and crown mapping in Australian forests. Remote Sens. Environ. 2007, 111, 493–518. [Google Scholar] [CrossRef]
  4. Kattenborn, T.; Eichel, J.; Wiser, S.; Burrows, L.; Fassnacht, F.E.; Schmidtlein, S. Convolutional Neural Networks accurately predict cover fractions of plant species and communities in Unmanned Aerial Vehicle imagery. Remote Sens. Ecol. Conserv. 2020, 6, 472–486. [Google Scholar] [CrossRef]
  5. Shu, X.; Ma, L.; Chang, F. Integrating hyperspectral images and LiDAR data using vision transformers for enhanced vegetation classification. Forests 2025, 16, 620. [Google Scholar] [CrossRef]
  6. Zhang, H.; Liu, B.; Yang, B.; Guo, J.; Hu, Z.; Zhang, M.; Yang, Z.; Zhang, J. Efficient tree species classification using machine and deep learning algorithms based on UAV-LiDAR data in North China. Front. For. Glob. Change 2025, 8, 1431603. [Google Scholar] [CrossRef]
  7. Havrilla, C.A.; Villarreal, M.L.; DiBiase, J.K.; Duniway, M.C.; Barger, N.N. Ultra-high-resolution mapping of biocrusts with unmanned aerial systems. Remote Sens. Ecol. Conserv. 2020, 6, 441–456. [Google Scholar] [CrossRef]
  8. Ehbrecht, M.; Seidel, D.; Annighöfer, P.; Kreft, H.; Köhler, M.; Zemp, D.C.; Puettmann, K.; Nilus, R.; Babweteera, F.; Willim, K.; et al. Global patterns and climatic controls of forest structural complexity. Nat. Commun. 2021, 12, 519. [Google Scholar] [CrossRef]
  9. Chen, C.; Wang, Y.; Li, Y.; Yue, T.; Wang, X. Robust and parameter-free algorithm for constructing pit-free canopy height models. Int. J. Geogr. Inf. 2017, 6, 219. [Google Scholar] [CrossRef]
  10. Hakkenberg, C.R.; Zhu, K.; Peet, P.K.; Song, C. Mapping multi-scale vascular plant richness in a forest landscape with integrated LiDAR and hyperspectral remote-sensing. Ecology 2018, 99, 474–487. [Google Scholar] [CrossRef]
  11. Sankey, T.T.; McVay, J.; Swetnam, T.L.; McClaran, M.P.; Heilman, P.; Nichols, M.; Pettorelli, N.; Horning, N. UAV hyperspectral and LiDAR data and their fusion for arid and semi-arid land vegetation monitoring. Remote Sens. Ecol. Conserv. 2018, 1, 20–33. [Google Scholar] [CrossRef]
  12. Lopatin, J.; Dolos, K.; Kattenborn, T.; Fassnacht, F.E. How canopy shadow affects invasive plant species classification in high spatial resolution remote sensing. Remote Sens. Ecol. Conserv. 2019, 5, 302–317. [Google Scholar] [CrossRef]
  13. Klehr, D.; Stoffels, J.; Hill, A.; Pham, V.-D.; van der Linden, S.; Frantz, D. Mapping tree species fractions in temperate mixed forests using Sentinel-2 time series and synthetically mixed training data. Remote Sens. Environ. 2025, 323, 114740. [Google Scholar] [CrossRef]
  14. Zhao, Y.; Zeng, Y.; Zheng, Z.; Dong, W.; Zhao, D.; Wu, B.; Zhao, Q. Forest species diversity mapping using airborne LiDAR and hyperspectral data in a subtropical forest in China. Remote Sens. Environ. 2018, 213, 104–114. [Google Scholar] [CrossRef]
  15. Yun, T.; Jiang, K.; Li, G.; Eichhorn, M.P.; Fan, J.; Liu, F.; Chen, B.; An, F.; Cao, L. Individual tree crown segmentation from airborne LiDAR data using a novel Gaussian filter and energy function minimization-based approach. Remote Sens. Environ. 2021, 256, 112307. [Google Scholar] [CrossRef]
  16. Man, Q.; Dong, P.; Zhang, B.; Liu, H.; Yang, X.; Wu, J.; Liu, C.; Han, C.; Zhou, C.; Tan, Z. Precise identification of individual tree species in urban areas with high canopy density by multi-sensor UAV data in two seasons. Int. J. Digit. Earth 2025, 18, 2496804. [Google Scholar] [CrossRef]
  17. Ørka, H.O.; Næsset, E.; Bollandsås, O.M. Classifying species of individual trees by intensity and structure features derived from airborne laser scanner data. Remote Sens. Environ. 2009, 113, 1163–1174. [Google Scholar] [CrossRef]
  18. La, H.P.; Eo, Y.D.; Chang, A.; Kim, C. Extraction of individual tree crown using hyperspectral image and LiDAR data. KSCE J. Civ. Eng. 2015, 19, 1078–1087. [Google Scholar] [CrossRef]
  19. Ballanti, L.; Blesius, L.; Hines, E.; Kruse, B. Tree species classification using hyperspectral imagery: A comparison of two classifiers. Remote Sens. 2016, 8, 445. [Google Scholar] [CrossRef]
Figure 1. Conceptual framework of the integrated LiDAR and hyperspectral analysis for vegetation mapping. The schematic outlines how airborne LiDAR and hyperspectral data are combined across the overall mapping workflow.
Figure 2. Detailed analytical workflow for vegetation mapping, incorporating the Digital Canopy Height Model (DCHM) and Spectral Angle Mapper (SAM). The block diagram illustrates the overall analytical workflow for vegetation mapping in temperate forests, integrating LiDAR and hyperspectral data through key steps such as DCHM generation and SAM-based classification.
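Because the workflow hinges on SAM-based classification, a brief illustration may be useful: SAM scores the similarity of a pixel spectrum to a reference (endmember) spectrum by the angle between them, with smaller angles indicating a closer match. The minimal NumPy sketch below shows only that computation; the toy spectra, band count, and nearest-endmember assignment rule are illustrative and are not the study's settings or a particular software implementation.

```python
import numpy as np

def spectral_angles(pixels: np.ndarray, endmembers: np.ndarray) -> np.ndarray:
    """Spectral angle (radians) between each pixel spectrum and each endmember.

    pixels:     (n_pixels, n_bands) reflectance spectra
    endmembers: (n_classes, n_bands) reference spectra
    returns:    (n_pixels, n_classes) angles; smaller means more similar
    """
    p = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)
    e = endmembers / np.linalg.norm(endmembers, axis=1, keepdims=True)
    cos = np.clip(p @ e.T, -1.0, 1.0)   # cosine of the angle between spectra
    return np.arccos(cos)

# Toy example: 3 bands, 2 hypothetical reference classes
pixels = np.array([[0.10, 0.30, 0.60],
                   [0.50, 0.40, 0.10]])
endmembers = np.array([[0.12, 0.28, 0.58],   # e.g., a broadleaf reference
                       [0.55, 0.35, 0.12]])  # e.g., a conifer reference
angles = spectral_angles(pixels, endmembers)
labels = angles.argmin(axis=1)               # assign each pixel to the nearest endmember
print(labels)                                # -> [0 1]
```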
Figure 3. Map of the study site.
Figure 4. Digital Canopy Height Model (DCHM) pit and peak elimination process. The figure illustrates the methodology for identifying and removing erroneous pit and peak artifacts from the DCHM, which typically arise in LiDAR-derived surfaces.
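The caption does not spell out the exact filtering rule used for pit and peak removal, so the sketch below stands in with a common, simple approach: flag cells that deviate from the local median height by more than a tolerance and replace them with that median. The window size and tolerance are assumptions for illustration, not the study's parameters.

```python
import numpy as np
from scipy.ndimage import median_filter

def remove_pits_and_peaks(chm: np.ndarray, window: int = 3, tol: float = 2.0) -> np.ndarray:
    """Replace cells that deviate from the local median height by more than `tol` metres.

    A simple neighbourhood filter standing in for a pit/peak elimination step;
    `window` and `tol` are illustrative parameters only.
    """
    local_med = median_filter(chm, size=window)
    outliers = np.abs(chm - local_med) > tol   # pits (too low) and peaks (too high)
    return np.where(outliers, local_med, chm)

# Toy CHM (metres) with one pit (0.5) and one spike (40.0)
chm = np.array([[12.0, 12.5, 13.0],
                [12.2,  0.5, 12.8],
                [11.9, 40.0, 12.6]])
print(remove_pits_and_peaks(chm))
```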
Figure 5. Method for correcting biased tree positions and abnormal crown geometries. The figure demonstrates the process of correcting abnormal apex detection and crown delineation in uncompensated CHMs. Triangles represent tree apex points. In the top and middle panels, circles indicate conceptual errors in crown geometry or apex placement (e.g., misplaced or duplicated tree points); in the bottom-left panel, red circles highlight actual crown or tree-point errors detected in the segmentation result.
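One routine way to suppress the duplicated or misplaced apex points described here is to smooth the CHM slightly and keep only cells that are local maxima above a minimum canopy height; the sketch below follows that generic recipe rather than the paper's exact correction procedure. The smoothing strength, window size, and height threshold are placeholders.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def detect_tree_apexes(chm: np.ndarray, min_height: float = 2.0,
                       window: int = 3, sigma: float = 1.0) -> np.ndarray:
    """Return (row, col) apex candidates as local maxima of a smoothed CHM.

    Smoothing suppresses duplicated apexes on a single crown; `min_height`,
    `window`, and `sigma` are illustrative values, not the study's settings.
    """
    smoothed = gaussian_filter(chm, sigma=sigma)
    is_peak = (smoothed == maximum_filter(smoothed, size=window)) & (smoothed >= min_height)
    return np.argwhere(is_peak)

# Toy CHM with two adjacent high cells that would otherwise yield duplicate apexes
chm = np.array([[0.0, 1.0, 0.5, 0.2],
                [1.0, 8.0, 7.5, 0.3],
                [0.8, 7.9, 9.0, 1.0],
                [0.2, 0.5, 1.0, 0.4]])
print(detect_tree_apexes(chm))
```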
Figure 6. Individual tree crown delineation using Thiessen polygons and curvature points. The figure illustrates how Thiessen polygons, combined with identified curvature points from the Digital Canopy Height Model (DCHM), are utilized to precisely delineate individual tree crowns for accurate forest inventory and mapping.
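The Thiessen-polygon assignment itself is straightforward to emulate in raster form: each canopy cell is attributed to its nearest apex point, which yields the same partition as the vector Thiessen polygons. The sketch below shows only this nearest-apex step and omits the curvature-point trimming the figure describes; the apex coordinates and height threshold are hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree

def thiessen_crowns(chm: np.ndarray, apexes: np.ndarray, min_height: float = 2.0) -> np.ndarray:
    """Label every canopy cell with the index of its nearest apex (Thiessen rule).

    Cells below `min_height` are left unlabelled (-1). This raster version of the
    Thiessen-polygon assignment omits crown trimming by curvature points.
    """
    rows, cols = np.indices(chm.shape)
    cells = np.column_stack([rows.ravel(), cols.ravel()])
    _, nearest = cKDTree(apexes).query(cells)      # nearest apex for each cell
    labels = nearest.reshape(chm.shape).astype(int)
    labels[chm < min_height] = -1                  # non-canopy background
    return labels

# Toy CHM and two hypothetical apex points given as (row, col)
chm = np.array([[0.5, 6.0, 7.0, 0.8],
                [5.5, 9.0, 8.0, 6.5],
                [6.0, 7.5, 8.5, 9.5],
                [0.4, 6.0, 7.0, 1.0]])
apexes = np.array([[1, 1], [2, 3]])
print(thiessen_crowns(chm, apexes))
```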
Figure 7. Consistency verification of classified forest communities and field-validated tree locations. The figure illustrates the validation results, showing the spatial consistency of 36 community locations classified by supervised methods (left panel) and the agreement with 90 individual tree points surveyed at random field sites, where mismatches are specifically indicated by squares (right panel).
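Conceptually, this consistency check is a point-in-polygon comparison: each field point is located within a mapped community polygon and the species observed on site is compared with the polygon's label. The toy sketch below illustrates the idea with hypothetical polygons and points; it is not the GIS workflow used in the study.

```python
from shapely.geometry import Point, Polygon

# Hypothetical mapped community polygons with their predicted dominant species
mapped = [
    (Polygon([(0, 0), (10, 0), (10, 10), (0, 10)]), "Quercus acutissima"),
    (Polygon([(10, 0), (20, 0), (20, 10), (10, 10)]), "Pinus rigida"),
]

# Hypothetical field validation points: (x, y, species observed on site)
field_points = [(3, 4, "Quercus acutissima"),
                (15, 5, "Pinus rigida"),
                (12, 2, "Larix kaempferi")]

def consistency_rate(mapped, field_points):
    """Share of field points whose observed species matches the enclosing polygon's label."""
    hits = 0
    for x, y, observed in field_points:
        pt = Point(x, y)
        predicted = next((label for poly, label in mapped if poly.contains(pt)), None)
        hits += (predicted == observed)
    return 100 * hits / len(field_points)

print(round(consistency_rate(mapped, field_points), 1))  # -> 66.7 for this toy data
```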
Figure 8. Validation results for vegetation community and forest type classifications. The figure presents the validation results for vegetation community classification (left panel) and forest type classification (right panel), where black dots represent field verification points and red squares indicate mismatched classifications.
Table 1. Supervised classification of the eight primary species used in the study.
Supervised Code | Scientific Name
1, 4, 6, 8, 10, 11, 14, 15, 16, 17, 19 | Quercus acutissima
3 | Castanea crenata
2, 9 | Pinus rigida
5 | Prunus sargentii
7, 12 | Larix kaempferi
13 | Pinus koraiensis
18 | Platanus occidentalis
Table 2. Cross-validation results for species classification using airborne LiDAR and hyperspectral imaging data sets. This table illustrates the results of cross-validation for species classification accuracy within airborne LiDAR and hyperspectral imaging data sets. It assesses the consistency of species information codes extracted using Spectral Angle Mapper analysis and supervised classification.
Scientific Name | Number of Individuals | Accuracy (%)
Castanea crenata | 1 | 100
Larix kaempferi | 4 | 100
Pinus densiflora | 3 | 100
Pinus koraiensis | 2 | 100
Pinus rigida | 7 | 100
Platanus occidentalis | 1 | 100
Prunus sargentii | 1 | 100
Quercus acutissima | 17 | 100
Table 3. Accuracy and misidentification analysis for randomly selected trees using on-site validation. This table displays the results of accuracy and misidentification analysis for 90 randomly selected trees, verified through on-site data collection points. It lists each tree species with the number of individuals surveyed, errors observed, accuracy percentages, and details of any species misidentified.
Scientific Name | Total Trees | Mapped Accuracy (%) | Errors | Misidentified Species
Castanea crenata | 2 | 100 | - | -
Larix kaempferi | 11 | 100 | - | -
Pinus densiflora | 12 | 92 | 1 | Platanus occidentalis
Pinus koraiensis | 6 | 100 | - | -
Pinus rigida | 22 | 95 | 1 | Pinus koraiensis
Platanus occidentalis | 1 | 100 | - | -
Prunus sargentii | 1 | 100 | - | -
Quercus acutissima | 35 | 100 | - | -
Total/Average | 90 | 98.4 (average) | 2 | -
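The 98.4% figure in Table 3 appears to be the unweighted mean of the eight per-species accuracies, while the point-wise rate over all 90 trees (88 correct) is 97.8%, matching the second verification reported in Table 4. The short sketch below reproduces both numbers from the table's counts.

```python
# Per-species counts from Table 3: species -> (total trees, errors)
counts = {
    "Castanea crenata": (2, 0), "Larix kaempferi": (11, 0),
    "Pinus densiflora": (12, 1), "Pinus koraiensis": (6, 0),
    "Pinus rigida": (22, 1), "Platanus occidentalis": (1, 0),
    "Prunus sargentii": (1, 0), "Quercus acutissima": (35, 0),
}

per_species = {sp: 100 * (n - e) / n for sp, (n, e) in counts.items()}
unweighted_mean = sum(per_species.values()) / len(per_species)  # ~98.4 (Table 3)
overall = 100 * sum(n - e for n, e in counts.values()) / sum(n for n, _ in counts.values())  # ~97.8 (Table 4)
print(round(unweighted_mean, 1), round(overall, 1))
```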
Table 4. Comprehensive summary of hyperspectral imaging and LiDAR mapping results. This table provides a comprehensive summary of the key results obtained from hyperspectral imaging and LiDAR data mapping, detailing the total counts and verification percentages for various components such as tree points, crown points, species, and verifications of communities and forest types.
Contents | Total Count | Verification (%) | Notes
Tree point | 560,339 | - | -
Crown | 560,339 | - | -
Species | 8 | - | Castanea crenata, Larix kaempferi, etc.
1st Verification | - | 100 | No errors in 90 points
2nd Verification | - | 97.8 | 2 errors in 90 points
Boundary polygons | 3199 | - | -
Communities | 55 | - | Groups in 3199 forest patches
Forest types | 8 | - | Groups in 3199 forest patches
Verification of communities | - | 93.1 | 12 inconsistencies in 174 points
Verification of forest types | - | 97.7 | 4 inconsistencies in 174 points
Table 5. Discrepancy analysis of field validation points in vegetation mapping. This table details the 12 points (out of 174) at which the LiDAR-derived vegetation community map did not fully match field observations. C.M.: complete mismatch; C.I.: correct identification of the dominant species despite the community-level mismatch.
ID | Hyperspectral LiDAR-Derived Community | Field Survey Observation | Community Type | Dominant Species Accuracy | Notes
1 | Quercus acutissima–Pinus rigida | Pinus densiflora, Castanea crenata | Mixed | Incorrect | C.M.
2 | Quercus acutissima–Pinus rigida | Robinia pseudoacacia, Quercus acutissima | Mixed | Correct | C.I.
3 | Quercus acutissima–Pinus rigida | Quercus acutissima, Castanea crenata | Mixed | Correct | C.I.
4 | Quercus acutissima–Pinus rigida | Quercus acutissima, Castanea crenata | Mixed | Correct | C.I.
5 | Quercus acutissima–Pinus rigida | Metasequoia glyptostroboides, Castanea crenata | Mixed | Incorrect | C.M.
6 | Platanus occidentalis–Quercus acutissima | Pinus rigida | Single | Incorrect | C.M.
7 | Quercus acutissima–Prunus sargentii | Castanea crenata | Single | Incorrect | C.M.
8 | Quercus acutissima–Larix kaempferi | Castanea crenata | Single | Incorrect | C.M.
9 | Quercus acutissima | Castanea crenata, Pinus densiflora | Mixed | Incorrect | C.M.
10 | Quercus acutissima–Larix kaempferi | Quercus acutissima, Quercus serrata | Mixed | Correct | C.I.
11 | Quercus acutissima–Platanus occidentalis | Larix kaempferi | Single | Incorrect | C.M.
12 | Quercus acutissima–Pinus rigida | Quercus acutissima, Castanea crenata | Mixed | Correct | C.I.
Table 6. Field consistency rate for forest types (97.7%; 4 inconsistent points out of 174).
ID | Hyperspectral LiDAR-Derived Community | Field Survey Observation
1 | Platanus occidentalis | Pinus rigida
2 | Quercus acutissima | Castanea crenata
3 | Quercus acutissima | Pinus densiflora
4 | Quercus acutissima | Larix kaempferi
Table 7. Sample attributes for vegetation communities, forest types, and statistical summary. The table provides a consolidated view of vegetation communities and forest types with associated statistical data. Abbreviations are used for brevity: Community (Comm.), Forest Type (FT.), Area (m²) for the combined area and canopy area, Ht./Age (m/yrs) for the average height and age, Dens./Diam. for density percentage and diameter class, and Ind./Sp. Num. for the individual and species number.
Comm. | FT. | Area (Canopy) (m²) | Ht./Age (m/yrs) | Dens. (%)/Diam. (cm) | Ind./Sp. (count)
Quercus acutissima–Pinus rigida | Quercus acutissima | 318 (212) | 5.5/31.2 | 66.7/17.5 | 10/2
Quercus acutissima | Quercus acutissima | 235 (156.6) | 11.5/30.6 | 66.5/17.2 | 8/2
Quercus acutissima–Pinus rigida | Quercus acutissima | 22,208 (15,342.9) | 10.1/34.5 | 69.1/19.4 | 646/8
Quercus acutissima | Quercus acutissima | 88 (51.9) | 8.7/32.3 | 59.2/18.2 | 3/1
Quercus acutissima–Pinus densiflora | Quercus acutissima | 11,588 (7049) | 13.6/35.9 | 60.8/20.3 | 304/8
Pinus rigida–Quercus acutissima | Pinus rigida | 91 (54) | 2.2/33 | 59.4/18.7 | 2/2
Quercus acutissima–Pinus rigida | Quercus acutissima | 67,200 (38,249.4) | 17.2/36.8 | 56.9/20.7 | 1432/8
Quercus acutissima–Pinus rigida | Quercus acutissima | 847 (357.3) | 9.4/34.5 | 42.2/19.5 | 17/5
Quercus acutissima–Larix kaempferi | Quercus acutissima | 51,119 (37,710.5) | 16.7/36.2 | 73.8/20.4 | 1477/8
Quercus acutissima–Pinus rigida | Quercus acutissima | 13,071 (10,220.5) | 19.1/37.1 | 78.2/20.9 | 391/8
Quercus acutissima–Pinus densiflora | Quercus acutissima | 28,469 (21,396.7) | 17.1/38.3 | 75.2/21.6 | 836/8
Quercus acutissima–Platanus occidentalis | Quercus acutissima | 177 (71.7) | 16.4/38.5 | 40.6/21.6 | 2/2
Pinus koraiensis–Larix kaempferi | Pinus koraiensis | 154 (112.3) | 13.1/45.3 | 73.1/25.5 | 3/3
Prunus sargentii | Prunus sargentii | 80 (12.6) | 5.6/27 | 15.8/15.3 | 1/1
Quercus acutissima–Prunus sargentii | Quercus acutissima | 197 (109.1) | 6.5/28.3 | 55.2/15.9 | 7/3
Quercus acutissima–Prunus sargentii | Quercus acutissima | 2034 (887.7) | 3.7/25.8 | 43.6/14.5 | 81/5
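Table 7's per-community attributes can be summarized from individual-tree records in a few lines: canopy density as the ratio of summed crown area to community area (e.g., 212/318 ≈ 66.7% in the first row), and stand age approximated from diameter via a species growth factor, a common rule of thumb. The sketch below uses hypothetical tree records and a placeholder growth factor; it is not the study's attribution procedure.

```python
import statistics

def stand_attributes(heights_m, dbh_cm, crown_area_m2, stand_area_m2, growth_factor=0.8):
    """Summarise per-stand attributes of the kind listed in Table 7.

    `growth_factor` (years per cm of DBH) is a placeholder; real values are
    species-specific and are not taken from the paper.
    """
    ages = [d * growth_factor for d in dbh_cm]   # age ~ DBH x growth factor (rule of thumb)
    return {
        "mean_height_m": round(statistics.mean(heights_m), 1),
        "mean_age_yr": round(statistics.mean(ages), 1),
        "mean_dbh_cm": round(statistics.mean(dbh_cm), 1),
        "canopy_density_pct": round(100 * sum(crown_area_m2) / stand_area_m2, 1),
        "n_individuals": len(heights_m),
    }

# Hypothetical stand with three trees on a 318 m^2 patch
print(stand_attributes(heights_m=[5.2, 5.8, 5.5],
                       dbh_cm=[17.0, 18.0, 17.5],
                       crown_area_m2=[70.0, 72.0, 70.0],
                       stand_area_m2=318.0))
```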