Digital Modeling for Sustainable Forest Management

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Forest Remote Sensing".

Deadline for manuscript submissions: 15 September 2025

Special Issue Editors


Guest Editor
1. Faculty of Forestry, Artvin Coruh University, Artvin 08100, Turkey
2. Warnell School of Forestry and Natural Resources, The University of Georgia, Athens, GA, USA
Interests: forest management planning; mobile laser scanning; forest inventory; ecosystem services

Guest Editor
College of Forestry, Wildlife and Environment, Auburn University, Auburn, AL 36849, USA
Interests: airborne/spaceborne LiDAR; machine learning; remote sensing of vegetation; geospatial analysis

Guest Editor
Department of Geography, University College London, London WC1E 6BT, UK
Interests: forest ecosystem monitoring; proximal remote sensing; terrestrial/mobile LiDAR; photogrammetry; UAV

Special Issue Information

Dear Colleagues,

Sustainable forest management, as defined by Forest Europe, refers to “the stewardship and use of forests and forest lands in a way, and at a rate, that maintains their biodiversity, productivity, regeneration capacity, vitality and their potential to fulfil, now and in the future, relevant ecological, economic and social functions, at local, national, and global levels, and that does not cause damage to other ecosystems”. Achieving this goal necessitates accurate and timely information on forest resources to assess the sustainability of ecosystems and monitor progress over long planning horizons.

In recent decades, advancements in remote sensing technologies (e.g., spaceborne LiDAR, drones), computing power (e.g., supercomputing, cloud computing), and modeling techniques (e.g., quantitative structure models, digital twins, machine and deep learning) have significantly enhanced the ability of resource managers and researchers to measure and model various aspects of forest sustainability. These advances open the door to new approaches to forest management, such as precision forestry and climate-smart forestry. As a result, acquiring, processing, and analyzing remotely sensed data have become indispensable, with digital modeling emerging as a critical tool for characterizing large forestlands, often in a spatially explicit manner.

This Special Issue aims to explore the latest developments and applications of digital modeling in forestry, leveraging cutting-edge remote sensing tools and geospatial technologies to promote the sustainable management of forests. Since improved forest management often supports the sustainable provision of ecosystem goods and services, research focusing on characterizing and mapping forest ecosystem services is also encouraged.

We invite submissions employing a wide range of remote sensing sources (e.g., optical sensors, laser scanning, radar) and platforms, including proximal (e.g., static/mobile scanners, multi-camera systems, depth sensors, drones), airborne (e.g., airborne laser scanning and imagery), and spaceborne systems (e.g., GEDI, ICESat-2, ALOS, Landsat, and the Sentinel satellites). Studies integrating traditional field-measured data into 3D models through geographic information systems (GIS) are also welcome. To address global challenges in sustainable forest management, we particularly encourage research demonstrating how digital modeling can bridge the gap between remote sensing innovations and operational forestry practices across diverse regions worldwide.

Possible topics include, but are not limited to, the following:

  • Two- and three-dimensional forest ecosystem mapping and monitoring;
  • Virtual or augmented reality applications for forestry;
  • Characterization and modeling of forest structure or ecosystem services;
  • Forest cover mapping and/or automated stand delineation;
  • Digital twin frameworks for forest management planning;
  • Forest mensuration and operational forest inventories;
  • Development and/or implementation of forest simulators with visualization tools;
  • 3D modeling of tree components, single trees, forest stands, or forested landscapes.

Dr. Can Vatandaslar
Dr. Lana L. Narine
Dr. Martin Mokroš
Prof. Dr. Marguerite Madden
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • geospatial modeling
  • precision forestry
  • enhanced forest inventory
  • digital forestry
  • remote sensing of forests
  • image classification
  • climate-smart forestry

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad-scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (3 papers)


Research

27 pages, 4947 KiB  
Article
From Coarse to Crisp: Enhancing Tree Species Maps with Deep Learning and Satellite Imagery
by Taebin Choe, Seungpyo Jeon, Byeongcheol Kim and Seonyoung Park
Remote Sens. 2025, 17(13), 2222; https://doi.org/10.3390/rs17132222 - 28 Jun 2025
Abstract
Accurate, detailed, and up-to-date tree species distribution information is essential for effective forest management and environmental research. However, existing tree species maps face limitations in resolution and update cycle, making it difficult to meet modern demands. To overcome these limitations, this study proposes a novel framework that utilizes existing medium-resolution national tree species maps as ‘weak labels’ and fuses multi-temporal Sentinel-2 and PlanetScope satellite imagery. Specifically, a super-resolution (SR) technique, using PlanetScope imagery as a reference, was first applied to Sentinel-2 data to enhance its resolution to 2.5 m. Then, these enhanced Sentinel-2 bands were combined with PlanetScope bands to construct the final multi-spectral, multi-temporal input data. Deep learning (DL) model training data was constructed by strategically sampling information-rich pixels from the national tree species map. Applying the proposed methodology to Sobaeksan and Jirisan National Parks in South Korea, the performance of various machine learning (ML) and DL models was compared, including traditional ML (linear regression, random forest) and DL architectures (multilayer perceptron (MLP), spectral encoder block (SEB)-linear, and SEB-transformer). The MLP model demonstrated optimal performance, achieving over 85% overall accuracy (OA) and more than 81% accuracy in classifying spectrally similar and difficult-to-distinguish species, specifically Quercus mongolica (QM) and Quercus variabilis (QV). Furthermore, while spectral and temporal information were confirmed to contribute significantly to tree species classification, the contribution of spatial (texture) information was experimentally found to be limited at the 2.5 m resolution level. This study presents a practical method for creating high-resolution tree species maps scalable to the national level by fusing existing tree species maps with Sentinel-2 and PlanetScope imagery without requiring costly separate field surveys. Its significance lies in establishing a foundation that can contribute to various fields such as forest resource management, biodiversity conservation, and climate change research.
(This article belongs to the Special Issue Digital Modeling for Sustainable Forest Management)
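A minimal sketch, assuming a PyTorch setting, of the weak-label idea summarized above: a per-pixel multilayer perceptron is trained on stacked multi-temporal, multi-spectral features, with class labels sampled from an existing coarser species map. This is not the authors' code; the module name PixelMLP, the feature count, the number of species, and the random placeholder data are illustrative assumptions.

```python
# Hedged sketch of per-pixel weak-label classification (not the paper's implementation).
# Assumptions: features are stacked band values per pixel; labels come from an existing
# medium-resolution species map that serves as a noisy ("weak") reference.
import torch
import torch.nn as nn

class PixelMLP(nn.Module):
    """Maps one pixel's multi-temporal, multi-spectral feature vector to species logits."""
    def __init__(self, n_features: int, n_species: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_species),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def train_step(model, optimizer, features, weak_labels):
    """One optimization step on pixels sampled from the coarse species map."""
    optimizer.zero_grad()
    logits = model(features)                                  # (n_pixels, n_species)
    loss = nn.functional.cross_entropy(logits, weak_labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Hypothetical layout: 4 image dates x 8 bands = 32 features per pixel, 10 species.
model = PixelMLP(n_features=32, n_species=10)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
features = torch.randn(1024, 32)                              # placeholder pixel samples
weak_labels = torch.randint(0, 10, (1024,))                   # placeholder map-derived labels
print(train_step(model, optimizer, features, weak_labels))
```

In the study itself, the per-pixel features would come from the super-resolved Sentinel-2 bands fused with PlanetScope bands, and the labels from information-rich pixels strategically sampled from the national tree species map.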

29 pages, 3799 KiB  
Article
Forest Three-Dimensional Reconstruction Method Based on High-Resolution Remote Sensing Image Using Tree Crown Segmentation and Individual Tree Parameter Extraction Model
by Guangsen Ma, Gang Yang, Hao Lu and Xue Zhang
Remote Sens. 2025, 17(13), 2179; https://doi.org/10.3390/rs17132179 - 25 Jun 2025
Abstract
Efficient and accurate acquisition of tree distribution and three-dimensional geometric information in forest scenes, along with three-dimensional reconstruction of entire forest environments, holds significant application value in precision forestry and forestry digital twins. However, due to complex vegetation structures, fine geometric details, and severe occlusions in forest environments, existing methods, whether vision-based or LiDAR-based, still face challenges such as high data acquisition costs, feature extraction difficulties, and limited reconstruction accuracy. This study focuses on reconstructing tree distribution and extracting key individual tree parameters, and it proposes a forest 3D reconstruction framework based on high-resolution remote sensing images. First, an optimized Mask R-CNN model was employed to segment individual tree crowns and extract distribution information. Then, a Tree Parameter and Reconstruction Network (TPRN) was constructed to directly estimate key structural parameters (height, DBH, etc.) from crown images and generate 3D tree models. Subsequently, the 3D forest scene could be reconstructed by combining the distribution information and the 3D tree models. In addition, to address data scarcity, a hybrid training strategy integrating virtual and real data was proposed for crown segmentation and individual tree parameter estimation. Experimental results demonstrated that the proposed method could reconstruct an entire forest scene within seconds while accurately preserving tree distribution and individual tree attributes. In two real-world plots, the tree counting accuracy exceeded 90%, with an average tree localization error under 0.2 m. The TPRN achieved parameter extraction accuracies of 92.7% and 96% for tree height, and 95.4% and 94.1% for DBH. Furthermore, the generated individual tree models achieved average Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) scores of 11.24 and 0.53, respectively, validating the quality of the reconstruction. This approach enables fast and effective large-scale forest scene reconstruction using only a single remote sensing image as input, demonstrating significant potential for applications in both dynamic forest resource monitoring and forestry-oriented digital twin systems.
(This article belongs to the Special Issue Digital Modeling for Sustainable Forest Management)
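As a rough illustration of the two-stage pipeline summarized above (crown instance segmentation followed by per-crown parameter estimation), the sketch below pairs an off-the-shelf torchvision Mask R-CNN with a toy CNN regressor standing in for the TPRN. The regressor architecture, chip size, and output units are assumptions; the published network and its training data are not reproduced here.

```python
# Hedged sketch only: detect crowns with a generic Mask R-CNN, then regress
# (height, DBH) from each crown chip with a small stand-in network.
import torch
import torch.nn as nn
from torchvision.models.detection import maskrcnn_resnet50_fpn

# 1) Crown segmentation: in practice the COCO-pretrained heads would be fine-tuned
#    for a single "tree crown" class; here the model is only run for inference.
segmenter = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()

class CrownParamRegressor(nn.Module):
    """Toy stand-in for a parameter network: crown chip -> (height_m, dbh_cm)."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 2)

    def forward(self, chips: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(chips))

with torch.no_grad():
    tile = torch.rand(3, 512, 512)                 # placeholder RGB image tile in [0, 1]
    detections = segmenter([tile])[0]              # dict with boxes, masks, scores
    keep = detections["scores"] > 0.5
    print(f"{int(keep.sum())} candidate crowns above the score threshold")

regressor = CrownParamRegressor()
chips = torch.rand(4, 3, 64, 64)                   # crown chips cropped around detections
print(regressor(chips).shape)                      # -> torch.Size([4, 2])
```

A full pipeline would then place a 3D tree model at each detected crown position using the regressed parameters, which is the step that turns per-tree estimates into a reconstructed forest scene.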

20 pages, 39846 KiB  
Article
MTCDNet: Multimodal Feature Fusion-Based Tree Crown Detection Network Using UAV-Acquired Optical Imagery and LiDAR Data
by Heng Zhang, Can Yang and Xijian Fan
Remote Sens. 2025, 17(12), 1996; https://doi.org/10.3390/rs17121996 - 9 Jun 2025
Abstract
Accurate detection of individual tree crowns is a critical prerequisite for precisely extracting forest structural parameters, which is vital for forest resource monitoring. While unmanned aerial vehicle (UAV)-acquired RGB imagery, combined with deep learning-based networks, has demonstrated considerable potential, existing methods often rely exclusively on RGB data, rendering them susceptible to shadows under varying illumination and prone to suboptimal performance in dense forest stands. In this paper, we propose integrating a LiDAR-derived Canopy Height Model (CHM) with RGB imagery as complementary cues, shifting the paradigm of tree crown detection from unimodal to multimodal. To fully leverage the complementary properties of RGB and CHM, we present a novel Multimodal learning-based Tree Crown Detection Network (MTCDNet). Specifically, a transformer-based multimodal feature fusion strategy is proposed to adaptively learn correlations among multilevel features from diverse modalities, which enhances the model's ability to represent tree crown structures by leveraging complementary information. In addition, a learnable positional encoding scheme is introduced to help the fused features capture the complex, densely distributed tree crown structures by explicitly incorporating spatial information. A hybrid loss function is further designed to enhance the model's capability in handling occluded crowns and crowns of varying sizes. Experiments conducted on two challenging datasets with diverse stand structures demonstrate that MTCDNet significantly outperforms existing state-of-the-art single-modality methods, achieving AP50 scores of 93.12% and 94.58%, respectively. Ablation studies further confirm the superior performance of the proposed fusion network compared to simple fusion strategies. This research indicates that effectively integrating RGB and CHM data offers a robust solution for enhancing individual tree crown detection.
(This article belongs to the Special Issue Digital Modeling for Sustainable Forest Management)
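The following sketch gives a simplified, hypothetical take on the fusion idea described above: separate patch encoders for the RGB and CHM inputs, a learnable positional encoding, and a transformer encoder attending across the concatenated modality tokens. The layer sizes, the RGBCHMFusion name, and the omission of any detection head or hybrid loss are illustrative simplifications, not MTCDNet itself.

```python
# Hedged sketch of transformer-based RGB + CHM feature fusion (not MTCDNet).
import torch
import torch.nn as nn

class RGBCHMFusion(nn.Module):
    def __init__(self, dim: int = 64, grid: int = 16):
        super().__init__()
        self.rgb_enc = nn.Conv2d(3, dim, kernel_size=8, stride=8)   # RGB patch tokens
        self.chm_enc = nn.Conv2d(1, dim, kernel_size=8, stride=8)   # CHM patch tokens
        n_tokens = 2 * grid * grid
        self.pos = nn.Parameter(torch.zeros(1, n_tokens, dim))      # learnable positional encoding
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.fuse = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, rgb: torch.Tensor, chm: torch.Tensor) -> torch.Tensor:
        r = self.rgb_enc(rgb).flatten(2).transpose(1, 2)   # (B, grid*grid, dim)
        c = self.chm_enc(chm).flatten(2).transpose(1, 2)   # (B, grid*grid, dim)
        tokens = torch.cat([r, c], dim=1) + self.pos       # concatenate modality tokens
        return self.fuse(tokens)                           # fused tokens for a downstream detector

model = RGBCHMFusion()
rgb = torch.rand(2, 3, 128, 128)    # UAV RGB tile
chm = torch.rand(2, 1, 128, 128)    # LiDAR-derived canopy height model tile
print(model(rgb, chm).shape)        # -> torch.Size([2, 512, 64])
```

In the paper, fusion is applied across multilevel features and feeds a crown detection head; the sketch only shows how cross-modal attention over RGB and CHM tokens can be set up.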
