Article

A Method for the 3D Reconstruction of Landscape Trees in the Leafless Stage

1 Key Lab of State Efficient Production of Forest Resources, Beijing Forestry University, Beijing 100083, China
2 Key Lab of State Forestry Administration on Forestry Equipment and Automation, Beijing Forestry University, Beijing 100083, China
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(8), 1473; https://doi.org/10.3390/rs17081473
Submission received: 26 March 2025 / Revised: 14 April 2025 / Accepted: 17 April 2025 / Published: 20 April 2025

Abstract
Three-dimensional models of trees can support forest resource management, field surveys, and urban landscape design. With the advancement of Computer Vision (CV) and laser remote sensing technology, forestry researchers can use images and point cloud data to perform digital modeling. However, building leafless tree models that conform to tree growth rules and have well-formed branching remains a major challenge. This article proposes a method based on 3D Gaussian Splatting (3D GS) to address this issue. Firstly, we compared reconstructions of the same tree and confirmed the advantages of the 3D GS method for tree 3D reconstruction. Secondly, seven landscape trees were reconstructed using the 3D GS-based method to verify its effectiveness. Finally, the reconstructed 3D point cloud was used to generate the QSM and extract tree feature parameters to verify the accuracy of the reconstructed model. Our results indicate that this method can effectively reconstruct the structure of real trees and, in particular, can completely reconstruct 3rd-order branches. Meanwhile, the error in the Diameter at Breast Height (DBH) of the model is below 1.59 cm, with a relative error of 3.8–14.6%. This demonstrates that 3D GS effectively addresses the inconsistency between tree models and real growth rules, as well as the poor branch structure of existing tree reconstruction models, providing new insights and research directions for the 3D reconstruction and visualization of landscape trees in the leafless stage.

1. Introduction

With the acceleration of urbanization and the increasing demand for ecological environments, the construction of green spaces has become an indispensable component of modern urban planning [1]. Trees play an increasingly prominent role in landscape design, urban greening, and ecological restoration [2,3]. Moreover, realistic 3D models of trees have extensive applications in virtual reality, video games, the film industry, and biomass calculations for forest inventory [4]. Landscape trees refer to trees used in urban, park, courtyard, and other landscape designs to enhance aesthetic appeal, provide shade, improve air quality, and increase biodiversity. They serve important ecological, aesthetic, and social functions. As essential elements in greening designs, the health and maintenance of landscape trees are crucial to ensuring their long-term economic benefits and aesthetic value. This includes regular pruning, pest control, and appropriate irrigation. However, traditional 3D tree models usually lack scientific accuracy and precision, particularly in representing growth patterns, branching structures, and similarity, making it difficult to provide reliable tree models for digital management.
In recent years, with the rapid advancement of Computer Vision (CV) and laser remote sensing technologies, these methods have demonstrated significant potential in the field of tree 3D reconstruction, and many research projects have focused on this topic. Firstly, some studies use the LiDAR point cloud as input data for subsequent research. This includes using point cloud self-registration methods to achieve 3D reconstruction of tree seedlings [5]. Three-dimensional morphological algorithms, clustering methods, and multi-scale curve fitting techniques were employed to robustly reconstruct 3D trees from incomplete laser point clouds [6]. Airborne LiDAR point cloud data and the 3D alpha-shape algorithm were used to construct models of trees, shrubs, and ground vegetation, achieving realistic forest scene reconstruction [7]. A voxel-based approach was adopted to reconstruct single leafy trees using terrestrial laser scanning (TLS) data [8]. The matrix-based rotational surface modeling method [9] converts the irregular LiDAR point cloud into regular matrix structures, builds tree models, and supports parameter calculation and visualization. Crown information of individual trees has been extracted from LiDAR point cloud data through 2D surface modeling or by analyzing the 3D data directly [10]. Secondly, some studies conducted further research through deep learning or image data. For example, the evaluation of point clouds generated by single-tree reconstruction using Neural Radiance Fields (NeRFs) has demonstrated its enormous potential for higher success rates in single-tree reconstruction [11]. Conditional Generative Adversarial Networks (cGANs) have been applied to reconstruct 3D trees from a single image [12]. In addition, some studies use an L-system for tree modeling, which calculates prior information about trees and generates tree models based on this prior information.
For example, one approach reconstructs a tree skeleton from the point cloud [13], forms multiple compact L-system expressions from it, and then constructs a set of tree models with different growth patterns. These studies have achieved 3D tree modeling through various methods, some of which can provide point cloud data and extract tree feature parameters [14]. The 3D models of trees and these characteristic parameters are very beneficial for forest resource management [15], field investigations [16], and urban landscape design [17], laying the foundation for the digital management of forestry.
The above methods have achieved good results, and progress has been made in tree and forest reconstruction, but several challenges remain unresolved. Firstly, reconstruction based on LiDAR point cloud data is not always feasible due to location, cost, or environmental or physical limitations, making it impractical to directly use precise 3D laser scanners [18,19,20,21,22] to obtain the 3D point cloud. Secondly, deep learning-based networks require extensive training time, resulting in high temporal costs. Finally, the growth rules of trees are complex and diverse, and a gap remains between current tree reconstructions and true growth rules. The pruning of landscape trees is divided into winter dormancy pruning and summer growth pruning, and most garden maintenance personnel choose to prune the branches of landscape trees during the winter leaf-free period. Therefore, an accurate branching structure model of trees during the leaf-free period provides a necessary scientific basis for the design, maintenance, and management of landscape trees. However, the reconstruction of tree branches during the leaf-free period has received little attention and research. For landscape trees, the branching phenotype directly affects their aesthetic appearance, and the morphology and layout of crown branches adapt dynamically to the environment, which is directly related to their growth habits [23,24]. Therefore, reconstructions that conform to real tree growth patterns are extremely important for forest management. Starting from this branching characteristic of trees, we find that it is consistent with the scene representation logic of Gaussian distributions: changes in branch thickness can be described through multi-scale Gaussian kernels [25].
3D Gaussian Splatting [25] uses adaptive Gaussian density control and backpropagation for iterative optimization, and combines a differentiable tile rasterizer to enhance rendering speed. Compared to traditional neural networks, this method enables low-cost, high-quality, real-time 3D reconstruction, offering a significant advance in the 3D reconstruction field [25]. Based on this, this article proposes a method for the 3D reconstruction of landscape trees in the leaf-free period, which applies 3D GS to the reconstruction of landscape trees and adds a KD-Tree [26,27] to optimize the Gaussian kernel initialization scale. The effectiveness of the method was evaluated using image data collected from single landscape trees with different branching patterns in park environments. The results show that the method can generate highly realistic reconstruction models and performs well on tree branches, expanding the research directions of the 3D reconstruction of landscape trees during the leafless period.
In summary, the motivation behind this paper is to provide a 3D GS-based reconstruction method for leafless landscape trees, which innovatively applies 3D GS to their 3D reconstruction and can effectively reconstruct real branching structures. This article focuses on the following: (1) collecting handheld LiDAR data and multi-view image data of the same tree, reconstructing the LiDAR point cloud using Alpha Shapes [28] and the multi-view images using the 3D GS-based method and NeRF [29], then comparing the reconstruction results to verify the advantages of the proposed method in 3D tree reconstruction; (2) adding a KD-Tree to extract neighbor distances and optimize the Gaussian kernel scale initialization process; (3) using the 3D GS-based method for the 3D reconstruction of the image data of seven different trees to obtain the final reconstruction models.

2. Materials

Landscape tree datasets. The tree data used in this study are divided into two parts: one to verify the feasibility of 3D GS, and the other to verify the reconstruction effect of the 3D GS-based method. The first sample was collected at Beijing Forestry University in Beijing, China, where we selected a Fraxinus pennsylvanica on campus. The other samples were located in the Olympic Forest Park in Beijing, China, and the Sakura Park in Hebei Province, China. All sample locations lie in the temperate continental semi-humid monsoon climate zone, with four distinct seasons and concentrated precipitation. We selected a total of seven typical landscape trees with different branching structures as experimental samples.
We used a handheld laser scanner, the CHCNAV RS10 (Shanghai CHCNAV, China), to collect LiDAR point cloud data, and the camera of an HONOR GT90 phone (HONOR, China) to obtain image data. The point cloud data were obtained through SmartGo data collection and CoPre data processing. Both point cloud and image data were collected clockwise from an initial position, with the root of the landscape tree as the center, covering a full 360° of perspectives. The height of the equipment was set to 1.5 m above the ground in all cases, and the distance was adaptively adjusted according to the size of the landscape tree and the surrounding conditions. The collection equipment and method are shown in Figure 1. The image resolution is 1080 × 1920. Each dataset used in the experiments consists of image data of a single landscape tree from multiple perspectives. Because this study focuses on the three-dimensional reconstruction of branching structures, we collected the landscape trees during the leaf-free period.
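The circular acquisition geometry described above (tree root at the center, equipment at 1.5 m height, 360° coverage) can be sketched in a few lines of Python. The function name `capture_viewpoints` and its parameters are illustrative, not part of the authors' acquisition software.

```python
import math

def capture_viewpoints(radius_m, n_views=36, height_m=1.5):
    """Evenly spaced camera positions on a circle around the tree base.

    The tree root sits at the origin; each camera faces the trunk.
    This is a hypothetical sketch of the 360-degree protocol described
    in the text, not the authors' actual capture plan.
    """
    views = []
    for i in range(n_views):
        theta = 2.0 * math.pi * i / n_views  # clockwise order is just a sign flip
        x = radius_m * math.cos(theta)
        y = radius_m * math.sin(theta)
        views.append((x, y, height_m))
    return views
```

In practice the radius would be chosen per tree, as the text notes, so that the full crown fits in every frame.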
In October 2023, we collected LiDAR point cloud data and image data of the Fraxinus pennsylvanica on campus.
In December 2023, when the landscape trees in the park were almost leafless, we collected image data of seven landscape trees in two parks and constructed datasets for each. The information of the datasets is shown in Table 1.

3. Methods

In this section, we introduce relevant comparative experiments to verify the good performance of 3D GS in tree 3D reconstruction; sparse point cloud generation for multi view images; model framework and training method for 3D reconstruction of landscape trees; the methods used to construct the Quantitative Structure Model (QSM) and extract parameters; and a detailed explanation of how we validate these models and methods.

3.1. Methodology Overview

Our work is divided into two parts. The first part compares the reconstruction of LiDAR data and multi-view image data of the same tree, demonstrating the advantages of the 3D GS method in tree 3D reconstruction and verifying the research significance of the method. The second part is method construction and evaluation. Firstly, the COLMAP tool is used to generate initial sparse point clouds from images taken from different angles, which serve as the initial point cloud input for the 3D GS method. Secondly, we embed the KD-Tree into the Gaussian distribution scale initialization of the scene representation, taking the distances of adjacent points into account to make the initial scale more reasonable. Then, the 3D GS-based reconstruction method generates a 3D Gaussian distribution from the initial sparse point cloud and optimizes the addition/removal of Gaussian distributions to enhance the scene representation. Finally, we use TreeQSM [30,31,32,33] to generate the QSM for analyzing branch structure and feature parameters. Figure 2 is the flowchart of our work. The proposed method can effectively reconstruct landscape trees with different branch structures and extract tree feature parameters.

3.2. Generation of Sparse Point Cloud

In this study, the COLMAP tool was used to generate sparse point clouds. COLMAP (v3.9.1) is a widely used Structure from Motion (SfM) [34,35] and Multi-View Stereo (MVS) reconstruction package that can generate point cloud data from images taken from multiple perspectives. The following is the detailed process of generating sparse point clouds with COLMAP.
Image preprocessing and feature extraction. Firstly, COLMAP preprocesses the input images and extracts feature points from each image. COLMAP uses a feature extraction algorithm based on SIFT (Scale-Invariant Feature Transform) [36], which can effectively extract robust keypoint features from images. These features are then matched between different images to find corresponding points of the same 3D scene seen from different perspectives.
Feature matching and camera pose estimation. After feature extraction and matching, COLMAP calculates the relative position and pose of the camera. It performs incremental Structure from Motion to gradually estimate the camera's motion trajectory, determining the position and orientation of each image relative to the global coordinate system by computing the relative pose between image pairs. COLMAP then estimates the positions of 3D points based on the feature matching results and camera parameters.
Sparse point cloud reconstruction. Through the above steps, COLMAP recovers the 3D feature points shared across images. These feature points are called a “sparse point cloud” because they are matched points extracted from different images, with each point representing a position in three-dimensional space. Although the density of the sparse point cloud is low, it provides a preliminary 3D scene structure for subsequent dense reconstruction.
Optimization and post-processing. After generating the initial sparse point cloud, COLMAP globally optimizes the camera poses and 3D points to reduce reconstruction errors. This is achieved through an optimization method called Bundle Adjustment (BA), which improves reconstruction accuracy by minimizing reprojection errors. Ultimately, COLMAP outputs an accurate sparse point cloud containing the 3D coordinates of each feature point in the scene.
Output result. Finally, COLMAP saves the generated sparse point cloud in PLY, LAS, or other common formats for use in subsequent 3D reconstruction algorithms. These sparse point cloud data are the foundation for subsequent fine reconstruction, helping to further recover the geometric shape of the scene.
The sparse point cloud generated by COLMAP can provide a rough 3D structure of the scene at a lower computational cost. Although these point clouds are sparse and unevenly distributed, they can provide necessary preliminary data support for image-based 3D reconstruction.
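As an illustrative sketch (not COLMAP's own code), a minimal ASCII PLY writer/reader of the kind needed to consume the exported sparse points might look like the following. Real COLMAP exports also carry color and other per-vertex properties; this stripped-down version handles xyz coordinates only.

```python
def write_ply_ascii(path, points):
    """Write xyz points to a minimal ASCII PLY file (one of the formats COLMAP can export)."""
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(points)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("end_header\n")
        for x, y, z in points:
            f.write(f"{x} {y} {z}\n")

def read_ply_ascii(path):
    """Read xyz points back from a minimal ASCII PLY file."""
    with open(path) as f:
        lines = f.read().splitlines()
    # vertex count is declared in the header, data follows "end_header"
    n = next(int(l.split()[-1]) for l in lines if l.startswith("element vertex"))
    start = lines.index("end_header") + 1
    return [tuple(map(float, l.split()[:3])) for l in lines[start:start + n]]
```

A round trip (`write_ply_ascii` then `read_ply_ascii`) recovers the original points, which is the property the downstream 3D GS initialization relies on.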

3.3. 3D Reconstruction Method Based on 3D Gaussian Splatting

In this section, we introduce the process of reconstruction based on 3D GS. We did not modify the original method substantially, merely adapting it to our needs: we added KD-Tree optimization of the initial Gaussian kernel generation. Figure 3 shows the structure of the method.
The input of the 3D GS method is a sparse point cloud generated from a set of static scene images with corresponding cameras calibrated by SfM. The method represents the scene with 3D Gaussians and optimizes anisotropic covariances, interleaved with adaptive density control, to achieve an accurate representation of the scene while avoiding unnecessary computation in empty space. A 3D Gaussian is a differentiable volume representation whose properties include its center (position) μ, opacity α, 3D covariance matrix Σ, and color c. All attributes are learnable and optimized through backpropagation. Direct optimization of Σ may result in a non-positive-definite matrix, so 3D GS instead optimizes a quaternion q and a three-dimensional vector s, which parameterize the rotation matrix R and the scaling matrix S; Σ is then represented as follows.
Σ = R S S^T R^T
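The construction Σ = R S Sᵀ Rᵀ can be checked with a short, dependency-free sketch; the function names below are ours, not the reference 3D GS code. Building Σ as M Mᵀ with M = R·diag(s) guarantees a symmetric positive semi-definite result by construction.

```python
import math

def quat_to_rot(q):
    """Unit quaternion (w, x, y, z) -> 3x3 rotation matrix R."""
    w, x, y, z = q
    n = math.sqrt(w*w + x*x + y*y + z*z)
    w, x, y, z = w/n, x/n, y/n, z/n  # normalize for safety
    return [
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ]

def covariance(q, s):
    """Sigma = R S S^T R^T, computed as M M^T with M = R diag(s)."""
    R = quat_to_rot(q)
    M = [[R[i][j] * s[j] for j in range(3)] for i in range(3)]
    return [[sum(M[i][k] * M[j][k] for k in range(3)) for j in range(3)]
            for i in range(3)]
```

With the identity quaternion (1, 0, 0, 0) and scales (1, 2, 3), Σ reduces to diag(1, 4, 9), as expected for an axis-aligned Gaussian.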
A key part of the method is adaptive density control, which involves adding and occasionally removing Gaussians during the optimization process. The point dense process clones small Gaussians in areas with insufficient reconstruction or splits large Gaussians in areas with excessive reconstruction. The pruning process removes almost transparent Gaussians and saves computational resources.
We introduced the KD-Tree method into our approach to compute a more reasonable Gaussian kernel scale. It is applied during the initialization of the Gaussian kernel scale: we set k = 3, meaning the initial scale of each kernel is the average distance to its 3 nearest neighbors, making the initialization more reasonable. A KD-Tree (k-dimensional tree) is a hierarchical tree data structure used for efficiently organizing multidimensional spatial data, widely used in scenarios such as Nearest Neighbor Search, Range Query, and spatial partitioning. The core idea is to recursively divide the k-dimensional space into two parts along a coordinate axis, forming a hierarchical spatial division. The construction steps are as follows.
Input: a k-dimensional dataset D = {x_1, x_2, …, x_n}, x_i ∈ R^k.
Output: the KD-Tree root node.
  • Select the segmentation dimension: compute the variance of every dimension and choose the dimension d with the highest variance:
    σ_d² = (1/n) Σ_{i=1}^{n} (x_{i,d} − μ_d)², where μ_d = (1/n) Σ_{i=1}^{n} x_{i,d}
  • Determine the segmentation point as the median value m on dimension d:
    m = median(x_{1,d}, x_{2,d}, …, x_{n,d})
  • Partition the dataset: left subtree D_left = {x_i | x_{i,d} ≤ m}, right subtree D_right = {x_i | x_{i,d} > m}.
  • Recursive construction: repeat steps 1–3 for the left and right subtrees until the termination condition is met.
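The k = 3 scale initialization itself is easy to illustrate with a brute-force neighbor search; a KD-Tree only accelerates the same query from O(n) to roughly O(log n) per point. The function name below is ours, a sketch rather than the paper's implementation.

```python
import math

def knn_avg_distance(points, k=3):
    """For each point, the average distance to its k nearest neighbours.

    This is the quantity used above to initialize the Gaussian kernel
    scale. Brute force for clarity; a KD-Tree would return the same
    values faster on large point clouds.
    """
    scales = []
    for i, p in enumerate(points):
        dists = sorted(
            math.dist(p, q) for j, q in enumerate(points) if j != i
        )
        scales.append(sum(dists[:k]) / k)
    return scales
```

On a dense branch the neighbor distances are small, so the kernels start small; in sparse regions they start larger, which is the intended effect.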

3.4. QSM and Feature Parameter Extraction

This article uses the TreeQSM method to construct a QSM from the tree point cloud obtained by 3D reconstruction, yielding a QSM and tree feature parameters for each landscape tree. TreeQSM (v2.4.1) is a free and open-source MATLAB (R2023b) software package with high precision, high reliability, and fast running speed. In most cases, the model can accurately estimate tree parameters such as parent–child relationships between branches, the number of branches, and the angles and lengths of these branches; the surface area and coverage area of the canopy, canopy closure, and other parameters can also be obtained. However, this method performs better when the input point cloud contains only the tree branches, so we manually segmented the reconstructed point clouds, removing the points of the background environment and retaining only those of the tree branches. The manual segmentation was performed with CloudCompare (v2.13.1), which provides a user-friendly visual interface and convenient segmentation tools and is widely used for point cloud processing tasks. It supports spatial rotation, allowing us to view a point cloud from any angle and select polygonal areas, enabling fast and accurate segmentation. We segmented the target object by selecting several perspectives covering 360 degrees in the same coordinate direction, and then performed a spatial rotation inspection to complete the segmentation. Figure 4 shows the process of manual segmentation using CloudCompare.
This article focuses on the reconstruction of branching structures; therefore, the landscape tree’s diameter at breast height obtained from the QSM model and the point cloud branching segmentation model were analyzed and compared with the manually measured data.

3.5. Training and Performance Measurement

In this section, we provide a more detailed introduction to the training process and parameters of the 3D reconstruction model, as well as how we validate it. All data processing, model training, and testing were conducted on the same personal computer. During model training, an Nvidia 4070 GPU was used for CUDA-accelerated computation. The 3D Gaussian Splatting method follows the train/test split suggested by Mip-NeRF360 [37], with every 8th photo used for testing. During training, the order of the spherical harmonic function is increased every 1000 iterations to gradually improve the model's expressive power. The learning rate of each parameter is dynamically adjusted using the adaptive learning rate of the Adam optimizer so that the loss function gradually converges. Meanwhile, to avoid an unreasonable increase in the number of Gaussians, the opacity parameter is reset every 1000 iterations.
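The iteration schedule described above (spherical-harmonic degree raised every 1000 iterations, opacity reset every 1000 iterations) can be encoded as a small helper. This is a sketch using the interval values quoted in the text, not the reference 3D GS defaults; the function name and dictionary shape are ours.

```python
def schedule(iteration, sh_interval=1000, opacity_interval=1000, max_sh=3):
    """Return the training events that fire at a given iteration.

    - every `sh_interval` iterations: raise the spherical-harmonic
      degree (capped at `max_sh`, the usual 3D GS maximum);
    - every `opacity_interval` iterations: reset opacities so the
      densification step can prune near-transparent Gaussians.
    """
    events = {}
    if iteration > 0 and iteration % sh_interval == 0:
        events["sh_degree"] = min(iteration // sh_interval, max_sh)
    if iteration > 0 and iteration % opacity_interval == 0:
        events["reset_opacity"] = True
    return events
```

A training loop would call `schedule(i)` once per iteration and act on whichever events are returned.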
In this study, we evaluated the 3D reconstruction effect of landscape trees from two aspects: visual realism, comparing the similarity between the reconstructed model and real image data, and comparing the branch structure and QSM feature parameters of the reconstructed model with real values. These two comparisons demonstrate the effectiveness of our method.
We use evaluation metrics common in the field of 3D reconstruction, namely PSNR, SSIM, and LPIPS [38], to evaluate the pixel-level, structural, and perceptual similarity between the reconstructed image and the real image.
PSNR (Peak Signal to Noise Ratio) is one of the most commonly used image quality assessment metrics, primarily used to measure the quality of image reconstruction or the difference between compressed and original images. The larger its value, the smaller the distortion and the better the image quality. The formula is as follows.
PSNR = 10 · log10(MAX² / MSE)
  • MAX represents the maximum possible pixel value of the image (usually 255 for 8-bit images).
  • MSE is the Mean Squared Error, computed by taking the pixel-wise difference between the two images, squaring, and averaging.
  • When PSNR ≥ 30 dB, it is difficult for the human eye to detect image distortion, meaning that the test image is very close to the original image.
  • When 20 dB ≤ PSNR < 30 dB, the human eye can perceive some differences in the image, but these differences are usually not obvious.
  • When 10 dB ≤ PSNR < 20 dB, the human eye can clearly see the differences in the image, but can still recognize its basic structure.
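The PSNR formula above is a one-liner in practice; the sketch below operates on flat lists of pixel values (a hypothetical, library-free stand-in for the image-tensor versions used in the experiments).

```python
import math

def psnr(img_a, img_b, max_val=255.0):
    """PSNR between two equally sized images given as flat pixel lists."""
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    if mse == 0:
        return float("inf")  # identical images: zero distortion
    return 10.0 * math.log10(max_val ** 2 / mse)
```

Note that identical inputs yield infinite PSNR, which is why implementations guard against MSE = 0.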
SSIM (Structural Similarity Index) is an indicator that measures the structural similarity between two images, taking into account brightness, contrast, and structural information. The range of SSIM values is [−1,1], and the closer the value is to 1, the more similar the two images are. The formula is as follows.
SSIM(x, y) = [(2 μ_x μ_y + C1)(2 σ_xy + C2)] / [(μ_x² + μ_y² + C1)(σ_x² + σ_y² + C2)]
  • x and y are the two images to be compared.
  • μ_x and μ_y are the average brightness values of images x and y, respectively.
  • σ_x² and σ_y² are the brightness variances of images x and y, respectively.
  • σ_xy is the brightness covariance between images x and y.
  • C1 and C2 are constants used to stabilize the calculation, usually set to small positive values.
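The formula can be sketched directly; note this is a global, single-window SSIM for illustration only, whereas practical implementations slide a local (typically Gaussian-weighted) window over the image. Function name and constants follow the common convention C1 = (0.01·MAX)², C2 = (0.03·MAX)², which is an assumption, not stated in the text.

```python
def ssim(x, y, max_val=255.0):
    """Global (single-window) SSIM over two flat pixel lists."""
    n = len(x)
    mu_x, mu_y = sum(x) / n, sum(y) / n
    var_x = sum((v - mu_x) ** 2 for v in x) / n
    var_y = sum((v - mu_y) ** 2 for v in y) / n
    cov = sum((a - mu_x) * (b - mu_y) for a, b in zip(x, y)) / n
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2  # stabilizing constants
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```

An image compared with itself scores exactly 1, the upper bound of the metric.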
LPIPS (Learned Perceptual Image Patch Similarity), also known as “perceptual loss”, is a deep learning-based image quality assessment metric that extracts image features through neural networks and calculates the distance between features to measure image similarity. The smaller the value of LPIPS, the more similar the image. We have chosen a pretrained AlexNet network as the feature extractor here.
We use the point clouds obtained during the 3D reconstruction process of landscape trees to generate a QSM, and compare the extracted branch order and diameter at breast height of the model with manually measured values.
Diameter at Breast Height (DBH) is a key indicator used in forestry and ecology to describe the thickness of trees. It is usually defined as the diameter of the tree trunk at a height of 1.3 m above the ground. For irregular trunks, the diameters in two vertical directions need to be measured and averaged. DBH is an important parameter for evaluating the growth status, biomass, and carbon storage of trees [39]. This article evaluates the relative error between the QSM parameters of DBH and the manually measured values. Relative error is an indicator that measures the deviation between a measured or calculated value and the true value, typically used to evaluate the accuracy of an experiment or calculation. The calculation formula is as follows.
relative error = |M − T| / T × 100%
  • M: the value obtained through experiment or calculation.
  • T: the widely accepted standard or theoretical value.
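The relative-error formula and the two-direction DBH averaging described above are straightforward to express in code; both function names are illustrative.

```python
def relative_error(measured, true_value):
    """|M - T| / T, expressed as a percentage."""
    return abs(measured - true_value) / true_value * 100.0

def dbh(d1, d2):
    """DBH of an irregular trunk: mean of two perpendicular diameter
    measurements taken at 1.3 m above the ground."""
    return (d1 + d2) / 2.0
```

For example, a model DBH of 11 cm against a manually measured 10 cm gives a 10% relative error.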
Branching Order describes the topological hierarchy of a tree’s branching structure, gradually increasing from the main stem (order 1). The Maximum Branching Order refers to the maximum branching level in a tree, used to describe the complexity of the tree’s branching structure and the degree of crown development. Here, we use the hierarchical branching reconstruction rate for evaluation, and the formula is as follows.
branch reconstruction rate = N_r / N_t × 100%
  • N_t: the total number of branches at a certain level of the real tree.
  • N_r: the total number of branches at that level in the reconstruction model.
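The per-order reconstruction rate can be computed for a whole tree at once; the dictionary shape `{order: branch count}` below is a hypothetical convenience, not a format defined in the paper.

```python
def branch_reconstruction_rate(n_reconstructed, n_true):
    """N_r / N_t as a percentage, for one branching order."""
    return n_reconstructed / n_true * 100.0

def rates_by_order(reconstructed, true_counts):
    """Per-order rates given dicts mapping branching order -> branch count."""
    return {order: branch_reconstruction_rate(reconstructed.get(order, 0), n)
            for order, n in true_counts.items()}
```

A 3rd-order rate of 100% then means every 3rd-order branch counted on the real tree also appears in the model.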
These indicators have important applications in forestry, ecology, and tree modeling, helping researchers quantify the morphological characteristics and growth status of trees.

4. Results

In this section, we present a comparative experiment on the reconstruction effect of the same tree, demonstrating the advantages of 3D GS. Then we use the 3D GS-based method to reconstruct seven landscape tree datasets with different shapes, and evaluate reconstruction models from image and feature parameters.

4.1. Experimental Verification of the Advantages of 3D GS in Tree 3D Reconstruction

This study systematically explored the 3D reconstruction method of landscape trees using different methods for two types of data. Figure 5a and Figure 6a show the morphology and texture of trees.
Given the characteristics of the dense point cloud data obtained by LiDAR, we used the Alpha Shapes method to reconstruct the point cloud. This algorithm controls the complexity and accuracy of the constructed shape through a dynamic alpha parameter and is mainly used to construct a convex hull or approximate shape from point cloud data. It is suitable for noisy data or incomplete, irregular point sets and has been applied to tree scene modeling. Figure 5 shows the reconstruction results of this method. It can be clearly seen that this reconstruction has low completeness, especially for the branches and leaves of trees, with many missing regions and topological breaks.
The Neural Radiance Fields (NeRF) method was used for image data reconstruction. This method utilizes a Multilayer Perceptron (MLP) to represent a 3D scene as a continuous neural radiance field. It takes the coordinates of a spatial point x = (x, y, z) and the viewing direction d = (θ, φ) as inputs, and outputs the color c = (r, g, b) and volume density σ of that point. By integrating along camera rays with volume rendering, it generates 2D images from any perspective. This method only requires images as input, avoiding the holes caused by uneven point cloud density. Figure 6 shows the reconstruction results of this method, which are more complete and smoother than those of the Alpha Shapes method. However, the texture details, branching, and leaf reconstruction are relatively blurry.
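The ray integration just described is the standard NeRF quadrature C = Σᵢ Tᵢ (1 − exp(−σᵢ δᵢ)) cᵢ, with transmittance Tᵢ = exp(−Σ_{j<i} σⱼ δⱼ). A minimal sketch for a single color channel (a real renderer applies this per RGB channel; function name is ours):

```python
import math

def render_ray(sigmas, colors, deltas):
    """Numerical volume rendering along one ray (NeRF quadrature rule).

    sigmas: per-sample volume densities
    colors: per-sample scalar colors (one channel for brevity)
    deltas: distances between consecutive samples
    """
    color, transmittance = 0.0, 1.0
    for sigma, c, delta in zip(sigmas, colors, deltas):
        alpha = 1.0 - math.exp(-sigma * delta)  # opacity of this segment
        color += transmittance * alpha * c       # contribution weighted by visibility
        transmittance *= 1.0 - alpha             # light remaining after the segment
    return color
```

A fully opaque sample returns its own color, and samples hidden behind it contribute nothing, which is exactly the occlusion behavior the MLP is trained through.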
The 3D Gaussian Splatting (3D GS) method was also used for image data reconstruction. This method establishes a multi-scale Gaussian kernel representation, which can flexibly represent structures at different scales; the shape of each Gaussian kernel is adjusted by its covariance matrix to adapt to the geometric direction and true structure of the branches. This method likewise avoids the voids caused by uneven point cloud density and shows significant advantages in tree branch structure and texture details. Figure 7 shows the reconstruction result of this method, which has high completeness and better reconstruction of texture details, branching, and leaves, producing smoother results. This indicates that using this method for the 3D reconstruction of trees is of research significance.

4.2. 3D Reconstruction Results of Landscape Tree Based on 3D GS

In order to verify the universality and robustness of the method, seven datasets of typical landscape tree species were constructed in the experiment, with different tree branch structures, tree characteristic parameters, and background environments for each dataset sample. As shown in Figure 8, the 3D reconstruction models (a)–(g) have high visual realism, maintain high similarity in tree geometric features to the input landscape tree images (A)–(G), and show clear branching structures.
At the same time, the reconstruction similarity was evaluated through PSNR, SSIM, and LPIPS, and the results are shown in Table 2. Each indicator performs well: the PSNR of each dataset is above 24.22 and the LPIPS is below 0.157, indicating that the reconstructed model conforms to human visual perception, especially in terms of tree appearance; SSIM values above 0.801 indicate reasonable accuracy in color and branch orientation. This shows that our method achieves a high similarity to real trees when reconstructing landscape trees in 3D and can effectively reconstruct tree models, providing a new direction for supplying accurate 3D models for pest control, pruning design, and landscape tree layout design in the digital management of gardens.

4.3. TreeQSM Model Results and Extracted Parameters

This section quantitatively analyzes tree feature parameters and visualizes branching orders on the T1–T7 datasets based on the QSM. In landscaping, the management and pruning of branching structures are crucial. Different types of trees have different natural branching characteristics, but through appropriate pruning and guidance, trees can form ideal branching structures, enhancing their ornamental value and wind resistance and improving their growth and health. Figure 9 shows the visualization of the branching hierarchy of the point clouds, which reveals the topological development patterns of different tree species and indicates that our method can reconstruct usable point cloud data. This method provides a reliable tool for the construction of tree growth models and the extraction of tree feature parameters.
From Figure 9 and Table 3, it can be seen that our method accurately reconstructs tree structures up to and including 3rd-order branches. Because pines are evergreen, some needles remain in winter and the branches bearing them have small diameters, which leads QSM to occasionally merge a cluster of needles and its supporting branch into a single branch. However, since QSM segments recursively from each fork point, this does not affect the segmentation of the branches themselves. For the other trees, 4th-order and terminal branches with a diameter > 0.5 cm can be partially reconstructed. For branches with a diameter ≤ 0.5 cm, the limited data resolution leads to a lower point density at the terminal branches, so the QSM misjudges complex branches and the corresponding terminal branch boundaries in the 3D reconstruction model are blurred. In practice, however, landscape trees are generally pruned around the 3rd-order branches, so our method can still provide simulated data for pruning robots in garden maintenance.
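The branch-order labeling that QSM derives from fork points can be mimicked by a small recursive sketch; the parent–child table below is hypothetical, purely to illustrate how orders such as "3rd-order branch" are counted from the stem:

```python
# Hypothetical parent-child table for a segmented tree: branch id -> parent id
# (None marks the stem). Branch order = number of forks traversed from the stem.
parents = {"stem": None, "b1": "stem", "b2": "stem",
           "b11": "b1", "b12": "b1", "b111": "b11"}

def branch_order(branch: str) -> int:
    parent = parents[branch]
    return 0 if parent is None else 1 + branch_order(parent)

orders = {b: branch_order(b) for b in parents}
print(orders["b111"])  # a 3rd-order branch
```

In the real QSM, each "branch" is a chain of fitted cylinders and the parent relation comes from the recursive segmentation at fork points; the counting logic, however, is the same.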
We also compared the DBH extracted from the QSM feature parameters; the results are shown in Figure 10 and Table 4. The error between the QSM DBH and the manually measured value is below 1.59 cm, with a relative error between 3.8% and 14.6%, which falls roughly within the normal range of DBH measurement error. This indicates that our method can accurately reconstruct the structure of landscape trees and has great potential in the high-precision reconstruction of landscape trees, providing digital models for the layout design and maintenance management of landscape trees in garden landscapes.
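The error figures above can be reproduced directly from the QSM and manual DBH values; a short script (values transcribed from Table 4) illustrates the computation:

```python
# DBH values (cm) transcribed from Table 4: QSM-extracted vs. manually measured.
qsm_dbh    = {"T1": 12.97, "T2": 27.10, "T3": 34.49, "T4": 17.26,
              "T5": 2.871, "T6": 1.853, "T7": 14.21}
manual_dbh = {"T1": 14.46, "T2": 28.69, "T3": 35.85, "T4": 18.81,
              "T5": 3.361, "T6": 2.163, "T7": 15.29}

for tree in qsm_dbh:
    error = qsm_dbh[tree] - manual_dbh[tree]       # signed error (cm)
    rel = abs(error) / manual_dbh[tree] * 100.0    # relative error (%)
    print(f"{tree}: error = {error:+.2f} cm, relative = {rel:.1f}%")
```

All signed errors are negative, consistent with the observation in Section 5.3 that the reconstructed trees are slightly thinner than the manual measurements.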

5. Discussion

5.1. Evaluation of Our Method

This study proposes a 3D reconstruction method for landscape trees based on 3D GS. A KD-Tree is introduced to optimize Gaussian kernel generation, and the QSM is used to extract parameters. The method requires only image data as input to obtain a 3D reconstruction that conforms to tree growth structures, with a 3rd-order branch reconstruction rate of 100%, a DBH error below 1.59 cm, and a relative error between 3.8% and 14.6%, demonstrating its potential for tree structure reconstruction and expanding the research directions of landscape tree 3D reconstruction. Existing research places insufficient emphasis on branch reconstruction and is ill-suited to landscape tree scenarios. For landscape design and the maintenance management of landscape trees, our method achieves low-cost, effective reconstruction of differently shaped landscape trees that conform to the growth laws of trees, providing digital support for landscape greening layout design. It can also be extended to urban forestry by combining it with drone image acquisition to model multiple trees synchronously. Moreover, combining the reconstructed point clouds with deep learning segmentation algorithms enables rapid extraction of tree feature parameters, making high-precision, non-destructive growth monitoring, maintenance, and risk assessment of urban forests possible and offering new ideas for urban forestry management platforms. However, there are limitations in large-scale scenarios. Firstly, urban forests contain many tree species, and deep learning segmentation algorithms require a large number of tree images or point clouds as training data to generalize well. Secondly, drone- and ground-radar-based methods often incur high costs and operational difficulties. Finally, this method relies on the information expressed by multi-view images, so in high-density green spaces excessive overlap and missing viewpoints may degrade the reconstructed models.
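The KD-Tree step mentioned above, where local point spacing informs each Gaussian's initial scale, can be sketched roughly as follows. This assumes SciPy's `cKDTree`; the random points stand in for a sparse SfM cloud, and the log-scale initialization mirrors common 3D GS practice rather than our exact pipeline:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
points = rng.random((1000, 3))       # stand-in for a sparse SfM point cloud

tree = cKDTree(points)
# Query the 4 nearest neighbours; the first hit is the point itself (distance 0).
dists, _ = tree.query(points, k=4)
mean_nn = dists[:, 1:].mean(axis=1)  # mean distance to the 3 true neighbours

# Initialize each Gaussian's isotropic scale from its local point spacing
# (log-parameterized, as scales are optimized in log space in 3D GS).
init_scales = np.log(np.clip(mean_nn, 1e-7, None))
print(init_scales.shape)             # (1000,)
```

Tying the initial covariance to neighbour spacing gives dense regions (e.g., around the stem) small, tight Gaussians and sparse regions larger ones, which helps thin branches survive the early densification iterations.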
Overall, our method reconstructs the branching structure of real trees with high visual realism, providing digital support for pruning robots in branch shape design and maintenance management. It offers a reliable technical approach, new ideas, and a research direction for the 3D reconstruction of landscape trees, and its scalability gives it great potential in the field of urban forestry.

5.2. Comparison with Similar Methods

Our method focuses on the branching structure of landscape trees; few similar methods exist, and most reconstruct trees in the leaf-on stage. We therefore extract the branch portions of the results of an existing method for comparison. Figure 11a,b show schematic diagrams of the results of [12]. As Figure 11 shows, the curvature of the branches in the other model still differs from that of the actual branches and is almost ignored, whereas the branches produced by our method closely match the real ones. This demonstrates the effectiveness of our method in tree structure reconstruction: it can reconstruct tree models that conform to tree growth patterns.

5.3. Future Work

By summarizing the existing work, we have identified three areas that need optimization. Firstly, for landscape trees with branch orders higher than 5 or heights greater than 20 m, the computing speed of our GPU devices limits the image resolution to 1080 × 1920 to keep reconstruction tractable, which loses detail in the landscape tree and yields unsatisfactory results; this is also why the DBH of the reconstructed trees is smaller than the manually measured value. Without constraints on reconstruction time, or with better GPU devices, higher-resolution input may improve the results; our future work will optimize the method to accelerate rendering with higher-resolution input. Secondly, in areas with dense trees, the collected data may be occluded and branches tend to overlap and adhere; the resulting collection difficulty and sparse point clouds make our method unsuitable for trees in overly dense surroundings. Since occlusion is inevitable in practice, the reconstruction of dense scenes is a key focus of our future research. Finally, combining point cloud segmentation algorithms would enable batch extraction of tree parameters, providing a more comprehensive automated method for the digital management of urban forestry; in follow-up work, we will integrate segmentation algorithms to extend the method's functionality.

6. Conclusions

This article presents a 3D reconstruction method for leafless landscape trees, aiming at high structural fidelity and realistic reconstruction. The work comprises comparing reconstruction methods, constructing landscape tree datasets, 3D reconstruction based on 3D Gaussian Splatting, extracting tree parameters based on the QSM, and evaluating the reconstructed models. Traditional modeling methods are limited in restoring tree branch topology, so reconstructing real leafless trees is undoubtedly a suitable approach, and we focus on the 3D reconstruction of leafless landscape trees based on 3D GS. Our method shows strong 3D reconstruction performance on tree datasets with different branching structures. Visually, quantitative evaluation shows a PSNR above 24.22, an SSIM above 0.801, and an LPIPS below 0.157, indicating that the visual difference between the 3D reconstruction model and the real tree is small. For the reconstructed point clouds, the DBH error is below 1.59 cm with a relative error of 3.8–14.6%; the reconstruction rate of 3rd-order branches reaches 100%, and some 4th-order or terminal branches with a diameter > 0.5 cm can also be reconstructed, indicating that the method effectively reconstructs tree structures and has great potential for extension to urban forestry. Compared with traditional 3D modeling, our 3D GS-based method has significant advantages in providing digital models for landscape design and management, offering new ideas and research directions for high-precision 3D reconstruction of leafless landscape trees.

Author Contributions

Conceptualization, J.L. and Q.H.; Data curation, J.L., H.Y. and Q.H.; Investigation, J.L., Q.H., X.W., B.X., J.D. and L.L.; Project administration, Q.H., B.X. and J.D.; Resources, J.L., Q.H., X.W., B.X. and J.D.; Software, J.L.; Writing—original draft, J.L.; Writing—review and editing, J.L. and Q.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the “Open Bidding for Selecting the Best Candidates” Project of National Forestry and Grassland Administration (No. CAFYBB2024ZA005) and the 5·5 Engineering Research & Innovation Team Project of Beijing Forestry University (No. BLRC2023C05).

Data Availability Statement

The datasets presented in this article are not readily available. The landscape tree data in this article are part of an ongoing study and have high research value. The obtained dataset is confidential, but we will publish some of the data in our subsequent work.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Feng, C.; Wang, M.; Liu, G.C.; Huang, J.B. Green development performance and its influencing factors: A global perspective. J. Clean. Prod. 2017, 144, 323–333. [Google Scholar] [CrossRef]
  2. Lee, L.S.H.; Zhang, H.; Jim, C.Y. Serviceable tree volume: An alternative tool to assess ecosystem services provided by ornamental trees in urban forests. Urban For. Urban Green. 2021, 59, 127003. [Google Scholar] [CrossRef]
  3. Liu, D. Application of modern urban landscape design based on machine learning model to generate plant landscaping. Sci. Program. 2022, 2022, 1610427. [Google Scholar] [CrossRef]
  4. Lv, Y.X.; Zhang, Y.D.; Dong, S.Y.; Yang, L.; Zhang, Z.Y.; Li, Z.R.; Hu, S.J. A convex hull-based feature descriptor for learning tree species classification from als point clouds. IEEE Geosci. Remote Sens. Lett. 2022, 19, 6501005. [Google Scholar] [CrossRef]
  5. Yang, T.T.; Ye, J.H.; Zhou, S.Y.; Xu, A.J.; Yin, J.X. 3d reconstruction method for tree seedlings based on point cloud self-registration. Comput. Electron. Agric. 2022, 200, 107210. [Google Scholar] [CrossRef]
  6. Wang, W.X.; Li, Y.Y.; Huang, H.S.; Hong, L.P.; Du, S.Q.; Xie, L.F.; Li, X.M.; Guo, R.Z.; Tang, S.J. Branching the limits: Robust 3d tree reconstruction from incomplete laser point clouds. Int. J. Appl. Earth Obs. Geoinf. 2023, 125, 103557. [Google Scholar] [CrossRef]
  7. Xu, D.; Yang, X.B.; Wang, C.; Xi, X.H.; Fan, G.F. Three-dimensional reconstruction of forest scenes with tree-shrub-grass structure using airborne lidar point cloud. Forests 2024, 15, 1627. [Google Scholar] [CrossRef]
  8. Xie, D.H.; Wang, X.Y.; Qi, J.B.; Chen, Y.M.; Mu, X.H.; Zhang, W.M.; Yan, G.J. Reconstruction of single tree with leaves based on terrestrial lidar point cloud data. Remote Sens. 2018, 10, 686. [Google Scholar] [CrossRef]
  9. Kurdi, F.T.; Lewandowicz, E.; Shan, J.; Gharineiat, Z. Three-dimensional modeling and visualization of single tree lidar point cloud using matrixial form. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 3010–3022. [Google Scholar] [CrossRef]
  10. Lindberg, E.; Holmgren, J. Individual tree crown methods for 3d data from remote sensing. Curr. For. Rep. 2017, 3, 19–31. [Google Scholar] [CrossRef]
  11. Huang, H.Y.; Tian, G.J.; Chen, C.C. Evaluating the point cloud of individual trees generated from images based on neural radiance fields (nerf) method. Remote Sens. 2024, 16, 967. [Google Scholar] [CrossRef]
  12. Liu, Z.H.; Wu, K.; Guo, J.W.; Wang, Y.H.; Deussen, O.; Cheng, Z.L. Single image tree reconstruction via adversarial network. Graph. Models 2021, 117, 101115. [Google Scholar] [CrossRef]
  13. Chen, C.; Wang, D. 3D tree modeling based on abstract parametric L-system. In Proceedings of the 2023 9th International Conference on Virtual Reality (ICVR), Xianyang, China, 12–14 May 2023; pp. 49–55. [Google Scholar]
  14. Bogdanovich, E.; Perez-Priego, O.; El-Madany, T.S.; Guderle, M.; Pacheco-Labrador, J.; Levick, S.R.; Moreno, G.; Carrara, A.; Pilar Martín, M.; Migliavacca, M. Using terrestrial laser scanning for characterizing tree structural parameters and their changes under different management in a mediterranean open woodland. For. Ecol. Manag. 2021, 486, 118945. [Google Scholar] [CrossRef]
  15. Li, X.; Lin, H.; Long, J.; Xu, X. Mapping the growing stem volume of the coniferous plantations in north china using multispectral data from integrated gf-2 and sentinel-2 images and an optimized feature variable selection method. Remote Sens. 2021, 13, 2740. [Google Scholar] [CrossRef]
  16. Luoma, V.; Yrttimaa, T.; Kankare, V.; Saarinen, N.; Pyörälä, J.; Kukko, A.; Kaartinen, H.; Hyyppä, J.; Holopainen, M.; Vastaranta, M. Revealing changes in the stem form and volume allocation in diverse boreal forests using two-date terrestrial laser scanning. Forests 2021, 12, 835. [Google Scholar] [CrossRef]
  17. Zheng, J.; Tarin, M.W.K.; Jiang, D.; Li, M.; Ye, J.; Chen, L.; He, T.; Zheng, Y. Which ornamental features of bamboo plants will attract the people most? Urban For. Urban Green. 2021, 61, 127101. [Google Scholar] [CrossRef]
  18. O’Sullivan, H.; Raumonen, P.; Kaitaniemi, P.; Perttunen, J.; Sievänen, R. Integrating terrestrial laser scanning with functional–structural plant models to investigate ecological and evolutionary processes of forest communities. Ann. Bot. 2021, 128, 663–684. [Google Scholar] [CrossRef]
  19. Jacobs, M.; Rais, A.; Pretzsch, H. How drought stress becomes visible upon detecting tree shape using terrestrial laser scanning (TLS). For. Ecol. Manag. 2021, 489, 118975. [Google Scholar] [CrossRef]
  20. Jafri, S.R.u.N.; Rehman, Y.; Faraz, S.M.; Amjad, H.; Sultan, M.; Rashid, S.J. Development of georeferenced 3d point cloud in gps denied environments using backpack laser scanning system. Elektron. Elektrotech. 2021, 27, 25–34. [Google Scholar] [CrossRef]
  21. Muumbe, T.P.; Baade, J.; Singh, J.; Schmullius, C.; Thau, C. Terrestrial laser scanning for vegetation analyses with a special focus on savannas. Remote Sens. 2021, 13, 507. [Google Scholar] [CrossRef]
  22. Ko, C.; Lee, S.; Yim, J.; Kim, D.; Kang, J. Comparison of forest inventory methods at plot-level between a backpack personal laser scanning (bpls) and conventional equipment in jeju island, South Korea. Forests 2021, 12, 308. [Google Scholar] [CrossRef]
  23. Ding, C.J.; Wang, N.N.; Huang, Q.J.; Zhang, W.X.; Huang, J.; Yan, S.L.; Chen, B.Y.; Liang, D.J.; Dong, Y.F.; Shen, Y.B.; et al. The importance of proleptic branch traits in biomass production of poplar in high-density plantations. J. For. Res. 2022, 33, 463–473. [Google Scholar] [CrossRef]
  24. Hildebrand, M.; Perles-Garcia, M.D.; Kunz, M.; Härdtle, W.; von Oheimb, G.; Fichtner, A. Tree-tree interactions and crown complementarity: The role of functional diversity and branch traits for canopy packing. Basic Appl. Ecol. 2021, 50, 217–227. [Google Scholar] [CrossRef]
  25. Kerbl, B.; Kopanas, G.; Leimkühler, T.; Drettakis, G. 3D gaussian splatting for real-time radiance field rendering. Acm Trans. Graph. 2023, 42, 139. [Google Scholar] [CrossRef]
  26. Moore, A. An Introductory Tutorial on kd-Trees. In IEEE Colloquium on Quantum Computing: Theory, Applications & Implications; IET: London, UK, 1991. [Google Scholar]
  27. Zhang, P.C.; Xu, H.H.; Bian, M.J.; Gao, H.H. Research on parallel kd-tree construction for ray tracing. Int. J. Grid Distrib. Comput. 2016, 9, 49–59. [Google Scholar]
  28. Gardiner, J.D.; Behnsen, J.; Brassey, C.A. Alpha shapes: Determining 3d shape complexity across morphologically diverse structures. BMC Evol. Biol. 2018, 18, 184. [Google Scholar] [CrossRef]
  29. Mildenhall, B.; Srinivasan, P.P.; Tancik, M.; Barron, J.T.; Ramamoorthi, R.; Ng, R. NeRF: Representing scenes as neural radiance fields for view synthesis. Commun. ACM 2022, 65, 99–106. [Google Scholar] [CrossRef]
  30. Raumonen, P.; Casella, E.; Calders, K.; Murphy, S.; Åkerblom, M.; Kaasalainen, M. Massive-scale tree modelling from TLS data. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, 2, 189–196. [Google Scholar] [CrossRef]
  31. Raumonen, P.; Kaasalainen, M.; Åkerblom, M.; Kaasalainen, S.; Kaartinen, H.; Vastaranta, M.; Holopainen, M.; Disney, M.; Lewis, P. Fast automatic precision tree models from terrestrial laser scanner data. Remote Sens. 2013, 5, 491–520. [Google Scholar] [CrossRef]
  32. Calders, K.; Newnham, G.; Burt, A.; Murphy, S.; Raumonen, P.; Herold, M.; Culvenor, D.; Avitabile, V.; Disney, M.; Armston, J.; et al. Nondestructive estimates of above-ground biomass using terrestrial laser scanning. Methods Ecol. Evol. 2015, 6, 198–208. [Google Scholar] [CrossRef]
  33. Markku, Å.; Raumonen, P.; Kaasalainen, M.; Casella, E. Analysis of geometric primitives in quantitative structure models of tree stems. Remote Sens. 2015, 7, 4581–4603. [Google Scholar] [CrossRef]
  34. Schonberger, J.L.; Frahm, J.M. Structure-from-Motion Revisited. In Proceedings of the 2016 IEEE Conference on Computer Vision & Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 4104–4113. [Google Scholar]
  35. Guan, H.; Smith, W.A.P. Structure-from-motion in spherical video using the von mises-fisher distribution. IEEE Trans. Image Process. 2017, 26, 711–723. [Google Scholar] [CrossRef] [PubMed]
  36. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  37. Barron, J.T.; Mildenhall, B.; Verbin, D.; Srinivasan, P.P.; Hedman, P. Mip-NeRF 360: Unbounded anti-aliased neural radiance fields. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022. [Google Scholar]
  38. Jamil, S. Review of image quality assessment methods for compressed images. J. Imaging 2024, 10, 113. [Google Scholar] [CrossRef]
  39. Wells, L.A.; Chung, W. Evaluation of ground plane detection for estimating breast height in stereo images. For. Sci. 2020, 66, 612–622. [Google Scholar] [CrossRef]
Figure 1. Data collection equipment and methods: (a) Point cloud data acquisition equipment: CHCNAV RS10; (b) Image data collection method.
Figure 2. Main steps of our work: Step 1, comparative experiment; Step 2, reconstruction of landscape trees based on 3D Gaussian Splatting and feature parameter extraction; the Quantitative Structure Model (QSM) is used to extract features to evaluate the 3D model.
Figure 3. The structure of reconstruction of landscape tree based on 3D Gaussian Splatting.
Figure 4. The reconstructed point cloud, manual segmentation interface, and segmentation results: (a) Original reconstructed point cloud; (b,c) Selection of point cloud and segmentation regions from two angles; (d) Manually segmented point cloud result.
Figure 5. Fraxinus pennsylvanica: (a) LiDAR data; (b) Reconstruction model using LiDAR data with Alpha Shapes.
Figure 6. Fraxinus pennsylvanica: (a) Image data; (b) Reconstruction model using images with NeRF.
Figure 7. Fraxinus pennsylvanica: (a) Image data; (b) Reconstruction model using images with 3D GS.
Figure 8. Reconstruction of landscape trees: (A–G) Landscape tree image data of T1–T7; (a–g) Reconstruction results of T1–T7.
Figure 9. Segmented branch structure (blue = stem, green = 1st-order branches, red = 2nd-order branches, sky-blue = 3rd-order branches): (a) T1, (b) T2, (c) T3, (d) T4, (e) T5, (f) T6, (g) T7.
Figure 10. Error between DBH of manual measurement and DBH of our method.
Figure 11. Comparison of reconstruction results: (a) input image of [12]; (b) result of [12]; (c) input image of our method; (d) result of our method.
Table 1. Dataset information.

| Data Name | Tree Species | Number of Images | Data Resolution | Sampling Location | DBH (cm) 1 | Branching Order 2 |
|---|---|---|---|---|---|---|
| T-LiDAR | Fraxinus pennsylvanica | - | - | Beijing Forestry University | 49.03 | - |
| T | Fraxinus pennsylvanica | 333 | 1080 × 1920 | Beijing Forestry University | 49.03 | - |
| T1 | Ginkgo biloba | 310 | 1080 × 1920 | Olympic Forest Park | 14.46 | 3 |
| T2 | Sophora japonica | 324 | 1080 × 1920 | Olympic Forest Park | 28.69 | 6 |
| T3 | Ziziphus jujuba | 347 | 1080 × 1920 | Olympic Forest Park | 35.85 | 5 |
| T4 | Prunus sargentii | 402 | 1080 × 1920 | Sakura Park | 18.81 | 6 |
| T5 | Prunus ‘Kanzan’ | 290 | 1080 × 1920 | Sakura Park | 3.361 | 5 |
| T6 | Magnolia denudata | 347 | 1080 × 1920 | Sakura Park | 2.163 | 3 |
| T7 | Larix gmelinii | 358 | 1080 × 1920 | Sakura Park | 15.29 | 3 |

1 DBH and 2 Branching Order are actual values measured manually.
Table 2. Evaluation effect of 3D reconstruction of landscape trees.

| DATA | PSNR | SSIM | LPIPS |
|---|---|---|---|
| T1 | 26.51 | 0.878 | 0.128 |
| T2 | 25.16 | 0.841 | 0.137 |
| T3 | 24.57 | 0.837 | 0.152 |
| T4 | 24.41 | 0.806 | 0.150 |
| T5 | 24.23 | 0.808 | 0.152 |
| T6 | 24.22 | 0.801 | 0.157 |
| T7 | 24.27 | 0.802 | 0.155 |
Table 3. Landscape tree branch reconstruction rate and true branching order.

| DATA | Rate of 3rd (%) * | Real Branch Order |
|---|---|---|
| T1 | 100 | 3 |
| T2 | 100 | 6 |
| T3 | 100 | 5 |
| T4 | 100 | 6 |
| T5 | 100 | 5 |
| T6 | 100 | 3 |
| T7 | 100 | 3 |

* The reconstruction rate of 3rd-order branches.
Table 4. Comparison of DBH data of landscape trees.

| DATA | QSM DBH (cm) | Manual DBH (cm) | Error (cm) | Relative Error (%) |
|---|---|---|---|---|
| T1 | 12.97 | 14.46 | −1.49 | 10.3 |
| T2 | 27.10 | 28.69 | −1.59 | 5.5 |
| T3 | 34.49 | 35.85 | −1.36 | 3.8 |
| T4 | 17.26 | 18.81 | −1.55 | 8.2 |
| T5 | 2.871 | 3.361 | −0.49 | 14.6 |
| T6 | 1.853 | 2.163 | −0.31 | 14.3 |
| T7 | 14.21 | 15.29 | −1.08 | 7.1 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Li, J.; Huang, Q.; Wang, X.; Xi, B.; Duan, J.; Yin, H.; Li, L. A Method for the 3D Reconstruction of Landscape Trees in the Leafless Stage. Remote Sens. 2025, 17, 1473. https://doi.org/10.3390/rs17081473

