Article

Plant Population Classification Based on PointCNN in the Daliyabuyi Oasis, China

Dinghao Li, Qingdong Shi, Lei Peng and Yanbo Wan

1 College of Resources and Environment Science, Xinjiang University, Urumqi 830046, China
2 Key Laboratory of Oasis Ecology, Xinjiang University, Urumqi 830046, China
* Author to whom correspondence should be addressed.
Forests 2023, 14(10), 1943; https://doi.org/10.3390/f14101943
Submission received: 2 September 2023 / Revised: 20 September 2023 / Accepted: 22 September 2023 / Published: 24 September 2023
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)

Abstract

Populus euphratica and Tamarix chinensis hold significant importance in wind prevention, sand fixation, and biodiversity conservation. The precise extraction of these species can offer technical assistance for vegetation studies. This paper focuses on the Populus euphratica and Tamarix chinensis located within Daliyabuyi, utilizing PointCNN as the primary research method. After applying decorrelation stretching to the images, deep learning techniques were used to successfully distinguish between various vegetation types, thereby enhancing the precision of vegetation information extraction. On the validation dataset, the PointCNN model showed a high degree of accuracy, with average accuracy rates of 92.106% for Populus euphratica and 91.936% for Tamarix chinensis. The classification accuracy of the PointCNN model is superior to that of two-dimensional deep learning models. Additionally, this study extracted individual tree information for Populus euphratica, such as tree height, crown width, crown area, and crown volume, and a comparative analysis with the validation data attested to the accuracy of the extracted results. Furthermore, this research concluded that the batch size and block size used in deep learning model training can influence classification outcomes. In summary, compared with 2D deep learning models, the point cloud deep learning approach of the PointCNN model exhibits higher accuracy and reliability in classifying and extracting information for poplars and tamarisks. These findings offer valuable references and insights for remote sensing image processing and vegetation studies.

1. Introduction

Desertification is a global ecological and socioenvironmental issue [1,2]. Populus euphratica and Tamarix chinensis, the dominant species in desert vegetation, possess advantages such as drought resistance and wind/sand tolerance [3,4,5]. An ecosystem composed of Populus euphratica and Tamarix chinensis provides invaluable economic and ecosystem service value to arid regions [6,7,8]. These include windbreaks, sand fixation, water resource regulation, biodiversity conservation, pasture restoration, and carbon sequestration [9]. Tamarix chinensis and Populus euphratica play crucial roles in determining regional environmental changes in arid regions [3,9]. The accurate extraction of vegetation information can provide technical support for subsequent studies, such as vegetation surveys and biomass estimation. Therefore, research on Tamarix chinensis and Populus euphratica is important for the conservation and restoration of oasis ecosystems in the study area [10,11].
Traditional vegetation detection methods rely on periodic sampling [12,13]. A periodic sample survey is a statistical approach in which segments of a sample are examined at consistent intervals; rather than conducting continuous or exhaustive observations of the entire population, periodic sampling surveys provide information on species type, density, crown width, height, and other vegetation characteristics within sample plots [14,15]. This survey method typically covers only a small area, is time-consuming, and may yield subjective results. Moreover, it is challenging to spatially integrate survey results with other data. Additionally, conducting surveys in the desert hinterland is difficult and costly because of the environmental conditions required for the growth of Populus euphratica and Tamarix chinensis.
The use of unmanned aerial vehicles (UAVs) offers the advantage of collecting a wide range of geospatial information across a large area within relatively short timescales. UAVs provide a more efficient, safe, and accurate method for vegetation surveys, thereby enhancing survey efficiency [16,17,18]. The inclusion of geolocation information for trees in the results can greatly assist precise tree management [19,20,21], which, in turn, aids in assessing vegetation during its growth cycle [22].
With advancements in technology, new methods are continuously being applied to vegetation surveys. Although satellite remote sensing images can cover a relatively large area [23,24], their spatial resolution is often too low for accurate tree species identification [25], which poses considerable challenges for individual tree recognition among multiple taxa. Compared with satellite remote sensing images [19,26,27,28], such as MODIS, Landsat, and Sentinel-2, unmanned aerial vehicle (UAV) data can achieve subcentimeter spatial resolution and provide clearer imagery, and UAVs have become widely used for acquiring high-resolution images with resolutions below 10 cm [29]. Various methods currently exist for extracting vegetation information based on spectral information, but their accuracy is highly dependent on the type and distribution of the vegetation. Using multispectral UAV imagery, coniferous forests have been successfully identified at resolutions above 1 m, with identification accuracy regularly reaching 100% at a height of 2 m [29]. Similarly, Goodbody et al. [30] achieved an identification accuracy of over 86% for vegetation using RGB-derived spectral indices. These studies demonstrate the potential of UAVs for vegetation identification, including in complex scenarios.
Recently, deep learning has attracted considerable attention in the field of remote sensing. Two-dimensional deep learning algorithms have been effectively applied to the automatic classification of images and videos in various domains, such as precision agriculture [31], autonomous driving [32,33,34], and urban–rural surveys [35]. However, compared with two-dimensional images, three-dimensional point clouds contain more information, which has led to many attempts to apply deep learning to large-scale point clouds. For instance, Fareed et al. [36] achieved a land-type classification accuracy of 90% using PointCNN for farmland classification, and Shen et al. [37] extracted tree features from ground-based LiDAR point clouds using point-cloud deep learning with accuracy rates of 93% and 95%. These studies demonstrate the feasibility of three-dimensional deep learning in vegetation classification, although most of the experimental subjects were densely distributed artificial forests [38]. In contrast, the study area in this paper is characterized by intense sunlight and a mixed distribution of Tamarix chinensis and Populus euphratica [4]. Vegetation shadows caused by strong sunlight can obscure vegetation boundaries and, in RGB imagery, the spectral characteristics of dry branches can be similar to those of the desert [39], leading to suboptimal classification results and low confidence levels. Considering the blurry vegetation boundaries and complex features in the study area, this study aims to train a PointCNN-based model to automatically and efficiently extract features, distinguish different vegetation types, and extract individual tree information (Figure 1).

2. Materials and Methods

2.1. Study Area

The Daliyabuyi Oasis is located in Yutian County, Hetian Prefecture, Xinjiang Uygur Autonomous Region, China. Its geographical coordinates range from 38°16′ to 38°37′ N and 81°05′ to 81°46′ E, covering a total area of approximately 342 km². The oasis is situated downstream of the Keriya River and was formed as a terminal oasis through the effects of flooding and sediment deposition [40]. The Daliyabuyi Oasis is characterized by relatively flat terrain, with elevations ranging from 1061 m to 1177 m and an average elevation of approximately 1108 m. The primary water source for the oasis is melting snow and ice in the mountainous regions, and it is this water supply that has shaped the oasis landscape in the Daliyabuyi area [41]. In terms of landscape, the oasis comprises a natural green vegetation belt that has formed around the river channels, which serves as the main vegetation structural component of the oasis. On the flat land between river channels and towering mountains, mixed forests are dominated by Populus euphratica and Tamarix chinensis. In areas with an ample water supply, there are relatively high levels of vegetation cover, with waterlogged areas being less vegetated [42]. The plant flora of the Daliyabuyi Oasis belongs to the southern Xinjiang desert subregion of the Mongolian–Xinjiang region of the Palearctic Realm and the Tarim Basin subregion, and the dominant plant species are adapted to arid desert environments [43]. Representative desert vegetation elements include the Populus euphratica, Tamarix chinensis, and reed communities. The plant communities comprise a structure consisting of trees, shrubs, and grasses (Figure 2 and Figure 3).

2.2. Data

The data used in this study consisted of unmanned aerial vehicle (UAV) imagery and UAV point cloud data. The data were collected over a four-year period, from 2018 to 2022, resulting in 117 UAV images and point-cloud maps covering various regions within the oasis. The drones used for capturing images were the DJI Phantom 4 RTK and the DJI Phantom 4 Multispectral. During capture, the flight altitude was set at 35 m, with forward and side overlap rates of 80% and 70%, respectively. The point cloud data used in this study were obtained by post-processing the RTK images with Pix4D; after processing, additional shots were taken to address any blurred regions. The resulting point cloud data had an average density of 1563.62 points/m³, with the original spectral characteristics of the point cloud in the visible light range, encompassing the RGB bands. The RGB imagery has a resolution of 0.5 cm, and the multispectral imagery has a resolution of 2.5 cm. The Populus euphratica and Tamarix chinensis samples used to construct the dataset were obtained through field surveys and sample collection, as well as through manual visual interpretation of the UAV imagery based on the sample data. In total, 11,250 samples were collected, including 6045 Populus euphratica and 5205 Tamarix chinensis samples (Table 1).

2.3. Methods

2.3.1. Decorrelation Stretching

Decorrelation stretching is an image enhancement method based on principal component analysis (PCA); a closely related operation, zero-phase component analysis (ZCA) whitening, is available as an image augmentation feature in deep learning frameworks such as Keras. This method preserves the color information of the original image while minimizing distortion [44,45]. The main principle of decorrelation stretching is to amplify the weakly correlated information in an image [46,47], thereby increasing the saturation of the resulting image. It involves three stages [48,49]: transforming the original image bands into their principal components; individually stretching the transformed principal components; and performing an inverse transformation of the stretched principal components to display them in the original color space. In a desert ecosystem, vegetation usually appears green against a yellow background; after this transformation, the vegetation can be clearly distinguished from the background, providing a solid foundation for the subsequent PointCNN deep learning classification. We assigned the decorrelation-stretched image to the point cloud using the ‘Shade LAS’ tool in ArcGIS 10.8.
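To make the three stages concrete, the following is a minimal NumPy sketch of a PCA-based decorrelation stretch; the target standard deviation and 8-bit output range are illustrative assumptions, not the exact parameters used in this study.

```python
import numpy as np

def decorrelation_stretch(rgb: np.ndarray, target_std: float = 50.0) -> np.ndarray:
    """PCA-based decorrelation stretch of an H x W x 3 image (a sketch)."""
    h, w, c = rgb.shape
    pixels = rgb.reshape(-1, c).astype(np.float64)   # flatten to N x 3 band vectors
    mean = pixels.mean(axis=0)
    centered = pixels - mean
    cov = np.cov(centered, rowvar=False)             # 3 x 3 inter-band covariance
    eigvals, eigvecs = np.linalg.eigh(cov)           # principal components of the bands
    pcs = centered @ eigvecs                         # stage 1: rotate into PC space
    pcs *= target_std / np.sqrt(eigvals + 1e-12)     # stage 2: stretch each PC equally
    stretched = pcs @ eigvecs.T + mean               # stage 3: inverse transform
    return np.clip(stretched, 0, 255).reshape(h, w, c).astype(np.uint8)
```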

2.3.2. PointCNN

PointCNN is a network that generalizes traditional convolutional neural networks, introducing a new method that accounts for the irregular and unordered nature of point clouds [50]. PointCNN can consider the shapes of points without altering their order, making it particularly suitable for classifying complex vegetation information over large spatial areas [36].
The core component of the PointCNN framework is X-Conv, as shown in Figure 4. This operation involves grid reduction, in which the grid resolution is successively reduced and the number of channels is increased. Although each reduction decreases the number of points, each representative point contains additional information [37,51]. Similar to traditional convolutional neural networks, the point-cloud classification network learns from pre-classified point-cloud data, where each point has a unique class code. These class codes represent the features that the neural network is expected to recognize. Additionally, the presence of other attributes in the training dataset, such as intensity, RGB, and return count, can improve the accuracy of the model [52]. In point clouds, the term “neighborhood” usually refers to the spatial neighbors of a given point. In PointCNN, a neighborhood centered on each point is defined, encompassing its K closest neighbors; K is typically a hyperparameter, and a K-nearest-neighbor search is often used to define this neighborhood. Once the neighborhood of each point is identified, the X-Conv operation interacts with the features of the points in these neighborhoods through learned weights, extracting new features for each point. These newly extracted features capture local patterns and structures within the point cloud, providing valuable information for tasks such as point cloud classification and segmentation. During the training of the PointCNN model, the original point cloud dataset is first converted into point patches containing a specific number of points. These point patches are cubes of the same size, and the patches and their labels are passed to the model for training. This approach allows inputs with a specific order to be processed using X-transformation matrices, followed by convolutional operations, to obtain order-invariant features [50].
$f_1 = \mathrm{Conv}(K, X_1 \times [f_a, f_b, f_c, f_d]^T)$
$f_2 = \mathrm{Conv}(K, X_2 \times [f_a, f_b, f_c, f_d]^T)$
$f_3 = \mathrm{Conv}(K, X_3 \times [f_c, f_a, f_b, f_d]^T)$
Different input orders correspond to different X-transformation matrices. In the above equations, $X_2 = X_3 \times \Pi$ and $\Pi \times [f_a, f_b, f_c, f_d]^T = [f_c, f_a, f_b, f_d]^T$, where $\Pi$ is the permutation matrix relating the two input orders; that is, different X-transformation matrices are applied depending on the input order. When these equations are satisfied, the values of $f_2$ and $f_3$ are theoretically equal, so the output of the convolution is independent of the input-point order. Figure 5 shows the specific algorithm of the PointCNN model.
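For illustration, the following is a simplified PyTorch sketch of a single X-Conv step in the spirit of Li et al. [50]; the layer sizes and MLP layouts are readability-driven assumptions, not the configuration actually trained in this study.

```python
import torch
import torch.nn as nn

class SimpleXConv(nn.Module):
    """A single, simplified X-Conv step (illustrative, not the full PointCNN)."""
    def __init__(self, c_in: int, c_out: int, k: int):
        super().__init__()
        self.k = k
        c_delta = c_out // 2
        # MLP_delta lifts each neighbor's relative coordinates to features.
        self.mlp_delta = nn.Sequential(nn.Linear(3, c_delta), nn.ReLU())
        # MLP_X predicts the K x K transformation matrix from local geometry.
        self.mlp_x = nn.Sequential(nn.Linear(3 * k, k * k), nn.ReLU(),
                                   nn.Linear(k * k, k * k))
        # Final "convolution" over the K X-transformed neighbors.
        self.conv = nn.Linear(k * (c_delta + c_in), c_out)

    def forward(self, rep_xyz, nbr_xyz, nbr_feat):
        # rep_xyz: (B, 3) representative points; nbr_xyz: (B, K, 3) K-nearest
        # neighbors; nbr_feat: (B, K, C_in) neighbor features (e.g., RGB).
        local = nbr_xyz - rep_xyz.unsqueeze(1)          # center on the rep point
        f_delta = self.mlp_delta(local)                 # (B, K, C_delta)
        feat = torch.cat([f_delta, nbr_feat], dim=-1)   # (B, K, C_delta + C_in)
        x_mat = self.mlp_x(local.flatten(1)).view(-1, self.k, self.k)
        feat = torch.bmm(x_mat, feat)                   # X-transform: weight/permute
        return self.conv(feat.flatten(1))               # (B, C_out) per rep point
```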
In terms of describing model accuracy, this study utilizes the F1 score metric. The F1 score is particularly useful in ensuring a balance between precision and recall, especially in datasets where there is an imbalance between positive and negative samples. Relying solely on precision or recall might offer a skewed view of model performance. The F1 score combines both of these metrics, offering a more comprehensive evaluation of performance.
$F1 = \frac{2 \times P \times R}{P + R}$
where P stands for Precision and R for Recall. F1 is the harmonic mean of precision and recall.
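As a worked example, the snippet below computes the F1 score from hypothetical per-class prediction counts; the numbers are purely illustrative, not results from this study.

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    precision = tp / (tp + fp)   # P: correct positives / predicted positives
    recall = tp / (tp + fn)      # R: correct positives / actual positives
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts: 920 correctly labeled points, 80 false positives,
# 70 false negatives -> F1 is approximately 0.9246.
print(f1_score(920, 80, 70))
```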
In this study, to quantify the discrepancy between the model’s predictions and the actual values, and to provide an optimization objective for model training, the chosen loss function is the cross-entropy function. Taking a single point cloud as an example, PointCNN generates a probability distribution for each point cloud, indicating the probability that the point cloud belongs to each category. Let the model’s output for a given point cloud be $(p_1, p_2, \ldots, p_N)$, where $N$ is the total number of categories and $p_i$ is the probability that the point cloud belongs to the $i$-th category. Each point cloud has an actual category label, which can be transformed into a “one-hot” vector. For instance, if there are three categories and a point cloud’s actual category is the second one, its “one-hot” vector representation is $(0, 1, 0)$. The cross-entropy loss for a single point cloud is defined as follows [50,52]:
$L = -\sum_{i=1}^{N} y_i \log(p_i)$
where $y_i$ is the $i$-th element of the “one-hot” vector (either 0 or 1).
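The following snippet works through this loss for the three-category example above; the probability vector is an illustrative assumption.

```python
import numpy as np

probs = np.array([0.2, 0.7, 0.1])        # model output (p1, p2, p3) for one point cloud
one_hot = np.array([0, 1, 0])            # actual category is the second one
loss = -np.sum(one_hot * np.log(probs))  # only the true-category term survives
print(loss)                              # -log(0.7) ~ 0.357
```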

2.4. PointCNN Dataset Construction

In this study, the spectral information of the point-cloud data was transformed by assigning the RGB values from the DS-enhanced spectral information of the UAV orthoimage data to the point-cloud data. Based on the assigned UAV orthoimage data, 11,250 ground-validated samples of Populus euphratica and Tamarix chinensis were delineated from the point-cloud features of the objects, and the dataset was constructed from these samples. When constructing the dataset, we ensured that each block contained as many objects of interest as possible, considering the need to include information on the classified objects. The dataset included training, validation, and test data. Systematic sampling was used for the training dataset to reduce correlations between samples: one sample was selected at random within the first interval, and subsequent samples were taken at a fixed interval thereafter. Before deep learning began, 80 of the 117 images were selected in this way as the training dataset; from the remaining 37 images, 10 were randomly selected as the validation dataset, with the rest reserved for testing, as sketched below. The sample sets included Populus euphratica, Tamarix chinensis, and the background as the three object types. The PointCNN model was used for deep learning on the classified Populus euphratica and Tamarix chinensis point clouds in the training and validation datasets to capture the deep relationships between the point clouds and obtain a network model for testing. The test dataset was then classified, and the results were used for accuracy verification.
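A sketch of this split follows; the tile names and random seed are illustrative assumptions, and assigning the images left over after validation to the test set follows the split described above.

```python
import random

def systematic_sample(items, n_select, seed=42):
    """Pick one random start within the first interval, then sample at a fixed step."""
    rng = random.Random(seed)
    step = len(items) / n_select
    start = rng.uniform(0, step)
    return [items[int(start + i * step)] for i in range(n_select)]

tiles = [f"tile_{i:03d}" for i in range(117)]          # 117 UAV images
train = systematic_sample(tiles, 80)                   # 80 training images
remaining = [t for t in tiles if t not in train]       # 37 left over
random.Random(42).shuffle(remaining)
validation, test = remaining[:10], remaining[10:]      # 10 validation, 27 test
```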
In addition, to verify the advantages of PointCNN in terms of deep learning classification results and applicability, this study also conducted a control experiment using deep learning on images. The control experimental dataset followed the same structure as the point-cloud dataset. For the control group, U-net was used as the image segmentation network model and ResNet-50 was used as the core feature extraction network, considering the characteristics of the decorrelation-stretched images.

3. Results and Analysis

3.1. Decorrelation Stretching Results

The PointCNN point-cloud deep learning model used in this study incorporates the spectral features of the point cloud into the learning process; therefore, these spectral features affect the deep learning results. Enhancing the spectral information of images through decorrelation stretching (DS) is beneficial for distinguishing different objects and improving the accuracy of vegetation information extraction. Green vegetation absorbs red light and reflects green light, which helps differentiate vegetation from other objects. In Figure 6, the original RGB image is shown in true color, and there is a high correlation between the three bands: Tamarix chinensis appears greyish pink, whereas Populus euphratica appears light green. These colors are close to the surrounding desert colors, and shadows connected to the vegetation make classification more challenging. After DS enhancement, Populus euphratica showed a vibrant green color and Tamarix chinensis a greenish pink color; meanwhile, the shadowed areas became brighter and the contours of the vegetation became clearer, effectively reducing the interference of shadows on the results. As shown in Figure 7, a comparison of the grayscale histograms before and after processing shows that the DS-enhanced image exhibited greater differences in spectral features among the different objects, reduced inter-band correlation, and improved contrast. This image was more conducive to vegetation classification, yielding more accurate results.

3.2. PointCNN Classification Results

On the validation dataset, the PointCNN deep learning model showed an average accuracy of 92.106% for Populus euphratica and 91.936% for Tamarix chinensis. The loss function is shown in Figure 8b. As the number of learning iterations increased, the loss on the training samples decreased, indicating that the deep learning model converged toward a global optimum. However, when training the PointCNN network on complex samples, such as mixed occurrences of Populus euphratica and Tamarix chinensis, local fluctuations may occur in the loss values. Binary classification experiments were conducted separately for the point-cloud samples of Populus euphratica and Tamarix chinensis: in the binary classification of Populus euphratica, Populus euphratica points within the Tamarix chinensis region of interest were treated as background and, similarly, in the binary classification of Tamarix chinensis, Tamarix chinensis points within the Populus euphratica region of interest were treated as background. The results of these two separate classifications were compared with those obtained when Populus euphratica and Tamarix chinensis were set as different regions of interest; the corresponding loss curves are shown in Figure 8a. Setting different regions of interest for the samples produced better results than binary sample classification. This is because binary samples do not convey information about the other vegetation type within the background during learning and classification, resulting in misclassification where Populus euphratica and Tamarix chinensis are in contact or have similar heights or RGB values, and thus lowering accuracy. The classification results are shown in Figure 9. When Tamarix chinensis was distributed in large patches, extracting individual tree information was more challenging, and the model tended to classify multiple plants as a single patch.

3.3. Extraction of Individual Tree Information for Populus euphratica

Populus euphratica individuals were spaced further apart than Tamarix chinensis individuals. Using the PointCNN classification results, individual tree information for Populus euphratica, such as tree height, crown width, crown area, and crown volume, was extracted. However, the unmanned aerial vehicle (UAV) captured incomplete images of the tree trunks owing to occlusion, so trunk diameter extraction does not yield accurate results; this study therefore did not analyze trunk diameters.
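To make the extraction step concrete, the sketch below derives tree height, crown diameter, crown area, and crown volume from one segmented crown; the convex-hull crown outline and the prism-style volume approximation are simplifying assumptions, not necessarily the exact formulas used in this study.

```python
import numpy as np
from scipy.spatial import ConvexHull

def tree_metrics(points: np.ndarray, ground_z: float) -> dict:
    """points: (N, 3) x, y, z coordinates of one segmented Populus euphratica."""
    height = points[:, 2].max() - ground_z                 # tree height above ground
    hull2d = ConvexHull(points[:, :2])                     # crown outline in the x-y plane
    crown_area = hull2d.volume                             # for a 2D hull, .volume is the area
    crown_diameter = np.ptp(points[:, :2], axis=0).mean()  # mean of x and y extents
    crown_depth = np.ptp(points[:, 2])                     # vertical crown extent
    crown_volume = crown_area * crown_depth                # simple prism approximation
    return {"height": height, "crown_diameter": crown_diameter,
            "crown_area": crown_area, "crown_volume": crown_volume}

# Synthetic example: 500 random points forming a crown between 3 m and 6 m.
pts = np.column_stack([np.random.uniform(-2, 2, 500),
                       np.random.uniform(-2, 2, 500),
                       np.random.uniform(3, 6, 500)])
print(tree_metrics(pts, ground_z=0.0))
```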
Figure 10 shows the classified samples of Populus euphratica. Individual tree information for Populus euphratica was obtained by performing statistical analyses of these samples. As the ground validation data only provided tree height and crown width for Populus euphratica, a fitting analysis was conducted to examine the relationship between the actual and predicted values. Figure 11a,b shows this relationship: the Pearson correlation coefficient for tree height was 0.89659, with an R-squared value of 0.80388, and the Pearson correlation coefficient for crown diameter was 0.86406, with an R-squared value of 0.7466. A comparison reveals that the extraction errors for tree height ranged from −1.8 to 1.6, while the extraction errors for crown diameter ranged from −1.6 to 1.4. However, most of the tree height errors were concentrated within the range of −0.4 to 0.4, while the crown diameter errors were concentrated within the range of −0.8 to 0.8. An analysis of Figure 11e,f indicates that tree height could affect the errors. Outliers may occur during the extraction process because individual noisy points that are not removed can inflate the results. In addition, incomplete scanning or data loss from the top canopy of individual trees during image capture can result in underestimation. The extraction results were compared with the actual values within this range, and the extracted tree heights, crown areas, and crown volumes were plotted on a surface graph, as shown in Figure 12.
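The accuracy assessment above can be reproduced along the following lines; the measured and extracted heights shown are placeholders, not the study's validation data.

```python
import numpy as np
from scipy import stats

measured = np.array([4.2, 6.8, 5.1, 7.9, 3.6])    # field-measured tree heights (illustrative)
extracted = np.array([4.0, 7.1, 5.3, 7.5, 3.9])   # heights from the classified point cloud

r, _ = stats.pearsonr(measured, extracted)        # Pearson correlation coefficient
result = stats.linregress(measured, extracted)    # linear fit of extracted vs. measured
print(f"Pearson r = {r:.4f}, R^2 = {result.rvalue ** 2:.4f}")
```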

3.4. Comparison with Two-Dimensional Deep Learning

In classification, this study categorized the features into Populus euphratica, Tamarix chinensis, and background (mainly consisting of desert and herbaceous vegetation). The accuracy statistics list the precision for Populus euphratica and Tamarix chinensis as well as the overall accuracy, as shown in Table 2. Based on deep-learning classification using true-color imagery in the validation dataset, the average accuracy for Populus euphratica and Tamarix chinensis was 85.343% and 82.293%, respectively; using multispectral imagery, it was 75.922% and 75.632%, respectively. Both models exhibited global optimization and convergence, but their classification accuracy was lower than that of the PointCNN model (Figure 13).
The low classification accuracy of the original imagery was attributed to confusion with the surrounding environment when the model identified the vegetation boundaries. Additionally, the diverse shapes of different vegetation samples significantly increased the difficulty of extracting individual tree contours, making it challenging for the model to accurately recognize and classify them. Although multispectral imagery includes four bands, its lower resolution fails to express object information accurately, resulting in reduced classification accuracy. Moreover, variations in lighting intensity and direction can introduce biases in the UAV data captured at different times, and shadows cast by objects could be misclassified as noisy data during deep learning model training, affecting the accuracy of the training results and even causing training failure. The classification results for the validation dataset are presented in Table 2 (Figure 14).

4. Discussion

Table 2 shows that the classification accuracy of the PointCNN deep learning model for Populus euphratica ranges from 88% to 96%, while for Tamarix chinensis it ranges from 87% to 94%. The higher accuracy for Populus euphratica is because it grows as individual trees with larger distances between them, and the density of ground objects affects classification accuracy [53]; this spacing makes Populus euphratica easier to distinguish from other objects during classification. In contrast, Tamarix chinensis grows in clusters, and its shape is similar to that of common sand mounds in the desert, making it more prone to confusion and lower accuracy during classification. Overall, the PointCNN deep-learning model achieved more successful classification results than the two-dimensional image-based deep learning models [54].
During the training of the deep learning model, it was observed that the batch size affected model accuracy, with a batch size of 16 yielding the best results; a batch size of 16 was therefore selected. Additionally, the size of the segmentation blocks was considered to affect the accuracy of object recognition [55], with different block volumes producing variations in extraction accuracy. Larger block volumes contain more information, but they also lead to slower processing, require more memory, and generate more noisy data, which can cause confusion and decrease classification accuracy. By contrast, smaller block volumes may result in overfitting and deep learning failure. In this study, an optimal block size of 200 × 200 × 400 ft was determined based on the volumes of Populus euphratica and Tamarix chinensis within the study area.
This study focused on the populations of Populus euphratica and Tamarix chinensis in the Daliyabuyi Oasis. The PointCNN deep learning model was used to improve the recognition ability and classification accuracy of the different vegetation communities by modifying the RGB attributes of the point cloud samples. By comparing the results of two-dimensional deep learning and point-cloud deep learning, it could be observed that the accuracy of extracting Populus euphratica was higher than that of Tamarix chinensis. This is because most Populus euphratica trees grow individually with distinct contours, allowing for the more effective recognition and extraction of crown information.
The relationship function was obtained by fitting the ground validation data, and it indicated a positive correlation between the crown diameter and tree height of Populus euphratica. However, significant growth variations were observed, resulting in different crown diameters for trees of the same height. An analysis of the distribution of the validation points relative to the fitted function reveals that the points tend to fall below it. This is because the model categorizes a few contour points as noise during point-cloud computation and subsequently removes them. By plotting the extracted tree height, crown diameter, crown area, and crown volume as separate surface graphs, we observed that the crown area was generally distributed from the bottom left to the top right: as tree height increased, the crown diameter and crown volume tended to increase. However, there were exceptions in which tall trees did not exhibit a proportional increase in crown diameter and crown area, even falling below the values of vegetation of the same height. This can be attributed to factors such as diseases, pests, and interspecific competition, which affect the growth of Populus euphratica. The overall distribution trend of the crown volume is similar to that of the crown area; however, sudden decreases or increases in crown volume may occur because, in addition to diseases, pests, and interspecific competition, individual growth variations contribute to these fluctuations.
To demonstrate the superior accuracy of the point-cloud-based deep learning model compared to traditional pixel- or grayscale-based deep learning models for large-scale vegetation classification, this study compared the PointCNN classification results with the results from multispectral and true-color UAV imagery. PointCNN showed a significant advantage in background extraction compared with the two-dimensional deep learning methods. This is because, in remote sensing images, background elements such as grassland have spectral information that is relatively similar to that of Populus euphratica and Tamarix chinensis. This similarity makes it harder for two-dimensional models to distinguish targets accurately, leading to biased results. In contrast, PointCNN considers the positional information of points with similar spectral characteristics, enabling better discrimination between them. Therefore, compared with traditional models, the point cloud deep learning model not only achieves higher accuracy but can also classify vegetation types and extract vegetation information, an ability that traditional models lack. It can reduce fieldwork during vegetation surveys and provide better conditions for vegetation information surveys. The experiments proved the effectiveness of the network model, and the accuracy on the test data satisfied the requirements for practical applications. This provides a practical and effective deep learning model for the rapid extraction of vegetation communities in large-scale desert oases. It also provides strong support for studying coupling relationships among different vegetation populations, carbon sources and sinks, vegetation biomass estimation, soil and water conservation, soil physicochemical properties, forest fire prevention, and ecological environment protection [56].
In addition, this study has certain limitations. First, obtaining large-scale, highly accurate point cloud data can be challenging: delineating the training sample dataset is time-consuming and requires careful delineation of vegetation boundaries to ensure the accuracy of the experimental results. It is also important to ensure consistency in lens models, flight altitudes, and other parameters when acquiring orthoimagery or point cloud data. In our experiments, inconsistent pixel sizes between the training and validation data worsened the classification results, especially for two-dimensional deep learning; inconsistencies in these acquisition parameters therefore produce variations in pixel size that reduce classification accuracy. These limitations should be taken into account when interpreting the results and considering the practical implementation of the proposed approach.
In the future, land cover classification could be enhanced by combining UAV multispectral and hyperspectral imagery. By acquiring a larger and more diverse set of samples, the training data can better represent the variability and complexity of the land cover classes, capturing a wider range of features and improving the discriminative power of the classification model. Ensuring sample quality through rigorous data collection protocols, preprocessing techniques, and accurate ground-truth labeling can further improve the reliability and accuracy of the classification results. By effectively leveraging both the spatial and spectral information provided by multispectral and hyperspectral data, it is possible to achieve more precise and detailed land cover classification, enabling applications in environmental monitoring, precision agriculture, land management, and ecological research.

5. Conclusions

(1) When training deep learning models, factors such as the learning rate, batch size, and block size should be considered. In this study, the selected parameters were 0.001 for the learning rate, a batch size of 16, and a block size of 200 × 200 × 400 feet.
(2) Compared with two-dimensional image-based deep learning models, the PointCNN method demonstrates considerable advantages in terms of classification accuracy, particularly in flat terrain areas with clear target objects and minimal interference.
(3) Based on PointCNN, a strong performance was achieved in the single-tree classification of vegetation populations in the oasis with an accuracy of 93.5443% in the test dataset. These favorable classification results have demonstrated the feasibility of PointCNN for vegetation classification and provide an effective method for extracting vegetation populations from point-cloud data.

Author Contributions

Conceptualization, Q.S.; investigation, D.L., Y.W. and L.P.; writing—original draft preparation, D.L.; writing—review and editing, D.L. and Y.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the National Natural Science Foundation of China (No. 32160260, U1703237).

Data Availability Statement

Not available.

Acknowledgments

The authors would like to express their gratitude to Yanbo Wan and Lei Peng, from the School of Ecology and Environmental Sciences, Xinjiang University, for their valuable assistance in the writing of this paper. Their guidance and support greatly contributed to the completion of this research work.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Bezerra, F.G.S.; Aguiar, A.P.D.; Alvalá, R.C.S.; Giarolla, A.; Bezerra, K.R.A.; Lima, P.V.P.S.; Do Nascimento, F.R.; Arai, E. Analysis of Areas Undergoing Desertification, Using EVI2 Multi-Temporal Data Based on MODIS Imagery as Indicator. Ecol. Indic. 2020, 117, 106579.
2. Ma, X.; Zhu, J.; Yan, W.; Zhao, C. Projections of Desertification Trends in Central Asia under Global Warming Scenarios. Sci. Total Environ. 2021, 781, 146777.
3. Aishan, T.; Halik, Ü.; Betz, F.; Gärtner, P.; Cyffka, B. Modeling Height–Diameter Relationship for Populus euphratica in the Tarim Riparian Forest Ecosystem, Northwest China. J. For. Res. 2016, 27, 889–900.
4. Xiao, F.; Li, Y.; Li, G.; He, Y.; Lv, X.; Zhuang, L.; Pu, X. High throughput sequencing-based analysis of the soil bacterial community structure and functions of Tamarix shrubs in the lower reaches of the Tarim River. PeerJ 2021, 9, e12105.
5. Bencherif, K.; Trodi, F.; Hamidi, M.; Dalpè, Y.; Hadj-Sahraoui, A.L. Biological overview and adaptability strategies of Tamarix plants, T. articulata and T. gallica to abiotic stress. Plant Stress Biol. Strateg. Trends 2020, 401–433.
6. Chen, Y.; Chen, Y.; Xu, C.; Li, W. The Effects of Groundwater Depth on Water Uptake of Populus Euphratica and Tamarix Ramosissima in the Hyperarid Region of Northwestern China. Environ. Sci. Pollut. Res. 2016, 23, 17404–17412.
7. Li, D.; Si, J.; Zhang, X.; Gao, Y.; Luo, H.; Qin, J.; Gao, G. Comparison of Branch Water Relations in Two Riparian Species: Populus Euphratica and Tamarix Ramosissima. Sustainability 2019, 11, 5461.
8. Zhang, T.; Chen, Y.; Ali, S. Abiotic Stress and Human Activities Reduce Plant Diversity in Desert Riparian Forests. Ecol. Indic. 2023, 152, 110340.
9. Lang, P.; Jeschke, M.; Wommelsdorf, T.; Backes, T.; Lv, C.; Zhang, X.; Thomas, F.M. Wood Harvest by Pollarding Exerts Long-Term Effects on Populus Euphratica Stands in Riparian Forests at the Tarim River, NW China. For. Ecol. Manag. 2015, 353, 87–96.
10. Venter, Z.S.; Scott, S.L.; Desmet, P.G.; Hoffman, M.T. Application of Landsat-Derived Vegetation Trends over South Africa: Potential for Monitoring Land Degradation and Restoration. Ecol. Indic. 2020, 113, 106206.
11. Li, H.; Shi, Q.; Wan, Y.; Shi, H.; Imin, B. Influence of Surface Water on Desert Vegetation Expansion at the Landscape Scale: A Case Study of the Daliyabuyi Oasis, Taklamakan Desert. Sustainability 2021, 13, 9522.
12. Buffi, G.; Manciola, P.; Grassi, S.; Barberini, M.; Gambi, A. Survey of the Ridracoli Dam: UAV–Based Photogrammetry and Traditional Topographic Techniques in the Inspection of Vertical Structures. Geomat. Nat. Hazards Risk 2017, 8, 1562–1579.
13. Ruppert, K.M.; Kline, R.J.; Rahman, M.S. Past, Present, and Future Perspectives of Environmental DNA (EDNA) Metabarcoding: A Systematic Review in Methods, Monitoring, and Applications of Global EDNA. Glob. Ecol. Conserv. 2019, 17, e00547.
14. Lee, S.; Xiao, C.; Pei, S. Ethnobotanical Survey of Medicinal Plants at Periodic Markets of Honghe Prefecture in Yunnan Province, SW China. J. Ethnopharmacol. 2008, 117, 362–377.
15. Xiao, Q.; Ustin, S.L.; McPherson, E.G. Using AVIRIS data and multiple-masking techniques to map urban forest tree species. Int. J. Remote Sens. 2004, 25, 5637–5654.
16. Ayhan, B.; Kwan, C.; Budavari, B.; Kwan, L.; Lu, Y.; Perez, D.; Li, J.; Skarlatos, D.; Vlachos, M. Vegetation Detection Using Deep Learning and Conventional Methods. Remote Sens. 2020, 12, 2502.
17. Fawcett, D.; Panigada, C.; Tagliabue, G.; Boschetti, M.; Celesti, M.; Evdokimov, A.; Biriukova, K.; Colombo, R.; Miglietta, F.; Rascher, U.; et al. Multi-Scale Evaluation of Drone-Based Multispectral Surface Reflectance and Vegetation Indices in Operational Conditions. Remote Sens. 2020, 12, 514.
18. Manfreda, S.; McCabe, M.F.; Miller, P.E.; Lucas, R.; Pajuelo Madrigal, V.; Mallinis, G.; Ben Dor, E.; Helman, D.; Estes, L.; Ciraolo, G.; et al. On the use of unmanned aerial systems for environmental monitoring. Remote Sens. 2018, 10, 641.
19. Immitzer, M.; Atzberger, C.; Koukal, T. Tree Species Classification with Random Forest Using Very High Spatial Resolution 8-Band WorldView-2 Satellite Data. Remote Sens. 2012, 4, 2661–2693.
20. Farrell, S.L.; Collier, B.A.; Skow, K.L.; Long, A.M.; Campomizzi, A.J.; Morrison, M.L.; Hays, K.B.; Wilkins, R.N. Using LiDAR-Derived Vegetation Metrics for High-Resolution, Species Distribution Models for Conservation Planning. Ecosphere 2013, 4, art42.
21. Immitzer, M.; Böck, S.; Einzmann, K.; Vuolo, F.; Pinnel, N.; Wallner, A.; Atzberger, C. Fractional Cover Mapping of Spruce and Pine at 1 Ha Resolution Combining Very High and Medium Spatial Resolution Satellite Imagery. Remote Sens. Environ. 2018, 204, 690–703.
22. Kaartinen, H.; Hyyppä, J.; Yu, X.; Vastaranta, M.; Hyyppä, H.; Kukko, A.; Holopainen, M.; Heipke, C.; Hirschmugl, M.; Morsdorf, F.; et al. An International Comparison of Individual Tree Detection and Extraction Using Airborne Laser Scanning. Remote Sens. 2012, 4, 950–974.
23. Vescovo, L.; Wohlfahrt, G.; Balzarolo, M.; Pilloni, S.; Sottocornola, M.; Rodeghiero, M.; Gianelle, D. New Spectral Vegetation Indices Based on the Near-Infrared Shoulder Wavelengths for Remote Detection of Grassland Phytomass. Int. J. Remote Sens. 2012, 33, 2178–2195.
24. Ji, W.; Wang, L. Phenology-Guided Saltcedar (Tamarix Spp.) Mapping Using Landsat TM Images in Western U.S. Remote Sens. Environ. 2016, 173, 29–38.
25. Diao, C.; Wang, L. Incorporating Plant Phenological Trajectory in Exotic Saltcedar Detection with Monthly Time Series of Landsat Imagery. Remote Sens. Environ. 2016, 182, 60–71.
26. Gandhi, G.M.; Parthiban, S.; Thummalu, N.; Christy, A. Ndvi: Vegetation Change Detection Using Remote Sensing and Gis—A Case Study of Vellore District. Procedia Comput. Sci. 2015, 57, 1199–1210.
27. Zhou, L.; Lyu, A. Investigating Natural Drivers of Vegetation Coverage Variation Using MODIS Imagery in Qinghai, China. J. Arid Land 2016, 8, 109–124.
28. Næsset, E.; Nelson, R. Using Airborne Laser Scanning to Monitor Tree Migration in the Boreal–Alpine Transition Zone. Remote Sens. Environ. 2007, 110, 357–369.
29. Dash, J.P.; Watt, M.S.; Paul, T.S.H.; Morgenroth, J.; Hartley, R. Taking a Closer Look at Invasive Alien Plant Research: A Review of the Current State, Opportunities, and Future Directions for UAVs. Methods Ecol. Evol. 2019, 10, 2020–2033.
30. Goodbody, T.R.H.; Coops, N.C.; Hermosilla, T.; Tompalski, P.; Crawford, P. Assessing the Status of Forest Regeneration Using Digital Aerial Photogrammetry and Unmanned Aerial Systems. Int. J. Remote Sens. 2018, 39, 5246–5264.
31. Roy, K.; Chaudhuri, S.S.; Pramanik, S. Deep Learning Based Real-Time Industrial Framework for Rotten and Fresh Fruit Detection Using Semantic Segmentation. Microsyst. Technol. 2021, 27, 3365–3375.
32. Kim, W.-S.; Lee, D.-H.; Kim, T.; Kim, H.; Sim, T.; Kim, Y.-J. Weakly Supervised Crop Area Segmentation for an Autonomous Combine Harvester. Sensors 2021, 21, 4801.
33. Wu, H.; Liang, C.; Liu, M.; Wen, Z. Optimized HRNet for Image Semantic Segmentation. Expert Syst. Appl. 2021, 174, 114532.
34. Zhang, Y.; Lu, Z.; Zhang, X.; Xue, J.-H.; Liao, Q. Deep Learning in Lane Marking Detection: A Survey. IEEE Trans. Intell. Transport. Syst. 2022, 23, 5976–5992.
35. Zhang, M.; Li, Z.; Wu, X. Semantic segmentation method accelerated quantitative analysis of the spatial characteristics of traditional villages. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2021, 46, 933–939.
36. Fareed, N.; Flores, J.P.; Das, A.K. Analysis of UAS-LiDAR Ground Points Classification in Agricultural Fields Using Traditional Algorithms and PointCNN. Remote Sens. 2023, 15, 483.
37. Shen, X.; Huang, Q.; Wang, X.; Li, J.; Xi, B. A Deep Learning-Based Method for Extracting Standing Wood Feature Parameters from Terrestrial Laser Scanning Point Clouds of Artificially Planted Forest. Remote Sens. 2022, 14, 3842.
38. Jeong, S.; Otsuki, K.; Shinohara, Y.; Inoue, A.; Ichihashi, R. Stemflow Estimation Models for Japanese Cedar and Cypress Plantations Using Common Forest Inventory Data. Agric. For. Meteorol. 2020, 290, 107997.
39. Yang, H.; Du, J. Classification of Desert Steppe Species Based on Unmanned Aerial Vehicle Hyperspectral Remote Sensing and Continuum Removal Vegetation Indices. Optik 2021, 247, 167877.
40. Peng, L.; Shi, Q.-D.; Wan, Y.-B.; Shi, H.-B.; Kahaer, Y.; Abudu, A. Impact of Flooding on Shallow Groundwater Chemistry in the Taklamakan Desert Hinterland: Remote Sensing Inversion and Geochemical Methods. Water 2022, 14, 1724.
41. Allison, G.B.; Gee, G.W.; Tyler, S.W. Vadose-Zone Techniques for Estimating Groundwater Recharge in Arid and Semiarid Regions. Soil Sci. Soc. Am. J. 1994, 58, 6–14.
42. Tayir, M.; Dai, Y.; Shi, Q.; Abdureyim, A.; Erkin, F.; Huang, W. Distinct leaf functional traits of Tamarix chinensis at different habitats in the hinterland of the Taklimakan desert. Front. Plant Sci. 2023, 13, 1094049.
43. Wang, N.; Cheng, W.; Wang, B.; Liu, Q.; Zhou, C. Geomorphological Regionalization Theory System and Division Methodology of China. J. Geogr. Sci. 2020, 30, 212–232.
44. Ali-Bik, M.W.; Gabr, S.S.; Hassan, S.M. Spectral Characteristics, Petrography and Opaque Mineralogy of the Oligo-Miocene Basalts at Wadi Abu Qada-Wadi Wata Area, West-Central Sinai, Egypt. Egypt. J. Remote Sens. Space Sci. 2022, 25, 529–540.
45. Gillespie, A.R.; Kahle, A.B.; Walker, R.E. Color Enhancement of Highly Correlated Images. I. Decorrelation and HSI Contrast Stretches. Remote Sens. Environ. 1986, 20, 209–235.
46. Shakya, A.K.; Ramola, A.; Vidyarthi, A.; Sawant, K. Satellite Image Enhancement for Small Particle Observation Using Decorrelation Stretcher. In Proceedings of the 2020 International Conference on Advances in Computing, Communication & Materials (ICACCM), Dehradun, India, 21–22 August 2020; IEEE: Dehradun, India, 2020; pp. 65–70.
47. Rajendran, S.; Vethamony, P.; Sadooni, F.N.; Al-Kuwari, H.A.-S.; Al-Khayat, J.A.; Govil, H.; Nasir, S. Sentinel-2 Image Transformation Methods for Mapping Oil Spill—A Case Study with Wakashio Oil Spill in the Indian Ocean, off Mauritius. MethodsX 2021, 8, 101327.
48. Campbell, N.A. The Decorrelation Stretch Transformation. Int. J. Remote Sens. 1996, 17, 1939–1949.
49. Ch. Miliaresis, G. Spatial Decorrelation Stretch of Annual (2003–2014) Daymet Precipitation Summaries on a 1-Km Grid for California, Nevada, Arizona, and Utah. Environ. Monit. Assess. 2016, 188, 361.
50. Li, Y.; Bu, R.; Sun, M.; Wu, W.; Di, X.; Chen, B. PointCNN: Convolution On X-Transformed Points. Adv. Neural Inf. Process. Syst. 2018, 31, 828–838.
51. Lee, J.; Cheon, S.-U.; Yang, J. Connectivity-Based Convolutional Neural Network for Classifying Point Clouds. Pattern Recognit. 2021, 112, 107708.
52. Widyaningrum, E.; Bai, Q.; Fajari, M.K.; Lindenbergh, R.C. Airborne Laser Scanning Point Cloud Classification Using the DGCNN Deep Learning Method. Remote Sens. 2021, 13, 859.
53. Ao, Z.; Wu, F.; Hu, S.; Sun, Y.; Su, Y.; Guo, Q.; Xin, Q. Automatic Segmentation of Stem and Leaf Components and Individual Maize Plants in Field Terrestrial LiDAR Data Using Convolutional Neural Networks. Crop J. 2022, 10, 1239–1250.
54. Zhou, Y.; Ji, A.; Zhang, L. Sewer defect detection from 3D point clouds using a transformer-based deep learning model. Autom. Constr. 2022, 136, 104163.
55. Guan, H.; Yu, Y.; Ji, Z.; Li, J.; Zhang, Q. Deep Learning-Based Tree Classification Using Mobile LiDAR Data. Remote Sens. Lett. 2015, 6, 864–873.
56. Yan, Y.; Deng, L.; Liu, X.; Zhu, L. Application of UAV-Based Multi-Angle Hyperspectral Remote Sensing in Fine Vegetation Classification. Remote Sens. 2019, 11, 2753.
Figure 1. (1–4) Field photographs of Populus euphratica and Tamarix chinensis; (5) P4_RTK sampling; (6) collection of vegetation information.
Figure 2. Schematic diagram of the research area.
Figure 3. Sample point map. (A) shows the sample data of the field survey, and (B–F) show sample data from visual interpretation (examples). (A) is a Sentinel-2 image (RGB), and (B–F) are drone images (RGB).
Figure 4. Concept of PointCNN, which includes two convolutional operations.
Figure 5. A PointCNN model with two X-Conv layers.
Figure 6. Decorrelation stretching diagram: (a) the original image; (b) the image after decorrelation stretching.
Figure 7. Comparison of gray values before and after decorrelation stretching.
Figure 8. (a) Loss for samples in different regions of interest; (b) loss for binary sample classification.
Figure 9. Deep learning classification map of the Populus euphratica and Tamarix chinensis point cloud. ① Classification result for Populus euphratica; ② and ④ classification results for mixed Populus euphratica and Tamarix chinensis; ③ classification result for interval growth of Populus euphratica and Tamarix chinensis.
Figure 10. Parameter identification of Populus euphratica.
Figure 11. Comparison of extraction results and real values.
Figure 12. The relationship between crown area, crown volume, tree height, and crown diameter. The different colors in the figure represent the relationship between crown area, tree height, and crown diameter; within a single color, the relationship between tree height and crown diameter is shown.
Figure 13. Comparison of PointCNN and ResNet-50 results.
Figure 14. Classification results.
Table 1. P4_Multispectral and P4_RTK technical data sheet.

P4_Multispectral
  Aircraft: take-off weight, 1487 g; maximum flight altitude, 6000 m; flight time, 27 min; operating frequency, 5.725–5.850 GHz.
  Footage: visible-light imaging, Red, Green, and Blue (RGB) synthesis; multiband imaging, Blue (B) 450 ± 16 nm, Green (G) 560 ± 16 nm, Red (R) 650 ± 16 nm, Red edge (RE) 730 ± 16 nm, Near infrared (NIR) 840 ± 16 nm.
P4_RTK
  Aircraft: take-off weight, 1391 g; maximum flight altitude, 6000 m; flight time, 30 min; operating frequency, 5.725–5.850 GHz.
  Footage: visible-light imaging, Red, Green, and Blue (RGB) synthesis.
Table 2. Classification results of each validation dataset (columns 1–10 are the ten validation images).

Populus euphratica recognition rate/%
  point cloud:    0.9236  0.8901  0.9627  0.9039  0.9545  0.9453  0.9561  0.8864  0.8921  0.9281
  true color:     0.8725  0.8260  0.8311  0.8724  0.8315  0.7954  0.8154  0.8564  0.8160  0.8545
  multispectral:  0.7356  0.7632  0.6742  0.7856  0.7742  0.7885  0.8200  0.7145  0.7532  0.7832
Tamarix chinensis recognition rate/%
  point cloud:    0.9021  0.8765  0.9421  0.8842  0.9488  0.9324  0.9243  0.9075  0.9283  0.9152
  true color:     0.8362  0.8105  0.7342  0.7762  0.8779  0.8643  0.8456  0.8546  0.7363  0.8935
  multispectral:  0.7156  0.8321  0.6842  0.7819  0.7937  0.7746  0.7432  0.7821  0.7544  0.7014
F1
  point cloud:    0.8789  0.8110  0.8436  0.7985  0.8406  0.7508  0.8523  0.7331  0.7058  0.8746
  true color:     0.8437  0.8263  0.7900  0.8420  0.7593  0.9304  0.8467  0.8495  0.7744  0.7459
  multispectral:  0.7385  0.7819  0.8331  0.8437  0.7615  0.6584  0.7416  0.7373  0.6543  0.73855