Extraction of Yardang Characteristics Using Object-Based Image Analysis and Canny Edge Detection Methods

Abstract: Parameters of geomorphological characteristics are critical for research on yardangs. However, low-cost, accurate, and automatic or semi-automatic methods for extracting these parameters are limited. We present here two semi-automatic techniques for this purpose: object-based image analysis (OBIA) and Canny edge detection (CED), using free, very high spatial resolution images from Google Earth. We chose yardang fields in Dunhuang, west China, to test the methods. Our results showed that the extractions registered an overall accuracy of 92.26% with a Kappa coefficient of agreement of 0.82 at a segmentation scale of 52 using the OBIA method, and the extraction of yardangs had the highest accuracy at medium segmentation scales (138, 145). Using CED, we resampled the experimental image subset to a series of lower spatial resolutions to eliminate noise. The total length of yardang boundaries showed a logarithmically decreasing trend (R² = 0.904) with decreasing spatial resolution, and there was also a linear relationship between yardang median widths and spatial resolutions (R² = 0.95). Despite the difficulty of identifying shadows, the CED method achieved an overall accuracy of 89.23% with a Kappa coefficient of agreement of 0.72, similar to that of the OBIA method at a medium segmentation scale (138).

Figure 1. The original very high-resolution image subset from Google Earth (displayed in true color) of the study area.

We here seek to fill this important research niche by focusing on: (1) extraction of yardang parameters from a very high spatial resolution image subset in Google Earth based on OBIA and Canny edge detection (CED) methods; and (2) discussion of the relationship between segmentation scale, spatial image resolution, and yardang size.

The Study Area
The study area is located in the northwest of Dunhuang, Gansu Province, China, 160 km away from Dunhuang City [41], on the northeast margin of the Tibetan Plateau. It is characterized by a typical arid climate, with a mean annual temperature and precipitation of 9.2 °C and 44.5 mm, respectively. The mean annual evaporation is 2444 mm [42]. The main landform types in the area are yardangs, black gobi in the corridors among yardangs, sand ripples, and dunes (barchans, transverse and linear sand dunes) [43]. Yardangs have developed on interbedded strata of hard, cemented lacustrine/fluvial deposits and loose, soft aeolian deposits [20]. The height of the yardangs ranges from 10 to 20 m, their length from 10 to 1800 m, and their width from 5 to 300 m [20].

Data
The very high spatial resolution image subset for this study was acquired from Google Earth. It was freely downloaded from the website (www.91weitu.com). The image subset was captured in May 2016, available only in R (red), G (green), and B (blue) bands (no sensor or band information was available), with a resolution of 1.19 m. The projection used was the WGS 1984 Web Mercator Auxiliary Sphere. The image was re-projected to the UTM (Universal Transverse Mercator) Zone 46N system with the WGS 1984 Datum, covering an area of approximately 5.04 km² (Figure 1).

Methods
In our study, two methods (OBIA and CED) were used for extracting yardang parameters. For the OBIA method, the whole image subset was split into four parts, one part for training and the others for testing. In this work, the eCognition software system was used to complete the OBIA classification. For the CED method, we first coarsened the image subset's resolution to reduce image noise before detecting yardang boundaries.

Image Segmentation
Image segmentation is the creation of image objects according to specific criteria. The multi-resolution segmentation algorithm was used in this study. It is a bottom-up region-merging technique that starts from one-pixel objects and is widely used for detecting object boundaries [44]. The algorithm is controlled by three user-defined parameters: scale, shape, and compactness. The scale parameter determines the maximum standard deviation of heterogeneity for image objects; the higher the value, the larger the generated image objects. The shape parameter defines the textural heterogeneity of the resulting image objects; to make more use of spectral information, the shape value should be lower. The compactness value optimizes the image objects with regard to overall compactness and border smoothness [45]. The values of the shape and compactness parameters lie on a scale of (0, 1).
In our study, the yardangs had an obvious spectral difference from other objects, and the shapes of segmentation objects can reflect the original geometry of yardangs, so the shape parameter was set to 0.1 with less weight. The compactness parameter was set to 0.5, balancing compactness and smoothness.
ESP (estimation of scale parameter) is a tool to estimate the scale parameter for multi-resolution image segmentation. Its fundamental idea is the following: multi-resolution segmentation is a bottom-up process in which objects are merged from small to large, and the criterion for merging is the heterogeneity of image objects. The local variance (LV) of image object heterogeneity can be expressed as the mean standard deviation of image objects at the same level. In general, as the segmentation scale increases, LV changes steadily; however, when the segmentation scale approaches the optimal scale of the ground objects, LV changes sharply. The rate of change of LV (ROC-LV) reflects this process well, and the peaks of ROC-LV indicate the scales at which the image can be segmented [46].
ROC-LV = [(LV_L − LV_{L−1}) / LV_{L−1}] × 100, where LV_L is the LV at the target level, LV_{L−1} is the LV at the next lower level, and LV represents the pixels' average variance of grey value in the image objects.
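As an illustration, the ROC-LV series can be computed from a list of LV values measured at successive scale parameters. This is a minimal sketch; the LV values below are hypothetical:

```python
def roc_lv(lv_values):
    """Rate of change of local variance between consecutive segmentation
    levels: ROC-LV = (LV_L - LV_{L-1}) / LV_{L-1} * 100."""
    return [(lv - prev) / prev * 100.0
            for prev, lv in zip(lv_values, lv_values[1:])]

# Hypothetical LV series at increasing scale parameters; peaks in the
# ROC-LV series (here the first and third entries) mark candidate
# segmentation scales.
lv = [10.0, 12.0, 12.6, 15.75, 16.0]
print([round(r, 2) for r in roc_lv(lv)])  # [20.0, 5.0, 25.0, 1.59]
```

In practice, each LV value would be the mean standard deviation of the image objects produced by one segmentation run.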

Shadow Detection
An obvious characteristic of yardang shadows is that their brightness values are significantly lower than those of other categories. Thus, it is easy to extract them using brightness information. We calculated the maximum brightness value of shadows in the training area, then used this threshold to separate them from other objects.
There are two types of shadows: (1) true shadows next to the yardangs, (2) false shadows on the yardangs caused by the undulation of the surface, which should be classified as yardangs after accuracy assessment.
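The brightness-threshold rule can be sketched as follows; the object brightness values and the threshold (an assumed maximum shadow brightness from the training area) are hypothetical:

```python
import numpy as np

def detect_shadows(brightness, threshold):
    """Flag objects as shadow when their mean brightness does not
    exceed the maximum shadow brightness seen in the training area."""
    return brightness <= threshold

# Hypothetical object mean-brightness values; 62.0 is an assumed
# training-area threshold.
obj_brightness = np.array([40.0, 58.0, 120.0, 95.0, 61.0])
print(detect_shadows(obj_brightness, 62.0))
```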

Object Features
Object features are the basis for image analysis and information extraction. One of the main advantages of OBIA is that a variety of spectral, textural, and geometric features can be extracted from segmented image objects and can be utilized in classification [47].
The spectral features refer to grey-level values in image bands. Common spectral features include the mean, standard deviation (SD), brightness, and maxdiff (see below). Mean and SD are the mean value and standard deviation of the pixel grey values of one object in each band. Brightness is the mean object grey value over all bands. Maxdiff represents the ratio of the difference between the maximum and minimum mean band values to the brightness [48].
Textural features describe the spatial arrangements of grey-level values and the spatial correlation among image objects [49]. The grey-level co-occurrence matrix (GLCM) [50] is the most commonly used method for texture feature analysis. Main textural measures such as homogeneity (Hom), entropy (Ent), contrast (Con), mean, and standard deviation (Std) can be extracted based on the GLCM [51]. As the alignment of yardangs in the study area was almost north to south, we selected directions from 0° to 135° at 45° intervals (0° being north), which were either parallel or perpendicular to the alignment of the yardangs, for calculating textural features. All of them could be calculated in five ways depending on the direction: 0°, 45°, 90°, 135°, and all (summing up the four directional GLCM textures).
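For illustration, a single-offset GLCM and three of the texture measures named above can be computed directly. The 4-level test image is hypothetical, and for simplicity the matrix is built for one offset only (east neighbour) rather than summed over the four directions:

```python
import numpy as np

def glcm(img, dx, dy, levels):
    """Grey-level co-occurrence matrix for one offset (dx, dy),
    normalised to co-occurrence probabilities."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                m[img[y, x], img[y2, x2]] += 1
    return m / m.sum()

def glcm_features(p):
    """Contrast, homogeneity, and entropy of a normalised GLCM."""
    i, j = np.indices(p.shape)
    con = float((p * (i - j) ** 2).sum())          # contrast
    hom = float((p / (1.0 + (i - j) ** 2)).sum())  # homogeneity
    nz = p[p > 0]
    ent = float(-(nz * np.log2(nz)).sum())         # entropy
    return con, hom, ent

# Hypothetical 4-level test image; offset (1, 0) is the east neighbour.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]], dtype=int)
con, hom, ent = glcm_features(glcm(img, dx=1, dy=0, levels=4))
print(round(con, 3), round(hom, 3), round(ent, 3))
```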
Geometric features mainly reflect the geometric parameters of the objects. A set of geometry features such as area, compactness, density, main direction, roundness, and shape index can be calculated. Compactness, density, and roundness are features used to measure the irregularity of the edge of the object. The shape index reflects the smoothness of the object boundary.

Feature Space Optimization
A separability and thresholds (SEaTH) algorithm [52] was used for feature space optimization in our study. It is a statistical method based on training sample data. The Jeffries-Matusita distance (J), on a scale [0, 2], measures the separability between classes. For two classes C1 and C2, the greater J is, the better the separability. J can be computed from J = 2(1 − e^(−B)), with B = (1/8)(m1 − m2)² · 2/(σ1² + σ2²) + (1/2) ln[(σ1² + σ2²)/(2σ1σ2)], where m_i and σ_i² (i = 1, 2) respectively represent the mean and variance of a given feature for the two classes.
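Under the Gaussian assumption used by SEaTH, J for a single feature can be computed from the class means and variances alone. The class statistics below are hypothetical:

```python
import math

def jm_distance(m1, v1, m2, v2):
    """Jeffries-Matusita distance between two classes for one feature,
    assuming Gaussian distributions with means m_i and variances v_i;
    J lies on the scale [0, 2]."""
    b = ((m1 - m2) ** 2) / (4.0 * (v1 + v2)) \
        + 0.5 * math.log((v1 + v2) / (2.0 * math.sqrt(v1 * v2)))
    return 2.0 * (1.0 - math.exp(-b))

# Identical distributions are inseparable (J = 0) ...
print(jm_distance(100.0, 25.0, 100.0, 25.0))  # 0.0
# ... while well-separated class means push J towards 2.
print(jm_distance(100.0, 25.0, 200.0, 25.0))  # close to 2
```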

Classification
In order to classify segmented image objects with the features determined by the aforementioned feature selection technique, a nearest neighbor (NN) classifier was employed. The NN classifier, one of the simplest algorithms, is Euclidean distance-based: it assigns an unlabelled image pixel (or object) the class label of its nearest neighboring pixel (or object) in the feature space [47].
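A minimal sketch of such a nearest neighbour assignment follows; the feature vectors (e.g., a mean brightness and a texture measure) and class labels are hypothetical:

```python
import math

def nn_classify(obj, samples):
    """Assign an unlabelled object the label of its nearest training
    sample in feature space (Euclidean distance)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(samples, key=lambda s: dist(obj, s[0]))[1]

# Hypothetical training objects: (feature vector, class label).
training = [((30.0, 0.9), "shadow"),
            ((120.0, 0.4), "yardang"),
            ((160.0, 0.2), "corridor")]
print(nn_classify((115.0, 0.5), training))  # yardang
```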

CED Method
High-resolution images contain a large amount of object information, but they also contain considerable noise (e.g., linear textures on yardangs and shadows around them) that affects extraction accuracy. This noise cannot easily be eliminated by conventional methods (expanding the filter radius or increasing the threshold). In this section, we propose a CED method based on resampling to reduce noise and improve the efficiency of boundary extraction (Figure 2).

Resampling
Bilinear interpolation was applied for the resampling. It performs linear interpolation using the pixel values of four adjacent points, assigning different weights according to their distance from the interpolation point [53]. This method has an average low-pass filtering effect, and edges are smoothed to produce a coherent output image. The original image was resampled to six lower spatial resolutions: 3 m, 5 m, 8 m, 10 m, 12 m, and 15 m.
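The four-point distance weighting can be sketched as follows; the 2×2 test image is hypothetical:

```python
def bilinear(img, x, y):
    """Sample image `img` (list of rows) at fractional position (x, y)
    by distance-weighting the four surrounding pixel values."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(img[0]) - 1)
    y1 = min(y0 + 1, len(img) - 1)
    fx, fy = x - x0, y - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx  # interpolate along x
    bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bot * fy                  # then along y

img = [[0.0, 10.0],
       [20.0, 30.0]]
print(bilinear(img, 0.5, 0.5))  # 15.0
```

Resampling to a coarser grid amounts to evaluating this interpolant at the new, more widely spaced pixel centres, which is what produces the low-pass smoothing effect described above.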

RGB to Greyscale Transferring
As the CED method usually detects object boundaries on greyscale images, the RGB image had to be converted into greyscale. A weighted average method was used to complete the conversion [54]. The determination of the band weights takes the physiological characteristics of the human eye into account: human vision has the highest sensitivity to green, followed by red, and the lowest sensitivity to blue [55]. The conversion formula is Grey = 0.299R + 0.587G + 0.114B, where Grey is the brightness of the pixel in the greyscale image, and R, G, and B are the brightness values of the red, green, and blue bands of the same pixel in the color image.
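Assuming the weights are the standard BT.601 ones (which match the green > red > blue sensitivity ordering described above), the conversion can be sketched as:

```python
def rgb_to_grey(r, g, b):
    """Weighted-average greyscale conversion; green carries the largest
    weight, matching the eye's sensitivity ordering. The BT.601 weights
    used here are an assumption."""
    return 0.299 * r + 0.587 * g + 0.114 * b

print(rgb_to_grey(255, 255, 255))  # white stays white (~255)
print(rgb_to_grey(100, 150, 50))   # a mid-grey value (~124)
```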

Canny Edge Detection
The detection process consisted of four steps: (1) a Gaussian filter was used to smooth the image and remove noise; (2) the gradient amplitude and direction were calculated at each pixel with the finite difference of the first partial derivatives; (3) non-maximum suppression was applied to the gradient amplitude; and (4) double thresholds were applied to determine and connect potential edges [40,56].
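Step (4) is the least standard of the four, so a sketch may help: weak edge pixels survive only when they are connected to a strong edge. The gradient array below stands in for a hypothetical non-maximum-suppressed result, and the thresholds are arbitrary:

```python
import numpy as np

def hysteresis(grad, low, high):
    """Canny step (4): keep pixels with gradient >= `high` as strong
    edges, and keep weak pixels (between `low` and `high`) only when
    they are 8-connected to a strong edge."""
    strong = grad >= high
    weak = (grad >= low) & ~strong
    edges = strong.copy()
    changed = True
    while changed:  # iteratively grow strong edges into weak pixels
        changed = False
        ys, xs = np.nonzero(edges)
        for y, x in zip(ys, xs):
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    y2, x2 = y + dy, x + dx
                    if (0 <= y2 < grad.shape[0] and 0 <= x2 < grad.shape[1]
                            and weak[y2, x2] and not edges[y2, x2]):
                        edges[y2, x2] = True
                        changed = True
    return edges

# Hypothetical gradient magnitudes after non-maximum suppression:
# the 40 is isolated and below `low`, so it is discarded.
grad = np.array([[0., 120., 60., 0.],
                 [0.,   0., 55., 0.],
                 [0.,   0.,  0., 40.]])
print(hysteresis(grad, low=50, high=100).astype(int))
```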

Manual Editing
After the edge extraction was completed, several manual editing steps were needed to acquire accurate yardang boundaries: (1) appropriately closed, U- and V-shaped yardang boundaries were selected by manual interpretation from the edge detection results of the different spatial resolution images; (2) all boundaries were merged, keeping the one detected from the higher-resolution image where boundaries overlapped; (3) breakpoints were connected and topology was checked to make sure each yardang boundary was closed, and all yardang boundaries were then converted to polygons.

Accuracy Assessment
The confusion matrix method is widely used to verify the classification accuracy of remote sensing images [57]. It calculates accuracy by comparing the actual class of each sample on the ground with the corresponding class in the classified image. The confusion matrix was also used to calculate the producer's accuracy (PA), user's accuracy (UA), overall accuracy (OA), and Kappa coefficient. The OA and Kappa coefficient provide a summary measure of classification stability [58]. PA represents the probability that a class is correctly classified with respect to the reference data, while UA represents the probability that a class is correctly classified with respect to the image classification result [59].
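These measures can all be derived from one confusion matrix, as sketched below. The 3-class matrix is hypothetical; only its row totals mirror the sample counts used in this study:

```python
import numpy as np

def accuracy_metrics(cm):
    """OA, Kappa, and per-class PA/UA from a confusion matrix whose
    rows are reference (ground) classes and columns are mapped classes."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    oa = np.trace(cm) / n
    # chance agreement from the row/column marginals
    pe = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / n ** 2
    kappa = (oa - pe) / (1 - pe)
    pa = np.diag(cm) / cm.sum(axis=1)  # producer's accuracy (per row)
    ua = np.diag(cm) / cm.sum(axis=0)  # user's accuracy (per column)
    return oa, kappa, pa, ua

# Hypothetical matrix for (corridor, yardang, shadow); row totals match
# the validation counts 1617 / 578 / 52 reported in this study.
cm = [[1500,  100,  17],
      [  60,  510,   8],
      [   5,    4,  43]]
oa, kappa, pa, ua = accuracy_metrics(cm)
print(round(oa, 4), round(kappa, 2))
```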
In order to ensure the reliability of the accuracy assessment result, the randomly selected sample points must satisfy two conditions: (1) a minimum of 50 sample points per category [45]; (2) the minimum total number of sample points (n) can be calculated by n = z² o(1 − o)/d², where o is the expected overall accuracy, z is the percentile from a standard normal distribution (z = 1.96 for a 95% confidence interval), and d is the desired half-width of the confidence interval [59].
In our study, we set o = 80%, z = 1.96, and d = 2.5%, and calculated n = 984. We first randomly selected 3000 pixel samples from the whole image and then deleted the samples falling in the training area, leaving a total of 2247 samples: corridor (1617), yardang (578), and shadow (52). The total number of samples and the minimum number of samples per category satisfied the two conditions.
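The sample-size condition can be checked directly; the settings above reproduce n = 984:

```python
import math

def min_sample_size(o, z, d):
    """Minimum number of validation points, n = z^2 * o * (1 - o) / d^2,
    rounded up (o = expected overall accuracy, d = half-width of the
    confidence interval)."""
    return math.ceil(z ** 2 * o * (1 - o) / d ** 2)

# The study's settings (o = 80%, 95% confidence, d = 2.5%).
print(min_sample_size(o=0.80, z=1.96, d=0.025))  # 984
```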

Segmentation and Classification
Based on the ESP tool, we obtained several candidate scales (34, 52, 80, 114, 138, 145, 208, 257) using the initial inputs: a step size of the increasing scale parameter of 5 and a starting scale parameter of 10.
When SC = 34 or 52, the image objects had relatively small areas and were fragmented. Most feature categories were composed of multiple polygon objects, while only a few scattered features were represented by individual polygons. When SC ≥ 80, shadows formed single polygons and some yardang polygons were relatively complete. When SC ≥ 208, smaller yardangs were neglected due to the increased segmentation scale (Figure 3a-d).
In this research, the following 40 feature dimensions were selected to calculate the J distance: spectral features (mean of layers 1, 2, and 3; SD of layers 1, 2, and 3; brightness; and maxdiff), GLCM textural features (mean, Hom, Ent, Con, and Std, each calculated in the five directions), and geometric features (area, compactness, density, main direction, roundness, and shape index).
In general, the more features are selected, the higher the classification accuracy that can be achieved. However, information redundancy grows as the number of features increases, and classification takes longer, especially for the texture features. We set the maximum feature dimension to 10 considering the time cost and classification accuracy. The best feature groups are listed in Table 1.
OBIA classification results are shown in Figure 4a-g, and the edge detection result in Figure 4h.
Figure 5 shows the OA, Kappa coefficients of agreement, PA, and UA of the classifications generated by the two methods. As expected, a smaller segmentation scale (SC = 34, 52) yielded higher accuracy; the highest overall accuracy of 92.26% with a Kappa of 0.82 was achieved. The results also demonstrate that the overall accuracy and Kappa coefficient show a declining trend with increasing scale (Figure 5a,b).

Accuracy Evaluation

With an increase of segmentation scale, the PA and UA of the three classes (corridor, yardang, and shadow) showed different tendencies. The PA of corridor first decreased and then increased, reaching its lowest points (SC = 138, 145) with values of 87.45% and 86.2%, while the PA of yardang showed the opposite trend (Figure 5c): it first increased and then decreased, reaching its highest points (SC = 138, 145) with values of 98.26% and 98.17%. The variation trends of UA for corridor and yardang were opposite to those of their PA. The confusion matrices (Appendix A Table A1(1)-(8)) explain this seemingly strange behavior: with increasing segmentation scale, the number of diagonal samples for yardang first increased and then decreased, peaking at SC = 138, 145, while the number of diagonal samples for corridor first decreased and then increased, bottoming out at SC = 138, 145. As the segmentation scale increases, corridors are easily mistaken for yardangs because the dunes in the corridors have spectral and textural properties similar to those of yardangs; this increases the number of samples counted when calculating UA and reduces the UA of yardangs.
Shadows generally have small areas. As the segmentation scale becomes larger, they are gradually merged into other categories and their total number decreases. Thus, the PA of shadows shows an obvious decreasing trend, while the UA varies greatly (Figure 5c,d).
Different categories have different optimal segmentation scales, which depend on the size of the categories and the image characteristics, and accuracy is a good criterion for evaluating the optimal scale. In general, higher classification accuracy is achieved when the segmentation scale is closer to the optimal scale of the objects [60]. In our study, the three categories (corridor, yardang, and shadow) had different optimal scales according to the PA and the diagonal values of the confusion matrix. Yardangs had the highest classification accuracy at medium segmentation scales (SC = 138, 145), while corridors required smaller or larger segmentation scales for higher accuracy. Since shadows were obviously smaller than the other categories, the smallest segmentation scale (SC = 34) was the most beneficial for them.
Despite being unable to recognize shadows, the CED method achieved an overall accuracy of 89.23% with a Kappa coefficient of 0.72 (Figure 5a,b; Appendix A Table A1(9)), similar to that of a medium segmentation scale (SC = 138). For the CED method, the corridor category had higher PA and UA than with the OBIA method (Figure 5c,d), as manual interpretation could eliminate cases where dunes were classified as yardangs. Though the yardang category had a higher UA than with the OBIA method, its PA was lower (71.99%, similar to that of the largest segmentation scale, SC = 257), indicating that yardangs are easily missed using the CED method.

Geomorphological Characteristic Parameters
For the OBIA method, several manual editing steps are required to ensure the accuracy of the characteristic geomorphological parameters calculated from the extracted yardangs: (1) we merged adjacent objects of the same class; (2) re-labeled the shadows and corridors inside the yardangs as yardangs; and (3) deleted the misclassified yardangs, without modifying the yardang boundaries.
After the extraction of yardangs from the very high-resolution image subset, with each yardang treated as a single object, we could calculate their geometric parameters (i.e., area, length, width, length/width ratio, and orientation). These parameters were calculated in units of pixels in the eCognition software, and the characteristic geomorphological parameters of yardangs were obtained after converting pixels into meters. Figure 6 shows several morphological parameters of yardangs: total number, minimum area, mean area, and median direction. A larger segmentation scale generally leads to a smaller number of image objects with larger areas [48], so the total number of yardangs decreases while the minimum area and mean area increase as the segmentation scale increases. The CED method obtained satisfactory results, and the values of the three parameters (i.e., total number, minimum area, and mean area) are close to those of the smallest segmentation scale (SC = 34). The values of the median direction using the two methods were around 11°, similar to the result calculated by Dong et al. [20].

The Effect of Segmentation Scales
Based on the size of the area, all of the yardangs could be divided into three sub-categories: micro-yardang (area ≤ 100 m²), meso-yardang (100 m² < area < 1000 m²), and mega-yardang (area ≥ 1000 m²). We wanted to analyze the relationship between yardang size and segmentation scale. Figure 7a demonstrates the total number of the three types of yardangs under different segmentation scales. With an increase of segmentation scale, the number of all three types of yardangs decreased; micro-yardangs could only be identified using the smaller segmentation scales (SC = 34, 52, 80), and only a few meso-yardangs could be extracted at the highest segmentation scale (SC = 257). Figure 7b shows that the PA of yardang extraction had a strong correlation with the total area: with an increase of segmentation scale, both first increased (reaching their highest points at SC = 138, 145) and then decreased.
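The three sub-categories can be expressed as a simple classification rule (areas in m²; the example values are hypothetical):

```python
def yardang_class(area_m2):
    """Size sub-category used in the text: micro (<= 100 m^2),
    meso (100-1000 m^2), mega (>= 1000 m^2)."""
    if area_m2 <= 100:
        return "micro"
    if area_m2 < 1000:
        return "meso"
    return "mega"

print([yardang_class(a) for a in (80, 450, 2500)])  # ['micro', 'meso', 'mega']
```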


The Effect of Spatial Resolution
Using the resampling method, the smoothing effects caused by the decrease (coarsening) of spatial resolution reduce the spectral differences among yardangs, making it easier to distinguish their boundaries. With decreasing spatial resolution, the image information gradually decreases and the edges of yardangs become clearer (Figure 8e-g). We analyzed the relationship between the total length of the extracted boundaries and the spatial resolution (Figure 9a), which shows a logarithmically decreasing trend (R² = 0.904).
Figure 9. (a) The relationship between the total length of the extracted boundaries and the spatial resolutions; (b) the linear relationship between the median widths of extracted yardangs using different spatial resolutions.


The spatial resolution had a significant influence on the extraction of yardangs. For the same yardang, if its complete boundary could be extracted from images of different resolutions, higher accuracy was achieved with the higher (finer) spatial resolution image (Figure 8a-d). For mega-yardangs in particular, the boundaries became much clearer as the spatial resolution decreased (Figure 8e-g). Huang and Wu [60] concluded that spatial resolution has a linear relationship with the accuracy of the extracted objects. Ehsani and Quiel [33] showed that yardangs could be clearly recognized when their widths were larger than the spatial resolution of the DEM, using a self-organizing map method. In our study, we also analyzed the relationship between yardang size and spatial resolution: Figure 9b shows a linear relationship between the median widths of extracted yardangs and spatial resolutions (R² = 0.95).

Conclusions
Two methods, which are efficient, low-cost, and semi-automatic, were proposed for extracting yardang parameters in this research. Using the OBIA classification method, we integrated geometric, spectral, and textural features to construct the feature space. Yardangs could be easily extracted from the very high spatial resolution image subset from Google Earth. The segmentation scale was the key factor determining the size of the extracted yardangs. With an increase of segmentation scale, the number of the three types of yardangs decreased; micro-yardangs (area ≤ 100 m²) could only be identified using the smaller segmentation scales (SC = 34, 52, 80), while only a few meso-yardangs (100 m² < area < 1000 m²) could be extracted at the highest segmentation scale (SC = 257).
Using the CED method, resampling the image subset to a series of lower spatial resolutions could effectively reduce noise. The total length of the yardang boundaries showed a logarithmically decreasing trend (R² = 0.904) with decreasing spatial resolution, and there was also a linear relationship between the median widths of yardangs and spatial resolutions (R² = 0.95).
Confusion matrices were used to verify classification accuracies. The OBIA method achieved the highest overall accuracy of 92.26% with a Kappa of 0.82 at a smaller segmentation scale (SC = 52). Despite being unable to extract shadows, the CED method achieved an overall accuracy of 89.23% with a kappa coefficient of 0.72, similar to that of the OBIA method working at a medium segmentation scale (SC = 138).
Comparing the geomorphological characteristic parameters of yardangs obtained using the two methods, the CED method gave rise to satisfactory results, and the values of the three parameters considered (i.e., total number, minimum area, and mean area) are close to those obtained using the OBIA method at the smallest segmentation scale (SC = 34). OBIA is more automatic, while CED is simpler but relies more on manual interpretation.
Author Contributions: Z.L. proposed and organized the research; W.Y. and W.Z. performed the experiments; J.Z. refined the experimental design. All authors contributed to writing the paper. All authors have read and agreed to the published version of the manuscript.