Article

Global Digital Elevation Model Comparison Criteria: An Evident Need to Consider Their Application

by
Carlos López-Vázquez
1,* and
Francisco Javier Ariza-López
2
1
Laboratorio de Tecnologías de la Información Geográfica (LatinGEO), Facultad de Ingeniería, Universidad ORT, Montevideo 11100, Uruguay
2
Departamento Ingeniería Cartográfica, Geodésica y Fotogrametría, Escuela Politécnica Superior, Universidad de Jaén, 23071 Jaén, Spain
*
Author to whom correspondence should be addressed.
ISPRS Int. J. Geo-Inf. 2023, 12(8), 337; https://doi.org/10.3390/ijgi12080337
Submission received: 25 May 2023 / Revised: 18 July 2023 / Accepted: 28 July 2023 / Published: 11 August 2023

Abstract:
From an extensive search of papers comparing Global Digital Elevation Models (hereinafter GDEMs), an analysis is carried out that aims to answer several questions: Which GDEMs have been compared? Where have the comparisons been made? How many comparisons have been made? How have the assessments been carried out? Which GDEM has the lowest RMSE? The analysis shows that SRTM and ASTER are the most popular GDEMs, that the countries where most comparisons have been made are Brazil, India, and China, and that the main type of reference data used in evaluations is points surveyed by GNSS techniques. A variety of comparison criteria have been found, but the most widely used are the RMSE and the standard deviation of the elevation error. There are numerous criteria with a more user-centric character in thematic areas such as morphometry, geomorphology, and erosion. However, no thematic area has a standard method of comparison, which limits the possibility of ranking GDEMs by their user-focused quality. In addition, the methods and reference data sets are often not adequately explained or shared, which limits the interoperability of the studies carried out and the ability to make robust comparisons between them.

1. Introduction

A digital elevation model (DEM) is a digital representation of the elevations (or heights) of a topographic surface in the form of a geo-rectified point-based or area-based grid covering the Earth or other solid celestial bodies [1]. When a DEM records the bare earth, it is called a digital terrain model (DTM); when the upper surface of biosphere elements and human-made features is included, it is called a digital surface model (DSM). DEMs are data of great importance due to their use in a wide variety of sciences (e.g., geology, hydrology, agronomy, forestry) and user communities (scientists, engineers, military, etc.). Due to their importance, DEMs are included in the INSPIRE themes [2] and in the global fundamental geospatial data themes defined by the United Nations Expert Committee for Global Geospatial Information Management [3]. According to this group of experts, DEMs play a significant role in Sustainable Development Goals 1, 2, 3, 6, 7, 11, 13, 14, and 15 as defined by the United Nations.
A specific type of DEM is the Global DEM (GDEM), characterized by its almost global coverage. GDEMs are commonly created by international research efforts, based on satellite platforms, and used for global studies by international user communities or organizations. The first GDEM with free access was the SRTM (Shuttle Radar Topography Mission), which appeared in 2005 [4] and led to a revolution in the field of geoscience. Since then, many other GDEMs have been made available, in chronological order: ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) [5,6]; WorldDEM [7,8,9]; NASADEM 1″ [10]; ALOS-AW3D30 1″ [11]; Copernicus DEM [12]; MERIT (Multi-error-Removed Improved-terrain) [13]; FABDEM (Forest and Buildings removed Copernicus DEM) [14]; etc. These GDEMs have different characteristics, as they were created using different technologies, platforms, resolutions, processes, etc. (see Table 1). Additionally, over time, new versions are generated, and some of these data sets are often tweaked and enhanced by companies. The availability of so many options introduces the need to select the most suitable GDEM for each use case; this, in turn, leads to the need to compare them. Since the appearance of ASTER, hundreds of GDEM comparison studies have been published for different purposes, in different locations, topographies, and ground conditions, and comparisons have been carried out based on different criteria, methods, and reference data. However, as indicated by Uss et al. [15], “comparison between DEMs is a complicated task”. The existence of such a large and varied offering is a serious problem for GDEM users, since they do not always have the capacity to carry out a selection process. In addition, the abundance of published papers reporting results that are not easily comparable introduces confusion. As Strobl et al. 
[16] indicated, “today we find ourselves in a situation in which it is often difficult, even for experts, to assess what the major strengths, weaknesses, and differences are between the available data sets and to decide which DEM might be the most accurate or appropriate for a certain application or region”. In this context, the Digital Elevation Model Intercomparison eXperiment (DEMIX) project should be highlighted; among its objectives, it evaluates GDEMs [16] in order to propose well-specified criteria, measures, and standardized comparison methods, and to provide reference data sets throughout the world. Due to the applied importance of DEMs, the evaluation of their quality is a key topic today; recent papers such as [17,18] presented both general and critical reviews of the methods applied. However, this paper offers a vision that is much more focused on a specific reality: the comparison of GDEMs.
Aligned with the goals of the DEMIX project, the objective of this study is to analyze the procedures used in comparison exercises involving at least one GDEM that have been carried out from 2005 to the present. This review and analysis will reveal the most widely used criteria and the most popular reference information and comparison methods, as well as their strengths and weaknesses, all with the aim of offering some recommendations. To the best of our knowledge, no similar review considering this type of data has been carried out before.
This document is organized as follows: after this introduction, Section 2 presents the set of scientific sources used in the process (called the corpus). Then, the analysis of the results is carried out considering the GDEMs involved in the comparisons, the reference data and the sample used, the comparison criteria, and the results of the comparisons. Subsequently, a more general discussion is presented and, finally, the main conclusions are presented in Section 5.

2. Materials and Methods

This section covers the methods employed to generate a representative sample of documents for the analysis and a brief presentation of the results (the corpus) from a bibliographic perspective.
A two-pronged search strategy was deployed. The first line accessed various databases integrated into the Web of Science (hereinafter WoS; WoS is a trademark of Clarivate, Philadelphia, United States), and the second performed a direct anonymized search through Google. In both cases, the same filtering criteria were applied. The criteria were the following:
  • Typology of documents. Only scientific papers were considered;
  • Scientific guarantee. The core of the corpus includes scientific papers mostly published by journals registered in databases included in the WoS (e.g., SCIELO, etc.). In addition, an expanded search included other journals that were not ranked;
  • Time span. From the appearance of the second free GDEM (2005) until the end of August 2022;
  • Keywords. Each of the GDEMs available today and in the past were considered. Since the research proposed was centered on comparisons between GDEMs, possible pairs were considered. Additionally, terms related to comparisons (e.g., comparison, evaluation, validation, assessment, accuracy, quality, ranking, etc.) regarding the term DEM were sought;
  • Scope of the search. In the case of the WoS, the searches were performed including title, abstract, author keywords, and KeyWords Plus (a set of terms derived from the titles of articles cited by the author of the article being indexed). In the case of searches through Google Scholar, there was no control over these aspects;
  • Scope of analysis. We looked for papers that included at least one GDEM and at least one other DEM and that centered on accuracy assessment (a comparison between a product and a reference) or closely related analyses. Papers assessing the accuracy of a single DEM, without any comparison, were ignored.
Finally, the query made in the WoS was TS = (DEM NEAR comparison) OR TS = (DEM NEAR ranking) OR TS = (DEM NEAR accuracy) OR TS = (DEM NEAR evaluation) OR TS = (DEM NEAR assessment) OR TS = (DEM NEAR quality) OR TS = (DEM NEAR validation) AND (TS = (SRTM) AND TS = (ASTER)) OR (TS = (SRTM) AND TS = (ALOS)) OR (TS = (SRTM) AND TS = (AWD3D30)) OR (TS = (SRTM) AND TS = (TANDEM)) OR (TS = (ASTER) AND TS = (ALOS)) OR (TS = (ASTER) AND TS = (AWD3D30)) OR (TS = (ASTER) AND TS = (TANDEM)), restricted to Article or Review Article (Document Types). Timespan: 1 January 2005 to 30 August 2022. In the case of searches through Google Scholar, the same filtering was manually applied by the authors. The result was a set of 390 references. The abstract of each of these references was read in order to detect cases with topics outside the interests of the research. Finally, a total of 313 references (the corpus) were obtained, which are the basis of the analysis presented in the next section. As Supplementary Materials, an Excel file is provided that identifies the documents that make up the corpus.
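As an illustration, the manual filter applied to the Google Scholar hits can be sketched in a few lines of Python. The term lists and the `keep_reference` helper below are hypothetical simplifications for illustration, not the authors' exact query logic:

```python
# Hedged sketch: keep a record only if it mentions at least two GDEMs and
# at least one comparison-related term. Term lists are illustrative.
GDEM_TERMS = ["SRTM", "ASTER", "ALOS", "AW3D30", "TANDEM", "MERIT", "NASADEM"]
COMPARISON_TERMS = ["comparison", "evaluation", "validation", "assessment",
                    "accuracy", "quality", "ranking"]

def keep_reference(title_and_abstract: str) -> bool:
    """Return True if the text passes the two-part keyword filter."""
    text = title_and_abstract.lower()
    gdems = {g for g in GDEM_TERMS if g.lower() in text}
    has_comparison = any(t in text for t in COMPARISON_TERMS)
    return len(gdems) >= 2 and has_comparison

print(keep_reference("Vertical accuracy assessment of SRTM and ASTER GDEM"))  # True
print(keep_reference("A new DEM of Mars from laser altimetry"))               # False
```

Such a filter only pre-selects candidates; as described above, the abstracts still had to be read by hand to discard off-topic cases.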
We now proceed to a brief analysis of the results from the documentary perspective with the aim of characterizing this material. Figure 1 shows the evolution of the number of references over time until the last complete year of the analysis. The growing trend shown in Figure 1 is clear, which indicates a marked interest in the matter. Table 2 shows the titles of the journals that supply more than five references to this study. As can be seen from their titles, they are all related to geosciences, either with a more instrumental or technological vocation (for example, focused on remote sensing) or with a more applied perspective (environmental, geography, or hydrology). The remaining sources comprise a total of 158 different journals and conference proceedings, which can also be assigned to the two groups indicated above. As a graphic summary of the titles of the papers and their abstracts, Figure 2 presents a word cloud generated once the stop words and numbers were eliminated and a stemming process was applied using the WordArt App (WordArt.com, accessed on 1 May 2023). The cloud is dominated by the word “DEM”, which seems obvious. The word “model” occupies a second predominant position; this word does not really add much and usually appears as part of “digital elevation model” or as shorthand for it. The next most important words are “accuracy”, “SRTM”, and “use”, which closely match the theme of this document.
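The preprocessing behind the word cloud (stop-word removal, number removal, and stemming) can be sketched as follows. The stop-word list and the naive plural-stripping "stemmer" are illustrative stand-ins, not the WordArt App's actual processing:

```python
from collections import Counter
import re

# Illustrative stop-word list; a real one would be much longer.
STOP_WORDS = {"the", "a", "an", "of", "and", "for", "in", "on", "using", "with", "over"}

def word_frequencies(titles):
    """Tokenize titles, drop stop words and numbers, and count the rest."""
    counts = Counter()
    for title in titles:
        # [A-Za-z]+ keeps only alphabetic tokens, so numbers are dropped.
        for token in re.findall(r"[A-Za-z]+", title.lower()):
            if token in STOP_WORDS:
                continue
            # Naive stemming: strip a trailing plural 's'.
            if len(token) > 3 and token.endswith("s") and not token.endswith("ss"):
                token = token[:-1]
            counts[token] += 1
    return counts

freqs = word_frequencies([
    "Accuracy assessment of SRTM and ASTER digital elevation models",
    "Comparison of global digital elevation models over Brazil",
])
print(freqs.most_common(3))  # [('digital', 2), ('elevation', 2), ('model', 2)]
```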
Once the set of documents to be analyzed is available, some mechanism is needed to analyze them in a systematic way. Essentially, the intention is to answer some basic questions: Which GDEMs have been compared? Where have the comparisons been made? How many comparisons have been made? How have the assessments been carried out? Which is the GDEM option with the lowest RMSE? To support this process, a macro-table was designed with an entry for each document, in which all the aspects presented in Section 3 were recorded. This table has five sections. The first (Section 3.1) is dedicated to bibliometric aspects and focuses on the document itself (e.g., title, year, etc.). The second (Section 3.2) is focused on the sampling carried out in the comparison work (e.g., number of locations, elements used, etc.). The third (Section 3.3) includes an extensive list of possible accuracy criteria and records their use in each of the papers analyzed; this list was generated in the process of analyzing the documents, so it is not based on preconceived ideas but on what was actually found in genuine comparisons. The fourth (Section 3.4) considers the frequency of use of the existing GDEMs. Finally, the fifth (Section 3.5) includes a list of possible types of reference data sets used (e.g., LiDAR data, GNSS data, etc.). This table was filled out by hand by the authors, extracting the information from each document. Many of the aspects recorded in the macro-table are binary (true/false) (e.g., Does it use RMSE for the elevation error? Does it use standard deviation for the elevation error? and so on). In this way, much of the subsequent analysis is based on the analysis of proportions and the relationships between them.
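The macro-table idea, with one row per paper and binary flags from which proportions are computed, can be sketched as follows. The CSV schema and flag names are invented for illustration and do not reflect the authors' actual table:

```python
import csv
import io

# Toy macro-table: one row per paper, binary flags per criterion.
MACRO_TABLE_CSV = """\
paper,uses_rmse,uses_std,uses_le90
P1,1,1,0
P2,1,0,0
P3,0,1,1
P4,1,0,0
"""

rows = list(csv.DictReader(io.StringIO(MACRO_TABLE_CSV)))
n = len(rows)
for flag in ("uses_rmse", "uses_std", "uses_le90"):
    # Proportion of papers that set this flag.
    share = sum(int(r[flag]) for r in rows) / n
    print(f"{flag}: {share:.0%}")  # e.g., uses_rmse: 75%
```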

3. Results Analysis

The analysis is organized following the structure of the macro-table presented above. In this way, five main subsections are established, in which the results are presented and analyzed. A more general analysis is presented in Section 4.

3.1. GDEMs

At this point, it is important to highlight that we do not distinguish between the different versions that a product may have (e.g., three versions for ASTER or four versions for SRTM) because the papers analyzed often do not properly identify the version of the data set with which they work.
As indicated in Section 1, there are numerous GDEMs, and there is a real need to know which of them is the most appropriate option. In addition, aligned with the DEMIX project, the objective of this paper is to analyze GDEM comparison procedures, since they are the tool used to obtain objective evidence of a GDEM’s performance. The GDEMs considered in this paper are those that appear in Table 3. We are interested in knowing the number of times each GDEM is used, the most common comparisons between GDEMs, the GDEM analyses performed by country, and the evolution of these analyses over time. Table 3 presents the raw count of cases for each GDEM and year, and clearly shows that ASTER and SRTM (90 m and 30 m versions) are the most popular ones in the corpus analyzed. This table also shows a clear temporal trend of an increasing number of research papers, making a growing interest evident.
Table 4 cross-tabulates the GDEMs that are used together. The results are presented in the form of an upper triangular matrix; for easier reading, zeros appear as empty cells. The diagonal shows the values presented in the column labeled “Total” of Table 3, while the upper triangle presents the crossovers detected in the comparisons made in the papers of the corpus. Table 5 is derived from Table 4 and presents the total number of comparisons made for each GDEM. Both tables clearly show that SRTM (30 m + 90 m) is the most frequently considered, but ASTER is also a very frequent option. As is logical, the smallest number of GDEMs involved in a comparison is two (e.g., [27,28]); this situation was unavoidable in the first years, when the availability of GDEMs was limited to two products. In the corpus analyzed, the largest number of GDEMs involved in a single comparison is nine [29]. The average number of GDEMs involved in a comparison is 3.28 (the mean of the ratio between the number of times each GDEM appears and the number of times it has been compared with other GDEMs), and 34.93% of the papers compare four or more GDEMs.
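The kind of pairwise co-occurrence count behind Table 4 and Table 5 can be reproduced with a few lines of Python. The paper list below is made up for illustration and does not reflect the actual corpus:

```python
from itertools import combinations
from collections import Counter

# Each entry lists the GDEMs compared in one (invented) paper.
papers = [
    ["SRTM", "ASTER"],
    ["SRTM", "ASTER", "ALOS-AW3D30"],
    ["SRTM", "MERIT"],
    ["ASTER", "ALOS-AW3D30"],
]

pair_counts = Counter()   # off-diagonal cells of the upper triangle
usage_counts = Counter()  # diagonal cells ("Total" column of Table 3)
for gdems in papers:
    usage_counts.update(gdems)
    # Every unordered pair in a paper is counted once; sorting the names
    # keeps the matrix upper triangular.
    for a, b in combinations(sorted(gdems), 2):
        pair_counts[(a, b)] += 1

print(usage_counts["SRTM"])            # 3
print(pair_counts[("ASTER", "SRTM")])  # 2
```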
In relation to the location of the test sites used for the comparisons, Table 6 presents the number of times each GDEM is mentioned in each country, identified by the alpha-2 codes assigned by the International Organization for Standardization. Although the majority are one-country experiments, there are numerous cases in which comparisons are made in multiple countries: a total of 22 papers (7%) consider two or more countries, and just 9 papers have a global analysis perspective. For example, ref. [30] performs a complete worldwide screening of the SRTM v4.1 and MERIT DEM; ref. [31] used 32 floodplain locations all over the world; in ref. [32], 1524 points of altimetric prominence distributed throughout the world are analyzed; and in ref. [33], elevations from 96 runways from diverse aerodromes were considered. Test cases using reference data from just one country are the majority and cover a total of 68 different countries, which means 35% of the countries in the world. However, the number of papers per country is unevenly distributed. As can be seen in Table 6, there are countries with a large number of cases (e.g., India), but those that appear only once are more than a third of the total. With the exception of the USA, we noticed that the countries with the largest number of cases are emerging countries with altimetric data needs that might not be adequately covered by products from their national mapping agencies. Three of the BRIC countries (Brazil, India, and China) lead the list.

3.2. The Reference Data

One of the issues that makes comparison between GDEMs difficult is the widely inconsistent reference data used [15]. A suitable reference data set is one that is independent of the GDEM data being compared and that, in addition, has substantially greater accuracy (on the order of three times). These conditions are usual in the analysis of the positional accuracy of geospatial data [34], but most of the papers of the corpus make no effort to justify the selection of the reference. Moreover, the accuracy of the reference data is very often not explicitly stated. We set aside the use of GNSS, geodetic monuments, and ICESat data, which by design might be suitable for the task, and focus on other situations. In many cases, maps at a scale of 1:50,000 are used (e.g., [35,36]), and even smaller scales (e.g., 1:100,000 in ref. [37]). From this information a planimetric accuracy bound of the original source can be inferred, but not the altimetric accuracy, because the latter does not always have a direct relationship with the former. Some papers propose evaluating slope and orientation instead of elevation (e.g., [38,39,40,41,42]), and there the situation regarding the accuracy of the reference data is even more obscure. It is usual to establish conditions or limitations on references; for example, the reference extraction area is limited to areas with a slight slope and bare soil (e.g., [31,33]) or to areas of smooth topography and bare soil [43]. We consider that these limitations are more typical of the establishment of the population under analysis than of the very definition of the reference.
Another important topic is the geometry of the reference. The use of points is commonplace, although under this category we can include terrain points surveyed with GNSS technologies (e.g., [44,45]) or by LiDAR systems (e.g., [29]), vertices of traditional topographic and geodetic networks (e.g., [40]), or even the footprints, in the form of points, of some satellite altimeters (e.g., [46]). Also mentioned, but much less widely used, are linear elements like profiles (e.g., [47,48]) and geological lineaments (e.g., [49,50]), and surfaces (e.g., [15]). In cases with a functional quality or applied (use-case-centered) perspective, specific reference data are used for each use case. Ref. [51] counted detected dolines (karst depressions) against manually interpreted data, while [52] relied on in-situ observations. Other terrain features like gullies are also possible [53]. Some authors compared true landslide instances with their estimations (e.g., [54,55,56]). Volcanic lahars have been modeled and compared to observed trajectories [57]. For hydrological applications, among others, we can mention ref. [58], which compared gauge station data to simulation data. Comparing the shape of the real inundation area to its estimations is also possible [59]. Our analysis will focus exclusively on the typology of the reference, since going into greater detail would generate significant dispersion in the results. For our purposes, the classification of reference sources is as follows:
  • GCP-GNSS. Any set of ground control points (GCP) captured by GNSS (Global Navigation Satellite Systems) techniques (e.g., GPS, Galileo, etc.);
  • Geodetic Benchmarks. Geodetic or topographic vertices of the official leveling networks;
  • ICESat. Data from the laser altimeter of the ICESat satellite;
  • LiDAR Cloud. Any kind of LiDAR data cloud irrespective of its different characteristics (aerial data, UAV data, different densities, and accuracies);
  • Other—Elevation Data. For example, when using the DEM to estimate building heights, the reference data is the measured height of the building;
  • Other—Raster Data Set. This usually refers to the use of another elevation data set (e.g., global, like WorldDEM/TandemX, or just a local DEM) or a legacy source, like a digitized version of elevations taken from an official small-scale topographic map;
  • 3D Lines–Profiles. These can be illustrated by 3D profiles corresponding to transects, communication routes, or aerodrome runways of varying resolution and accuracy;
  • Planimetric Features. Any 2D data set used to analyze the fit of the GDEM, such as a hydrographic network, geological lineaments, building footprints, etc. No elevation reference data is involved;
  • Functional Reference Data. Basically occurs in functional quality assessments (e.g., erosion, floods, etc.), where reference data appropriate to the use case is used (e.g., tons of eroded material, volume of floods, etc.). No elevation reference data is involved;
  • None. No reference data is used in the comparison.
Table 7 presents the count and percentage of use of each of the above-indicated reference typologies in the comparisons. A clear predominance of ground control points surveyed using GNSS techniques (30%) can be observed, as well as of existing official DEMs (20%). It should be noted that the use of LiDAR point clouds is still quite low. We suppose that this is due both to the problem of having sufficiently large areas surveyed and to the problem of managing the large data volumes involved. It is also noteworthy that the use of 2D and 3D elements is not very widespread. It is relevant to indicate that there are cases in which no reference data is used; these correspond to cases where internal or visual consistency is used. It is also of interest to know whether various references are used together. In this regard, the count of crossover cases between references is offered in Table 8, which indicates that “GCP-GNSS”, and to a lesser extent “Official-DEM”, are the most standard reference types, since they are the ones that appear most often crossed with the other reference categories. Just 9% of the comparison exercises have been performed using some sort of functional accuracy.

3.3. The Sample for the Evaluation

As indicated by [34], the evaluation of geospatial data is usually achieved by sampling, since a complete evaluation is unaffordable (resources, time, etc.). Through sampling, the aim is either to estimate a parameter of interest in the population (estimation process) or to decide whether to accept or reject a certain hypothesis about that parameter (control process). In either case, the problem must be approached using the appropriate statistical sampling theory. In this regard, the aspects that must be considered include defining the population of interest, defining the parameter(s) of interest, establishing whether to test a hypothesis or make an estimate, the type I and II errors in the hypothesis test or the precision of the estimation, etc. In general, the papers analyzed do not present a rigorous statistical framework in the definition of the samples they propose. The following paragraphs describe what was observed in the corpus.
Going into technical details related to sampling, we consider that the population of interest is generally not completely defined. Of course, the GDEM(s) to be analyzed are known, but the spatial extent that the analysis is intended to cover is not usually explicitly indicated, or it is not indicated adequately. For example, papers declare which GDEM is used, but do not usually state explicitly that the area of interest is a certain zone (e.g., continent, region, country, or state), e.g., “the ASTER in country XXX” or “the ASTER in region YYY”. The indication of the area is given more as metadata on where the analysis is carried out (location context) than as an element of the definition of the population of interest.
The above is related to the sites where the analysis is carried out in each paper. A site is an area of interest (world, continent, country, region, province, etc.), and each paper must have at least one. The area of interest should be defined by one or more geographical windows or boundaries, but this form of definition is not usual. In relation to the number of sites per document in the corpus, the mean value is close to three. Up to 73% of cases involve only one site, but four cases exceed forty sites. In general, if the objectives stated by the papers are compared with what subsequently materializes in the method and results, we consider that in most cases the population is not adequately sampled with a single site and, thus, no statistically sound conclusions can be extracted. In any case, the authors seem satisfied with their results and their representativeness.
In addition to establishing the site(s) of interest, the size (area) of these sites is a key aspect of sampling. By this, we refer to the spatial extension covered by the sample to be analyzed (for example, a municipality). This aspect, which is very important from a statistical point of view, is not well addressed in most of the cases; in fact, 42% of them do not explicitly provide the size of the area. Statistics on the sizes [km2] of these areas appear in Table 9 and show great variability (see the range [Max–Min] and standard deviation). Here, the most interesting information is related to the values of the percentiles; considering the spatial resolution of the GDEMs, and keeping in mind their most appropriate range of applicability scales, we consider that in numerous cases the indicated test-area sizes are small.
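The order statistics reported in Table 9 (mean, standard deviation, range, and percentiles of the test-area sizes) can be computed as follows; the area values below are invented for illustration:

```python
import statistics

# Toy sample of test-area sizes in km^2 (made-up values).
areas_km2 = [12, 45, 80, 150, 300, 600, 1200, 5000, 20000, 150000]

mean = statistics.mean(areas_km2)
stdev = statistics.stdev(areas_km2)
# quantiles(n=4) returns the three quartile cut points Q1, median, Q3.
q1, median, q3 = statistics.quantiles(areas_km2, n=4)
print(f"mean={mean:.0f}  stdev={stdev:.0f}  range={max(areas_km2) - min(areas_km2)}")
print(f"Q1={q1}  median={median}  Q3={q3}")
```

The large gap between the quartiles and the range in even this toy sample mirrors the great variability observed in the corpus.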
Stratification is an appropriate sampling strategy when it is considered that the population contains subgroups with more homogeneous behavior. The perspective (estimation or hypothesis testing) and the parameter to be evaluated also affect the sampling design. For example, a quantitative variable is not the same as a qualitative variable, and for the latter, the number of categories or levels must also be considered. The majority of the papers analyzed have an estimation perspective rather than a quality control (contrast or hypothesis test) perspective on a given parameter (the parameters of interest will be discussed in the next section). In any case, the stratification criteria can vary according to the needs of the researchers and the specific conditions of the area under analysis. Stratification can be considered a refinement to be applied either to the design of the sample or to the presentation and analysis of the results. In the case of this corpus, we can conclude that stratification is used only in the presentation of the results, in order to support a prior hypothesis. That is to say, no paper in the corpus applies stratification statistically in the sampling design; in general, there is simply no sampling design. In line with the above, the following examples use stratification to present results: ref. [45] establishes four types of areas; ref. [29] establishes classes based on slope and cover; and ref. [46] considers classes based on height intervals. In total, there are 116 papers (37%) that apply stratification for the purpose of reporting results.
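Stratified reporting of results, as in the examples just cited, amounts to grouping error samples by class and reporting a metric per stratum. A minimal sketch, with invented slope thresholds and data:

```python
import math
from collections import defaultdict

# Toy samples of (slope in degrees, elevation error in metres).
samples = [
    (2, 1.0), (4, -2.0), (12, 3.0), (18, -5.0), (35, 8.0), (40, -6.0),
]

def slope_class(slope_deg):
    """Illustrative slope strata; real studies choose their own thresholds."""
    if slope_deg < 10:
        return "flat"
    if slope_deg < 30:
        return "moderate"
    return "steep"

strata = defaultdict(list)
for slope, err in samples:
    strata[slope_class(slope)].append(err)

results = {}
for name, errs in strata.items():
    # RMSE per stratum.
    results[name] = math.sqrt(sum(e * e for e in errs) / len(errs))
    print(f"{name}: n={len(errs)} RMSE={results[name]:.2f} m")
```

The usual finding, that errors grow with slope, would show up here as an increasing RMSE from the "flat" to the "steep" stratum.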

3.4. The Criteria Used in the Evaluation

In many cases, the evaluation carried out is data-centric (as defined by [60]) and, in general, it focuses only on vertical positional accuracy. Only a very few papers include derived variables in the comparison. For example, refs. [61,62] and others considered slope accuracy, ref. [38] considered aspect, and ref. [40] considered both. However, studies that adopt a user-centric orientation (as defined by [60]) for the comparison exercise are becoming more frequent: for example, ref. [63] in relation to floods, ref. [64] in relation to the prediction of land cover, and ref. [65] in relation to the determination of peaks. It should be remembered here that our corpus was devoted to comparison exercises; there might certainly be other papers, not involving a comparison, that perform user-centric accuracy estimates but were not considered here. Ref. [66] proposes evaluating quality in a comprehensive way, but does not establish a quality model, only independent measures.
The parameters used in the comparison of GDEMs are a very relevant element because, on the one hand, they must be considered a surrogate of the purpose and perspective adopted in the comparison and, on the other, they establish conditions for the comparison method itself. In any case, we must distinguish the metric (e.g., RMSE, STD, LE90) from the theme to which it is applied (e.g., elevation, slope, aspect, etc.). The combination of a metric and a theme is called an evaluation criterion. Table 10 and Table 11 present the cases identified in the corpus with the combinations of metrics and themes of interest (the criteria), grouped by theme. In total, 10 different basic metrics were identified (correlation coefficient R, IQR, LE90, LE95, MAD, MAE, NMAD, RMSE, STD, and range). Not all of these metrics were applied to all themes. Despite being well established, the formulas as reported by the authors are sometimes wrong; to avoid such problems, a reference to a reputable source could be used, e.g., ISO 19157 Annex D [67]. On the other hand, Table 11 shows metrics that are directly linked to user needs (e.g., flood areas, topographic wetness index (TWI)-related metrics, etc.).
Focusing our attention on Table 10 and leaving aside the most popular criteria for a moment, it is noteworthy that a small number of metrics are also applied to horizontal accuracy and orientation accuracy. In the case of elevation and slope, the number of metrics is much higher. We believe this is due to the greater interest in these topics and, therefore, the need to explore which metrics may be more appropriate. In any case, these metrics are quite conventional and, as mentioned above, many of them are defined in a general form by the international standard ISO 19157 [67], devoted to the quality of geospatial data. In relation to the applied criteria (Table 11), the variety of options is large. The terms included in the “explanation” column are not metrics per se, but explanations related to the processes considered. For example, within morphometry, many themes are used in the papers (e.g., bifurcation ratio, form factor, stream frequency, etc.), and in the case of spurious pits, several metrics are possible (e.g., presence/absence, density, count, area covered, etc.). This makes it impossible to present here all the metrics detected in the corpus, which is why a term related to the purpose is used as a label for each issue.
To carry out the comparisons between GDEMs, one or more criteria can be applied in each paper. For example, ref. [44] uses six criteria and [68] twelve. The use of several criteria offers a multivariate perspective from the statistical point of view, although statistical studies of this type were not found in the corpus. Concerning the number of criteria used in the corpus, the modal value is one, 29% of the cases consider four or more criteria, and there is a case in which twelve criteria are used jointly [68].
Table 12 lists the criteria used at least 10 times. It can be seen that in 9 out of 20 cases, the criterion is related to elevation (EL_xxx). The rest of the popular criteria are of an applied nature, which clearly indicates their interest. In addition, the most common measurements are those with a classical statistical basis (RMSE, standard deviation, and range). The most popular case is RMSE applied to elevation, but popularity alone does not imply suitability. Criteria of a more applied nature (morphometric, functional quality, and hydrology) also appear, but with fewer cases counted and, generally, in more recent papers. They usually lead to a qualitative statement rather than a quantitative value, thus precluding the production of a ranking.
Another analysis of interest is how the different criteria are associated. Table 13 cross-tabulates the criteria that account for more than 10 cases (those on the diagonal, as shown in Table 12). The results are presented as a symmetric matrix and, for readability, the zeros appear as empty cells. Several aspects can be highlighted. First, only the first row has values for all the criteria, which indicates that EL_RMSE is the de facto standard criterion: any other criterion is always accompanied by it. The highest values occur for the crosses of EL_RMSE with EL_STD and with EL_Range. In the second row, the values are much lower, and some crossings are null. In the other rows, numerous empty cells are present. In the lower corner of the matrix, there are few crosses between the most applied criteria; so, although half of the criteria are of an applied nature, their joint use is unusual. In fact, there are no cases in which two applied criteria (for example, morphometric criteria and landslide criteria) are used jointly. What is more usual is to apply a general criterion (e.g., EL_RMSE) together with an applied one, which is logical: it offers information from two perspectives, the applied one and a more general or standard one.
The structure of the matrix in Table 13 is easier to grasp graphically. For this purpose, a matrix of relative appearances was derived from Table 13, and from it a matrix of distances was calculated. A hierarchical cluster analysis was then applied using the Ward method for the intra-group distance. The result is presented in Figure 3, where it is clearly observed that the three indices with the highest frequency form a compact group (red branch) with respect to the rest; they are data-oriented criteria applied to elevation. It is also evident that, among the rest, the most applied indices (hydrology, functional quality, and landslides) group together (blue branch); these are use-oriented criteria. The remaining groups (under the yellow branch) make up a totum revolutum.
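The pipeline just described can be sketched as follows. The co-occurrence counts below are hypothetical (they are not the values of Table 13, and a small joint count between the two applied criteria was added so that the toy example yields two clean clusters), and the normalization chosen (joint count divided by the geometric mean of the marginal counts) is one plausible way to obtain relative appearances:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

# Hypothetical co-occurrence counts (NOT the actual values of Table 13):
# entry (i, j) counts papers using criteria i and j jointly; the diagonal
# counts the total appearances of each criterion.
labels = ["EL_RMSE", "EL_STD", "EL_Range", "Hydrology", "Landslides"]
counts = np.array([
    [120, 60, 40, 12,  8],
    [ 60, 70, 25,  4,  2],
    [ 40, 25, 50,  3,  1],
    [ 12,  4,  3, 15,  5],
    [  8,  2,  1,  5, 10],
], dtype=float)

# Relative appearances: joint count normalized by the geometric mean of
# the two marginal counts, then similarity turned into distance.
similarity = counts / np.sqrt(np.outer(np.diag(counts), np.diag(counts)))
distance = 1.0 - similarity
np.fill_diagonal(distance, 0.0)

# Ward hierarchical clustering on the condensed distance matrix.
Z = linkage(squareform(distance, checks=False), method="ward")
groups = fcluster(Z, t=2, criterion="maxclust")  # cut into two clusters
clusters = {lab: int(g) for lab, g in zip(labels, groups)}
```

With these counts, the three elevation criteria fall into one cluster and the two applied criteria into the other, mirroring the red/blue split of Figure 3.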

3.5. Quantitative Results

It has already been argued that the corpus shows a plethora of methodological inconsistencies among authors. Although they have used different reference data sets, considered test sites with wildly different areas, and (in a few cases) even computed the metrics with the wrong formula, they have produced for each site a numerical value which, as a set, might be worth considering at the global scale. Taken collectively, for any given criterion the wisdom of the crowd might produce a sound value for one GDEM, which in turn can be compared with the equivalent value produced for a second GDEM. Thus, for a particular criterion, we are in a position to establish a ranking among GDEMs. Only criteria such as EL_RMSE, EL_STD, and EL_Range present a number of cases high enough to ensure reasonable representativeness. For these reasons, we focus on the EL_RMSE case, which is the most widely used. In any case, the following section includes a discussion of the criteria applied.
In line with the above argument, Table 14 presents some basic statistics of the EL_RMSE for those GDEMs with more than 15 site assessments. One important issue affecting the numerical results is that the maximum values are driven by a few papers, which makes the comparisons somewhat crude; the mean and standard deviation of the metric are affected for the same reason. Therefore, for a general consistency comparison between the EL_RMSE values of different GDEMs, it is better to focus on the median and other percentiles, which are less affected by extreme values.
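The point is easy to demonstrate with made-up numbers; the per-site values below are hypothetical, chosen only to mimic a distribution in which a few extreme sites dominate the maximum:

```python
import numpy as np

# Hypothetical per-site EL_RMSE values (metres) for one GDEM; the last
# two entries mimic the few extreme sites that dominate the maximum.
rmse_per_site = np.array([3.2, 4.1, 4.8, 5.0, 5.5, 6.1, 6.4, 7.0, 38.0, 52.0])

mean_all = rmse_per_site.mean()            # pulled up by the two outliers
median_all = np.median(rmse_per_site)

trimmed = rmse_per_site[:-2]               # drop the two extreme sites
mean_trimmed = trimmed.mean()
median_trimmed = np.median(trimmed)

# The mean drops by about 8 m once the outliers are removed, while the
# median moves by only ~0.5 m: percentile-based summaries are the safer
# basis for cross-GDEM consistency comparisons.
```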
Computing a ranking among GDEMs presumes that the GDEMs involved are indeed comparable. The implicit assumption is that the end-user has defined a subset of GDEMs, any of which is suitable for their task. The rationale thus separates GDEMs like WorldDEM (a high-resolution, non-free GDEM) from others (of lower resolution but free access). We can identify a subset containing those of 1″ (one arcsecond) resolution (NASADEM, TANDEMX30, AWD3D30, SRTM30, ALOSPRISM, and ASTER), and another subset of 3″ resolution (MERIT, TANDEMX90, and SRTM90). Both subsets were biased by our early decisions regarding the criteria used to build the corpus: other global DEMs certainly exist but have not raised enough interest in the literature to be included in these comparison exercises.
Among those of 1″ resolution, the ranking using the median of the EL_RMSE shows that the best GDEM is NASADEM, followed by TANDEMX30, AWD3D30, SRTM30, ALOSPRISM, and ASTER. If we use instead the more informative LE95 (metric proposed in various accuracy standards), the best one is again NASADEM, followed by AWD3D30, SRTM30, TANDEMX30, ALOSPRISM, and the list closes again with ASTER. The most noticeable change is the fall of TANDEMX30 from second to fourth position. These results are to be taken as merely informative: we have just used published information, and they do not arise from a systematic process like the one outlined by the DEMIX project [16]. Among the 3″ resolution GDEMs, the ranking either for the median or the LE95 metric is (MERIT, TANDEMX90, and SRTM90).

4. Discussion

An important aspect that undermines the results arising from the corpus is the absence of a common methodology for evaluating and reporting the accuracy of DEMs and, therefore, of GDEMs. For years there have been guides (e.g., [69,70]) and even standards (e.g., [71]) that could have been used to estimate accuracy, perform comparisons between geospatial data products, and better report the results, but their use in the corpus has been very limited. Consider two examples. First, a widely used standard such as the NSSDA [71] is mentioned only a few times in the corpus (5 papers out of 313) (e.g., [72]). Second, even simple practices, such as adequately delimiting the area of analysis with coordinates, are uncommon. We do not know whether this is due to the authors’ unawareness of the existence of standards or because they consider these standards inadequate for GDEM comparisons. All of this limits the possibilities of performing meta-analyses of a statistical nature, which would be the logical next step in a situation like this.
From a scientific point of view, we consider that the information provided in the papers would not always allow the experiments to be replicated, not just because of the lack of easy access to reference data, but also because of a lack of clarity in the methods and processes. Compounding the lack of standardization, numerous papers differ in basic definitions, and there are even errors in the analytical formulation (e.g., in some cases the RMSE is computed by dividing the sum of squared errors by n − 1 instead of n). We have detected some papers (e.g., [73]) with problems in the statistical formulation of the metrics (e.g., RMSE and STD). Others show incorrect figures (e.g., STD values larger than RMSE values) or even negative STD values [27]. The inappropriate application of uncertainty expansion factors (e.g., LE90 and LE95) as multiples of the RMSE has also been found (e.g., [74,75,76]). This is not correct, since these relationships apply only to the STD under the assumption of a normal distribution of errors. All of the above means that we cannot take for granted the comparability and interoperability of the values calculated for the RMSE, STD, etc., neither between metrics nor when the same metric is used in several studies, given that other relevant aspects (sample size, data processing criteria, etc.) can vary considerably.
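The two formulation errors flagged above are easy to state precisely. A minimal sketch (ours, not taken from any of the cited papers): the RMSE measures deviations about zero and divides by n, the sample STD measures deviations about the mean and divides by n − 1, and the normal-theory expansion factors belong with the STD, not the RMSE.

```python
import numpy as np

def rmse(errors):
    """Root-mean-square error: deviations about ZERO, divided by n."""
    e = np.asarray(errors, dtype=float)
    return np.sqrt(np.mean(e ** 2))

def sample_std(errors):
    """Sample standard deviation: deviations about the MEAN, n - 1."""
    e = np.asarray(errors, dtype=float)
    return np.std(e, ddof=1)

# RMSE^2 = bias^2 + STD_biased^2, so RMSE equals the STD only for
# zero-bias errors. Consequently, the normal-theory expansion factors
# LE90 = 1.6449 * sigma and LE95 = 1.9600 * sigma apply to the STD of
# normally distributed errors, not to the RMSE of biased errors.
e = np.array([2.0, 3.0, 4.0])              # deliberately biased errors
bias, std_biased = e.mean(), e.std()       # biased STD (ddof=0)
identity_holds = np.isclose(rmse(e) ** 2, bias ** 2 + std_biased ** 2)
```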
In relation to the data set used as a reference, in some cases it is not well defined or identified ([15,77,78]). Although common sense dictates that the reference should be substantially more accurate than the GDEM, in some cases one of the GDEMs themselves is used as a reference ([79]). There are also cases of using legacy mapping at smaller scales (1:50,000 and 1:100,000) ([37,80]), which may mean that the data set is unsuitable as a reference because it does not satisfy the criterion of being more accurate than the product under evaluation. In some cases, the reference data are very scarce and insufficient; for example, ref. [81] uses fewer than 10 control points. Furthermore, as indicated previously, attention is not always paid to the minimum requirements that a reference must meet (representativeness, independence, and accuracy).
Point-based control or evaluation is the most frequent approach, but as some authors indicate, its value is limited (e.g., [33]). When points are used as control elements, the spatial support of the measurement is usually not considered: ground control points have point support, whereas DEM values refer to a larger area. This mismatch and its consequences are too often ignored. On the other hand, positional accuracy analysis is oriented towards altimetry, as is natural given the elevation component of GDEMs, but whether a horizontal displacement affects it is not usually analyzed (e.g., [29] evaluate horizontal displacement). Most evaluations adopt the absolute perspective (absolute vertical accuracy), although for many applications and analyses the relative vertical accuracy is more important [29]. Furthermore, it is well known that elevation error and the artifacts present in GDEMs greatly influence elevation derivatives (slope, orientation, curvature, etc.) (e.g., [40,66]), so it is possible to have high elevation accuracy and low shape quality, and vice versa [82]. Despite all this, only ref. [40] analyzed the pixel-wise error of elevation, slope, and aspect. Ref. [83] indicates that fine-scale local morphometry is often much more important than elevation difference metrics. Unfortunately, it is uncommon for authors to report the limitations of the GDEM comparison and evaluation methods they apply (e.g., [25]).
From our analysis of the corpus, we conclude that sufficient scientific rigor is not always applied. For instance, when needed, the normality of the errors is assumed, but the actual distribution is seldom analyzed. Although there is some controversy about the normality of elevation errors, this aspect is analyzed in very few cases (e.g., [74,84,85], among others), and when it is, analysis based on Q-Q plots is often applied (e.g., [86,87]), forgoing more formal statistical tests. Other possible distributions are almost ignored: only refs. [88,89] mention the Laplace distribution as an alternative. There are few cases characterizing the error through other means, for example the semivariogram (e.g., [31,63,90]) or Fourier analysis (e.g., [43,63,91]). The possible autocorrelation of the error is practically forgotten. Another aspect well known to affect normal-based statistics is the presence of outliers. As in the previous case, there are few papers in which robust measures (e.g., MAD, LE90, IQR, etc.) are considered (e.g., [30,85]), and with little or no justification. In our view, this means that the most elementary aspects of the variable of interest, such as its basic statistical behavior, are not adequately and widely addressed in the corpus.
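Both practices missing from the corpus, a formal normality test and robust scale estimation, take only a few lines. The sketch below uses synthetic errors (a normal sample contaminated with a few large outliers, a simple stand-in for the heavy tails often reported for GDEM errors) and Shapiro-Wilk as one possible formal test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic elevation errors (metres): mostly normal, contaminated with
# a few large outliers, mimicking heavy-tailed GDEM error behaviour.
errors = np.concatenate([rng.normal(0.0, 2.0, 970),
                         rng.normal(0.0, 25.0, 30)])

# A formal test (here Shapiro-Wilk) complements the usual Q-Q plot.
w_stat, p_value = stats.shapiro(errors)
errors_look_normal = p_value > 0.05        # False here: normality rejected

# Robust scale estimates are barely moved by the contamination,
# whereas the classical STD is inflated by it.
classical_std = errors.std(ddof=1)
nmad = 1.4826 * np.median(np.abs(errors - np.median(errors)))
```

With only 3% contamination, the classical STD is roughly double the NMAD: reporting both (and the test outcome) would make the published figures far easier to interpret.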
The common situation is the evaluation of the accuracy of DEMs from a single perspective, internal or external accuracy, but not jointly ([66]). Assessment of external accuracy is the most usual option although, as El Hage indicates, this perspective is not always the most appropriate. Vertical accuracy is linked to horizontal or planimetric accuracy, but very few studies (e.g., [29,68,92,93]) analyze the influence of a possible horizontal displacement between data sets. In practically all cases, the absence of such displacement is an underlying assumption, yet it is rarely made explicit. This, together with the aforementioned difference in spatial support between the control element (e.g., a GNSS point or a geodetic vertex) and a DEM value (e.g., a grid cell), is a critical aspect of these types of evaluations.
Stratification is an appropriate strategy for sampling design and the presentation of results when there are conditions that generate more homogeneous groups in each of the strata. The GDEMs cover wide geographical areas with changing topographic, geomorphologic, and hydrographic conditions, and therefore it seems logical to consider stratification. However, sampling stratification is never used in the corpus analyzed here. Stratification is only considered in the presentation of results. Common criteria for stratification are elevation itself ([94,95]), topography ([36]), slope ([36,79,94]), area ([79]), aspect ([94,96]), land cover ([36,96]), stack number ([36]), etc. In some cases, several criteria are considered at once ([36]).
Unsurprisingly, elevation is the preferred theme for evaluating the accuracy of DEMs. However, slope and aspect also receive attention in numerous papers. Their joint analysis (e.g., [38,40,94,97,98,99,100]) is usual, but there are also cases in which only one of them is analyzed (e.g., [39,62,101,102]); the slope is analyzed more frequently than the aspect. Second-order parameters (such as roughness and curvature) appear only anecdotally (e.g., [61,103,104]). Typically, slope analyses adopt a descriptive perspective (mean values, deviations, histograms, and visual analysis of some profiles) rather than a direct pixel-by-pixel comparison of slope values. In this regard, goodness-of-fit tests between distributions are missing. In any case, we believe the foregoing clearly indicates the need to pay attention to these DEM-derived variables (slope and aspect), which coincides with what ref. [105] found when analyzing the uses of DEMs through a user survey.
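The missing goodness-of-fit comparison could be carried out, for example, with a two-sample Kolmogorov-Smirnov test on the slope distributions. The slope samples below are hypothetical (gamma-distributed, with the GDEM sample shifted downward to mimic the smoothing of a coarser model):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Hypothetical slope samples (degrees) for the same area: a coarser GDEM
# typically smooths the terrain, shifting the slope distribution downward.
slopes_reference = rng.gamma(shape=2.0, scale=6.0, size=5000)
slopes_gdem = rng.gamma(shape=2.0, scale=5.0, size=5000)

# Two-sample Kolmogorov-Smirnov test: are the two slope samples drawn
# from the same distribution?
ks_stat, p_value = stats.ks_2samp(slopes_reference, slopes_gdem)
distributions_differ = p_value < 0.05
```

Unlike a comparison of means or histograms by eye, the KS statistic summarizes the whole distributional mismatch in one number and comes with a significance level.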
Quantitative, objective metrics lead naturally to a ranking, but there are subjective alternatives as well. One of these is visual analysis. It also offers numerous options, but there are no standardized methods. Some form of visual analysis is indicated in thirteen references. The most complete example is the one presented by ref. [106], which includes profiles, DEM visualization, and relief shading.
We consider that in general the analyses carried out are weak. There are numerous alternatives (e.g., elevation profiles, visualization, shaded reliefs, distribution goodness-of-fit tests, parameters tests, analysis of variance, Q-Q plots, boxplots, hexbin scatter plots, ROC curves, etc.) that are not typically used.
In relation to the assessment, a clear formulation of the comparison method to be followed is generally lacking. The majority of the papers offer a ranking of the GDEMs analyzed (e.g., [65,107,108,109,110]), but very few propose a procedure to combine several analysis perspectives into a metric that allows ordering of the analyzed GDEMs ([68,111]). There are numerous cases in which several criteria are used without resulting in an ordering of the available GDEMs. Even when various criteria are used, the perspective of these analyses is neither multivariate, from the statistical point of view, nor multicriteria, from the point of view of decision making. A paradigmatic case is that of morphometric evaluations. Numerous papers adopt a morphometric perspective ([66,109,112,113], etc.) in which a number of indices are computed (e.g., Horton’s parameters) and compared with the values computed from the reference DEM. However, in most of these papers no joint result is reached: morphometric parameters are merely calculated, presented, and commented on in an attempt to uncover a relationship. Thus, despite the popularity of these analyses and their applied interest, there is no standard method for comparing and ranking (G)DEMs from this perspective. In summary, numerous results are generated, but most of the time the authors cannot elucidate which GDEM behaves better than another. Regarding multicriteria analysis, only one paper is noteworthy [65].
One of the most notable issues is that, despite the existence of numerous papers with a certain fitness-for-use orientation, the most traditional perspective, centered on vertical positional accuracy, remains dominant. There are also cases in which vertical positional accuracy indices (e.g., RMSE) derived from control points are interpreted from an applied perspective, as in ref. [114], which analyzed this situation in connection with landslide applications.
We can identify traditional criteria (e.g., EL_RMSE, EL_STD, etc.) that focus on vertical elevation error and are based on common statistical metrics (e.g., RMSE, STD, etc.). They reflect a generalist, data-centric perspective, typical of data producers, in which the intended use is not considered. There are also other criteria, very similar to the previous ones and likewise based on statistical metrics, that already embrace a broader view of what a DEM is. For example, a few papers rank GDEMs using planimetric accuracy: ref. [115] compared GDEMs using the double-buffer method described in ref. [116], selecting ridges and thalweg lines instead of contour lines as homologous objects, thus connecting the analysis to hydrography. Another clear example is the metrics applied to slope and orientation. In some papers, these criteria have been used with a certain fitness-for-use perspective, but we believe they should be part of a routine evaluation of DEMs together with the criteria related to elevation (absolute and relative accuracies). Finally, there are criteria of a much more applied nature, of which the corpus presents numerous examples. We denote them as functional accuracy, as part of a so-called functional quality concept. It should also be noted that most of the papers with an applied orientation usually combine several criteria. For example, ref. [117] focuses on the suitability of GDEMs for micro-scale watershed planning, testing three watershed-defining parameters (elevation, slope, and reservoir capacity) to determine which GDEM performs best. Below are some examples of fitness for use. Some of these options are directly linked to one or several of the thematic areas that ref. [118] considers within current digital (geo)morphometry:
  • Geomorphology. Ref. [65] is interested in peak detection as remnants of degraded geomorphic surfaces and uses indices related to the presence of those peaks. Ref. [119] is interested in the ability of GDEMs to derive the topographic wetness index (TWI) and landform classifications, using the overall accuracy and the kappa index to assess the classification results.
  • Geomorphometry. Ref. [120] pays attention to several basin properties: basin area, average overland flow length, basin slope, basin length along the main channel, basin slope along the main channel, basin perimeter, and shape factor. They aim to provide guidelines for users to select the most suitable GDEM that will obtain an accurate analysis in less time. Ref. [109] uses “register difference” which represents the area mismatching degree (%) (sliver polygons area) between the derived data set and the reference. Ref. [112] applies twenty-one morphometric parameters and uses relative error in evaluating the similarity between the derived drainage network of the GDEMs and a stream network derived from a topographic map.
  • Determination of water volumes. Ref. [121] estimates water volume variations, establishing a regression between the volume derived from the analysis of the GDEMs and the reference values using the correlation coefficient (R2), the normalized root-mean-square error (NRMSE), and the mean absolute percentage error (MAPE) for the analysis. Ref. [122] compares three GDEMs and analyzes the behavior of the contour lines, the longitudinal profiles of dams, and the curves for water reservoir elevation and stored volume. Ref. [117] uses a flood level and determines the volume stored in the reservoir vessel.
  • Detection of depressions and peaks. Ref. [51] investigates the use of GDEMs to detect and quantify natural karst depressions; for the evaluation, they use the overall accuracy and a morphometric analysis based on the circularity index. Ref. [32] focuses on the presence of peaks using the topographic prominence, defined as the vertical distance between a peak and the lowest contour line encircling it but not any higher peak.
  • Detection of lineaments. Ref. [50] examines various GDEMs at different resolutions in order to recommend the best one based on lineament extraction. Variables of interest are the density, length, and orientation of the extracted lineaments.
  • Drainage network delineation. Ref. [123] analyzes several aspects, among them the positional accuracy of the resulting drainage network, for which they apply a buffer method [124]. This is an example of a case where something computable from the DEM can also be identified in the terrain as 2D objects. The same could be anticipated for roads, but there was no example in the corpus.
  • Determination of the height of buildings. Ref. [125] analyzes the feasibility of using GDEMs for extracting digital building height models and urban elevation profiles and uses a completion metric that is equivalent to recall (true positive/(true positive + false negative)).
The methods outlined above only require access to the GDEM and to the reference elevation data in order to perform the computation. Among the cases where extra reference data (other than elevation) are involved in the computation, we can mention:
  • Quantification of soil loss and erosion. Ref. [126] applied the universal soil loss equation and compared the results obtained with their reference using the Kappa index and the producer and user accuracies for a certain number of erosion level categories. Ref. [102] compared the GDEMs in terms of soil loss and various flavors of slope (slope, slope length, etc.) as criteria.
  • Landslide simulation. Ref. [127] compared the results of landslide risk estimates to three real landslide events caused by rain. For their evaluation, they used receiver operating characteristics (ROC) curves, confusion matrices, and a factor called LRClass, which is the ratio of the percentage of landslide locations within a particular class of factor of safety in relation to the total number of landslide locations considered.
  • Hydraulics. In [110], part of the analysis is based on the use of a numerical model. Several results of the simulations are considered: the discharge and water surface elevation results from the hydraulic model, the delineation of the flooded area, and the relative sensitivity of the hydraulic model to changes in Manning’s n roughness coefficient.
  • Snow avalanches. Ref. [128] analyzed the case of snow avalanche dynamics. As comparison criteria, they used: (a) flow path, (b) run-out distance and deposit, and (c) flow velocities and impact pressure.
  • Land cover classification. Ref. [64] analyzed the impact of various DEMs in the automatic classification of land covers and the result is evaluated using the Kappa index.
We consider that the high number of examples of evaluation applied to use cases clearly shows the interest in this approach. Another remarkable aspect is that, in many cases, metrics already applied in the geospatial field are used (e.g., completeness, overall accuracy, kappa index, recall, density, etc.). Unfortunately, unlike the data-centric procedures already incorporated into established standards, there are no standardized procedures for performing these user-centric evaluations. Here there is a clear gap that offers opportunities for research, standardization, and outreach.
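Several of the user-centric metrics just mentioned (overall accuracy, the kappa index, and recall, the "completion" of ref. [125]) derive from a single confusion matrix. A minimal sketch, with a hypothetical binary example:

```python
import numpy as np

def confusion_metrics(cm):
    """Overall accuracy, Cohen's kappa, and per-class recall from a
    confusion matrix `cm` (rows: reference class, columns: predicted)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    overall = np.trace(cm) / n                 # overall accuracy
    expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2
    kappa = (overall - expected) / (1.0 - expected)
    recall = np.diag(cm) / cm.sum(axis=1)      # TP / (TP + FN) per class
    return overall, kappa, recall

# Hypothetical binary case (e.g., karst depression present/absent).
oa, kappa, recall = confusion_metrics([[40, 10],
                                       [ 5, 45]])
```

Standardizing even this small step (how the matrix is built, which classes are reported) would go a long way toward making user-centric evaluations comparable across studies.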
An important issue to highlight here is that the papers are mostly based on open GDEM data (despite some commercial ones having been considered), but almost all of them use reference data that are not offered openly. In this sense, the proposal of [129] to create a collaborative DEM (DTM + DSM) data control infrastructure worldwide is interesting. This is also in line with the ongoing DEMIX initiative [16]. Finally, ending as we have begun this section, the lack of standardization should be highlighted both in the use of existing standards and in the development of new standards. We believe that many of the problems indicated above could be avoided if adequate standards were available that would prescribe certain elementary aspects in order to achieve rigorous comparisons and guide other aspects of the comparisons.

5. Conclusions

Based on a multiple search strategy, an analysis of a corpus of 313 papers comparing one GDEM with one or more DEMs/GDEMs against a reference has been performed. The number of findings confirms that the use of GDEMs is widespread and that the comparison between different GDEM options continues to be a topic of interest for the scientific community and users. The most popular GDEMs in comparison studies are SRTM and ASTER. The distribution of papers by country indicates much greater activity in emerging countries (e.g., India, Brazil) than in developed ones. In general, the definition of the data population to be compared is not carried out rigorously, and most comparisons focus on classical metrics applied to elevation (RMSE and STD); the RMSE applied to elevation is the de facto standard. It is also common to use more than one metric, a fact that easily leads to conflicting rankings, yet a multivariate perspective to handle this is not applied in the analyses. Comparisons have also been found in which criteria closer to functional quality are used, but this perspective is still not very common. Comparisons are made using very diverse sources as a reference (e.g., GNSS points, legacy cartography, topographic network vertices, etc.), and the data sets used as references do not always have adequate quality. Regarding the numerical values that result from the comparisons, the lack of standardized methods, measures, criteria, and reporting procedures makes it nearly impossible to perform any rigorous meta-analysis. Our analysis has shown the importance of a more applied evaluation of the quality of GDEMs, closer to the uses (fitness for use), but also that there are no standardized methods for it, proposed neither by the user communities nor by the producers. This opens up a wide field for applied research.
In line with the criteria indicated above, and looking to the future, the main conclusion of this study is the urgent need to standardize the GDEM comparison methods so that the results will be interoperable and usable on a global scale. This standardization should cover the definition of the use cases, the data population, the sampling, the criteria and measures for comparison, the criteria for selecting the reference, the statistical analysis, and reporting procedures. Along these lines, and with an open science perspective, a global GDEM reference infrastructure would be an element of great value to assure transparency.

Supplementary Materials

This information can be downloaded at https://www.mdpi.com/article/10.3390/ijgi12080337/s1. The corpus.xls file contains the list of all the documents that make up the corpus analyzed in this paper.

Author Contributions

Conceptualization, methodology, analysis, writing—original draft preparation, and writing—review and editing, Carlos López-Vázquez and Francisco Javier Ariza-López. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the research project “Functional Quality of Digital Elevation Models in Engineering” of the State Research Agency of Spain; PID2019-106195RB-I00/AEI/10.13039/501100011033 (https://coello.ujaen.es/investigacion/web_giic/funquality4dem/) (accessed on 1 August 2023).

Data Availability Statement

The corpus of analyzed documents is included as Supplementary Materials.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Guth, P.L.; Van Niekerk, A.; Grohmann, C.H.; Muller, J.-P.; Hawker, L.; Florinsky, I.V.; Gesch, D.; Reuter, H.I.; Herrera-Cruz, V.; Riazanoff, S.; et al. Digital Elevation Models: Terminology and Definitions. Remote Sens. 2021, 13, 3581. [Google Scholar] [CrossRef]
  2. EU. Directive 2007/2/EC of the European Parliament and of the Council of 14 March 2007 Establishing an Infrastructure for Spatial Information in the European Community (INSPIRE)|INSPIRE; European Parliament and of the Council of the European Union: Strasbourg, France, 2007. [Google Scholar]
  3. UN-GGIM. The Global Fundamental Geospatial Data Themes; United Nations Committee of Experts on Global Geospatial Information Management, United Nations: New York, NY, USA, 2019. [Google Scholar]
  4. Farr, T.G.; Rosen, P.A.; Caro, E.; Crippen, R.; Duren, R.; Hensley, S.; Kobrick, M.; Paller, M.; Rodriguez, E.; Roth, L.; et al. The Shuttle Radar Topography Mission. Rev. Geophys. 2007, 45, RG2004. [Google Scholar] [CrossRef] [Green Version]
  5. Tachikawa, T.; Hato, M.; Kaku, M.; Iwasaki, A. Characteristics of ASTER GDEM Version 2. In Proceedings of the 2011 IEEE International Geoscience and Remote Sensing Symposium, Vancouver, BC, Canada, 24–29 July 2011; pp. 3657–3660. [Google Scholar]
  6. Tachikawa, T.; Kaku, M.; Iwasaki, A.; Gesch, D.; Oimoen, M.; Zhang, Z.; Danielson, J.; Krieger, T.; Curtis, B.; Haase, J. ASTER Global Digital Elevation Model Version 2—Summary of Validation Results; Earth Resources Observation and Science (EROS) Center: Sioux Falls, SD, USA, 2011. [Google Scholar]
  7. Rizzoli, P.; Martone, M.; Gonzalez, C.; Wecklich, C.; Borla Tridon, D.; Bräutigam, B.; Bachmann, M.; Schulze, D.; Fritz, T.; Huber, M.; et al. Generation and Performance Assessment of the Global TanDEM-X Digital Elevation Model. ISPRS J. Photogramm. Remote Sens. 2017, 132, 119–139. [Google Scholar] [CrossRef] [Green Version]
  8. Krieger, G.; Moreira, A.; Fiedler, H.; Hajnsek, I.; Werner, M.; Younis, M.; Zink, M. TanDEM-X: A Satellite Formation for High-Resolution SAR Interferometry. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3317–3341. [Google Scholar] [CrossRef] [Green Version]
  9. Zink, M.; Bachmann, M.; Brautigam, B.; Fritz, T.; Hajnsek, I.; Moreira, A.; Wessel, B.; Krieger, G. TanDEM-X: The New Global DEM Takes Shape. IEEE Geosci. Remote Sens. Mag. 2014, 2, 8–23. [Google Scholar] [CrossRef]
  10. Crippen, R.; Buckley, S.; Agram, P.; Belz, E.; Gurrola, E.; Hensley, S.; Kobrick, M.; Lavalle, M.; Martin, J.; Neumann, M.; et al. NASADEM Global Elevation Model: Methods and Progress. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, XLI-B4, 125–128. [Google Scholar] [CrossRef] [Green Version]
  11. Tadono, T.; Takaku, J.; Tsutsui, K.; Oda, F.; Nagai, H. Status of ALOS World 3D (AW3D) Global DSM Generation. In Proceedings of the 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Milan, Italy, 26–31 July 2015; pp. 3822–3825. [Google Scholar]
  12. Airbus D&S. Copernicus Digital Elevation Model—Product Handbook; Airbus Defence and Space—Intelligence: Potsdam, Germany, 2022. [Google Scholar]
  13. Yamazaki, D.; Ikeshima, D.; Tawatari, R.; Yamaguchi, T.; O’Loughlin, F.; Neal, J.C.; Sampson, C.C.; Kanae, S.; Bates, P.D. A High-Accuracy Map of Global Terrain Elevations: Accurate Global Terrain Elevation Map. Geophys. Res. Lett. 2017, 44, 5844–5853. [Google Scholar] [CrossRef] [Green Version]
  14. Hawker, L.; Uhe, P.; Paulo, L.; Sosa, J.; Savage, J.; Sampson, C.; Neal, J. A 30 m Global Map of Elevation with Forests and Buildings Removed. Environ. Res. Lett. 2022, 17, 024016. [Google Scholar] [CrossRef]
  15. Uss, M.L.; Vozel, B.; Lukin, V.V.; Chehdi, K. Estimation of Variance and Spatial Correlation Width for Fine-Scale Measurement Error in Digital Elevation Model. IEEE Trans. Geosci. Remote Sens. 2020, 58, 1941–1956. [Google Scholar] [CrossRef]
  16. Strobl, P.A.; Bielski, C.; Guth, P.L.; Grohmann, C.H.; Muller, J.-P.; López-Vázquez, C.; Gesch, D.B.; Amatulli, G.; Riazanoff, S.; Carabajal, C. The Digital Elevation Model Intercomparison Experiment Demix, a Community-Based Approach at Global Dem Benchmarking. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2021, XLIII-B4-2021, 395–400. [Google Scholar] [CrossRef]
  17. Polidori, L.; El Hage, M. Digital Elevation Model Quality Assessment Methods: A Critical Review. Remote Sens. 2020, 12, 3522. [Google Scholar] [CrossRef]
  18. Mesa-Mingorance, J.L.; Ariza-López, F.J. Accuracy Assessment of Digital Elevation Models (DEMs): A Critical Review of Practices of the Past Three Decades. Remote Sens. 2020, 12, 2630. [Google Scholar] [CrossRef]
  19. Takaku, J.; Tadono, T.; Doutsu, M.; Ohgushi, F.; Kai, H. Updates of ‘Aw3d30’ Alos Global Digital Surface Model with Other Open Access Datasets. ISPRS—Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2020, XLIII-B4-2020, 183–189. [Google Scholar] [CrossRef]
  20. Gesch, D.; Oimoen, M.; Danielson, J.; Meyer, D. Validation of the Aster Global Digital Elevation Model Version 3 over the Conterminous United States. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, XLI-B4, 143–148. [Google Scholar] [CrossRef] [Green Version]
  21. Rodríguez, E.; Morris, C.S.; Belz, J.E. A Global Assessment of the SRTM Performance. Photogramm. Eng. Remote Sens. 2006, 72, 249–260. [Google Scholar] [CrossRef] [Green Version]
  22. Mukul, M.; Srivastava, V.; Mukul, M. Analysis of the Accuracy of Shuttle Radar Topography Mission (SRTM) Height Models Using International Global Navigation Satellite System Service (IGS) Network. J. Earth Syst. Sci. 2015, 124, 1343–1357. [Google Scholar] [CrossRef] [Green Version]
  23. Airbus D&S. Copernicus DEM Copernicus Digital Elevation Model Validation Report; Airbus Defence and Space—Intelligence: Potsdam, Germany, 2020. [Google Scholar]
  24. Robinson, N.; Regetz, J.; Guralnick, R.P. EarthEnv-DEM90: A Nearly-Global, Void-Free, Multi-Scale Smoothed, 90 m Digital Elevation Model from Fused ASTER and SRTM Data. ISPRS J. Photogramm. Remote Sens. 2014, 87, 57–67. [Google Scholar] [CrossRef]
  25. Uuemaa, E.; Ahi, S.; Montibeller, B.; Muru, M.; Kmoch, A. Vertical Accuracy of Freely Available Global Digital Elevation Models (ASTER, AW3D30, MERIT, TanDEM-X, SRTM, and NASADEM). Remote Sens. 2020, 12, 3482. [Google Scholar] [CrossRef]
  26. Airbus D&S. WorldDEMTM Technical Product Specification Digital Surface Model, Digital Terrain Model. Version 2.4; Airbus Defence and Space—Intelligence: Potsdam, Germany, 2018; p. 38. [Google Scholar]
  27. Snehmani; Singh, M.; Gupta, R.D.; Ganju, A. Extraction of High Resolution DEM from Cartosat-1 Stereo Imagery Using Rational Math Model and Its Accuracy Assessment for a Part of Snow Covered NW-Himalaya. J. Remote Sens. GIS 2013, 4, 23–34. [Google Scholar]
  28. Du, X.; Guo, H.; Fan, X.; Zhu, J.; Yan, Z.; Zhan, Q. Vertical Accuracy Assessment of Freely Available Digital Elevation Models over Low-Lying Coastal Plains. Int. J. Digit. Earth 2016, 9, 252–271. [Google Scholar] [CrossRef]
  29. Breytenbach, A.; Van Niekerk, A. Analysing DEM Errors over an Urban Region across Various Scales with Different Elevation Sources. S. Afr. Geogr. J. 2020, 102, 133–169. [Google Scholar] [CrossRef]
  30. Hirt, C. Artefact Detection in Global Digital Elevation Models (DEMs): The Maximum Slope Approach and Its Application for Complete Screening of the SRTM v4.1 and MERIT DEMs. Remote Sens. Environ. 2018, 207, 27–41. [Google Scholar] [CrossRef] [Green Version]
  31. Hawker, L.; Neal, J.; Bates, P. Accuracy Assessment of the TanDEM-X 90 Digital Elevation Model for Selected Floodplain Sites. Remote Sens. Environ. 2019, 232, 111319. [Google Scholar] [CrossRef]
  32. Grohmann, C.H. Comparative Analysis of Global Digital Elevation Models and Ultra-Prominent Mountain Peaks. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, III–4, 17–23. [Google Scholar] [CrossRef] [Green Version]
  33. Becek, K. Assessing Global Digital Elevation Models Using the Runway Method: The Advanced Spaceborne Thermal Emission and Reflection Radiometer Versus the Shuttle Radar Topography Mission Case. IEEE Trans. Geosci. Remote Sens. 2014, 52, 4823–4831. [Google Scholar] [CrossRef]
  34. Ariza-López, F.J.; García-Balboa, J.; Rodríguez-Avi, J.; Ceballos, J. Guide for the Positional Accuracy Assessment of Geospatial Data, 1st ed.; Occasional Posts; Pan American Institute of Geography and History (PAIGH): Mexico City, Mexico, 2021; Volume 1. [Google Scholar]
  35. Khasanov, K.; Ahmedov, A. Comparison of Digital Elevation Models for the Designing Water Reservoirs: A Case Study Pskom Water Reservoir. E3S Web Conf. 2021, 264, 03058. [Google Scholar] [CrossRef]
  36. Hu, Z.; Peng, J.; Hou, Y.; Shan, J. Evaluation of Recently Released Open Global Digital Elevation Models of Hubei, China. Remote Sens. 2017, 9, 262. [Google Scholar] [CrossRef] [Green Version]
  37. Chymyrov, A.; Chontoev, D.; Zhakeev, B. Creation of the Digital Relief Models Based on Open Remote Sensing Data for Improvement the Borders of River Basins in the Issyk-Kul Lake Cavity. ICIGIS 2020, 26, 349–365. [Google Scholar] [CrossRef]
  38. Courty, L.G.; Soriano-Monzalvo, J.C.; Pedrozo-Acuña, A. Evaluation of Open-access Global Digital Elevation Models (AW3D30, SRTM, and ASTER) for Flood Modelling Purposes. J. Flood Risk Manag. 2019, 12, e12550. [Google Scholar] [CrossRef] [Green Version]
  39. Fijałkowska, A. Analysis of the Influence of DTM Source Data on the LS Factors of the Soil Water Erosion Model Values with the Use of GIS Technology. Remote Sens. 2021, 13, 678. [Google Scholar] [CrossRef]
  40. Zingaro, M.; La Salandra, M.; Colacicco, R.; Roseto, R.; Petio, P.; Capolongo, D. Suitability Assessment of Global, Continental and National Digital Elevation Models for Geomorphological Analyses in Italy. Trans. GIS 2021, 25, 2283–2308. [Google Scholar] [CrossRef]
  41. Nadi, S.; Shojaei, D.; Ghiasi, Y. Accuracy Assessment of DEMs in Different Topographic Complexity Based on an Optimum Number of GCP Formulation and Error Propagation Analysis. J. Surv. Eng. 2020, 146, 04019019. [Google Scholar] [CrossRef]
  42. Li, P.; Li, Z.; Dai, K.; Al-Husseinawi, Y.; Feng, W.; Wang, H. Reconstruction and Evaluation of DEMs from Bistatic Tandem-X SAR in Mountainous and Coastal Areas of China. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 5152–5170. [Google Scholar] [CrossRef]
  43. Purinton, B.; Bookhagen, B. Beyond Vertical Point Accuracy: Assessing Inter-Pixel Consistency in 30 m Global DEMs for the Arid Central Andes. Front. Earth Sci. 2021, 9, 758606. [Google Scholar] [CrossRef]
  44. Rai, D.; Tobgay, T.; Dorji, T.; Dema, D.; Sharma, V.; Choki, T. Accuracy Assessment of Digital Elevation Models for a Mountainous Terrain; Jigme Namgyel Engineering College (JNEC): Dewathang, Bhutan, 2021; pp. 20–27. [Google Scholar]
  45. Del Rosario González-Moradas, M.; Viveen, W. Evaluation of ASTER GDEM2, SRTMv3.0, ALOS AW3D30 and TanDEM-X DEMs for the Peruvian Andes against Highly Accurate GNSS Ground Control Points and Geomorphological-Hydrological Metrics. Remote Sens. Environ. 2020, 237, 111509. [Google Scholar] [CrossRef]
  46. Liu, Z.; Zhu, J.; Fu, H.; Zhou, C.; Zuo, T. Evaluation of the Vertical Accuracy of Open Global DEMs over Steep Terrain Regions Using ICESat Data: A Case Study over Hunan Province, China. Sensors 2020, 20, 4865. [Google Scholar] [CrossRef]
  47. Bayık, Ç.; Becek, K.; Mekik, Ç.; Özendi, M. On the Vertical Accuracy of the ALOS World 3D-30m Digital Elevation Model. Remote Sens. Lett. 2018, 9, 607–615. [Google Scholar]
  48. Saini, O.; Bhardwaj, A.; Chatterjee, R. Generation of Radargrammetric Digital Elevation Model (DEM) and Vertical Accuracy Assessment Using ICESat-2 Laser Altimetric Data and Available Open-Source DEMs. In Proceedings of the 39th INCA International Congress on New Age Cartography and Geospatial Technology in Digital India, Dehradun, India, 18 December 2019. [Google Scholar]
  49. Soliman, A.; Han, L. Effects of Vertical Accuracy of Digital Elevation Model (DEM) Data on Automatic Lineaments Extraction from Shaded DEM. Adv. Space Res. 2019, 64, 603–622. [Google Scholar] [CrossRef]
  50. Shebl, A.; Csámer, Á. Reappraisal of DEMs, Radar and Optical Datasets in Lineaments Extraction with Emphasis on the Spatial Context. Remote Sens. Appl. Soc. Environ. 2021, 24, 100617. [Google Scholar] [CrossRef]
  51. De Carvalho, O.; Guimarães, R.; Montgomery, D.; Gillespie, A.; Trancoso Gomes, R.; de Souza Martins, É.; Silva, N. Karst Depression Detection Using ASTER, ALOS/PRISM and SRTM-Derived Digital Elevation Models in the Bambuí Group, Brazil. Remote Sens. 2013, 6, 330–351. [Google Scholar] [CrossRef] [Green Version]
  52. Kakavas, M.; Nikolakopoulos, K.G.; Kyriou, A.; Zagana, H. Assessment of Freely Available DSMs for Automatic Karst Feature Detection. Arab. J. Geosci. 2018, 11, 388. [Google Scholar] [CrossRef]
  53. Chowdhuri, I.; Pal, S.C.; Saha, A.; Chakrabortty, R.; Roy, P. Evaluation of Different DEMs for Gully Erosion Susceptibility Mapping Using In-Situ Field Measurement and Validation. Ecol. Inform. 2021, 65, 101425. [Google Scholar] [CrossRef]
  54. Zanandrea, F.; Michel, G.P.; Kobiyama, M.; Cardozo, G.L. Evaluation of Different DTMs in Sediment Connectivity Determination in the Mascarada River Watershed, Southern Brazil. Geomorphology 2019, 332, 80–87. [Google Scholar] [CrossRef]
  55. Brock, J.; Schratz, P.; Petschko, H.; Muenchow, J.; Micu, M.; Brenning, A. The Performance of Landslide Susceptibility Models Critically Depends on the Quality of Digital Elevation Models. Geomat. Nat. Hazards Risk 2020, 11, 1075–1092. [Google Scholar] [CrossRef]
  56. Chanu, M.L.; Bakimchandra, O. Landslide Susceptibility Assessment Using AHP Model and Multi Resolution DEMs along a Highway in Manipur, India. Environ. Earth Sci. 2022, 81, 156. [Google Scholar] [CrossRef]
  57. Huggel, C.; Schneider, D.; Miranda, P.J.; Delgado Granados, H.; Kääb, A. Evaluation of ASTER and SRTM DEM Data for Lahar Modeling: A Case Study on Lahars from Popocatépetl Volcano, Mexico. J. Volcanol. Geotherm. Res. 2008, 170, 99–110. [Google Scholar] [CrossRef] [Green Version]
  58. Garrote, J. Free Global DEMs and Flood Modelling—A Comparison Analysis for the January 2015 Flooding Event in Mocuba City (Mozambique). Water 2022, 14, 176. [Google Scholar] [CrossRef]
  59. Khojeh, S.; Ataie-Ashtiani, B.; Hosseini, S.M. Effect of DEM Resolution in Flood Modeling: A Case Study of Gorganrood River, Northeastern Iran. Nat. Hazards 2022, 112, 2673–2693. [Google Scholar] [CrossRef]
  60. Ariza-López, F.J.; Reinoso-Gordo, J.F. Functional Quality: A Use-Case Oriented Data Quality Evaluation. In Proceedings of the Fourteenth International Conference on Advanced Geographic Information Systems, Applications, and Services, Porto, Portugal, 26–30 June 2022; pp. 28–30. [Google Scholar]
  61. Tran, T.A.; Raghavan, V.; Masumoto, S.; Vinayaraj, P.; Yonezawa, G. A Geomorphology-Based Approach for Digital Elevation Model Fusion—Case Study in Danang City, Vietnam. Earth Surf. Dyn. 2014, 2, 403–417. [Google Scholar] [CrossRef] [Green Version]
  62. Ashatkin, I.A.; Maltsev, K.A.; Gainutdinova, G.F.; Usmanov, B.M.; Gafurov, A.M.; Ganieva, A.F.; Maltseva, T.S.; Gizzatullina, E.R. Analysis of Relief Morphometry by Global DEM in the Southern Part of the European Territory of Russia. Uch. Zap. Kazan. Univ. Ser. Estestv. Nauki 2020, 162, 612–628. [Google Scholar] [CrossRef]
  63. Prakash Mohanty, M.; Nithya, S.; Nair, A.S.; Indu, J.; Ghosh, S.; Mohan Bhatt, C.; Srinivasa Rao, G.; Karmakar, S. Sensitivity of Various Topographic Data in Flood Management: Implications on Inundation Mapping over Large Data-Scarce Regions. J. Hydrol. 2020, 590, 125523. [Google Scholar] [CrossRef]
  64. Cherlinka, V.R.; Dmytruk, Y.M.; Bodyan, Y.H. Effect of DEM Sources on Quality Indicators of Predictive Maps of Soil Cover. AiG 2020, 90, 36–46. [Google Scholar] [CrossRef]
  65. Dobre, B.; Kovács, I.P.; Bugya, T. Comparison of Digital Elevation Models through the Analysis of Geomorphic Surface Remnants in the Desatoya Mountains, Nevada. Trans. GIS 2021, 25, 2262–2282. [Google Scholar] [CrossRef]
  66. El Hage, M.; Villard, L.; Huang, Y.; Ferro-Famil, L.; Koleck, T.; Le Toan, T.; Polidori, L. Multicriteria Accuracy Assessment of Digital Elevation Models (DEMs) Produced by Airborne P-Band Polarimetric SAR Tomography in Tropical Rainforests. Remote Sens. 2022, 14, 4173. [Google Scholar] [CrossRef]
  67. ISO 19157:2013; Geographic Information—Data Quality. International Organization for Standardization: Geneva, Switzerland, 2013.
  68. Rawat, K.S.; Kumar, S.; Mishra, A.K.; Singh, S.K. Assessing the Accuracy of Open Source Altitude Data for the Hilly Area in Tehri Garhwal District of Uttarakhand, India. In Smart Technologies for Energy, Environment and Sustainable Development; Kolhe, M.L., Jaju, S.B., Diagavane, P.M., Eds.; Springer Proceedings in Energy; Springer Nature: Singapore, 2022; Volume 2, pp. 153–177. ISBN 9789811668784. [Google Scholar]
  69. ASPRS. ASPRS Guidelines Vertical Accuracy Reporting for Lidar Data V1.0; American Society for Photogrammetry and Remote Sensing: Baton Rouge, LA, USA, 2004. [Google Scholar]
  70. ASPRS. ASPRS Positional Accuracy Standards for Digital Geospatial Data. Photogramm. Eng. Remote Sens. 2015, 81, 1–26. [Google Scholar] [CrossRef]
  71. FGDC-STD-007.3-1998; Geospatial Positioning Accuracy Standards. Part 3: National Standard for Spatial Data Accuracy (NSSDA). Federal Geographic Data Committee Secretariat: Reston, VA, USA, 1998.
  72. Ioannidis, C.; Xinogalas, E.; Soile, S. Assessment of the Global Digital Elevation Models ASTER and SRTM in Greece. Surv. Rev. 2014, 46, 342–354. [Google Scholar] [CrossRef]
  73. Zhao, S.; Cheng, W.; Jiang, J.; Sha, W. Error Comparison among the DEM Datasets Made from ZY-3 Satellite and the Global Open Datasets. Available online: https://m.researching.cn/articles/OJ990f29924021fb66 (accessed on 22 February 2023).
  74. Abdulkareem, I.; Samuel, Z.; Abdullah, Q. Accuracy Assessment of Digital Elevation Models Produced From Different Geomatics Data. Eng. Technol. J. 2020, 38, 1580–1592. [Google Scholar] [CrossRef]
  75. Ihsan, H.M.; Sahid, S.S. Vertikal Accuracy Assessment on Sentinel-1, Alos Palsar, and Demnas in the Ciater Basin. J. Geogr. Gea 2021, 21, 16–25. [Google Scholar] [CrossRef]
  76. Altunel, A.O.; Okolie, C.J.; Kurtipek, A. Capturing the Level of Progress in Vertical Accuracy Achieved by ASTER GDEM since the Beginning: Turkish and Nigerian Examples. Geocarto Int. 2022, 37, 12073–12095. [Google Scholar] [CrossRef]
  77. Xu, K.; Fang, J.; Fang, Y.; Sun, Q.; Wu, C.; Liu, M. The Importance of Digital Elevation Model Selection in Flood Simulation and a Proposed Method to Reduce DEM Errors: A Case Study in Shanghai. Int. J. Disaster Risk Sci. 2021, 12, 890–902. [Google Scholar] [CrossRef]
  78. Md Ali, A.; Solomatine, D.P.; Di Baldassarre, G. Assessing the Impact of Different Sources of Topographic Data on 1-D Hydraulic Modelling of Floods. Hydrol. Earth Syst. Sci. 2015, 19, 631–643. [Google Scholar] [CrossRef] [Green Version]
  79. Lopes Pereira, H.; Catalunha, M.J.; Borges, C.R., Jr.; Teixeira Gonzaga Sousa, P. Qualidade de Modelos Digitais de Elevação Utilizando Dados Do SIGEF: Estudo de Caso Para as Sub-Bacias Do Ribeirão Dos Mangues e Rio Soninho No Estado Do Tocantins. Rev. Bras. Geogr. Fís. 2019, 12, 187–200. [Google Scholar] [CrossRef] [Green Version]
  80. Kovalchuk, I.P.; Lukianchuk, K.A.; Bogdanets, V.A. Assessment of Open Source Digital Elevation Models (SRTM-30, ASTER, ALOS) for Erosion Processes Modeling. J. Geol. Geogr. Geoecol. 2019, 28, 95–105. [Google Scholar] [CrossRef] [PubMed]
  81. Mohammadi, A.; Karimzadeh, S.; Jalal, S.J.; Kamran, K.V.; Shahabi, H.; Homayouni, S.; Al-Ansari, N. A Multi-Sensor Comparative Analysis on the Suitability of Generated DEM from Sentinel-1 SAR Interferometry Using Statistical and Hydrological Models. Sensors 2020, 20, 7214. [Google Scholar] [CrossRef]
  82. El Hage, M. Etude de La Qualité Géomorphologique de Modèles Numériques de Terrain Issus de l’imagerie Spatiale. Ph.D. Thesis, Conservatoire National des Arts et Metiers-CNAM, Paris, France, 2012. [Google Scholar]
  83. Trevisani, S.; Skrypitsyna, T.N.; Florinsky, I.V. Global Digital Elevation Models for Terrain Morphology Analysis in Mountain Environments: Insights on Copernicus GLO-30 and ALOS AW3D30 for a Large Alpine Area. Available online: https://www.researchsquare.com (accessed on 10 July 2023).
  84. Yap, L.; Kandé, L.H.; Nouayou, R.; Kamguia, J.; Ngouh, N.A.; Makuate, M.B. Vertical Accuracy Evaluation of Freely Available Latest High-Resolution (30 m) Global Digital Elevation Models over Cameroon (Central Africa) with GPS/Leveling Ground Control Points. Int. J. Digit. Earth 2019, 12, 500–524. [Google Scholar] [CrossRef]
  85. Abdel-Maguid, R.H. Evaluation of Vertical Accuracy of Different Digital Elevation Models Sources for Buraydah City. Appl. Geomat. 2021, 13, 913–924. [Google Scholar] [CrossRef]
  86. Alidoost, F.; Samadzadegan, F. Statistical Evaluation of Fitting Accuracy of Global and Local Digital Elevation Models in Iran. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2013, XL-1/W3, 19–24. [Google Scholar] [CrossRef] [Green Version]
  87. Pakoksung, K.; Takagi, M. Digital Elevation Models on Accuracy Validation and Bias Correction in Vertical. Model. Earth Syst. Environ. 2016, 2, 11. [Google Scholar] [CrossRef] [Green Version]
  88. Becek, K.; Koppe, W.; Kutoğlu, Ş. Evaluation of Vertical Accuracy of the WorldDEMTM Using the Runway Method. Remote Sens. 2016, 8, 934. [Google Scholar] [CrossRef] [Green Version]
  89. Arabameri, A.; Rezaie, F.; Pal, S.C.; Cerda, A.; Saha, A.; Chakrabortty, R.; Lee, S. Modelling of Piping Collapses and Gully Headcut Landforms: Evaluating Topographic Variables from Different Types of DEM. Geosci. Front. 2021, 12, 101230. [Google Scholar] [CrossRef]
  90. Athmania, D.; Achour, H. External Validation of the ASTER GDEM2, GMTED2010 and CGIAR-CSI- SRTM v4.1 Free Access Digital Elevation Models (DEMs) in Tunisia and Algeria. Remote Sens. 2014, 6, 4600–4620. [Google Scholar] [CrossRef] [Green Version]
  91. Purinton, B.; Bookhagen, B. Validation of Digital Elevation Models (DEMs) and Comparison of Geomorphic Metrics on the Southern Central Andean Plateau. Earth Surf. Dyn. 2017, 5, 211–237. [Google Scholar] [CrossRef] [Green Version]
  92. Vassilaki, D.I.; Stamos, A.A. The 0.4 Arc-Sec Tandem-X Intermediate Dem with Respect to the Srtm and Aster Global Dems. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, XL-3/W2, 253–259. [Google Scholar] [CrossRef] [Green Version]
  93. Guan, L.; Pan, H.; Zou, S.; Hu, J.; Zhu, X.; Zhou, P. The Impact of Horizontal Errors on the Accuracy of Freely Available Digital Elevation Models (DEMs). Int. J. Remote Sens. 2020, 41, 7383–7399. [Google Scholar] [CrossRef]
  94. Wang, W.; Yang, X.; Yao, T. Evaluation of ASTER GDEM and SRTM and Their Suitability in Hydraulic Modelling of a Glacial Lake Outburst Flood in Southeast Tibet. Hydrol. Process. 2012, 26, 213–225. [Google Scholar] [CrossRef]
  95. Ochoa, C.G.; Vives, L.; Zimmermann, E.; Masson, I.; Fajardo, L.; Scioli, C. Analysis and Correction of Digital Elevation Models for Plain Areas. Photogramm. Eng. Remote Sens. 2019, 85, 209–219. [Google Scholar] [CrossRef]
  96. Carrera-Hernández, J.J. Not All DEMs Are Equal: An Evaluation of Six Globally Available 30 m Resolution DEMs with Geodetic Benchmarks and LiDAR in Mexico. Remote Sens. Environ. 2021, 261, 112474. [Google Scholar] [CrossRef]
  97. Li, P.; Shi, C.; Li, Z.; Muller, J.-P.; Drummond, J.; Li, X.; Li, T.; Li, Y.; Liu, J. Evaluation of ASTER GDEM Using GPS Benchmarks and SRTM in China. Int. J. Remote Sens. 2013, 34, 1744–1771. [Google Scholar] [CrossRef]
  98. Zhao, S.; Zhang, S.; Cheng, W. Relative Error Evaluation to Typical Open Global Dem Datasets in Shanxi Plateau of China. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, XLII–3, 2395–2399. [Google Scholar] [CrossRef] [Green Version]
  99. Han, H.; Zeng, Q.; Jiao, J. Quality Assessment of TanDEM-X DEMs, SRTM and ASTER GDEM on Selected Chinese Sites. Remote Sens. 2021, 13, 1304. [Google Scholar] [CrossRef]
  100. Shafique, M.; van der Meijde, M. Impact of Uncertainty in Remote Sensing DEMs on Topographic Amplification of Seismic Response and Vs 30. Arab. J. Geosci. 2015, 8, 2237–2245. [Google Scholar] [CrossRef]
  101. De Freitas Leal Lopes, M.; Fontenele, G.R.; Gameiro, S.; de Paula Miranda, M.; Duarte, C.R.; Souto, M.V.S. Análise Comparativa dos Lineamentos da Região da Jazida Fósforo-Uranífera de Itataia-CE Gerados Através dos MDE: SRTM+, ASTER GDEM 2 e TOPODATA. Available online: https://proceedings.science/sbsr/trabalhos/analise-comparativa-dos-lineamentos-da-regiao-da-jazida-fosforo-uranifera-de-ita?lang=pt-br (accessed on 22 February 2023).
  102. Maltsev, K.A.; Golosov, V.N.; Gafurov, A.M. Digital Terrain Models and Their Use in Calculations of Soil Flow-off Rates on Arable Land. Proc. Kazan University. Nat. Sci. Ser. 2018, 160, 514–530. [Google Scholar]
  103. Fashae, O.; Olatunbosun, J.; Olusola, A. An Assessment of Digital Elevation Model for Geospatial Studies: A Case Study of Alawa Town, Niger State, Nigeria. Ife Res. Publ. Geogr. 2017, 15, 31–51. [Google Scholar]
  104. Atwood, A.; West, A.J. Evaluation of High-resolution DEMs from Satellite Imagery for Geomorphic Applications: A Case Study Using the SETSM Algorithm. Earth Surf. Process. Landf. 2022, 47, 706–722. [Google Scholar] [CrossRef]
  105. Ariza-López, F.J.; Mora, E.G.C.; Mingorance, J.L.M.; Cai, J.; Gordo, J.F.R. DEMs: An Approach to Users and Uses from the Quality Perspective. Int. J. Spat. Data Infrastruct. Res. 2018, 13, 131–171. [Google Scholar]
  106. Hnila, P.; Elicker, J. Quality Assessment of Digital Elevation Models in a Treeless High-Mountainous Landscape: A Case Study from Mount Aragats, Armenia. Magazen 2021, 2, 5055. [Google Scholar] [CrossRef]
  107. Shawky, M.; Moussa, A.; Hassan, Q.K.; El-Sheimy, N. Pixel-Based Geometric Assessment of Channel Networks/Orders Derived from Global Spaceborne Digital Elevation Models. Remote Sens. 2019, 11, 235. [Google Scholar] [CrossRef] [Green Version]
  108. Sawai, S.; Rawat, K.; Singh, S.; Kumar, S. Statistical Investigation of Accuracy of Satellite Elevation Data: A Case Study. J. Crit. Rev. 2021, 7, 4469–4484. [Google Scholar]
  109. Zhao, S.; Qi, D.; Li, R.; Cheng, W.; Zhou, C. Performance Comparison among Typical Open Global DEM Datasets in the Fenhe River Basin of China. Eur. J. Remote Sens. 2021, 54, 145–157. [Google Scholar] [CrossRef]
  110. Casas, A.; Benito, G.; Thorndycraft, V.R.; Rico, M. The Topographic Data Source of Digital Terrain Models as a Key Element in the Accuracy of Hydraulic Flood Modelling. Earth Surf. Process. Landf. 2006, 31, 444–456. [Google Scholar] [CrossRef]
  111. Utlu, M.; Özdemir, H. How Much Spatial Resolution Do We Need to Model a Local Flood Event? Benchmark Testing Based on UAV Data from Biga River (Turkey). Arab. J. Geosci. 2020, 13, 1293. [Google Scholar] [CrossRef]
  112. Jain, A.O.; Thaker, T.P.; Misra, A.K.; Singh, A.K.; Kumari, P. Determination of Sensitivity of Drainage Morphometry towards Hydrological Response Interactions for Various Datasets. Environ. Dev. Sustain. 2021, 23, 1799–1822. [Google Scholar] [CrossRef]
  113. Kiliç, B.; Gülgen, F.; Çelen, M.; Öncel, S.; Oruç, H.; Vural, S. Morphometric Analysis of Saz-Çayırova Drainage Basin Using Geographic Information Systems and Different Digital Elevation Models. Int. J. Environ. Geoinform. 2022, 9, 177–186. [Google Scholar] [CrossRef]
  114. Kakavas, M.; Kyriou, A.; Nikolakopoulos, K.G. Assessment of Freely Available DSMs for Landslide-Rockfall Studies. In Proceedings of the Earth Resources and Environmental Remote Sensing/GIS Applications XI; Schulz, K., Nikolakopoulos, K.G., Michel, U., Eds.; SPIE: Edinburgh, UK, 2020; p. 24. [Google Scholar]
  115. Dos Santos, A.d.P.; das Graças Medeiros, N.; dos Santos, G.R.; Rodrigues, D.D. Avaliação Da Acurácia Posicional Planimétrica Em Modelos Digitais de Superfície Com o Uso de Feições Lineares. Bol. Ciênc. Geod. 2016, 22, 157–174. [Google Scholar] [CrossRef] [Green Version]
  116. Reinoso, J.F. A Priori Horizontal Displacement (HD) Estimation of Hydrological Features When Versioned DEMs Are Used. J. Hydrol. 2010, 384, 130–141. [Google Scholar] [CrossRef]
  117. Altunel, A.O. Suitability of Open-Access Elevation Models for Micro-Scale Watershed Planning. Environ. Monit. Assess. 2018, 190, 512. [Google Scholar] [CrossRef]
  118. Pike, R.J. Geomorphometry-Diversity in Quantitative Surface Analysis. Prog. Phys. Geogr. Earth Environ. 2000, 24, 1–20. [Google Scholar] [CrossRef] [Green Version]
  119. Karlson, M.; Bastviken, D.; Reese, H. Error Characteristics of Pan-Arctic Digital Elevation Models and Elevation Derivatives in Northern Sweden. Remote Sens. 2021, 13, 4653. [Google Scholar] [CrossRef]
  120. Fathy, I.; Abd-Elhamid, H.; Zelenakova, M.; Kaposztasova, D. Effect of Topographic Data Accuracy on Watershed Management. Int. J. Environ. Res. Public Health 2019, 16, 4245. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  121. Bendib, A. High-Resolution Alos Palsar for the Characterization of Water Storage at the Fountaine Des Gazelles Dam in Biskra, Eastern Algeria. J. Indian Soc. Remote Sens. 2021, 49, 1927–1938. [Google Scholar] [CrossRef]
  122. Masharif, B.; Khasanov, K. Comparison of Digital Elevation Models for Determining the Area and Volume of the Water Reservoir. Int. J. Geoinform. 2021, 17, 37–45. [Google Scholar] [CrossRef]
  123. Pakoksung, K.; Takagi, M. Assessment and Comparison of Digital Elevation Model (DEM) Products in Varying Topographic, Land Cover Regions and Its Attribute: A Case Study in Shikoku Island Japan. Model. Earth Syst. Environ. 2021, 7, 465–484. [Google Scholar] [CrossRef]
  124. Ariza-López, F.J.; Mozas-Calvache, A.T. Comparison of Four Line-Based Positional Assessment Methods by Means of Synthetic Data. Geoinformatica 2012, 16, 221–243. [Google Scholar] [CrossRef]
  125. Misra, P.; Avtar, R.; Takeuchi, W. Comparison of Digital Building Height Models Extracted from AW3D, TanDEM-X, ASTER, and SRTM Digital Surface Models over Yangon City. Remote Sens. 2018, 10, 2008. [Google Scholar] [CrossRef] [Green Version]
  126. Fiorio, P.R.; da Silva Barros, P.P.; de Oliveira, J.S.; Nanni, M.R. Estimates of Soil Loss in a GIS Environment Using Different Sources of Topographic Data. Ambiência 2016, 12, 203–216. [Google Scholar] [CrossRef]
  127. Sarma, C.P.; Dey, A.; Krishna, A.M. Influence of Digital Elevation Models on the Simulation of Rainfall-Induced Landslides in the Hillslopes of Guwahati, India. Eng. Geol. 2020, 268, 105523. [Google Scholar] [CrossRef]
  128. Bühler, Y.; Christen, M.; Kowalski, J.; Bartelt, P. Sensitivity of Snow Avalanche Simulations to Digital Elevation Model Quality and Resolution. Ann. Glaciol. 2011, 52, 72–80. [Google Scholar] [CrossRef] [Green Version]
  129. Ariza-López, F.J.; Reinoso-Gordo, J.F.; Nero, M.A. Proposal for a Collaborative Data Infrastructure for Control of DEMs. In Proceedings of the Geomorphometry 2023, Iasi, Romania, 10–14 July 2023; Available online: https://zenodo.org/record/7871959 (accessed on 23 July 2023).
Figure 1. Evolution of the number of references in the corpus over time.
Figure 2. Word cloud generated from the titles and abstracts of the corpus.
Figure 3. Dendrogram corresponding to the cluster analysis. The colored lines indicate the main groupings.
Table 1. GDEMs considered in this study.
Group | Data Set | Coverage | Acquisition Years | Resolution (m) | Vertical Accuracy | Datum (Planimetric/Vertical) | References
Free global DEMs | ALOS AW3D30 | 82° S–82° N | 2006–2011 | 30 | 4.4 m (RMSE) | WGS84/EGM96 | [11,19]
 | ASTER | 83° S–83° N | 2009–2019 | 30 | 12.64 m (RMSE) | WGS84/EGM96 | [5,6,20]
 | SRTM | 56° S–60° N | 2000 | 90 | 5.6–9 m (90% LE) | WGS84/EGM96 | [4,21]
 | | | | 30 | 11.5 m (RMSE) | | [22]
 | TanDEM-X (1) | Entire Earth | 2010–2015 | 12 | 3.49 m (90% LE) | | [7]
 | | | | 30 | 10 m (90% LE) | |
 | | | | 90 | 10 m (90% LE) | |
 | Copernicus (2) | Entire Earth | 2011–2015 | 30 | 4 m (90% LE) | | [23]
 | | | | 90 | 4 m (90% LE) | |
 | FABDEM | 60° S–80° N | | 30 | 1.12–2.88 m (MAE) | | [14]
Error-reduced versions of SRTM | EarthEnv | 60° S–83° N | | 90 | 4.13 m (RMSE) | WGS84 | [24]
 | NASADEM | | | 30 | 6.4–12.08 m (RMSE) | WGS84/EGM96 | [10,25]
 | MERIT | 60° S–90° N | | 90 | 5 m (LE90) | EGM96 | [13]
Commercial GDEM | WorldDEM | Entire Earth | | | | | [7]
 | | | 2010–2015 | 12 | 4 m (90% LE) | | [26]
 | | | 2017–2021 | 5 | 2.5 m (90% LE) | | [23]
Notes: (1) The 12 m and 30 m global TanDEM-X DEMs are available free of charge for proposed uses that are submitted for evaluation. (2) The abbreviation COP is used in this document instead of Copernicus.
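The vertical accuracy column of Table 1 mixes three different measures: RMSE, mean absolute error (MAE), and 90% linear error (LE90), which are not directly comparable. As a minimal, self-contained sketch (using synthetic elevation errors, not data from any of the cited assessments), the following Python computes all three from a set of signed checkpoint errors:

```python
import math
import random

def accuracy_metrics(errors):
    """Summarize signed vertical errors (checkpoint minus GDEM elevation, in
    metres) with the three measures that appear in Table 1."""
    n = len(errors)
    # Root Mean Square Error: quadratic mean of the signed errors
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    # Mean Absolute Error: average magnitude of the errors
    mae = sum(abs(e) for e in errors) / n
    # 90% Linear Error: magnitude not exceeded by 90% of the checkpoints
    abs_sorted = sorted(abs(e) for e in errors)
    le90 = abs_sorted[int(0.90 * n) - 1]
    return rmse, mae, le90

# Illustrative synthetic errors: zero-mean Gaussian with a 3 m standard deviation
random.seed(42)
errors = [random.gauss(0.0, 3.0) for _ in range(20000)]
rmse, mae, le90 = accuracy_metrics(errors)
```

For zero-mean Gaussian errors, MAE ≈ 0.80σ, RMSE = σ, and LE90 ≈ 1.645σ, which is why a given GDEM's LE90 figure is normally the largest of the three and its MAE the smallest.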
Table 2. Journals contributing at least five references (N) to this study.
Journal Title | N
Remote Sensing | 22
Geocarto International | 12
International Journal of Remote Sensing | 9
Remote Sensing of Environment | 8
Arabian Journal of Geosciences | 6
Journal of Hydrology | 6
Environmental Earth Sciences | 5
IEEE International Geoscience and Remote Sensing Symposium (IGARSS) | 5
ISPRS International Journal of Geo-Information | 5
Remote Sensing Letters | 5
Revista Brasileira de Geografia Física | 5
Table 3. Count of cases for each GDEM and year in the corpus (sorted by total cases).
Year200420062007200820092010201120122013201420152016201720182019202020212022Total
ASTER1 1125467161921142437364015249
SRTM-30 1 142879112133414617201
SRTM-90 1115414101317611241498129
AWD3D30 4121318241081
ALOS-PRISM 1 12124171826880
WorldDEM 1245823 25
TANDEM-X90 1 795224
TANDEM-X30 1 113105223
NASADEM 1 1211318
MERIT 1 1444418
GMTED2010 22221 2112
GTOPO-30 12321 1 10
COP30 1 438
EarthEnv-DEM90 1 1111 5
GLOBE 111 3
COP90 11 2
ETOPO01 1 1 2
FABDEM 11
Total/year1 2241091116385057498414915618174891
Table 4. Crossing of GDEMs used together.
SRTM-30SRTM-90NASADEMASTERALOS-PRISMGTOPO-30GLOBEETOPO01AWD3D30MERITTANDEM-X30TANDEM-X90WorldDEMCOP90COP30FABDEMGMTED2010EarthEnv-DEM90
SRTM-30201571416868631651220191416 73
SRTM-90 12921122382127891410 1 73
NASADEM 18136 8413115 1
ASTER 24964921661317132117 84
ALOS-PRISM 80 9387514 21
GTOPO-30 10312 52
GLOBE 312 32
ETOPO01 2 1
AWD3D30 8110799 3 22
MERIT 18542111 1
TANDEM-X30 2325 21
TANDEM-X90 24311
WorldDEM 251
COP90 2
COP30 8
FABDEM 1
GMTED2010 123
EarthEnv-DEM90 5
Table 5. Times each GDEM has been compared with other GDEMs (ordered by N).
| GDEM | N |
|---|---|
| ASTER | 591 |
| SRTM-30 | 457 |
| SRTM-90 | 321 |
| AWD3D30 | 242 |
| ALOS-PRISM | 223 |
| TANDEM-X30 | 82 |
| TANDEM-X90 | 82 |
| WorldDEM | 73 |
| MERIT | 68 |
| NASADEM | 64 |
| GMTED2010 | 43 |
| GTOPO-30 | 38 |
| COP30 | 32 |
| EarthEnv-DEM90 | 22 |
| GLOBE | 18 |
| COP90 | 7 |
| ETOPO01 | 6 |
| FABDEM | 2 |
Table 6. Distribution of cases (N) per country (ordered by N).
| Country | N | Country | N | Country | N |
|---|---|---|---|---|---|
| IN | 53 | ID | 4 | DE | 1 |
| CN | 36 | NP | 4 | EC | 1 |
| BR | 25 | PL | 4 | EE | 1 |
| TR | 12 | SP | 4 | ET | 1 |
| US | 12 | BD | 3 | HR | 1 |
| IR | 10 | CA | 3 | HT | 1 |
| IT | 9 | CH | 3 | HU | 1 |
| SA | 9 | DZ | 3 | KR | 1 |
| EG | 8 | JP | 3 | LB | 1 |
| GR | 8 | KG | 3 | ML | 1 |
| NG | 7 | MX | 3 | MM | 1 |
| RU | 7 | NZ | 3 | MZ | 1 |
| IQ | 6 | TN | 3 | NE | 1 |
| MY | 6 | UA | 3 | PK | 1 |
| PE | 5 | CO | 2 | PU | 1 |
| UZ | 5 | ES | 2 | RO | 1 |
| AQ | 4 | JO | 2 | SE | 1 |
| AR | 4 | MA | 2 | SI | 1 |
| AU | 4 | NO | 2 | SK | 1 |
| BO | 4 | VN | 2 | SV | 1 |
| BT | 4 | AM | 1 | UK | 1 |
| CL | 4 | AT | 1 | ZA | 1 |
| FR | 4 | CM | 1 | World | 9 |
Table 7. Count and percentage of cases for each reference category.
| Reference Category | N | % |
|---|---|---|
| GCP-GNSS | 107 | 30 |
| Official-DEM | 72 | 20 |
| Other—Raster | 36 | 10 |
| Functional Ref. Data | 31 | 9 |
| Geodetic Benchmarks | 29 | 8 |
| ICEsat | 28 | 8 |
| LiDAR Cloud | 24 | 7 |
| 3D Lines–Profiles | 10 | 3 |
| Other Elevation | 9 | 3 |
| Planimetric Features | 6 | 2 |
| None | 6 | 2 |
Table 8. Crossing of references used together.
GCP-GNSSOfficial-DEMOther—RasterFunctional Ref. DataGeodetic BenchmarksICEsatLiDAR Cloud3D Lines–ProfilesOther—ElevationPlanimetric FeaturesNone
GCP-GNSS1078628941
Official-DEM 72116 12 1
Other—Raster 361 1 11
Functional Ref. Data 3111
Geodetic Benchmarks 29
ICEsat 28 51
LiDAR Cloud 24
3D Lines–Profiles 10
Other—elevation 9
Planimetric features 6
None 6
Table 9. Basic statistical figures for the area of analysis per paper [km²].

| Mean | Mode | Min | Max | Standard Deviation |
|---|---|---|---|---|
| 215 × 10³ | 100.00 | 0.20 | 8.00 × 10⁶ | 1.08 × 10⁶ |

| 5% percentile | 25% percentile | 50% percentile | 75% percentile | 95% percentile |
|---|---|---|---|---|
| 3.40 | 114.50 | 997.00 | 8.62 × 10³ | 622.22 × 10³ |
Table 10. Criteria (theme + metric) used in the comparisons involving raw data.
| Theme | Metrics |
|---|---|
| Elevation | EL_CorCoefR, EL_IQR, EL_LE90, EL_LE95, EL_MAD, EL_MAE, EL_NMAD, EL_Range, EL_RMSE, EL_STD |
| Slope | SL_CorCoefR, SL_IQR, SL_LE90, SL_LE95, SL_Length, SL_MAD, SL_MAE, SL_Other, SL_Range, SL_RMSE, SL_STD |
| Aspect | AS_MAE, AS_Other, AS_Range, AS_STD |
| Horizontal | HZ_CE90, HZ_Range, HZ_RMSE, HZ_STD |

CorCoefR = linear correlation coefficient; IQR = interquartile range; LE90 = linear error at 90% confidence; LE95 = linear error at 95% confidence; CE90 = circular error at 90% confidence; Length = length; MAD = mean absolute deviation; MAE = mean absolute error; NMAD = normalized median absolute deviation; Range = range of values; RMSE = root mean squared error; STD = standard deviation; Other = some other option.
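To make these definitions concrete, the following minimal Python sketch (our own illustration, not code from any of the reviewed papers) computes the elevation variants of the most common metrics from a sample of signed elevation errors; the function name and the nearest-rank rule used for LE90 are assumptions of this sketch:

```python
import math
from statistics import median

def elevation_error_metrics(errors):
    """Common GDEM elevation-error metrics (codes as in Table 10).

    `errors` holds signed elevation discrepancies (GDEM minus reference), in meters.
    """
    n = len(errors)
    mean = sum(errors) / n
    rmse = math.sqrt(sum(e * e for e in errors) / n)            # EL_RMSE
    std = math.sqrt(sum((e - mean) ** 2 for e in errors) / n)   # EL_STD (population form)
    mae = sum(abs(e) for e in errors) / n                       # EL_MAE
    med = median(errors)
    nmad = 1.4826 * median(abs(e - med) for e in errors)        # EL_NMAD
    abs_err = sorted(abs(e) for e in errors)
    le90 = abs_err[math.ceil(0.90 * n) - 1]                     # EL_LE90 (nearest-rank)
    return {"EL_RMSE": rmse, "EL_STD": std, "EL_MAE": mae,
            "EL_NMAD": nmad, "EL_LE90": le90}
```

The slope (SL_*) and aspect (AS_*) variants apply the same formulas to slope and aspect discrepancies, with aspect differences typically wrapped to [−180°, 180°] before aggregation.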
Table 11. Criteria applied and explanation of metrics.
| Theme | Explanation of the Used Metrics |
|---|---|
| Contourlines | Metrics derived from the use of contour lines as a means of analyzing the compared topography of two DEM data sets (e.g., horizontal displacements). |
| GeologicalLineaments | Metrics derived from geological lineament analysis (e.g., density, length, etc.), as well as horizontal displacements. |
| InundationAreas | Metrics derived from the analysis of inundation areas (e.g., inundated area and horizontal displacement of the inundation area border). |
| Landslide | Metrics derived from analyses based on the occurrence of landslides (e.g., count, density, length, etc.). |
| LinearFeatures | Metrics derived from analyses based on the presence of linear features (e.g., count, length, horizontal displacement, etc.). |
| Geomorphometry | Metrics derived from any kind of geomorphometric analysis, mainly on a basin basis (e.g., basin area, basin perimeter, etc.). |
| Orthorectification | Criteria related to the quality of derived orthorectified data. |
| Profiles | Metrics derived from any type of profile (e.g., straight profiles, profiles along watercourses, etc.). |
| Registration | Metrics derived after minimizing, by means of a horizontal shift, the discrepancies with respect to another DEM used as reference. |
| SpuriousPits | Metrics derived from the presence/absence of spurious pits in DEM databases (e.g., density, commissions, omissions, etc.). |
| TerrainRoughnessIndex | Metrics derived from the use of indexes related to terrain roughness (e.g., surface slope, curvature, topographic roughness index, etc.). |
| TWI | Metrics derived from the use of the topographic wetness index (TWI) or similar. |
| Visual | Metrics or subjective qualifications derived from any type of visual analysis (e.g., a visual inspection flight over the DEM, shading, etc.). |
| FunQuality | Metrics derived from the performance of the DEM data in certain applications based on models, simulations, and so on, and not covered by the three cases below. |
| Hydrology | A case of FunQuality centered on hydrology. Metrics derived from hydrological models and applications (e.g., water flow, water height, maximum discharge, etc.). |
| NSENashSutcliffe | A particular case of FunQuality centered on hydrology. Metrics based on the application of the Nash–Sutcliffe model efficiency coefficient (NSE) to assess the predictive skill of hydrological models. |
| USLE/RUSLE | A case of FunQuality centered on soil erosion processes. Metrics derived from the study of soil erosion by means of the universal soil loss equation (USLE) or any of its variants (e.g., soil loss). |
| Other | Any other option different from the above. |
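As an illustration of the NSENashSutcliffe criterion, the Nash–Sutcliffe efficiency compares a simulated series against observations as NSE = 1 − Σ(Oᵢ − Sᵢ)² / Σ(Oᵢ − Ō)². A minimal Python sketch (the function name is ours, not from any of the reviewed papers):

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe model efficiency: 1.0 is a perfect fit; 0.0 means the
    model predicts no better than the mean of the observations."""
    mean_obs = sum(observed) / len(observed)
    residual = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    variance = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - residual / variance
```

NSE ranges from −∞ to 1; values above 0 indicate the simulation outperforms simply using the observed mean.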
Table 12. Criteria with at least 10 instances in the corpus.
| General Criteria | N | Applied Criteria | N |
|---|---|---|---|
| EL_RMSE | 197 | Morphometry | 53 |
| EL_STD | 121 | FunQuality | 47 |
| EL_Range | 94 | Other | 26 |
| EL_CorCoeffR | 52 | Hydrology | 18 |
| EL_MAE | 50 | Profiles | 18 |
| EL_NMAD | 22 | Registration | 15 |
| EL_LE90 | 19 | Visual | 13 |
| EL_IQR | 11 | Landslide | 10 |
| EL_MAD | 10 | | |
Table 13. Crossing of criteria used jointly.
EL_RMSEEL_STDEL_RangeMorphometryEL_CorreCoeff-REL_MAEFunQualityOtherEL_NMADEL_LE90HydrologyProfilesRegistrationVisualEL_IQREL_MADLandslide
EL_RMSE197977923434712201816613109691
EL_STD 12112421 132 1313 2
EL_Range 94 11 1 11
Morphometry 531 11 2
EL_CorreCoeff-R 52 1 1 1
EL_MAE 50
FunQuality 47 5 2
Other 26
EL_NMAD 221
EL_LE90 19
Hydrology 18
Profiles 181
Registration 15
Visual 13
EL_IQR 111
EL_MAD 10
Landslide 10
Table 14. Basic statistical figures for the EL_RMSE criterion [m].
| GDEM | N | Mean | Median | 5% Percentile | 95% Percentile | Min | Max | σ |
|---|---|---|---|---|---|---|---|---|
| MERIT | 49 | 4.64 | 2.62 | 1.49 | 13.12 | 1.21 | 17.3 | 4.26 |
| WorldDEM | 54 | 4.08 | 3.30 | 0.79 | 8.26 | 0.47 | 35.90 | 4.95 |
| NASADEM | 35 | 6.36 | 5.25 | 2.04 | 12.36 | 1.77 | 13.26 | 3.48 |
| TANDEMX90 | 69 | 8.28 | 5.37 | 1.06 | 24.55 | 0.50 | 49.00 | 9.59 |
| TANDEMX30 | 17 | 8.56 | 5.51 | 0.93 | 22.70 | 0.64 | 37.36 | 9.13 |
| AWD3D30 | 101 | 7.71 | 5.78 | 1.48 | 16.84 | 1.10 | 61.60 | 9.32 |
| SRTM30 | 214 | 8.66 | 5.79 | 2.07 | 19.10 | 0.37 | 186.65 | 14.26 |
| ALOSPRISM | 66 | 8.80 | 6.55 | 1.18 | 27.25 | 0.40 | 40.30 | 8.35 |
| SRTM90 | 188 | 12.01 | 8.81 | 2.11 | 32.53 | 0.83 | 88.80 | 11.71 |
| ASTER | 270 | 13.06 | 9.68 | 4.01 | 27.84 | 0.93 | 137.65 | 12.15 |