1. Introduction
Glaciers are considered among the most sensitive indicators of climate change, and their motion reflects regional changes in temperature, precipitation, and atmospheric circulation patterns [
1,
2,
3]. The study of ice flow velocity [
4], particularly for debris-covered glaciers (DCGs), is fundamental for determining glacier mass balance, quantifying the contribution of glaciers to river basins, and predicting associated hazards such as glacier lake outburst floods (GLOFs) [
5,
6,
7,
8]. In recent decades, satellite-based remote sensing (RS) has emerged as the main tool for tracking glacier movement at regional and even global scales, owing to its wide spatial coverage and growing temporal resolution [
9,
10,
11].
The classical approach involves computing glacier velocity using feature-tracking or correlation-based techniques applied to optical imagery or Interferometric Synthetic Aperture Radar (InSAR) techniques applied to radar data [
12]. These techniques have facilitated spectacular advances in measuring ice motion, but are frequently constrained by temporal decorrelation, low image contrast, or debris cover [
13,
14]. The rapid increase in satellite data availability, driven by missions such as Sentinel-1, Sentinel-2, Landsat-8, and ALOS-PALSAR, has stimulated the development of new velocity retrieval methodologies. Most recently, data-driven approaches to glacier tracking based on deep learning (DL) and optical flow have emerged, offering greater robustness and sub-pixel accuracy [
15,
16,
17,
18,
19,
20]. Despite these innovations and the increased sophistication of methods, methodological inconsistencies and biases persist in certain regions, particularly over debris-covered or high-relief terrain [
21,
22,
23].
To fill these gaps, this paper presents a comprehensive systematic review of methodological advances in glacier-velocity retrieval, with a primary focus on their applicability to DCGs. Our review synthesizes existing methodologies, from conventional correlation-based methods to new AI-assisted and hybrid approaches, by analyzing data sources, accuracy assessment, and application trends. The study analyzes publications from 1992 to 2025 following the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) framework [
24] to identify the major research areas. This integrated perspective fills a critical gap by comparing methodological performance, uncertainties, multi-sensor fusion strategies, and emerging AI-driven trends for debris-covered glaciers, a systematic comparison lacking in prior, broader reviews.
Finally, the paper discusses emerging trends and future directions, such as GeoAI, self-supervised learning, and digital twin models, which promise to transform glacier-velocity retrieval into a more intelligent, automated, and physically informed science. The three main objectives of this review are to (1) describe the history of the methodological evolution of glacier-velocity retrieval, (2) critically compare the performance and limitations of current methods, particularly for DCGs, and (3) identify key research gaps and outline emerging pathways.
2. Materials and Methods
This research employs a thorough systematic review to synthesize methodological progress and identify persistent issues, with a focus on DCGs. This systematic review was conducted following the PRISMA 2020 statement [
24,
25] (available at
http://www.prisma-statement.org/. Accessed on 20 November 2025). The identification, screening, inclusion, and exclusion process is outlined in the PRISMA flow diagram (
Figure 1). To ensure comprehensive coverage, the literature search was performed in five major electronic databases related to geoscience and RS: Elsevier (SCOPUS/ScienceDirect), SpringerLink, IEEE Xplore, MDPI, and the Copernicus Publications database (accessed on 5 October 2025). This multi-database approach is well-established for systematic reviews in environmental RS [
26,
27,
28] and mitigates the coverage limitations inherent in individual platforms.
2.1. Inclusion and Exclusion Criteria
The inclusion and exclusion criteria were predefined to ensure a systematic and consistent selection of relevant studies for this review. We considered peer-reviewed journal articles and conference proceedings published between 1992 and 2025 that focus on glacier-velocity retrieval using remote sensing techniques, with a primary emphasis on debris-covered glaciers and, where relevant, clean-ice glaciers for comparative purposes. Only studies published in English were included to ensure methodological consistency and accessibility of the reported results. Eligible studies were required to employ empirical remote sensing data, including optical, SAR, InSAR, DEM, or UAV-based approaches, with artificial intelligence and machine learning methods considered when applicable. Studies relying solely on field measurements or GPS observations without integration of remote sensing data were excluded.
Furthermore, conceptual or purely modeling studies lacking empirical remote sensing datasets were not considered. To enable meaningful methodological comparison, only studies providing sufficient methodological detail and quantitative results for assessing velocity retrieval performance and associated uncertainties were retained in the final corpus.
2.2. Data Extraction and Analysis Framework
Data from the included studies were extracted using a standardized protocol to guarantee consistency and enable comparisons across studies. Data extraction was performed independently by two reviewers using a pre-piloted data extraction form. The extracted variables were then cross-verified, and any discrepancies were resolved through consensus discussion. In the few cases where consensus could not be reached, a third reviewer (J.A.) was consulted to make a final decision. Given the methodological nature of this review, the primary “effect measures” of interest were methodological performance metrics, including reported uncertainties, spatial/temporal resolution, validation approaches, and applicability to debris-covered glaciers. We assessed potential reporting bias by examining whether studies reported both successful and unsuccessful methodological applications, and the certainty of evidence was evaluated through the consistency of findings across multiple studies and validation rigor.
This systematic review was prospectively registered on the Open Science Framework (OSF) on 20 November 2025. The registration was completed following preliminary screening and search strategy optimization initiated in August 2025, ensuring methodological transparency during the final synthesis phase. The registered protocol, along with all associated materials, including the PRISMA checklist, data extraction forms, and analytical datasets, is available in the OSF repository:
https://osf.io/dhxpq/overview (accessed on 21 November 2025).
Table 1 (the extraction fields) was designed to capture three key aspects of the research: (1) methodological characteristics, i.e., the primary RS method and the AI/ML algorithms used; (2) the geographical distribution of the glaciers and regions under study; and (3) quantitative performance measures, especially the reported velocity uncertainties and validation methods. This structured data collection forms the core dataset on which the subsequent systematic analysis and methodological synthesis are based.
The variables extracted in Table 1 provided a consistent cross-study framework for comparison. The records were coded and normalized to enable comparison across the different methodological approaches under analysis.
2.3. PRISMA Flow Diagram
The process for identifying and selecting relevant studies is summarized in the PRISMA flow diagram (
Figure 1). Our initial search across five electronic databases (Elsevier, Springer, Copernicus Group, MDPI, and IEEE) using the key terms “debris-covered glacier”, “debris covered glacier”, “velocity”, “flow”, “dynamics”, and “remote sensing” yielded 650 records. The screening and selection process was conducted by two independent reviewers (N.N. and A.S.) to minimize bias. Any discrepancies in study inclusion were resolved through discussion and, when necessary, consensus with a third reviewer (J.A.). After removing 95 duplicate records, 555 unique studies remained for screening. The screening process involved two phases. First, the titles and abstracts of these 555 records were assessed against the inclusion criteria, resulting in the exclusion of 372 studies. The full texts of the remaining 183 studies were then assessed for eligibility. This step resulted in the exclusion of 62 studies for several reasons (e.g., not focused on debris-covered glaciers, no RS techniques applied, not primary research). The ‘Other reasons’ category comprised full-text articles that were inaccessible despite exhaustive searches through library subscriptions, inter-library loan, and direct author requests (
n = 3); studies focusing on debris of a non-glacial origin (e.g., rock glaciers, landslide debris) (
n = 2); and one study exclusively dedicated to the dynamics of a surging glacier, which fell outside the scope of our review on sustained glacier flow (
n = 1).
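The record counts in the screening sequence above can be cross-checked with a few lines of arithmetic (purely illustrative; the variable names are ours, and the counts are those reported in Figure 1):

```python
# Record counts reported in the PRISMA flow diagram (Figure 1)
identified = 650                # records retrieved from the five databases
duplicates = 95
screened = identified - duplicates              # unique records screened
excluded_title_abstract = 372
full_text_assessed = screened - excluded_title_abstract
excluded_full_text = 62
final_corpus = full_text_assessed - excluded_full_text

print(screened, full_text_assessed, final_corpus)  # -> 555 183 121
```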
Additionally, study quality was assessed using criteria adapted for methodological reviews, focusing on methodological transparency, validation rigor, uncertainty reporting, and reproducibility. Each study was independently evaluated by two reviewers, with disagreements resolved through consensus. Studies were not excluded based on quality assessment, but quality considerations informed the synthesis and interpretation of findings.
The final corpus of 121 studies represents the complete landscape of DCG velocity monitoring research. Within this corpus, we identified a distinct subset of 25 studies that specifically implement AI/ML methodologies, all published from 2018 onward.
3. Data Sources for Glacier-Velocity Retrieval
The precise evaluation of glacier motion depends on multiple RS data sources, each with particular strengths and constraints regarding regional coverage, temporal sampling, and sensitivity to surface conditions. The choice of data strongly affects the achievable precision, the applicability to particular glacier types, and the quality of DCG monitoring in its complex, dynamic environment. The following subsections give a comprehensive description of the most frequently used data sources, encompassing optical, SAR, and topographic data based on DEMs and UAVs.
3.1. Optical Data for Velocity Retrieval
Optical satellite imagery has been a cornerstone of glacier-velocity retrieval over the last 50 years. It is particularly valuable because of the long-term continuity provided by successive missions, which is indispensable for studying the history of glacier movement. Three satellite programs are the primary sources of optical data: Landsat, Sentinel-2, and ASTER [
29].
The oldest is the Landsat program (operating since 1972), the principal initiative for long-term cryosphere observation; its 30 m spatial resolution and 16-day repeat cycle support multi-decadal analysis of glacier flow in various regions, including the Himalayas, Andes, and Arctic. Landsat-8 and Landsat-9, equipped with the Operational Land Imager (OLI), offer enhanced radiometric stability and co-registration accuracy, minimizing the geometric distortions that have historically constrained feature-tracking techniques [
30,
31].
Next is ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer), which offers along-track stereo pairs, so that elevation change can be estimated simultaneously with surface velocity. ASTER’s 15–30 m spatial resolution is useful for moderate-scale motion mapping in rugged, complex topography [
15].
The most recent is Sentinel-2 (launched in 2015), which provides higher resolution (10–20 m) and a shorter revisit period (~5 days), allowing detection of short-term velocity variations, surge-type glacier movements, and seasonal flow accelerations. Sentinel-2’s MSI sensor has 13 spectral bands, enabling discrimination of snow and ice and better feature visibility for optical flow algorithms [
32,
33].
A summary of the main optical sensors, their spatial and temporal resolutions, and typical applications in glacier velocity retrieval is provided in
Table 2.
The utility of optical data for DCGs is primarily constrained by (1) low and heterogeneous surface contrast of the debris mantle, which reduces feature-tracking performance; (2) persistent cloud cover and topographic shadows, limiting data availability; and (3) temporal decorrelation due to surface melt and debris redistribution. These specific challenges have been a key driver for the methodological advances discussed in
Section 4. The need to overcome low contrast motivated the shift from sparse feature tracking to dense optical flow techniques [
34,
35]. The problem of data gaps accelerated the development of multi-sensor fusion frameworks that integrate weather-independent SAR data [
33,
36]. Finally, the complexity of motion estimation on such surfaces catalyzed the adoption of deep learning models capable of learning robust displacement patterns directly from imagery [
37,
38]. Thus, while optical archives remain invaluable, their application to DCGs increasingly occurs within hybrid or AI-augmented paradigms specifically designed to mitigate these limitations.
The persistent limitations of optical data underscore the necessity for the advanced and hybrid methodologies reviewed in the following sections. The complementary strengths of SAR data, which are discussed next, are essential for overcoming these optical constraints.
3.2. SAR/InSAR Data for Velocity Retrieval
SAR is another important data source for glacier-velocity retrieval, especially in regions with persistent cloud cover, long winters, or complex topography, where optical imagery is of limited use. SAR systems overcome these limitations of optical sensors by providing active microwave measurements independent of sunlight and weather. SAR can therefore retrieve velocity even under cloud cover, during the polar night, and in regions of high precipitation [
39].
Among currently available SAR datasets for glacier-velocity estimation, the Sentinel-1 mission (C-band, 5.4 GHz) is the most widely used due to its global coverage, free accessibility, 6–12 day revisit time, and dual polarization (VV, VH). The Interferometric Wide Swath (IW) mode of Sentinel-1 acquires data with a spatial resolution of approximately 5 m in range and 20 m in azimuth. This mode enables large-scale monitoring of ice movement, and the data are often processed into velocity products with a spatial resolution of 20–40 m for practical glaciological applications. Moreover, its systematic acquisition policy guarantees the temporal consistency needed for time-series analysis [
40].
Another important SAR mission is Envisat ASAR (C-band, 5.3 GHz), which offered high radiometric stability, multiple beam modes, and reliable coherence over steep terrain, enabling offset tracking and InSAR-based deformation mapping. In addition, C-band SAR missions such as RADARSAT-1 and RADARSAT-2 have been widely used for glacier-velocity retrieval, providing long-term observations that support offset-tracking and interferometric analyses.
ALOS-PALSAR (L-band, 1.27 GHz) penetrates more deeply through snow and vegetation and maintains coherence over longer time spans. Earlier Japanese L-band missions, such as JERS-1, enabled some of the earliest observations of polar ice deformation [
41]. These characteristics make L-band SAR particularly suitable for velocity retrieval over debris-covered glaciers, where longer wavelengths help maintain coherence across heterogeneous and dynamically evolving surfaces.
TerraSAR-X and TanDEM-X (X-band, 9.6 GHz) provide finer spatial resolution for highly accurate deformation mapping. The bands are complementary: while L-band missions maintain coherence across long temporal baselines, X-band missions can detect small displacements in high-relief topography [
42]. For detailed analysis of fast-flowing glaciers, the X-band COSMO-SkyMed constellation is frequently employed because it delivers high-resolution time series [
43].
From a methodological perspective, InSAR methods make use of the phase difference between repeat SAR acquisitions to quantify line-of-sight displacement, with sub-centimeter precision in ideal circumstances. However, temporal decorrelation, atmospheric phase delay, and topographic distortions often degrade coherence in steep glacierized terrain. To address these limitations, amplitude-based approaches such as offset tracking and speckle tracking methods have been used. These amplitude-based methods are less sensitive to decorrelation and can capture large displacements, though at coarser spatial resolutions. Combining InSAR and offset tracking results can provide more robust motion detection on various types of surfaces, such as debris-covered and clean-ice regions.
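As an illustration of the phase-to-displacement principle described above, the line-of-sight conversion for a C-band sensor such as Sentinel-1 can be sketched in a few lines (a simplified example; the function names are ours, and sign conventions vary between processors):

```python
import numpy as np

WAVELENGTH = 0.0555  # Sentinel-1 C-band wavelength in metres (~5.405 GHz)

def los_displacement(delta_phase_rad):
    """Convert unwrapped interferometric phase (radians) to
    line-of-sight displacement (metres): d = -lambda * phi / (4 * pi)."""
    return -WAVELENGTH * delta_phase_rad / (4.0 * np.pi)

def los_velocity(delta_phase_rad, dt_days):
    """Annualized line-of-sight velocity (m/yr) for a repeat-pass pair."""
    return los_displacement(delta_phase_rad) * 365.25 / dt_days

# One full fringe (2*pi radians) over a 12-day Sentinel-1 repeat cycle
d = los_displacement(2 * np.pi)          # ~ -0.028 m (half a wavelength)
v = los_velocity(2 * np.pi, dt_days=12)  # annualized equivalent
```

One interferometric fringe thus corresponds to roughly 2.8 cm of line-of-sight motion per repeat cycle, which illustrates why phase-based InSAR resolves only relatively slow flows before unwrapping becomes ambiguous.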
An overview of the major SAR sensors and their key characteristics relevant to glacier velocity estimation is presented in
Table 3.
The upcoming NASA-ISRO Synthetic Aperture Radar (NISAR) mission will provide L- and S-band data with a high temporal frequency, which is expected to revolutionize near-real-time glacier-velocity monitoring by combining deep temporal stacks with consistent radiometric calibration.
3.3. UAV and DEM Data for Velocity Retrieval
Digital elevation models (DEMs) and UAV-based photogrammetry are topographic datasets that offer crucial supplementary data for precise glacier-velocity retrieval. They support the conversion of two-dimensional image displacements into three-dimensional motion vectors, terrain correction, and orthorectification [
44].
Global DEMs such as SRTM (30 m) and ASTER GDEM (30 m) remain the most popular elevation sources for topographic correction and co-registration. Several studies, however, are gradually shifting to newer, more geometrically accurate elevation products, such as ALOS AW3D30, NASADEM, and the Copernicus GLO-30 DEM, which provide cleaner elevation surfaces and fewer artifacts in steep glaciated terrain. Regional DEMs such as ArcticDEM, REMA, and the HMA DEM represent glacier surfaces far better in polar and high-mountain conditions. Meanwhile, TanDEM-X is popular for precise orthorectification of glacier imagery owing to its high vertical accuracy and global coverage [
45,
46,
47].
Alongside velocity datasets for measuring glacier dynamics, differencing two consecutive DEMs makes it possible to determine vertical ice loss and mass balance [
48,
49].
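The DEM-differencing principle can be sketched in a few lines (a simplified illustration with synthetic arrays; the 850 kg m−3 volume-to-mass conversion factor is a commonly used assumption, not a value taken from the cited studies):

```python
import numpy as np

def geodetic_mass_balance(dem_t1, dem_t2, dt_years, density=850.0):
    """Mean specific mass balance (m w.e. / yr) from two co-registered DEMs.

    dem_t1, dem_t2 : elevation grids (m) at the start/end of the period
    density        : volume-to-mass conversion (kg m^-3), often ~850
    """
    dh = dem_t2 - dem_t1          # per-pixel elevation change (m)
    mean_dh = np.nanmean(dh)      # NaNs mark data voids in either DEM
    # Convert ice-volume change to water equivalent (water: 1000 kg m^-3)
    return mean_dh * density / 1000.0 / dt_years

# Synthetic example: uniform 10 m of thinning over 10 years
dem1 = np.full((100, 100), 5000.0)
dem2 = dem1 - 10.0
b = geodetic_mass_balance(dem1, dem2, dt_years=10.0)
# b ≈ -0.85 m w.e. per year
```

In practice, both DEMs must first be co-registered over stable off-glacier terrain, and void pixels are masked as NaN before averaging.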
UAVs are campaign-specific and non-systematic in nature. They constitute another data source, with high-resolution RGB or multispectral cameras serving as practical tools for fine-scale glacier mapping. UAV imagery can be used to create centimeter-resolution orthophotos and DEMs through Structure-from-Motion (SfM) photogrammetry, which in turn provide local validation datasets for satellite-based velocities. UAV campaigns can capture short-term processes, from daily to weekly, and resolve micro-processes such as crevasse formation or the movement of mobile moraine [
50,
51].
The key characteristics of UAV platforms and DEM products used in glacier velocity studies are summarized in
Table 4.
Despite their accuracy, DEM and UAV products have shortcomings. UAV operations can be constrained by weather conditions, battery life, and flight capacity at high altitudes, while DEMs often contain voids or vertical inaccuracies on steep slopes due to radar shadowing or stereo misalignment. Nevertheless, coupling satellite optical/radar data with UAV-collected DEMs greatly enhances accuracy and enables the ground-truth validation necessary to calibrate algorithms.
The diversity of RS datasets has enabled the steady enhancement of glacier-velocity retrieval methods. As data volume, spatial resolution, and temporal frequency have grown, methodological approaches have shifted from manual image correlation toward automated, data-driven frameworks. These methodological developments are described in the following section, highlighting the role of algorithmic progress and sensor integration in shaping the current state of glacier-velocity studies.
4. Methodological Developments in Velocity Retrieval
The last three decades have seen glacier-velocity retrieval procedures evolve in step with large-scale technological and conceptual revolutions in RS. As the reviewed papers demonstrate, what was originally a manual correlation of optical images has grown into complex semi-automated systems that combine radar interferometry, optical flow modeling, DL, and hybrid fusion schemes. This section summarizes the key methodological paths, their performance properties, trade-offs, and suitability for challenging environments such as DCGs. The general sequence of the methodology is presented in
Figure 2.
4.1. From Feature Tracking to Automated Image Correlation
For optical satellite imagery, the earliest method is feature tracking, which remains a commonly used instrument for glacier-velocity retrieval [
52]. The method detects visually persistent surface features, identifying displacements of crevasses, rock outcrops, or medial moraines by cross-correlation or normalized mutual information between temporally separated image pairs. The technique relies on the availability of multi-temporal optical missions such as Landsat TM/ETM+, ASTER, and Sentinel-2.
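The core of cross-correlation feature tracking can be sketched as follows (a minimal, brute-force illustration of the matching principle, not the implementation of any cited tool; all names are ours):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def track_feature(img1, img2, row, col, half=8, search=10):
    """Find the integer-pixel displacement of the patch centred at
    (row, col) in img1 within a +/-search window of img2."""
    ref = img1[row - half:row + half + 1, col - half:col + half + 1]
    best, best_dr, best_dc = -np.inf, 0, 0
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            r, c = row + dr, col + dc
            cand = img2[r - half:r + half + 1, c - half:c + half + 1]
            score = ncc(ref, cand)
            if score > best:
                best, best_dr, best_dc = score, dr, dc
    return best_dr, best_dc

# Synthetic example: a random texture shifted by (3, -2) pixels
rng = np.random.default_rng(0)
scene = rng.random((80, 80))
shifted = np.roll(np.roll(scene, 3, axis=0), -2, axis=1)
print(track_feature(scene, shifted, 40, 40))  # -> (3, -2)
```

Dividing the returned pixel offsets by the temporal baseline and multiplying by the ground sampling distance yields surface velocity in metres per unit time.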
Early implementations such as IMCORR [
53] and COSI-Corr [
54,
55] allowed systematic mapping of glacier surface movement at the regional to continental scale. As an example, Ref. [
56] required Landsat information to measure the change in glacier flow in the Alps, whereas Ref. [
57] conducted a large-scale evaluation of High Mountain Asia, with results including correlation-based tracking of features with mean uncertainties of ±515 m yr
−1 based on image resolution and time interval. Later findings, including [
58,
59], showed that automated correlation pipelines were able to observe interannual changes in velocities and seasonal surges of debris-covered glaciers, despite radiometric and topographic issues hampering these observations.
A major disadvantage of this technique is its dependence on optical contrast and illumination. Feature tracking works well on clean-ice glaciers with clear surface patterns, but is inefficient in areas obscured by clouds, snow, or heavy debris. In the Karakoram and Pamir ranges, for example, heterogeneous debris and frequent shadowing cause decorrelation, which leads to displacement errors [
36]. Temporal gaps of more than a year also introduce discrepancies in cross-matching, especially where glacier surface markings change seasonally. The method is further reliant on accurate co-registration, since minor geometric errors may produce systematic velocity biases, particularly for narrow valley glaciers in steep terrain.
Despite these weaknesses, feature tracking is one of the most reproducible and interpretable methods for velocity estimation. Its algorithmic transparency allows for direct error propagation and cross-validation with independent datasets such as InSAR or GNSS. It is also the basis for worldwide glacier-velocity databases, such as the ITS_LIVE [
60] and GoLIVE [
61] archives, which are based mainly on automated optical cross-correlation of multi-decadal Landsat imagery. These high-resolution products have played a crucial role in detecting long-term flow deceleration in Greenland outlet glaciers and acceleration in High Mountain Asia, demonstrating the validity of the method even in the era of advanced radar and AI tools.
Ongoing refinements have focused on improving automation, adaptive window sizing, and integration with radiometric filtering. For example, Ref. [
62] proposed texture-adaptive correlation, which reduced false matches in low-contrast areas, whereas [
63] proposed multi-temporal correlation that ensured temporal consistency under varying illumination conditions. These progressive gains have transformed a semi-manual process into a robust, fully automated pipeline that can track glacier movements worldwide on an annual to sub-annual basis.
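Correlation pipelines of this kind commonly reach sub-pixel precision by fitting a parabola through the correlation peak and its immediate neighbours; a minimal sketch of this refinement step (our illustration, not a specific package's code):

```python
import numpy as np

def subpixel_offset(corr, peak_r, peak_c):
    """Refine an integer correlation peak to sub-pixel precision by
    fitting a 1-D parabola through the peak and its neighbours along
    each axis. Returns (row_offset, col_offset), each in [-0.5, 0.5]."""
    def parabolic(cm, c0, cp):
        # Vertex of the parabola through (-1, cm), (0, c0), (+1, cp)
        denom = cm - 2.0 * c0 + cp
        return 0.0 if denom == 0 else 0.5 * (cm - cp) / denom
    dr = parabolic(corr[peak_r - 1, peak_c], corr[peak_r, peak_c],
                   corr[peak_r + 1, peak_c])
    dc = parabolic(corr[peak_r, peak_c - 1], corr[peak_r, peak_c],
                   corr[peak_r, peak_c + 1])
    return dr, dc

# Synthetic correlation surface with a true peak at (10.3, 9.8)
r = np.arange(21)[:, None]
c = np.arange(21)[None, :]
corr = np.exp(-(((r - 10.3) ** 2) + ((c - 9.8) ** 2)) / 4.0)
dr, dc = subpixel_offset(corr, 10, 10)   # integer argmax is (10, 10)
# dr ≈ 0.3, dc ≈ -0.2 (small bias remains for non-parabolic peaks)
```

Adding the refined offsets to the integer peak location is what allows displacement estimates well below the pixel size of the input imagery.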
4.2. Advances in SAR-Based Offset and Interferometric Tracking
The emergence of SAR dramatically changed glacier-velocity retrieval by removing the dependence on daylight and clear weather that constrains optical methods. SAR instruments transmit pulsed microwaves and capture the backscattered signals, sensing surface motion through two complementary approaches: offset tracking and InSAR.
Offset-tracking methods correlate radar backscatter amplitude between image pairs to estimate two-dimensional surface displacement. They work well, especially in fast-moving or debris-covered glaciers, where radar phase coherence is usually lost. The initial experiments indicated that they could be used in areas with cloud cover nearly all year round. SAR offset tracking was first used by [
64] on the Greenland Ice Sheet with ERS-1/2 data with an accuracy of approximately ±10 m yr
−1. Subsequently, Refs. [
65,
66] applied the method to the European Alps and High Mountain Asia, where they confirmed that SAR can record motion in regions where optical sensors are unable to work [
34,
67].
In contrast, differential InSAR exploits the phase difference between two complex radar images acquired from slightly different positions to measure line-of-sight displacement with centimetric precision. It performs best over slow-moving, clean-ice glaciers with high coherence and minimal surface change. Ref. [
68] proved it to be effective in temperate glaciers of the Alps, while Ref. [
69] was able to use it in ice streams in the Antarctic. More recent studies [
70,
71] utilized multi-temporal InSAR to track velocity variations in Greenland and Svalbard with an accuracy of ±2 m yr
−1 when InSAR coherence was stable.
With the launch of Sentinel-1 in 2014, SAR-based velocity monitoring entered an operational phase. The mission’s 6–12 day revisit frequency and free, open-access data policy enabled dense temporal sampling at regional to global scales. Based on Sentinel-1, Ref. [
63] provided a global estimate of the surface velocity of glaciers between 2000 and 2018, with the vast majority of High Mountain Asia showing a deceleration.
Nevertheless, these methods have notable constraints. In steep valleys, geometric layover, shadowing, and foreshortening can mask motion signals, a particular issue for side-looking SAR systems. Phase unwrapping errors and temporal decorrelation introduce uncertainties of up to ±10 m yr
−1 in rugged or snow-covered areas [
72]. Furthermore, InSAR can only measure relatively slow flows (less than about 1 m/day), while offset tracking trades accuracy for the ability to measure faster motion. These trade-offs necessitate careful method selection depending on glacier regime and surface conditions.
Overall, SAR-based offset tracking and interferometric tracking represent the most weather-independent and temporally stable methods currently available for retrieving glacier velocity. These qualities have made radar RS one of the foundations of modern cryosphere monitoring, thanks to its consistent coverage, frequent revisits, and insensitivity to illumination constraints.
4.3. Optical Flow and Deep Learning-Based Motion Estimation
The advent of computer-vision optical flow methods was an important advance in improving the spatial completeness of glacier-velocity fields. In contrast to conventional feature-based cross-correlation, which yields sparse displacement vectors at recognizable surface features, optical flow estimates dense per-pixel motion fields by relating spatial and temporal image gradients between consecutive scenes. This enables continuous flow mapping of whole glacier surfaces, including areas of low texture or partial shadow.
Optical flow outperforms feature tracking in regions of weak visual structure, such as snow- or debris-covered surfaces, because it imposes spatial smoothness on the motion field. This reduces gaps in velocity mosaics and improves the continuity of glacier flow estimation. It was demonstrated in [
35] that gradient-based regularization is more robust than discrete feature matching in areas of low texture contrast. Furthermore, optical flow can resolve short-term kinematic events such as surges or seasonal accelerations because it can exploit small temporal baselines of 5–10 days with Sentinel-2 [
34].
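The gradient-based principle behind these methods can be illustrated with a minimal Lucas–Kanade-style estimator that solves the brightness-constancy constraint Ix·u + Iy·v = −It by least squares over a window (a simplified sketch with synthetic data; operational pipelines add image pyramids, regularization, and outlier filtering):

```python
import numpy as np

def lucas_kanade_window(img1, img2):
    """Estimate a single (u, v) displacement (in pixels) for a window
    via least-squares solution of the brightness-constancy constraint
    Ix*u + Iy*v = -It."""
    ix = np.gradient(img1, axis=1).ravel()   # spatial gradient, columns
    iy = np.gradient(img1, axis=0).ravel()   # spatial gradient, rows
    it = (img2 - img1).ravel()               # temporal gradient
    A = np.column_stack([ix, iy])
    (u, v), *_ = np.linalg.lstsq(A, -it, rcond=None)
    return u, v

# Synthetic example: smooth surface translated by a sub-pixel amount
x = np.linspace(0, 2 * np.pi, 64)
X, Y = np.meshgrid(x, x)
f = lambda X, Y: np.sin(X) * np.cos(Y)
shift = 0.3 * (x[1] - x[0])              # a 0.3-pixel shift in x
u, v = lucas_kanade_window(f(X, Y), f(X - shift, Y))
# u ≈ 0.3, v ≈ 0 (valid only for small displacements)
```

As the comment notes, the linearization holds only for small displacements, which is why practical glacier applications pair such estimators with coarse-to-fine pyramids or an initial correlation step.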
However, a number of limitations remain. Optical flow assumes local continuity of motion and brightness, assumptions that often break down in glaciological applications because of variations in illumination, surface melting, or the uneven reflectance of debris. Even with these deficiencies, optical flow represents a highly significant methodological shift toward high-resolution, automated, and temporally consistent velocity estimation. It has proved useful in studies of small basins in the Himalaya, the Alps, and the Andes, where repeated optical imagery provides quasi-continuous monitoring. In this sense, optical flow constitutes a conceptual bridge between empirical correlation and learning-based inference, offering dense, data-driven motion fields that later served as training or validation references for DL architectures [
73,
74,
75].
Alongside the ongoing development of dense optical flow algorithms, traditional ML algorithms have frequently been deployed to assist glacier-velocity mapping through supplementary tasks such as surface classification, debris identification, and stable-ground detection. These steps play a pivotal role in constraining motion estimation and minimizing false displacements in automated workflows [
76].
Supervised models such as random forests and support vector machines have been applied to delineate glacier boundaries and recognize debris-covered areas in multispectral satellite images [
77,
78]. Combining them improves the selection of valid areas for motion tracking and the filtering of noise in cross-correlation or optical flow output. ML-based regression models have also been tested for predicting ice-flow velocities from topographic and climatic predictors [
79], offering a complementary, data-driven means of estimating motion in regions where optical texture is weak or temporal coverage is sparse. Although ML models demand substantial feature engineering and high-quality training data, they can be viewed as a major intermediate step between traditional glaciological interpretation and data-driven automation, bridging empirical optical flow and end-to-end DL models.
The use of DL for estimating glacier velocity is one of the most recent and transformative advances in RS of the cryosphere. Unlike traditional correlation-based or gradient-based approaches, deep neural networks infer motion fields directly from raw image pairs, learning complex, nonlinear relationships between surface reflectance, geometry, and displacement without explicit feature selection. The idea is inspired by the success of convolutional encoder–decoder models such as FlowNet [
80], U-Net [
81], and RAFT [
82], which were later adapted for geospatial and SAR applications.
The earliest attempts to apply DL to glacier-velocity retrieval appeared in 2018–2020, motivated primarily by the drawbacks of conventional correlation in low-contrast or decorrelating environments. Ref. [
73] developed a convolutional neural network that used Sentinel-1 and Sentinel-2 pairs to predict glacier surface motion, achieving sub-pixel accuracy comparable to InSAR in both Greenland and High Mountain Asia.
DL models offer several important strengths. First, they automatically extract useful spatial features and texture cues without manual intervention or parameter tuning. Second, they can run in near real time, making them applicable to operational glacier-speed mapping and early hazard warning. Finally, they can merge multi-sensor spatial and spectral data, such as optical, radar, and DEM inputs, into a single predictive model.
Despite these strengths, limitations remain. DL models trained on Alpine data have shown poor performance when applied directly to Himalayan glaciers, a loss largely explained by spectral differences arising from regional disparities in debris coverage, atmospheric conditions, and ice characteristics [
37,
38,
83,
84]. The steep Himalayan terrain also introduces serious geometric distortions, such as long shadows and highly variable illumination angles, which are less pronounced in the Alps and therefore not accounted for by models trained there [
85]. This domain shift is a well-known constraint in RS, where models may fail to generalize to different geographical regions.
In summary, in controlled case studies, DL techniques are more precise than traditional correlation and InSAR approaches, with standard uncertainties of around ±12 m yr−1. However, their stability is not uniform across glacier and surface types. Current research therefore focuses on improving interpretability, transferability, and physical consistency. As open satellite archives expand and annotated velocity datasets become more accessible, DL is likely to evolve from an experimental tool into a core operational component of global glacier-velocity monitoring.
4.4. Hybrid and Multi-Sensor Data Fusion Frameworks
Hybrid and multi-sensor data fusion has become a crucial methodological direction for strengthening glacier-velocity retrieval. The concept integrates heterogeneous data, including optical, radar, topographic, and, in some cases, thermal or UAV data, so that these complementary sources offset one another's weaknesses. Rather than being a post-processing step, data fusion is increasingly part of the retrieval pipeline itself, informing co-registration, displacement solving, and error propagation.
Various strategies have been explored. One group of methods performs sequential refinement, in which velocity fields obtained from optical feature tracking are corrected with radar offset tracking or interferometric coherence. For example, Ref. [
24] demonstrated that alternating Sentinel-1 SAR and Sentinel-2 optical acquisitions reduced data gaps along debris-covered glaciers in the Alps [
33]. Ensemble or Bayesian weighting has also been applied in other studies to assign confidence levels to each sensor based on local surface conditions and incidence angles. Refs. [
81,
86,
87] used the concept to merge ALOS-2 and PlanetScope observations over the Karakoram, producing composite velocity maps with uncertainties of ±2–4 m yr
−1.
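A generic inverse-variance form of such ensemble/Bayesian weighting can be sketched as follows (a simplified formulation under a per-pixel independence assumption, not the exact scheme of the cited studies; the function name and toy values are ours):

```python
import numpy as np

def fuse_velocities(v, sigma):
    """Inverse-variance (minimum-variance) fusion of per-sensor velocity
    maps. v and sigma have shape (n_sensors, H, W); NaN marks missing data."""
    w = np.where(np.isnan(v), 0.0, 1.0 / sigma**2)   # per-sensor confidence
    v_filled = np.where(np.isnan(v), 0.0, v)
    v_fused = (w * v_filled).sum(axis=0) / w.sum(axis=0)
    sigma_fused = np.sqrt(1.0 / w.sum(axis=0))       # fused 1-sigma uncertainty
    return v_fused, sigma_fused

# Toy example: one SAR and one optical estimate of the same 1x1 pixel field.
v = np.array([[[10.0]], [[14.0]]])      # m/yr
sigma = np.array([[[1.0]], [[2.0]]])    # m/yr
vf, sf = fuse_velocities(v, sigma)
```

Here the fused value (10.8 m/yr) is pulled toward the lower-uncertainty SAR estimate, and the fused uncertainty drops below either input, which is the behavior that motivates multi-sensor weighting.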
A second class of methods focuses on spatio-temporal fusion, aligning multi-epoch acquisitions from different satellites into coherent time series. Refs. [
88,
89] harmonized Landsat 8, Sentinel-2, and TerraSAR-X imagery using adaptive kernel regression, enabling continuous monthly velocity estimates across the Western Himalaya.
Integration with auxiliary elevation and in situ data has also become more systematic. High-resolution DEMs from TanDEM-X or ArcticDEM provide slope correction and aid orthorectification, while UAV and GNSS campaigns supply centimetric ground truth for calibration. Combining different algorithmic paradigms has also proven successful beyond the integration of auxiliary topographic data. For instance, Ref. [
90] has shown a hybrid model in which a convolutional neural network (CNN) was employed to automatically identify deep, hierarchical features on optical imagery of debris-covered glaciers [
91]. These features were then used to train a random forest classifier, achieving better accuracy than traditional methods.
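The slope-correction role of DEMs mentioned above can be illustrated by projecting horizontal displacements onto the DEM-derived surface, under the simplifying assumption that flow is surface-parallel (the function name and grid are hypothetical, for illustration only):

```python
import numpy as np

def surface_parallel_speed(vx, vy, dem, pixel_size=1.0):
    """Scale horizontal velocity components (vx, vy) to surface-parallel
    speed, assuming the flow follows the DEM-derived surface."""
    dz_dy, dz_dx = np.gradient(dem, pixel_size)  # slope components
    vz = vx * dz_dx + vy * dz_dy                 # implied vertical motion
    return np.sqrt(vx**2 + vy**2 + vz**2)

# A uniform 45-degree slope in x: 1 m/yr of horizontal motion corresponds
# to sqrt(2) m/yr along the surface.
x = np.arange(10, dtype=float)
dem = np.tile(x, (10, 1))                        # dz/dx = 1, dz/dy = 0
speed = surface_parallel_speed(np.ones((10, 10)), np.zeros((10, 10)), dem)
```

This is why DEM noise over steep valley walls propagates directly into velocity magnitudes: the slope terms enter the projection multiplicatively.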
More experimental frameworks extend beyond two-sensor combinations. Refs. [
42,
92,
93] applied a multimodal correlation network that incorporates optical intensity, radar backscatter, and thermal inertia indices through statistical normalization, achieving better correlation stability under variable illumination without DL inference. These methods demonstrate how classical statistical fusion can deliver many of the advantages of AI integration without sacrificing physical interpretability [
94].
Remaining challenges include geometric misalignment caused by differing look angles, inconsistent spatial resolutions, and a lack of standardized uncertainty descriptors among sensors. In addition, the computational demands of large-area fusion remain high, especially when sub-pixel co-registration must be repeated over hundreds of images [
95]. Nevertheless, hybrid fusion frameworks currently constitute a viable path toward operational, all-weather glacier-velocity monitoring, connecting historical optical archives with present radar missions and providing a reproducible baseline for future AI-assisted formulations [
96].
4.5. Insights and Perspectives
Methodological change in glacier-velocity retrieval over the last 30 years reflects an ongoing shift toward more automated, data-driven approaches. Successive methodological generations have increased the spatial extent, accuracy, and temporal span over which glacier velocity is measured, transforming how glacier dynamics are observed and understood.
Feature-tracking techniques pioneered global-scale application, turning static images into dynamic observations. SAR-based interferometry and offset tracking then extended monitoring to persistently clouded and polar regions, overcoming the optical constraints of early work. Optical flow models brought classical image correlation close to modern computer vision, providing dense and spatially continuous motion fields even on low-texture or shaded glaciers. ML and DL approaches have since accelerated this trend toward automation, learning motion representations directly from imagery without manual feature design, while hybrid and fusion frameworks now integrate multi-sensor information for higher robustness, spatial completeness, and reproducibility.
Across these methodological generations, three interrelated transitions stand out:
From empirical modeling to data-driven modeling, which allows automation and scalability;
From single-sensor dependence to multi-sensor synergy, which improves spatial and temporal continuity;
From regional mapping to near-real-time global monitoring, which fosters integration with other components of the Earth system.
Despite rapid progress, major challenges remain. Cross-sensor harmonization is non-trivial because geometries, spatial resolutions, and revisit frequencies differ. The lack of unified benchmarking datasets limits objective comparison across methods. Uncertainties remain high over debris-covered and high-relief glaciers, where optical contrast is low and radar coherence is unstable. Additionally, most new AI models, even promising ones, are neither interpretable nor reproducible, raising concerns about transparency and the transferability of models across geographical regions.
A significant paradigm shift is expected in this field. Future research will likely center on GeoAI-based, self-supervised, and physics-informed models that bridge the interpretability of physical models with the flexibility of DL. Combining digital-twin concepts, open-access data ecosystems, and reproducible processing pipelines will enable near-real-time glacier monitoring at planetary scale. Glacier-velocity retrieval is thus evolving from a highly specialized mapping procedure into a predictive component of global cryospheric monitoring and climate change assessment.
This synthesis of methodological development provides the conceptual basis for the following analysis.
Section 5 quantitatively examines how these methodological advances are reflected in the scientific literature, exploring publication trends, regional focus, data-source distribution, and the emergence of AI-driven approaches in DCG velocity studies.
5. Results and Trends of Existing Studies
The subsequent subsections present quantitative and thematic results obtained from the final corpus (
n = 121). We summarize publication trends, geographic and sensor distributions, methodological composition, and reported performance metrics. A narrative interpretation of the results is presented in
Section 6 (Discussion).
5.1. Bibliometric Overview and Publication Trends
Research on glacier-velocity retrieval has increased markedly over the last thirty years. The earliest studies, from the 1990s and 2000s, were constrained by limited data availability and computational power. The growing accessibility of optical and SAR imagery, especially with missions such as Landsat-7, TerraSAR-X, and Sentinel-1, then drove a steep rise in scientific output.
Figure 3 shows active growth up to 2014, followed by a marked increase since 2018.
This temporal trend reflects technological and methodological changes in RS. The most notable peak occurs after 2018, when ML and AI-driven approaches to glacier-velocity determination became common and a paradigm shift toward data-oriented, automated analysis took place. Overall, this bibliometric trend indicates consistent methodological growth and increasing international cooperation in glacier-velocity research.
In addition to publication trends, the geographical distribution of first authors’ affiliations was analyzed to assess worldwide contributions to glacier-velocity retrieval research (
Figure 4). Most of the reviewed articles were produced by researchers affiliated with institutions in China, the United States, and several European countries (notably Switzerland, Germany, and the United Kingdom). Russia and Central Asia, represented mostly by researchers from Tajikistan, Kazakhstan, Kyrgyzstan, and Russia, contributed at a lower level, reflecting regional gaps in research infrastructure, data access, and computational capacity. This imbalance has been partly offset in recent years as international partnerships, especially between Chinese and Western institutions, have become more frequent, mostly targeting High Mountain Asia and the dynamics of debris-covered glaciers. This geographic pattern of research production complements the bibliometric trends and indicates that methodological innovation is concentrated in a limited number of high-capacity research centers, with comparatively little involvement from data-sparse regions.
Extending the analysis of authors and affiliations, we also examined the geographic locations of the studied glaciers to determine where glacier-velocity research has been concentrated worldwide. This institutional-to-geographic perspective provides a more structured view of the global distribution of methodology implementation and highlights regions where observation and research activity remain under-represented. It also helps contextualize the bibliometric patterns noted above, relating the physical location of publication activity to the sites of methodological testing and validation.
5.2. Geographic Distribution of Studies
A pronounced geographical imbalance in glacier-velocity retrieval research is evident in the reviewed corpus (
Figure 5). Most studies concern High Mountain Asia, specifically the southwestern, central, and southeastern Himalayas, followed by the European Alps, Greenland, North America, and the Southern Andes. Other regions, such as the Caucasus, Svalbard, New Zealand, and Patagonia, also appear, though with comparatively few studies.
A direct comparison between the geographic origin of research output (
Figure 4) and the location of the studied glaciers (
Figure 5) reveals a notable spatial disconnect. While the studied glaciers are concentrated in High Mountain Asia, the leading institutional contributions originate predominantly from China, the United States, and Western Europe. This imbalance underscores the current reliance on remote sensing for monitoring key glacierized regions and highlights a critical gap in local research capacity and in situ validation efforts in these areas.
To gain deeper insight into spatial patterns of research productivity, each country's contribution was evaluated in terms of both author affiliation and study location. The resulting distribution highlights major national centers of glacier-velocity research and reveals significant geographic polarization, with most publications concentrated in a small number of countries in Asia, Europe, and North America.
The dominance of Asian institutions, especially Chinese, Nepalese, and Pakistani ones, reflects the research focus on High Mountain Asia, where glacier dynamics are of primary importance for hydrological and hazard studies. European states, however, remain at the forefront of methodological innovation, reflecting their contributions to sophisticated optical and radar missions (e.g., Sentinel, TerraSAR-X, and SPOT). The limited representation of Central Asian and South American studies underscores enduring inequalities in data accessibility, satellite tasking, and computational capacity. Overall, the international distribution of publications suggests that the field remains strongly shaped by data-rich and well-funded regions, with growing, yet still unequal, international cooperation.
Building on these spatial patterns, the next section quantitatively explores the data sources and sensor systems underlying glacier-velocity retrieval research, revealing the technological heterogeneity across research areas.
5.3. Sensor and Data Source Distribution
To complement the geographical and bibliometric overview, this section examines the distribution of satellite sensors and data sources applied in glacier-velocity retrieval.
Figure 6 provides a summary of the relative application of the optical, SAR, and multi-sensor systems in the reviewed corpus. Among the 121 analyzed studies, SAR sensors represent the largest share (49 studies; 40.5%), followed by optical sensors (38 studies; 31.4%) and multi-sensor approaches (34 studies; 28.1%).
The predominance of SAR-based methods reflects their capability to acquire consistent data under all-weather and low-light conditions. The Sentinel-1 and ALOS-2 missions have enabled continuous monitoring of polar glaciers and debris-covered areas where optical imagery is limited by cloud cover or the other constraints discussed in
Section 3. Optical sensors such as Landsat and Sentinel-2 nevertheless remain indispensable, offering long-term archives and sufficient spatial resolution, which makes them valuable for feature tracking and surface-displacement measurement on clean-ice and partially debris-covered glaciers.
Multi-sensor integration, the third category, has gained prevalence since 2015 and combines optical and radar data to offset the weaknesses of each system. These methods increase the temporal coverage, accuracy, and consistency of velocity estimates and serve as a transition toward new data-fusion and AI-based methods. UAV and high-resolution DEM data are also commonly used for validation and uncertainty assessment but remain limited to particular test glaciers.
The methodological framework includes optical satellites (Sentinel-2, Landsat) for high-resolution multispectral analysis; SAR systems (Sentinel-1, ALOS, and TerraSAR-X) for all-weather, day-night analysis with interferometric techniques; DEMs, such as those from the ASTER mission, for topographic correction; and validation tools, including UAVs and GNSS, for ground truthing. This technological diversity facilitates the AI-driven sensor-fusion schemes that are transforming DCG velocity retrieval.
5.4. Methodological Composition and Evolution
Methodological advancement in glacier-velocity retrieval over the last three decades reflects both technological progress and algorithmic development.
Figure 7 provides an overview of the general methodologies found in the reviewed studies. Our analysis shows that traditional methods, such as optical feature tracking and radar interferometry, still account for the majority. Nonetheless, a gradual shift toward data-driven and automated frameworks has occurred, especially since 2015, with the appearance of ML and DL applications.
Traditional methodologies have relied on manual or semi-automated feature tracking on optical images (Landsat, SPOT, and ASTER) and radar interferometry (InSAR and offset tracking). As noted earlier, these techniques perform well on clean-ice glaciers, where surface texture supports correlation-based displacement mapping, but they usually face challenges on debris-covered glaciers.
Figure 8 depicts the distribution of traditional methods, illustrating that optical-based techniques remain most common, whereas radar-based approaches dominate in high-relief and polar regions, where optical imagery is less reliable.
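At the core of these traditional correlation-based techniques is locating the cross-correlation peak between image chips from two dates. A minimal FFT-based sketch (integer-pixel precision only; the synthetic data and function name are illustrative, not from any reviewed study):

```python
import numpy as np

def cc_offset(ref, tgt):
    """Integer-pixel displacement (dy, dx) of tgt relative to ref via
    FFT-based cross-correlation, the core of classical feature tracking."""
    cc = np.fft.ifft2(np.fft.fft2(tgt) * np.conj(np.fft.fft2(ref))).real
    peak = np.array(np.unravel_index(np.argmax(cc), cc.shape), dtype=float)
    # Wrap cyclic shifts larger than half the chip size to negative offsets.
    dims = np.array(cc.shape)
    peak[peak > dims / 2] -= dims[peak > dims / 2]
    return peak

# Synthetic test: a random "surface texture" chip moved by (3, -2) pixels.
rng = np.random.default_rng(0)
ref = rng.random((32, 32))
tgt = np.roll(ref, shift=(3, -2), axis=(0, 1))
dy, dx = cc_offset(ref, tgt)
```

Operational trackers add intensity normalization, sub-pixel peak interpolation, and signal-to-noise filtering on top of this core; the sketch also shows why low-texture debris mantles are problematic, since a flat correlation surface has no well-defined peak.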
Over the last decade, the emergence of ML and DL models has expanded the methodological possibilities. Of the reviewed papers, 12 use ML algorithms to map, delineate, and estimate glacier velocity, 10 use DL architectures, and 3 use hybrid AI frameworks that combine both (
Figure 9). These models aim to automate feature extraction, displacement estimation, and uncertainty reduction. ML-based approaches typically involve regression, clustering, or support vector algorithms to classify motion patterns or detect outliers. Meanwhile, DL networks, including CNNs, encoder–decoder networks, and optical-flow-based networks, enable end-to-end prediction of velocity fields directly from image inputs, reducing manual preprocessing.
AI-enhanced frameworks and ML techniques can handle massive amounts of heterogeneous data, such as optical, SAR, and DEM data, through adaptive learning. Despite their potential, challenges remain regarding scarce training data, computational cost, and poor transferability across glacier types. At the same time, their scalability and near-real-time processing capabilities make them an increasingly useful part of future glacier monitoring systems.
Figure 10 illustrates how methodological usage evolved between 1992 and 2025. Traditional approaches dominate the initial period, hybrid models become prominent in the 2010s, and the number of AI-based studies has grown rapidly since 2018. This transition can be visualized with a complementary Sankey diagram (
Figure 11), in which methodological categories are connected to data sources. Together, these figures show a clear shift from manual, sensor-specific workflows to integrated, intelligent systems capable of continuous glacier monitoring.
Although the long-term evolution (1992–2025) points to a steady diversification of approaches, the past decade has seen accelerated adoption of ML and DL models. To capture these temporal dynamics in more detail,
Figure 11 presents a Sankey diagram mapping the evolution of methodological categories by publication year between 2018 and 2025. Each stream denotes the relative prevalence of ML, DL, and hybrid ML-DL techniques in a given year, demonstrating the accelerating uptake of AI-driven methodologies in glacier-velocity research.
The Sankey visualization illustrates three evolutionary stages: early adoption (2018–2020), dominated by traditional ML applied to simple classification tasks; rapid growth (2021–2023), marked by the emergence of DL for complex pattern recognition; and the present maturation (2024–2025), characterized by sophisticated hybrid models that exploit the complementary strengths of different AI paradigms. This progression reflects both technological availability and growing recognition that debris-covered glacier processes require more sophisticated computational models. The rapid expansion after 2018 coincides with the democratization of cloud computing (Google Earth Engine, ESA DIAS) and open-source DL frameworks such as PyTorch and TensorFlow (software versions vary across the reviewed studies), which have lowered the barriers to entry for AI-driven glacier research.
Overall, the methodological evolution of glacier-velocity retrieval demonstrates both continuity and innovation. Conventional image-based and interferometric methods remain essential because they have been proven at scale, whereas new hybrid and artificial intelligence-based methods offer greater automation, scalability, and accuracy. The coexistence of these techniques underscores the need for methodological diversity, as different approaches suit different glacier conditions, data limitations, and research goals.
5.5. Performance Summary and Validation
The comparative analysis of glacier-velocity retrieval methods (
Table 5) reveals key differences in data requirements, algorithmic complexity, computational efficiency, and performance across environmental settings.
Results of individual studies are synthesized in
Table 5, which presents representative performance metrics including reported uncertainties, resolution capabilities, and validation approaches for each methodological category. The synthesis results demonstrate consistent patterns across studies, with AI-enhanced methods generally showing superior performance (±1–3 m/yr) compared to traditional approaches (±2–10 m/yr) in debris-covered environments.
We investigated causes of heterogeneity in performance through analysis of geographical distribution, glacier characteristics, and data sources. The main factors contributing to performance variation included regional topography, debris cover thickness, and sensor characteristics. Assessment of reporting bias suggested that studies predominantly reported successful applications, with limited documentation of methodological failures or limitations.
The certainty of evidence was highest for traditional optical and SAR methods due to their extensive validation history, while emerging AI methods showed promising but less consistently validated performance across diverse glacier environments.
Conventional optical and SAR-based methods, such as feature tracking, cross-correlation, and interferometry, are still employed frequently because they are straightforward to apply and benefit from long data continuity. However, these techniques struggle on rough, debris-covered surfaces, where low surface contrast and geometric distortions reduce tracking accuracy.
The robustness of displacement estimation has been enhanced by ML frameworks, including random forest and support vector regression, which automate feature extraction and the integration of data from multiple sources. More recently, DL models, notably CNNs and U-Nets, have proven the most capable, learning hierarchical spatial features from large datasets without manual effort. DL-based approaches outperform conventional methods in heterogeneous and debris-covered terrain, but they require large labeled datasets, substantial processing resources, and extensive validation to guarantee applicability across regions.
Hybrid approaches that merge physics, conventional algorithms, and AI architectures are becoming an attractive prospect. They leverage the interpretability and physical consistency of traditional models while enhancing predictive accuracy and automation through data-driven components. Such integrative approaches are expected to underpin the next generation of glacier-velocity monitoring systems.
5.6. Gaps Identified from the Corpus
In addition to the comparison of methods, the corpus analysis revealed several common constraints across the reviewed studies.
Quality assessment revealed that approximately 45% of studies demonstrated high methodological rigor with comprehensive descriptions and robust validation, while 15% showed limitations in transparency or validation approaches. These quality considerations informed our synthesis, with greater weight given to higher-quality evidence.
Spatial imbalance remains the most evident issue. Most published studies have centered on the Himalaya, Karakoram, and western China because of strong scientific interest in these areas and the availability of data [
117,
118,
119,
120]. Conversely, the Central Asian ranges, including Pamir and Tien Shan, and the Andes are covered by relatively limited research [
7,
58], while other regions, such as the Caucasus, Altai, and the Central and Northern Andes, remain even less explored. This geographic concentration limits the generalizability of methodological findings and hinders the evaluation of retrieval techniques across different regions.
Validation presents another gap. Although several studies have integrated UAV- or GNSS-based measurements for their study glaciers, in situ data remain scarce and confined to small spatial scales [
50,
51]. Fewer than 20% of the papers in our review corpus used field-based validation or long-term ground reference data, which complicates the assessment of accuracy and uncertainty under different surface conditions, especially for debris-covered glaciers.
Data accessibility and reproducibility constitute another limitation. Numerous velocity-retrieval procedures rely on region-specific datasets or custom code, and few studies provide open processing scripts or training data. The absence of publicly disclosed benchmarks increasingly undermines transparency, comparability, and cumulative progress in the remote-sensing field.
Finally, although AI-supported frameworks have rapidly attracted attention since 2018, most remain proofs of concept or are limited to experimental case studies [
121]. Their generalizability across glaciers, climatic regimes, and sensors is seldom examined. Moreover, the physical consistency between AI-generated velocity fields and glacier-dynamics models remains inadequately investigated, underscoring the need for hybrid methodologies that integrate data-driven inference with physical constraints.
Collectively, these gaps demonstrate that, despite significant methodological improvements, glacier-velocity retrieval studies still face issues of spatial representativeness, validation consistency, open data, and physically motivated AI integration. Addressing these deficiencies is essential for uniform and operational worldwide glacier-motion monitoring.
6. Discussion
6.1. Synthesis of Methodological Progress
The last three decades of glacier-velocity retrieval research show a clear methodological shift from manual cross-correlation and interferometric measurements to highly automated, data-fusion and data-driven frameworks. Each new generation of techniques has addressed particular shortcomings of its predecessors: radar systems overcame lighting limitations, optical techniques increased spatial detail, and AI-driven frameworks now allow scalable automation and increased accuracy.
Traditional optical feature tracking and InSAR remain foundational for long-term, large-scale monitoring, offering consistent results where surface contrast and coherence are sufficient [
53,
59,
63]. Their efficacy, however, is reduced over debris-covered glaciers because of decorrelation, low texture, and fluctuating illumination. Hybrid systems incorporating SAR and optical data [
33,
36,
86] have been effective in overcoming such deficiencies by combining the complementary capabilities of various sensors. According to this review, the most reliable approach to operational glacier-velocity monitoring is data fusion and algorithmic hybridization.
The emergence of optical flow and DL techniques indicates a paradigm shift from handcrafted algorithms to models that can learn complex spatial relations directly out of the images. CNNs and U-Nets, as well as RAFT-based models, are deep neural architectures that have been shown to perform as well as or better than InSAR in several test cases [
73,
81]. Yet such models still face unresolved issues of generalization across areas with diverse spectral, geometric, and climatic characteristics [
37,
38,
83,
84,
85]. Despite these drawbacks, the move toward automated, temporally dense, high-resolution monitoring marks a significant advancement in cryospheric observation.
ML (e.g., random forests, SVMs) has also played a significant role in facilitating classification tasks, including identifying debris cover and masking stable ground [
77,
78,
79]. These frameworks serve as an interface between conventional algorithms and fully DL systems, increasing the reliability of motion retrieval in heterogeneous terrain. Collectively, ML and DL developments demonstrate that the frontier of glacier-velocity research lies at the intersection of physical interpretability and computational intelligence.
6.2. Remaining Challenges
Although the methodological advances have been substantial, a number of core challenges remain unsolved by existing glacier-velocity retrieval techniques. These constraints stem not only from ML models but also from the underlying data sources, conventional algorithms, and hybrid fusion schemes.
- (1)
Limitations related to data quality and sensor physics.
Optical sensors remain challenged by cloud cover, seasonal shadow, and low surface contrast over DCGs, where both feature tracking and optical flow are less reliable [
21,
58]. SAR systems, while weather-independent, face intrinsic geometric distortions (layover and foreshortening) and coherence loss in steep or debris-mantled terrain [
72,
99]. These physical limitations cannot be fully overcome by any algorithm and leave spatially uneven uncertainties in mountain areas. DEMs also introduce errors: global products such as SRTM, ASTER GDEM, and even high-resolution TanDEM-X exhibit vertical offsets and noise over steep valley walls, directly affecting orthorectification, displacement projection, and uncertainty estimation. Such structural errors are not resolved by co-registration alone [
45,
47].
- (2)
Methodological constraints that current techniques cannot fully overcome.
Both InSAR and offset tracking are limited by surface conditions and coherence, and hence cannot provide continuous coverage across dissimilar glacier zones [
101]. Optical feature tracking fails in low-texture debris mantles [
57], while optical flow, although dense, is sensitive to illumination variations and misregistration, producing artifacts that resemble motion [
34]. Hybrid optical–SAR systems mitigate single-sensor limitations but cannot fully compensate for cross-sensor differences in resolution, incidence angle, and acquisition time. Consequently, no current technique can yet provide spatially complete, uniformly reliable velocities for heavily debris-covered glaciers or extremely complex topography [
42].
- (3)
Model-related limitations in machine learning and deep learning approaches.
Even though DL and ML methods yield competitive accuracy in various settings, their generalization across regions remains limited. Models trained in one mountain range can fail on glaciers with different debris characteristics, spectral properties, and geometries [
37,
38]. Their outputs can reproduce patterns present in the training data rather than the actual ice dynamics, and they lack physical interpretability [
85]. Self-supervised and physics-informed hybrid approaches can reduce such biases but still require consistent multi-sensor measurements and high-quality reference velocity fields, which remain scarce. Critically, this data scarcity is often a direct consequence of the geographic bias discussed in (5), leading to a fundamental domain gap between training and application environments. High computational costs and sensitivity to training-data quality further limit the operational use of AI-based retrieval [
83,
103].
- (4)
Challenges in validation, benchmarking, and reproducibility.
A significant shortcoming across all approaches is the difficulty of assessing accuracy over DCGs and fast-flowing ice, as few sites offer dense in situ GNSS- and UAV-based reference measurements. Less than one-fifth of published studies involve rigorous field validation, which makes cross-study comparison difficult [
50,
51]. In addition, the scarcity of open-access processing pipelines and standardized benchmarks limits reproducibility. Differences in preprocessing (co-registration, filtering, and DEM choice) often lead to inconsistent results even when the same satellite data are used.
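When field references are missing, one widely used proxy is stable-ground statistics: residual "motion" measured over off-glacier terrain, where the true velocity is zero, bounds the error of the whole processing chain. A minimal sketch, with synthetic fields standing in for real velocity rasters:

```python
import numpy as np

def stable_ground_stats(vx, vy, stable_mask):
    """Bias and dispersion of velocity components over stable (off-glacier)
    terrain, where the true motion is zero. A common proxy accuracy check
    when in situ GNSS references are unavailable."""
    sx, sy = vx[stable_mask], vy[stable_mask]
    bias = (sx.mean(), sy.mean())              # systematic offset (co-registration)
    rmse = np.sqrt(np.mean(sx**2 + sy**2))     # overall residual magnitude
    p68 = np.percentile(np.hypot(sx, sy), 68)  # robust 1-sigma-like metric
    return bias, rmse, p68

# Synthetic velocity fields with a small bias and ~2 m/yr noise.
rng = np.random.default_rng(1)
vx = rng.normal(0.5, 2.0, (100, 100))
vy = rng.normal(-0.3, 2.0, (100, 100))
stable = np.zeros((100, 100), dtype=bool)
stable[:, :40] = True                          # assumed "off-glacier" strip
(bx, by), rmse, p68 = stable_ground_stats(vx, vy, stable)
```

A nonzero bias typically flags residual misregistration, while the percentile metric is less sensitive to outliers than the RMSE; reporting both would aid the cross-study comparability discussed above.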
- (5)
Persistent spatial and thematic gaps.
Research remains disproportionately focused on the Himalaya, the Karakoram, Greenland, and parts of the Alps, while Central Asian ranges such as the Pamir and the Tien Shan, the Caucasus, parts of the Andes, and polar mountain areas remain under-studied [
7,
58]. This spatial bias restricts the evaluation of method performance across diverse climate and debris regimes, particularly for AI-based models that require broad training distributions. The geographical underrepresentation also introduces potential bias into the methodological assessments synthesized here, limiting the generalizability of conclusions, especially for data-driven techniques applied to underrepresented glacier regimes. This bias, sustained by disparities in research infrastructure and focus, creates a circular problem: methods are not robust in underrepresented regions because validation data are scarce, and data remain scarce because reliable methods for those regions are underdeveloped. In addition, some dynamic processes, including glacier surging, ice–debris interactions, and sudden sub-seasonal events, are still insufficiently captured by current optical, SAR, and hybrid approaches [
3,
86]. Collectively, these challenges indicate that no single data source, method, or model currently provides a complete solution for spatially uniform, physically consistent, and fully reproducible glacier-velocity monitoring.
- (6)
Language and publication bias.
As a pragmatic choice to ensure methodological consistency and accessibility, this review was limited to English-language publications. This introduces potential language bias, particularly for studies from regions where English is not the primary academic language, such as Russia, Central Asia, and parts of the Andes. This bias likely reinforces the observed geographical imbalance and may result in an underrepresentation of region-specific methodological developments or validation efforts published in local languages. While not unique to our study, acknowledging this limitation is crucial for interpreting the comprehensiveness of the synthesized literature and highlights the need for more inclusive, multilingual approaches in future systematic reviews of globally relevant environmental topics.
Addressing these gaps will require advances in sensor synergy, physically guided learning architectures, standardized validation frameworks, and broader geographic coverage—issues further explored in
Section 6.3 on emerging trends and future directions.
6.3. Emerging Trends and Future Directions
The trajectory of methodological innovation points to a clear convergence toward intelligent, integrated, and reproducible systems. Several paradigm shifts can be expected in glacier-velocity research over the coming decade:
- (1)
High-Resolution and Near-Real-Time Monitoring.
Our analysis shows a clear trajectory toward high-resolution and near-real-time monitoring. The growing availability of data from open-access (e.g., Sentinel-1/2) and commercial constellations is already enabling the tracking of glacier processes at sub-weekly scales. Such datasets will facilitate the detection of surges, icefall activity, and seasonal accelerations that previously fell near or below the temporal resolution threshold, as demonstrated by recent studies using dense time series from Sentinel-1 and -2 [
39,
86,
87]. The shift toward near-real-time velocity products also highlights the need for automated co-registration, DEM-based topographic correction, and uncertainty propagation, as demonstrated in recent SAR–optical fusion studies [
95]. Beyond satellite systems, LiDAR technologies (airborne, UAV-based, and terrestrial) represent a relatively uncharted but promising avenue for high-resolution glacier monitoring. Although LiDAR delivers centimeter-level surface elevation and roughness, it remains underutilized for glacier-velocity retrieval because repeat acquisitions are rare, spatial coverage is limited, and costs are high. LiDAR-based DEMs nevertheless hold strong potential as validation datasets and as high-fidelity inputs for understanding micro-topographic controls on the motion of debris-covered glaciers [
44,
47,
50].
- (2)
GeoAI and Self-Supervised Learning.
In response to the persistent challenges identified in this review, particularly data scarcity, the need for extensive validation, and limited model transferability, future methodological development may naturally evolve towards GeoAI frameworks that integrate geophysical priors with data-driven learning. A particularly promising direction is the adoption of self-supervised learning (SSL) paradigms. In principle, SSL could address the critical scarcity of labeled training data by deriving supervision signals directly from temporal consistency in image series or multi-sensor redundancy [
103], approaches conceptually aligned with current optical-flow techniques [
37]. While such methods are not yet prevalent in the glacier-velocity literature, they represent a logical pathway to improve generalizability and mitigate the “domain gap” that currently hinders AI models when applied to new regions. This potential makes them especially relevant for advancing research in underrepresented regions such as the Pamir and Tien Shan, where traditional supervised learning faces significant hurdles due to scarce validation data and distinct environmental conditions [
7,
58].
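One label-free supervision signal of this kind, the forward-backward consistency of a dense flow field, can be sketched directly. The nearest-neighbour sampling and the synthetic constant flow below are simplifications for illustration; practical SSL losses use differentiable bilinear warping.

```python
import numpy as np

def forward_backward_residual(flow_fwd, flow_bwd):
    """Forward-backward consistency residual for a dense flow field.

    flow_fwd, flow_bwd : (H, W, 2) arrays of (dy, dx) displacements in pixels.
    For a perfect, occlusion-free match, the backward flow sampled at the
    forward-advected position cancels the forward flow; the residual
    magnitude is a label-free quality signal usable as a self-supervised
    loss term or an outlier mask.
    """
    H, W, _ = flow_fwd.shape
    yy, xx = np.mgrid[0:H, 0:W]
    # Nearest-neighbour sampling of the backward flow at advected positions.
    y1 = np.clip(np.rint(yy + flow_fwd[..., 0]).astype(int), 0, H - 1)
    x1 = np.clip(np.rint(xx + flow_fwd[..., 1]).astype(int), 0, W - 1)
    resid = flow_fwd + flow_bwd[y1, x1]
    return np.hypot(resid[..., 0], resid[..., 1])

# Consistent constant shift of (2, 3) px: the backward flow exactly
# cancels the forward flow, so the residual vanishes everywhere.
fwd = np.tile(np.array([2.0, 3.0]), (32, 32, 1))
bwd = -fwd
r = forward_backward_residual(fwd, bwd)
```

In a self-supervised setting this residual (or a robust penalty on it) is minimized during training, so that temporal consistency between acquisitions substitutes for scarce labeled velocity fields.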
- (3)
Foundation Models and Digital Twins.
Looking beyond the methodologies prevalent in the current literature, the field is beginning to explore next-generation conceptual frameworks informed by broader digital trends. Digital twins are dynamic, physics-based virtual replicas of glacier systems that assimilate observational data, representing a promising future direction rather than a current operational tool [
121]. Their development would require tight integration of the velocity retrieval methods discussed here with numerical ice-flow models and climate forcing, moving from observation to simulation and prediction [
96]. Similarly, foundation models pre-trained on vast geospatial data could, in the future, offer generalized capabilities for glacier-motion analysis, but such models are not yet present in the reviewed empirical literature. These concepts extend the logical progression from data fusion toward fully integrated, predictive modeling environments.
- (4)
Integration with Climate and Hydrological Models.
An emerging trend identified in the literature is the move toward integrating glacier-velocity products with mass-balance, runoff, and hydrological models. This synthesis promises to provide a better view of glacier–climate feedback and its effects on regional water resources [
3]. It is one of the most promising areas of applied cryospheric science. Since numerous reviewed studies have quantified strain rates and surface-lowering patterns from optical and SAR-derived velocities, integrating these outputs with distributed melt and runoff models may significantly reduce uncertainties in high-mountain hydrology [
16]. Such coupling is especially important for DCGs, the thermal and dynamic regimes of which differ fundamentally from clean-ice systems, as demonstrated by both modeling [
9] and multi-parametric observational studies [
48,
115].
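As a minimal illustration of such coupling, surface velocities can be converted into ice discharge through a flux gate, which is the quantity that melt and runoff models consume. The depth-averaging factor `f_depth` is an assumed parameter standing in for the unknown ratio of depth-averaged to surface velocity, not a value taken from the reviewed studies.

```python
def flux_gate_discharge(v_surf, thickness, gate_width_m, f_depth=0.9):
    """Ice volume flux [m^3/yr] through a cross-glacier gate.

    v_surf       : surface speeds [m/yr] at equally spaced gate segments
    thickness    : ice thicknesses [m] at the same segments
    gate_width_m : total gate width [m]
    f_depth      : assumed ratio of depth-averaged to surface velocity
                   (roughly 0.8-1.0 depending on the degree of basal sliding)
    """
    seg_w = gate_width_m / len(v_surf)
    # Sum segment-wise flux contributions: v * H * w per segment.
    return sum(f_depth * v * h * seg_w for v, h in zip(v_surf, thickness))

# Hypothetical gate: 3 segments across a 300 m wide glacier tongue.
Q = flux_gate_discharge([40.0, 55.0, 38.0], [90.0, 120.0, 85.0], 300.0)
```

For DCGs the chief uncertainty enters through `thickness` and `f_depth`, which is why the coupling with observational and modeling studies cited above matters for closing the mass-balance budget.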
- (5)
Open Data, Reproducibility, and Collaborative Platforms.
A key finding from our review is the critical need for transparent, reproducible, and scalable workflows. Ensuring this will require cloud-based infrastructures and open repositories (e.g., OSF, Zenodo, GEE, etc.) [
60,
61], which enable shared data access, algorithm comparison, and continuous pipeline improvement. Several studies reviewed here relied on locally optimized scripts or non-public datasets, underscoring the need for fully reproducible pipelines, as demonstrated by recent automated processing frameworks [
95,
107] and established reporting guidelines [
24]. Implementing FAIR principles and community-verified workflows would significantly improve comparability across regions and narrow one of the largest gaps identified in
Section 5.6.
Together, these directions indicate that the future of glacier-velocity studies lies in combining observation, simulation, and intelligent inference, shifting from periodic to continuous, predictive mapping in support of climate-risk assessment and water-resource management. To summarize these directions,
Figure 12 presents a conceptual framework outlining how multi-sensor data streams, AI-based inference, and physical modeling can converge to enable next-generation glacier-velocity retrieval and prediction. The framework emphasizes the transition from observation to prediction, integrating data, algorithms, and environmental context within a unified digital-twin infrastructure.
6.4. Outlook
Overall, glacier-velocity retrieval is shifting from a descriptive observational science to a predictive, model-integrated one. Conventional approaches remain essential for long-term continuity, whereas GeoAI and hybrid fusion models contribute flexibility and automation. The most promising research avenue is the integration of these paradigms, harnessing the computational efficiency of AI without sacrificing physical interpretability. With continued methodological innovation and alignment with open science and interdisciplinary cooperation, the discipline will be able to deliver global, near-real-time glacier measurements that can guide climate-change resilience and water-resource management.
7. Conclusions
This review has highlighted that artificial intelligence holds transformative potential for reshaping the monitoring of DCGs. AI-based frameworks, in particular SAR–optical fusion, physics-informed neural networks, and recurrent architectures, have mitigated the spectral ambiguity and decorrelation constraints of traditional methods, enabling velocity retrieval at sub-meter accuracy. Reducing interpolation errors by integrating satellite, UAV, and GNSS data into uncertainty-aware systems has made it possible to detect subtle dynamic phenomena such as intense strain gradients, micro-accelerations near ice cliffs, and surge precursors. These developments also demonstrate that combining physically based knowledge with data-driven inference can overcome several long-standing challenges specific to debris-covered ice, such as thermal heterogeneity and low surface contrast, that traditional techniques cannot fully resolve.
However, realizing the full potential of these methods will require larger datasets, greater computational capacity, and validation of the nonlinear interactions between debris and climate. By 2030, AI-driven glacier surveillance could become fully operational through concerted international effort and standardized validation procedures. These developments hold transformative potential for better prediction of glacier mass balance, water-resource management, and climate-adaptation interventions in vulnerable mountain areas. Finally, AI-driven glacier monitoring places DCG research at the forefront of climate intelligence, marking an evolutionary step from descriptive mapping toward predictive Earth-system modeling. To realize this potential, the field must overcome persistent challenges. The most critical and actionable step, as underscored by this review, is the establishment of open, globally representative benchmark datasets for glacier velocity. Such benchmarks are a prerequisite for reproducible workflows, rigorous model evaluation, and the reliable transfer of advanced methods across diverse glacial environments.