Article

Deep Learning Applied to Spaceborne SAR Interferometry for Detecting Sinkhole-Induced Land Subsidence Along the Dead Sea

1 Knell Family Institute for Artificial Intelligence, Weizmann Institute, Rehovot 76100, Israel
2 Geological Survey of Israel, Jerusalem 9692100, Israel
3 Department of Earth and Planetary Sciences, Weizmann Institute, Rehovot 76100, Israel
* Author to whom correspondence should be addressed.
Remote Sens. 2026, 18(2), 211; https://doi.org/10.3390/rs18020211
Submission received: 26 November 2025 / Revised: 16 December 2025 / Accepted: 2 January 2026 / Published: 8 January 2026

Highlights

What are the main findings?
  • A UNet-based Deep Learning architecture was used to detect subsidence patterns observed in Interferometric Synthetic Aperture Radar (InSAR) measurements.
  • Different train–test partition schemes based on random patches, temporal division, and geographic distribution showed high inference performance, with object-level metric scores above 0.8.
What are the implications of the main findings?
  • The UNet architecture shows strong potential for automating the delineation of sinkhole-induced subsidence from individual wrapped interferograms, reducing post-processing overhead and human errors, and enhancing the efficiency and reliability of sinkhole activity monitoring.
  • Different train–test partitioning schemes reveal a clear hierarchy of model generalization, quantified using object-level performance metrics, from recognizing partially seen subsidence patterns to transferring across unseen acquisitions and geospatial regions.

Abstract

The Dead Sea (DS) region has experienced a sharp increase in sinkhole formation in recent years, posing environmental and infrastructure risks. The Geological Survey of Israel (GSI) employs Interferometric Synthetic Aperture Radar (InSAR) to monitor sinkhole activity and manually map land subsidence along the western shore of the DS. This process is both time-consuming and prone to human error. Automating detection with Deep Learning (DL) offers a transformative opportunity to enhance monitoring precision, scalability, and real-time decision-making. DL segmentation architectures such as UNet, Attention UNet, SAM, TransUNet, and SegFormer have shown effectiveness in learning geospatial deformation patterns in InSAR and related remote sensing data. This study provides the first comprehensive evaluation of a DL segmentation model applied to InSAR data for detecting land subsidence areas that occur as part of the sinkhole-formation process along the western shores of the DS. Unlike image-based tasks, our new model learns interferometric phase patterns that capture subtle ground deformations rather than direct visual features. As the ground truth in the supervised learning process, we use subsidence areas delineated on the phase maps by the GSI team over the years as part of the operational subsidence surveillance and monitoring activities. These unique data pose challenges for annotation, learning, and interpretability, making the dataset both non-trivial and valuable for advancing applied remote sensing research in the DS region. We train the model across three partition schemes, each representing a different type and level of generalization, and introduce object-level metrics to assess its detection ability. Our results show that the model effectively identifies and generalizes subsidence areas in InSAR data across different setups and temporal conditions and shows promising potential for geographical generalization in previously unseen areas. Finally, large-scale subsidence trends are inferred by reconstructing smaller-scale patches and evaluated for different confidence thresholds.

1. Introduction

Sinkhole formation has significant infrastructure and risk implications (e.g., [1,2,3]). Cover-collapse sinkholes (hereafter referred to as sinkholes) typically develop when subsurface soluble rocks dissolve and form voids that ultimately collapse [4,5,6]. These collapses generally occur within broader, slowly evolving precursor subsidence zones, driven by the ongoing dissolution of the underlying rock [7,8,9,10,11]. The mechanical properties of the rocks control the spatio-temporal characteristics of the subsidence, which can be observed remotely by Interferometric Synthetic Aperture Radar (InSAR) measurements, Light Detection and Ranging (LiDAR) elevation observations, and field surveys [12]. Monitoring these subsidence zones is thus critical for tracking the evolution of sinkhole activity.
The Dead Sea (DS) (Figure 1a) is a hyper-saline terminal lake located within the DS transform, between the Sinai sub-plate and the Arabian plate [13]. The DS is divided into two basins: the southern basin, ~30 km long with a minimal elevation of ~410 m below sea level (mbsl), is occupied by Potash industry evaporation ponds, while the northern basin is ~50 km long with a minimal elevation of ~730 mbsl. Along the shores, sinkholes form due to the dissolution of a 5–20 m-thick, ~11 kyr-old salt (Halite) layer located between 5 and 65 m below the Holocene Ze’elim Formation [14,15,16]. An anomalously thicker salt layer (~35 m) was found in boreholes at the southern basin, suggesting differences in the limnological conditions between the two basins during their formation [14]. At its northern basin, the DS water level drops at a rate of ~1 m/year, exposing the salt layer to underground and meteoric fresh water and accelerating sinkhole formation [17]. Between the two basins of the DS, an elevated sill, the Lynch Strait, was exposed during the 1970s when the lake level dropped below the sill elevation of ~400 mbsl. The Lynch Strait is a complex hydrological and geological feature, involving tectonic activity, mud flats, streams, and underground flow channels, and presents extensive subsidence and sinkhole formation features [18,19].
The DS sinkholes form in two main sedimentary environments: coarse alluvial fans and fine-grained mud flats, where the collapse processes evolve differently within the subsidence areas. Mud-flat collapses tend to be shallow, wide, and centered within their precursor subsidence zones, whereas alluvial-fan collapses typically occur along the edges of broader subsidence areas with longer precursory intervals [12]. In both settings, sinkholes align with concealed tectonic lineaments attributed to underlying faults [15] and remain confined to the subsurface extent of the salt layer, generally a few kilometers inland and below the ~380 m contour.
Since their discovery in the 1980s [20], nearly 4000 sinkhole collapses have been identified and mapped along the DS shores. While the number of collapsed sinkholes remains nearly constant, with small sinkholes merging and new ones collapsing, the sinkhole area doubled in size from ~1 km² to ~2 km² between 2016 and 2022 (Figure 2a). Initially identified by low-resolution InSAR data [21], the sinkholes are confined to gradual subsidence areas that range from under 100 m to over 1000 m across. This natural hazard has a significant impact on infrastructure, the economy, and local communities, leading to closures of the main road, recreational areas and campgrounds, and agricultural plantations, with economic losses estimated at $25 million USD in 1995 for Kibbutz Ein-Gedi (Israel), and $70–$90 million USD for the Arab Potash Company (Jordan). These cumulative impacts illustrate the scale and immediacy of the hazard and underscore the need for reliable monitoring and early-warning capabilities [22,23,24].
InSAR is used for measuring subtle surface displacements over time [10,25,26,27]. There has been significant development in space geodesy and in InSAR in recent years, with increased data quantity and quality. Higher resolution, more frequent acquisition times, and improved InSAR imagery are now available. The resulting large datasets require automatic processing schemes for feasible and scalable identification of signals as well as feature extraction and analysis.
Deep Learning (DL) techniques have proven to be very useful for identifying land deformation in InSAR signals [28,29,30]. These include Convolutional Neural Networks (CNNs) using wrapped interferogram classification [31] or Vision Transformers [30] for detecting deformation fringe patterns. However, most studies have focused on large-scale deformation events, such as volcanic eruptions. In addition, most previous studies have used synthetic training datasets to train their respective DL models. DL approaches, specifically image segmentation, have also been recently used for identifying collapsed sinkholes in RGB visual bands from drone and satellite imagery on the eastern shores of the DS [32], and for detecting landslides in remote sensing data using hybrid data and network architecture schemes [33]. However, no prior work has applied DL to directly detect precursory sinkhole-related land subsidence from InSAR phase maps.
Originally designed for biomedical image segmentation, enabling high-accuracy detection of structures like cells and tissues, UNet [34] remains one of the most powerful and widely used DL architectures for image segmentation. Its effectiveness is especially notable in applications requiring precise object delineation, such as medical imaging, remote sensing, and geospatial analysis. The architecture’s strength lies in its fully convolutional encoder–decoder structure, which captures both global context and fine spatial detail. Despite the continual emergence of new segmentation networks—including Transformer-based architectures such as Segment Anything (SAM) [35], TransUNet [36] and SegFormer [37], as well as attention-enhanced variants like Attention UNet [38]—UNet continues to be a top-performing model in many scientific applications, particularly for medium-sized datasets, due to its strong balance of efficiency, accuracy, and robustness.
In this study, we train and apply DL-based image segmentation models to SAR interferograms, leveraging a methodically assembled dataset shaped by years of expert inspection. Using manually delineated subsidence areas as ground truth, we develop and assess a model for detecting sinkhole-related land subsidence under diverse generalization conditions, aimed at enhancing sinkhole monitoring along the western shores of the DS.

Dataset Background and Context

The Geological Survey of Israel (GSI) has been monitoring the DS sinkholes and associated surface subsidence since 2012. GSI employs annual LiDAR and two-pass InSAR measurements (every 11 days). The LiDAR measurements are used for both mapping collapsed sinkholes (Figure 1b) and accurately removing elevation-phase contribution to the InSAR measurements (Figure 1c). The InSAR measurements are used to delineate the local high-rate (>1 mm/day) surface displacements, manually interpreted as sinkhole-related subsidence [22].
The sinkhole-related patterns on the DS western shore have a relatively strong gradient (1 or more fringes along 20–500 m), increasing with temporal interval. Other pattern sources can be found outside the Halite layer area, presenting a low gradient, spanning over a few kilometers, or associated with a single SAR acquisition and are identified as a unique atmospheric signal [9,22]. Figure 1c shows examples of different signal sources.
Currently, the GSI sinkhole monitoring system applies a manual inspection of the interferograms in three time windows of 11, 44, and 77 days to delineate the sinkhole-related subsidence (Figure 1a). We found that these short temporal baseline interferograms are ideal for mapping sinkhole-induced subsidence along the Dead Sea shores, mitigating decorrelation processes while considering subsidence rates and the RADAR wavelength, and overcoming the shortcomings of phase-scatterer time-series analysis [22].
The number of sinkhole-related subsidence areas changed from ~100 in 2018 to ~500 in 2022, dropping to ~300 in 2023 (Figure 1b). Similar to the sinkholes, some areas merged, decreasing the number of subsidence areas, while other areas stopped their displacement due to changes in the underground processes that control their activity. The subsidence area increased from ~1 km² in 2018 to ~6 km² in 2023 (Figure 2b), three times larger than the area covered by the collapsed sinkholes.
The manual analysis of the data is a time-consuming challenge. Automating this process presents transformative potential in terms of scalability, consistency, and operational efficiency. The fact that, unlike conventional image-based DL tasks, InSAR provides phase patterns representing small land deformations rather than direct visual features affects annotation certainty, learning processes, and model interpretability. As a result, the DL model’s detection accuracy and generalization ability present challenges that necessitate specialized data preprocessing and evaluation methods. We introduce object-level metrics specifically designed for the characteristics of interferometric phase data, enabling model performance assessment beyond traditional pixel-wise accuracy. Using these, we demonstrate and quantify the DL model’s ability to generalize across unseen data, namely, different input arrangements, interferograms with new noise levels and varying temporal conditions, and different geographical regions. The GSI is evaluating the integration of the DL model into the DS Sinkholes Monitoring System workflow, with the potential to enhance monitoring efficiency, support informed decision-making, and improve the scalability of operational geohazard detection.

2. Data Collection and Mapping

We use X-band TerraSAR-X SAR acquisitions. Interferograms have been regularly acquired every 11 days since 2019, with only a handful of missing images. The images are acquired in both ascending and descending orbits for the southern and northern parts of the DS basin, respectively, with some overlap in the Ze’elim to Ein-Gedi area. The SAR images are processed using the Gamma Software [39]. Interferograms were produced at full resolution (3 m × 3 m per pixel) and processed using a 16 × 16 pixel local fringe spectrum adaptive filter [40]. Long spatial wavelength corrections (e.g., orbit re-estimation or scene-wide gradient correction) are unnecessary due to the short spatial scale of the subsidence. The resulting interferograms cover a spatial region of approximately 150 km² (~3 km × 50 km). The proportion of the subsidence region is about 3% (~5 km², see Figure 1a). The interferograms are represented by 32-bit float numbers ranging between −π and π (wrapped interferograms). Each interferogram is associated with a layer of manually mapped polygons representing land-subsiding areas, used for the ongoing inspection and monitoring of sinkhole-related activity. Figure 1c shows an example interferogram with overlaid polygons marking land-subsiding areas (for more details on InSAR processing, see [22]). From January 2019 to December 2021, manual mapping was performed primarily on 11-day interferograms. Since December 2021, mapping has included 11-, 44-, and 77-day interferograms. The additional intervals improved the detection of slower subsiding areas and reduced noise, but also led to less consistent mapping across individual interferograms, as some subsiding areas appeared in multiple spans and were not always annotated. In total, there are 400 fully and partially mapped interferograms. Each is associated with a layer containing tens to hundreds of annotated polygons delineating subsidence areas, amounting to approximately 47,000 polygons.
These annotations serve as the ground truth for training the DL model.

3. Methods

3.1. Pre-Training Data Preparation and Conditioning

The manually mapped subsidence areas serve as the ground truth annotations for supervised learning of the DL segmentation model. To this end, the polygon layers are rasterized into binary mask arrays, which serve as the ground truth masks for this study. During the data preparation stage, each mapped interferogram is subdivided into overlapping patches with configurable patch sizes and strides. According to the typical diameter ranges of the mapped polygons (from a few tens of meters to ~1000 m), we chose a default patch size of 200 × 100 pixels, which is ~600 m × 300 m (N, W), so that each non-zero mask patch includes up to 10 polygons, and a stride of half a patch in each direction (N, W), such that each pixel is covered by 4 overlapping patches. Due to the extremely unbalanced nature of the data, with subsiding areas comprising only about 3% of the entire area of interest, we train the model exclusively on non-zero mask patches (where at least one pixel in the ground truth patch is labeled as 1). For practical reasons described in Section 2, the interferograms from 2022–2023 are only partially mapped. Therefore, for maximum consistency and annotation confidence, we train and test most of the models on the 11-day interval interferograms from 2019–2021, except for special cases discussed in subsequent sections. After applying a minimum threshold of 350 non-zero mask patches for northern frames and 150 patches for southern frames, this dataset contains 127 interferograms and ~42,000 non-zero mask patches, which we divide into training, validation, and test sets.
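The patch-extraction step described above can be sketched as follows. This is a minimal NumPy illustration, not the production pipeline: the patch size (200 × 100 pixels), half-patch stride, and non-zero-mask filter follow the defaults in the text, while the toy interferogram, function name, and array shapes are illustrative.

```python
import numpy as np

def extract_patches(phase, mask, patch_h=200, patch_w=100,
                    stride_h=100, stride_w=50, nonzero_only=True):
    """Slide a window over an interferogram and its binary ground-truth mask.

    With a stride of half a patch in each direction, every interior pixel
    is covered by 4 overlapping patches, as in the default setup.
    """
    patches = []
    H, W = phase.shape
    for top in range(0, H - patch_h + 1, stride_h):
        for left in range(0, W - patch_w + 1, stride_w):
            p = phase[top:top + patch_h, left:left + patch_w]
            m = mask[top:top + patch_h, left:left + patch_w]
            # Keep only patches whose ground-truth mask contains at least
            # one subsidence pixel (the data are ~97% background).
            if nonzero_only and not m.any():
                continue
            patches.append((p, m, (top, left)))
    return patches

# Toy example: a 400x200 wrapped-phase image with one labeled blob.
rng = np.random.default_rng(0)
phase = rng.uniform(-np.pi, np.pi, size=(400, 200)).astype(np.float32)
mask = np.zeros((400, 200), dtype=np.uint8)
mask[150:180, 60:90] = 1
kept = extract_patches(phase, mask)
```

In this toy case only the 4 overlapping windows that touch the labeled blob survive the non-zero-mask filter, mirroring how the heavily imbalanced background is discarded before training.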

3.2. Binary Image Segmentation and Network Architecture

Mapping subsidence areas in wrapped interferograms is formulated as a pixel-wise binary segmentation task. DL architectures have significantly advanced binary image segmentation, offering precise and efficient solutions to manage the complexity of remote sensing data [41,42,43].
We train a UNet-based model in a supervised learning framework to identify subsidence pixels directly from the interferometric phase patterns. The UNet architecture comprises an encoder–bottleneck–decoder structure with skip connections that preserve spatial information while enabling multi-scale feature extraction. For the complete UNet architecture, as well as the selection of the input, output, and ground truth representations used in this study, see Figure 3. A broader overview of UNet can be found in [34].
In addition, we explored two other UNet variants [36,38] to evaluate whether incorporating self-attention mechanisms in the bottleneck or integrating attention-gated skip connections can improve the model’s generalization across features. As these variants did not yield meaningful improvements and consistently performed similarly to, or slightly worse than, the original UNet, their evaluation is not included in the Results. However, implementation details are provided in Supplementary Materials S1.

3.3. Training-Test Partitioning Strategies

We explored three training-validation-test partitioning strategies, starting with a basic approach and progressing to more challenging distributions that better simulate real-world scenarios. The applied partition schemes are: (1) Random by Patches, (2) Random by Interferograms, and (3) Spatial.
1. In the Random by Patches partition, all patches in the dataset are randomly divided into training, validation, and test sets.
   a. In the first variation, there is no restriction on the overlap between patches in the training set and those in the validation and test sets. While some degree of information leakage is expected, evaluating the model in this partition type offers valuable insights into its generalization ability and robustness in detecting subsidence objects despite variations in their appearance within the input phase images.
   b. In the second variation, training and validation/test patches may be drawn from the same interferogram but without spatial overlap.
2. In the Random by Interferograms partition, validation and test sets include patches from (randomly selected) interferograms that were not part of the training. This simulates the model’s performance when processing an unseen interferogram acquired on different dates, thereby introducing distinct atmospheric, ground, acquisition settings, and other temporal conditions.
3. In the Spatial partition scheme, the dataset is geographically divided at a latitude of 31.3°, such that patches to the north form the training set, while southern patches form the validation and test sets. This scenario evaluates the model’s generalization potential across diverse geospatial conditions. In this scheme, we also train and test a model on the full dataset, which includes 11-, 44- and 77-day interval interferograms across the 2019–2023 range, using a dividing latitude of 31.4°. This is done to assess whether geographical generalization scales with dataset size and diversity. In addition, we train a spatial partition model with data augmentations, as described in the next subsection.
For the number of training, validation and test patches in each partition, see Table S1 in Supplementary Materials S2.
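The three partitioning strategies can be sketched as a single splitting routine. This is an illustrative sketch, not the study’s code: the record fields (`ifg_id`, `lat`), scheme names, split fractions, and the toy dataset are assumptions for demonstration.

```python
import random

def partition(patches, scheme, val_frac=0.1, test_frac=0.1,
              split_lat=31.3, seed=42):
    """Split patch records into (train, val, test) under one of the three
    schemes of Section 3.3. Each record is a dict with an acquisition
    identifier 'ifg_id' and a patch-center latitude 'lat' (illustrative)."""
    rng = random.Random(seed)
    if scheme == "random_by_patches":
        shuffled = patches[:]
        rng.shuffle(shuffled)
        n_val = int(len(shuffled) * val_frac)
        n_test = int(len(shuffled) * test_frac)
        return (shuffled[n_val + n_test:], shuffled[:n_val],
                shuffled[n_val:n_val + n_test])
    if scheme == "random_by_interferograms":
        # Hold out whole interferograms: val/test patches come from
        # acquisitions never seen during training.
        ifgs = sorted({p["ifg_id"] for p in patches})
        rng.shuffle(ifgs)
        n_val = max(1, int(len(ifgs) * val_frac))
        n_test = max(1, int(len(ifgs) * test_frac))
        val_ids = set(ifgs[:n_val])
        test_ids = set(ifgs[n_val:n_val + n_test])
        train = [p for p in patches if p["ifg_id"] not in val_ids | test_ids]
        val = [p for p in patches if p["ifg_id"] in val_ids]
        test = [p for p in patches if p["ifg_id"] in test_ids]
        return train, val, test
    if scheme == "spatial":
        # Geographic split: northern patches train, southern patches val/test.
        north = [p for p in patches if p["lat"] >= split_lat]
        south = [p for p in patches if p["lat"] < split_lat]
        half = len(south) // 2
        return north, south[:half], south[half:]
    raise ValueError(scheme)

# Illustrative records: 100 patches drawn from 10 interferograms.
patches = [{"ifg_id": i % 10, "lat": 31.0 + 0.001 * i} for i in range(100)]
train_sp, val_sp, test_sp = partition(patches, "spatial", split_lat=31.05)
train_if, val_if, test_if = partition(patches, "random_by_interferograms")
```

The key design difference is what each scheme holds out: individual patches, whole acquisitions, or a whole geographic region, which is exactly the hierarchy of generalization the paper evaluates.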

3.4. Data Augmentation for Spatial Partition Scheme

In the expanded spatial-partition setup (larger dataset and higher dividing latitude), we additionally apply data augmentations during training to further explore generalization potential. To simulate variability in geological settings and deformation patterns, we apply a set of data augmentations during training and select the configuration that yields the highest validation score. This augmentation set includes random rotations (±30°, applied with probability p = 0.4) to mimic diverse orientations of geospatial features, elastic transformations (p = 0.3) and grid distortions (p = 0.2) to introduce local, non-rigid structural variability, and random brightness–contrast adjustments (p = 0.3) to emulate different subsidence gradients and fringe sharpness. Gaussian noise is added (p = 0.3) to reflect potential signal degradation under varying ground and coherence conditions. All augmentations are applied consistently to both the interferometric phase image and its corresponding mask.
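A key requirement above is that geometric augmentations be applied identically to the phase patch and its mask. The sketch below illustrates this with only a horizontal flip and additive Gaussian noise (not the full rotation/elastic/brightness pipeline described in the text); probabilities, the noise level, and the re-wrapping step are illustrative assumptions.

```python
import numpy as np

def augment(phase, mask, rng, p_flip=0.5, p_noise=0.3, noise_std=0.1):
    """Simplified augmentation sketch (flip + Gaussian noise only).

    The geometric transform is applied identically to the phase patch and
    its mask so labels stay aligned; noise touches the phase only.
    """
    if rng.random() < p_flip:
        phase, mask = phase[:, ::-1].copy(), mask[:, ::-1].copy()
    if rng.random() < p_noise:
        noisy = phase + rng.normal(0.0, noise_std, phase.shape)
        # Re-wrap so the augmented phase stays within (-pi, pi].
        phase = np.angle(np.exp(1j * noisy))
    return phase, mask

rng = np.random.default_rng(7)
phase = rng.uniform(-np.pi, np.pi, size=(200, 100)).astype(np.float32)
mask = np.zeros((200, 100), dtype=np.uint8)
mask[:, 0] = 1  # marker column on the left edge
aug_phase, aug_mask = augment(phase, mask, rng, p_flip=1.0, p_noise=1.0)
```

The re-wrapping step matters for interferometric inputs: naive additive noise would push values outside the wrapped-phase range, producing samples the network never sees at inference time.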

3.5. Training and Loss Function

We train the UNet model for the three partition types and their variations. The loss function, $\mathcal{L}$, minimized during training is the sum of two loss functions commonly used for binary segmentation problems:
$$\mathcal{L} = -\frac{1}{N}\sum_{i=1}^{N}\left[\,y_{\mathrm{GT},i}\log y_{\mathrm{Pred},i} + \left(1 - y_{\mathrm{GT},i}\right)\log\left(1 - y_{\mathrm{Pred},i}\right)\right] + 1 - \frac{2\sum_{i=1}^{N} y_{\mathrm{Pred},i}\,y_{\mathrm{GT},i}}{\sum_{i=1}^{N} y_{\mathrm{Pred},i} + \sum_{i=1}^{N} y_{\mathrm{GT},i}}$$
The first term is the Binary Cross-Entropy Loss with Logits (BCEwl) [44] and the second is an overlap measure called the Dice Loss [45], which equals (1 − Dice Coefficient). Here, $i$ indicates the pixel index and the sums run over the number of pixels in each patch (input image), $N$. $y_{\mathrm{GT},i}$ are the ground-truth (0/1) pixel values and $y_{\mathrm{Pred},i}$ are the model prediction values, given by the Sigmoid function $\sigma(z_i)$ applied to the raw output of the model, $z_i$, so that their values range between 0 and 1. At the evaluation stage (validation or test), the predicted values are computed by thresholding the output at 0.5 and are assigned binary values (0 or 1),
$$y_{\mathrm{Pred},i}^{\mathrm{train}} = \sigma(z_i);$$
$$y_{\mathrm{Pred},i}^{\mathrm{eval}} = \begin{cases} 1, & \sigma(z_i) \ge 0.5 \\ 0, & \sigma(z_i) < 0.5. \end{cases}$$
We minimize the loss function using the Torch RMSprop optimizer [46], with an initial learning rate of $10^{-5}$ and a training batch size of 128. We train the model for a maximum of 150 epochs. To evaluate the model performance after each epoch (a full pass over the entire dataset) and select the best-performing version, the Dice Coefficient is computed on the validation set, serving as the validation score. Additionally, a learning-rate scheduler reduces the learning rate when validation performance plateaus.
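The combined loss and the train/eval prediction rules can be written out directly. The sketch below is a NumPy forward-pass illustration of the equations above (training itself uses the Torch equivalents, e.g., `BCEWithLogitsLoss`); the clipping epsilon and the toy example are assumptions added for numerical safety and demonstration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_dice_loss(z, y_gt, eps=1e-7):
    """BCE-with-logits plus Dice loss, as in the combined loss above.

    z: raw model outputs (logits); y_gt: binary ground-truth mask.
    """
    y_pred = np.clip(sigmoid(z), eps, 1 - eps)  # numerical safety for logs
    bce = -np.mean(y_gt * np.log(y_pred) + (1 - y_gt) * np.log(1 - y_pred))
    dice = 1.0 - 2.0 * np.sum(y_pred * y_gt) / (np.sum(y_pred) + np.sum(y_gt) + eps)
    return bce + dice

def predict_eval(z):
    """Evaluation-time rule: sigmoid output thresholded at 0.5."""
    return (sigmoid(z) >= 0.5).astype(np.uint8)

# Sanity check: confident, correct logits drive the loss toward zero.
y_gt = np.zeros((8, 8))
y_gt[2:5, 2:5] = 1.0
z_perfect = np.where(y_gt == 1, 20.0, -20.0)
loss_perfect = bce_dice_loss(z_perfect, y_gt)
```

Because both terms vanish for a perfect prediction and each penalizes a different failure mode (per-pixel calibration vs. region overlap), their sum is a common choice for imbalanced binary segmentation.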

3.6. Evaluation Metrics and Inference

3.6.1. Metrics

Pixel-Level Metrics: We use several metrics to estimate the model’s performance for each partition type and its subcases. The first is the pixel-wise Dice Coefficient, which is a stricter metric used for model validation and parameter tuning during training, as discussed in Section 3.5. The second and third metrics are pixel-wise Recall and Precision:
$$\mathrm{Recall} = \frac{TP}{TP + FN},$$
$$\mathrm{Precision} = \frac{TP}{TP + FP},$$
where TP, FN, and FP are the True Positive, False Negative, and False Positive counts, respectively. Recall (Equation (4)) is the ratio between the number of correctly predicted subsidence pixels and the total number of ground truth subsidence pixels. Precision (Equation (5)) is the ratio between the number of correctly predicted subsidence pixels and the total number of pixels predicted as subsidence (i.e., all pixels assigned a value of 1 by the model). The pixel-level metrics are computed on the test sets to verify consistency with the validation results and overall performance trends. However, in what follows, we focus on the object-level (OL) metric scores, described below.
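The pixel-level metrics reduce to a few array operations. A minimal sketch (the small masks in the example are illustrative):

```python
import numpy as np

def pixel_metrics(pred, gt):
    """Pixel-wise Recall, Precision, and Dice for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)    # correctly predicted subsidence pixels
    fp = np.sum(pred & ~gt)   # predicted subsidence where there is none
    fn = np.sum(~pred & gt)   # missed subsidence pixels
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    dice = 2 * tp / (2 * tp + fp + fn) if tp + fp + fn else 1.0
    return recall, precision, dice

# Example: 4 ground-truth pixels, 3 detected plus 1 false alarm.
gt = np.zeros((4, 4), dtype=np.uint8)
gt[0, :] = 1
pred = np.zeros((4, 4), dtype=np.uint8)
pred[0, :3] = 1
pred[1, 0] = 1
r, p, d = pixel_metrics(pred, gt)
```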
Object-Level Metrics: Due to the nature of subsidence area annotation and the inherent ambiguity in boundary delineation, the primary metrics used during testing are the Object-Level (OL) Recall and Object-Level Precision, denoted RecallOL and PrecisionOL. These metrics quantify whether a ground-truth object (a mapped subsidence area) is correctly or incorrectly identified within a specified tolerance, rather than requiring precise boundary agreement. They are defined as:
$$\mathrm{Recall}_{OL} = \frac{TP_{OL}}{TP_{OL} + FN_{OL}},$$
$$\mathrm{Precision}_{OL} = \frac{TP_{OL}}{TP_{OL} + FP_{OL}}.$$
Here, $TP_{OL}$ is the total area of ground-truth objects that were correctly detected, $FN_{OL}$ is the total area of undetected objects, and $FP_{OL}$ is the area of falsely detected objects absent from the ground truth.
Each ground-truth polygon $GT_{OL}$ is considered detected if it satisfies the Intersection condition:
$$\frac{A\left(GT_{OL} \cap \bigcup_{j} P_{OL,j}^{\,b}\right)}{A\left(GT_{OL}\right)} \ge I_{Th},$$
where $A$ denotes area, $\cap$ denotes intersection, $\bigcup_{j} P_{OL,j}^{\,b}$ denotes the union of all predicted polygons, each expanded by a buffer of $b$ pixels, and $I_{Th}$ is the Intersection Threshold defining the minimum required intersection fraction for detection. Similarly, an area in a predicted polygon, $P_{OL}$, is counted as a false prediction if it satisfies:
$$1 - \frac{A\left(P_{OL} \setminus \bigcup_{j} GT_{OL,j}^{\,b}\right)}{A\left(P_{OL}\right)} < I_{Th},$$
where $\setminus$ denotes exclusion. These object-wise detection rules, applied to every ground-truth and predicted polygon within a patch, yield the patch OL Recall and Precision values used in the evaluation. For the full test set, the final $\mathrm{Recall}_{OL}$ and $\mathrm{Precision}_{OL}$ scores are obtained by computing ground-truth-area-weighted averages across all patches. The detailed OL metrics formulation is provided in Supplementary Materials S3.
ITh and b are referred to as the Intersection Tolerance parameters. Their default values are determined by quantifying the annotation uncertainty of the smallest mapped subsidence areas. The buffer is set to b = 5 pixels, selected to be smaller than the minimum approximate subsidence diameter and to constitute 5% of the patch height and 10% of its width. The default Intersection Threshold ITh = 0.7 follows the commonly used “majority-overlap’’ principle in object-detection evaluation, requiring that most of a ground-truth polygon be intersected for it to be considered detected. This choice balances annotation uncertainty with a sufficiently strict detection criterion.
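A mask-based sketch of the Intersection condition for RecallOL follows (the PrecisionOL side is analogous). Here the polygon buffer is approximated by a square-structuring-element dilation of the rasterized prediction, and ground-truth objects are passed as per-object boolean masks; the function names, the naive dilation, and the toy geometry are illustrative assumptions, not the study's polygon-based implementation.

```python
import numpy as np

def dilate(mask, b):
    """Naive square dilation by b pixels, standing in for the polygon buffer."""
    out = mask.copy()
    for dy in range(-b, b + 1):
        for dx in range(-b, b + 1):
            shifted = np.zeros_like(mask)
            shifted[max(dy, 0): mask.shape[0] + min(dy, 0),
                    max(dx, 0): mask.shape[1] + min(dx, 0)] = \
                mask[max(-dy, 0): mask.shape[0] + min(-dy, 0),
                     max(-dx, 0): mask.shape[1] + min(-dx, 0)]
            out |= shifted
    return out

def ol_recall(gt_objects, pred_mask, i_th=0.7, b=5):
    """Area-weighted Object-Level Recall for one patch.

    An object counts as detected if the buffered prediction covers at
    least a fraction i_th of its area (the Intersection condition).
    """
    pred_buf = dilate(pred_mask.astype(bool), b)
    tp_area = fn_area = 0
    for obj in gt_objects:
        area = obj.sum()
        covered = (obj & pred_buf).sum()
        if covered / area >= i_th:
            tp_area += area
        else:
            fn_area += area
    total = tp_area + fn_area
    return tp_area / total if total else 0.0

# One fully detected 10x10 object and one missed 5x5 object.
pred = np.zeros((50, 50), dtype=bool)
pred[0:10, 0:10] = True
obj1 = np.zeros((50, 50), dtype=bool)
obj1[0:10, 0:10] = True
obj2 = np.zeros((50, 50), dtype=bool)
obj2[30:35, 30:35] = True
score = ol_recall([obj1, obj2], pred)  # 100 / (100 + 25) = 0.8
```

The area weighting means a missed large polygon hurts the score more than a missed small one, matching the ground-truth-area-weighted averaging described above.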
Figure 4 illustrates the motivation and structure of the Object-Level (OL) evaluation metrics used in this work. Figure 4a highlights the variability and ambiguity in manually mapped subsidence areas across interferograms. These challenges arise from differences in shape, size, spatial extent, and textural appearance—including decorrelated regions—as well as the difficulty of distinguishing true subsidence signatures from background phase noise. Together, these factors underscore the need for dedicated, carefully designed evaluation metrics. Figure 4b visualizes the Intersection-Tolerance framework used for computing RecallOL, demonstrating how detection is determined by the intersection ratio between each ground-truth polygon and the buffered model predictions. Figure 4c illustrates the rationale behind the choice of default Intersection Tolerance parameters by quantifying the uncertainty of a small, ambiguous mapped subsidence.

3.6.2. Per-Patch Evaluation and Full-Range Inference Reconstruction

To assess the model performance, we compare the ground-truth masks derived from the manually mapped polygons with the model’s predictions and compute the metrics defined in Section 3.6.1 for two output modes:
(1)
local (per-patch) evaluation, performed on all non-zero ground-truth patches in the test dataset, and
(2)
full-range inference reconstruction, in which predictions are combined in spatial order across overlapping patches to generate a continuous output over a target region.
Per-patch evaluation: We compute the pixel-level and OL metric scores on the non-zero mask patches in the test datasets of the three partition schemes outlined in Section 3.3. The OL metrics are computed using the default Intersection Tolerance parameters: ITh = 0.7 and b = 5 pixels (~15 m). For the Random by Patches partition with overlap, we also calculate the OL metrics for stricter Intersection Tolerance parameters, ITh = 0.9 and b = 2 pixels (~6 m). For all other models, we calculate the OL metrics with softer Intersection Tolerance parameters, ITh = 0.5 and b = 10 pixels (~30 m), in addition to the default parameters.
Full-range reconstruction is performed only for the Random by Interferograms partition, where entire interferograms are withheld during training and used exclusively for testing. We reconstruct predictions over a band surrounding the subsidence trend line (longitudes ~35.35–35.45°, latitudes ~31.2–31.7°). Patches are processed in spatial order using a quarter-patch stride, resulting in each interior pixel being included in up to sixteen overlapping patches. This enables defining a Confidence Factor as the fraction of overlapping patches that assign a positive (1) label to a given pixel, taking values in {0, 0.0625, …, 1}. Thresholding this quantity using the Reconstruction Threshold (RTh) produces the final binary reconstructed inference. We report results for RTh = 0.125, 0.25, and 0.5, corresponding to requiring positive predictions in at least two, four, and eight of the sixteen overlapping patches, respectively.
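The Confidence Factor voting described above can be sketched as an accumulation of overlapping patch predictions. This is a minimal illustration: the two full-size patches stacked at the same location stand in for the sixteen quarter-stride overlaps, and the function name and tuple layout are assumptions.

```python
import numpy as np

def reconstruct(patch_preds, shape, patch_h=200, patch_w=100, r_th=0.25):
    """Combine overlapping binary patch predictions into a full-range map.

    patch_preds: list of (top, left, binary_pred) tuples from a sliding
    sweep. The Confidence Factor of a pixel is the fraction of covering
    patches that label it positive; thresholding it at r_th gives the
    final binary reconstruction.
    """
    votes = np.zeros(shape, dtype=np.float32)
    counts = np.zeros(shape, dtype=np.float32)
    for top, left, pred in patch_preds:
        votes[top:top + patch_h, left:left + patch_w] += pred
        counts[top:top + patch_h, left:left + patch_w] += 1
    confidence = np.where(counts > 0, votes / np.maximum(counts, 1), 0.0)
    return confidence >= r_th, confidence

# Two overlapping "patches": one all-positive, one all-negative.
ones = np.ones((200, 100), dtype=np.float32)
zeros_p = np.zeros((200, 100), dtype=np.float32)
binary, conf = reconstruct([(0, 0, ones), (0, 0, zeros_p)], (200, 100))
```

Raising `r_th` demands agreement from more of the overlapping patches, which trades recall for precision exactly as the RTh sweep in the text does.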
Since full-range reconstruction evaluates numerous all-zero patches far from the subsidence trend, it becomes especially sensitive to missing annotations and to non–sinkhole deformation sources (e.g., anthropogenic deformation, coastal compaction, clay-related motion), which can artificially increase false positives and depress Precision. We therefore treat Recall OL as the primary metric, while still reporting Precision OL and its dependence on the Reconstruction Threshold, evaluating both metrics under the default and softer (ITh = 0.5 and b = 10) Intersection Tolerance parameters.
To further characterize generalization across subsidence types, we perform a polygon-level analysis on the test set supplemented with four additional interferograms from 2022–2023, yielding ten previously unseen interferograms containing 1309 manually mapped polygons. For RTh = 0.375, we evaluate detection performance as a function of geometric (area, roundness), textural (normalized phase standard deviation, mean local 2D phase gradient), and temporal (acquisition year) attributes. For each feature, we report the distributions of detected versus undetected polygons and their corresponding detection ratios. Additional details on feature computation are provided in Supplementary Materials S4.
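The per-polygon attributes can be computed from the phase patch and the polygon's rasterized mask. The definitions below are illustrative stand-ins for those in Supplementary Materials S4: the roundness formula, the pi-normalization of the phase standard deviation, and the 3 m × 3 m pixel-area assumption are all simplifications made for this sketch.

```python
import numpy as np

def polygon_features(phase, obj_mask, pixel_area=9.0):
    """Geometric and textural attributes of one mapped polygon.

    phase: 2D wrapped-phase array; obj_mask: boolean mask of the polygon.
    pixel_area assumes 3 m x 3 m pixels (illustrative).
    """
    npix = int(obj_mask.sum())
    ys, xs = np.nonzero(obj_mask)
    # Bounding-box diameter as a simple size proxy.
    diameter = max(np.ptp(ys), np.ptp(xs)) + 1
    gy, gx = np.gradient(phase)
    return {
        "area_m2": npix * pixel_area,
        # Area relative to the circle spanning the bounding-box diameter.
        "roundness": npix / (np.pi * (diameter / 2.0) ** 2),
        # Phase std inside the polygon, normalized by pi.
        "phase_std_norm": float(phase[obj_mask].std()) / np.pi,
        # Mean local 2D phase-gradient magnitude inside the polygon.
        "mean_grad": float(np.hypot(gy, gx)[obj_mask].mean()),
    }

# A flat-phase 10x10 square polygon: zero texture, 900 m^2 footprint.
square = np.zeros((20, 20), dtype=bool)
square[5:15, 5:15] = True
feats = polygon_features(np.zeros((20, 20)), square)
```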

4. Results

4.1. Per-Patch Inference

Table 1 presents the metric scores described in Section 3.6.1, calculated on the non-zero mask patches in the test datasets of all partition schemes and their subcases, outlined in Section 3.3. The OL metric scores are marked in bold.
Figure 5 presents examples of model results on patches from the test sets for each partition scheme. For each partition, four examples are shown, ordered by decreasing OL metric scores (from top-left to bottom-right). Each example displays the input patch (phase values between −π and π), the corresponding ground-truth mask, and the predicted mask produced by the model, illustrating performance across a range of subsidence patterns.
1. Random by Patches Partition Scheme
a. First Variation: Random by Patches with Overlap
Table 1 (part 1a) summarizes the metric scores of the model performance for the Random by Patches scheme where patches in the validation/test sets are allowed to partially overlap with patches in the training set. The OL metrics, using the default Intersection Tolerance parameters, are notably high—0.98 for both RecallOL and PrecisionOL. This may indicate partial overfitting, as discussed later. Tightening the Intersection Tolerance parameters slightly lowers the scores but still maintains strong performance with 0.92 and 0.96 for RecallOL and PrecisionOL, respectively. In the corresponding Figure 5a the first two examples show cases of perfect performance—both metrics equal to 1—and the last two show cases with reduced RecallOL and PrecisionOL, respectively.
b. Second Variation: Random by Patches without Overlap
For the default Intersection Tolerance parameters, the scores drop compared to the first variation, to 0.82 and 0.87 for RecallOL and PrecisionOL—supporting the interpretation of partial overfitting when overlap is allowed (part 1b in Table 1). For softer Intersection Tolerance parameters, the scores are 0.9 and 0.92 for RecallOL and PrecisionOL. In Figure 5b, the first example illustrates near-perfect performance, the second and third exhibit reduced PrecisionOL and RecallOL, respectively, and the fourth example shows weaker performance, with both metrics falling well below 0.8.
2. Random by Interferograms Partition Scheme
The average RecallOL and PrecisionOL in this scheme are 0.81 and 0.88, respectively, for the default Intersection Tolerance parameters, reaching 0.9 and 0.93 for the softer parameters (part 2 in Table 1). In the corresponding Figure 5c, the first example illustrates near-perfect performance, the second and third exhibit reduced PrecisionOL and RecallOL, respectively, and the fourth shows weaker performance, with both metrics falling below 0.8.
3. Spatial Partition Scheme
In the Spatial partition scheme, for the model trained on the 11-day interval interferograms from 2019–2021 with a dividing latitude at 31.3°, the RecallOL and PrecisionOL are 0.72 and 0.8, respectively, for the default Intersection Tolerance parameters, increasing to 0.81 and 0.87, respectively, for the softer Intersection Tolerance parameters (part 3, first row in Table 1). In Figure 5d, the first example illustrates near-perfect performance, the second and third exhibit reduced PrecisionOL and RecallOL, respectively, and the fourth shows weaker performance, with both metrics falling well below 0.7. The model trained on the larger dataset with a northern latitude threshold (part 3, second row) demonstrates improved performance, with RecallOL and PrecisionOL at 0.78 and 0.83, respectively, for the default Intersection Tolerance parameters and 0.86 and 0.88 for the softer ones. In the third row of part 3 in Table 1, we report the metric scores obtained after applying the data augmentations described in Section 3.5 to the model trained on the larger dataset. A substantial improvement is observed, with both RecallOL and PrecisionOL reaching 0.86 under the default Intersection Tolerance parameters, and further improving to 0.91 and 0.9, respectively, under softer intersection conditions.

4.2. Full Range Inference Reconstruction

Figure 6 and Figure 7 show reconstructed predictions of unseen northern and southern frames from the Random-by-Interferograms test set over a ~30 km × 8 km band (Longitude ~35.35–35.425°, Latitude ~31.2–31.63°). Panels (a1,b1,a2,b2) display the phase image and its ground-truth mask; panels (c1–e1,c2–e2) show predictions for increasing reconstruction thresholds RTh, and panel (f2) presents the confidence-factor map used to derive them.
Table 2 reports OL metrics for three RTh values under strict and soft Intersection Tolerance parameters. At RTh = 0.125 (demanding positive prediction in ≥2/16 overlapping patches), RecallOL reaches 0.9 and 0.94, while PrecisionOL is 0.73 and 0.75. Increasing RTh to 0.25 lowers RecallOL to 0.82 and 0.90 and raises PrecisionOL to 0.82 and 0.84. At RTh = 0.5 (positive in ≥8/16 patches), RecallOL decreases to 0.71 and 0.80, while PrecisionOL increases to 0.89 and 0.90.
The results just described are supported by the images in Figure 6 and Figure 7, which show the reproduction of the elongated subsidence trend line along the shore. A closer examination of these figures reveals that for RTh = 0.125, the inferred trend line in the predicted mask panel aligns more closely with the true mask but exhibits noisy regions outside of it, indicating high Recall and low Precision (Figure 6(c1,c2) and Figure 7(c1,c2,c3)). Conversely, with increased RTh, the noise diminishes but the trend line becomes incomplete, with missing true polygons (Figure 6(d1,e1,d2,e2) and Figure 7(d1,e1,d2,e2,d3,e3)), which is reflected in lowered Recall and increased Precision scores in Table 2. These observations are evident in both the full range images and the corresponding zoomed-in views.
Figure 8 presents the model’s detection analysis on the test dataset, supplemented with four additional interferograms from 2022 and 2023, resulting in a total of 1309 manually mapped subsidence polygons. The distribution of detected and undetected polygons at RTh = 0.375, using the default Intersection Tolerance parameters (ITh = 0.7, b = 5 pixels), is shown as a stacked histogram with respect to geometric features (area, roundness), textural features (normalized phase standard deviation and mean local 2D phase gradient), and the temporal attribute (acquisition year). Each feature’s histogram is bounded by its distribution range and accompanied by the corresponding detection ratio curve. The detection ratio remains stable across most of the area and roundness ranges, with a slight dip for smaller areas around the peak of the area distribution (below 5000 m², corresponding to diameters of ~20–80 m), where it increases from below 0.8 to above 0.8, peaking around 9000 m² (approximately 100 m diameter) with a maximum ratio of 0.9. For normalized phase standard deviation and mean local phase gradient, the detection ratio starts low (0.4 and 0.57, respectively), stabilizes above 0.8 once these features reach around 0.1, and then declines again at higher values. Importantly, the detection ratio not only remains stable for interferograms acquired outside of the training period (2022 and 2023) but also increases to 0.9.
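The detection ratio curves in Figure 8 amount to a per-bin ratio of detected to total polygons over each feature's distribution range. A minimal sketch, with the bin count as an assumption:

```python
import numpy as np

def detection_ratio(feature, detected, bins=10):
    """Detection ratio (detected / total) per feature-value bin.

    feature: 1-D array of a polygon attribute (e.g., area);
    detected: boolean array marking polygons detected by the model.
    """
    edges = np.histogram_bin_edges(feature, bins=bins)
    total, _ = np.histogram(feature, bins=edges)
    hit, _ = np.histogram(feature[detected], bins=edges)
    # Avoid division by zero in empty bins; ratio there is reported as 0
    ratio = np.divide(hit, total, out=np.zeros_like(hit, dtype=float),
                      where=total > 0)
    return edges, total, hit, ratio
```

Stacking `hit` and `total - hit` per bin reproduces the detected/undetected histogram, and `ratio` gives the overlaid curve.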

4.3. Computational Performance

To evaluate the potential deployment and integration of the model within the GSI system, inference on a full range interferogram (longitude 35.3°–35.7°, latitude 31.2°–31.8°) of approximately 60 km × 50 km was conducted on a machine equipped with a single NVIDIA A40 GPU (49 GB VRAM), configured in exclusive mode, supported by 8 threads per task, 320 GB of reserved memory, and a high-speed NVMe SSD for efficient data access. The GPU utilized driver version 535.154.05 and CUDA 12.2 for acceleration.
The inference of a new full range interferogram involves the following steps: subdividing the interferogram into patches based on the trained model’s input size, incorporating a LiDAR elevation model mask to exclude areas outside the reliable region, running the model on the patches, reconstructing the inference as described in Section 3.6.2, and saving the output as a polygon layer file. Using the setup detailed above, the entire process achieved a runtime of 1 min per full range interferogram, generating 246 predicted subsidence area polygons. This underscores the efficiency gained through automation, simplifying and expediting what is otherwise a labor-intensive task requiring a few days of manual work.
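The final export step, converting the reconstructed binary mask into a polygon layer, amounts to grouping connected positive pixels into distinct objects. A pure-NumPy 4-connected labeling sketch (in practice a GIS vectorization tool would perform this step):

```python
import numpy as np
from collections import deque

def count_polygons(mask):
    """Count 4-connected components of a binary inference mask; each
    component corresponds to one predicted subsidence polygon.
    """
    seen = np.zeros_like(mask, dtype=bool)
    H, W = mask.shape
    count = 0
    for si in range(H):
        for sj in range(W):
            if mask[si, sj] and not seen[si, sj]:
                count += 1                      # new component found
                q = deque([(si, sj)])
                seen[si, sj] = True
                while q:                        # breadth-first flood fill
                    i, j = q.popleft()
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if (0 <= ni < H and 0 <= nj < W
                                and mask[ni, nj] and not seen[ni, nj]):
                            seen[ni, nj] = True
                            q.append((ni, nj))
    return count
```

Applied to the full-range reconstructed mask, this component count corresponds to the number of predicted subsidence area polygons reported above.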

5. Discussion

In this study, we implement and evaluate for the first time a DL model to identify land subsidence and sinkhole formation in InSAR data along the western shores of the Dead Sea. Using a UNet architecture applied to interferograms, we train the system to automatically identify land subsidence areas. The training and test sets are based on manually annotated areas from 2019–2021 provided by the Geological Survey of Israel (GSI).

5.1. Method Limitations

Region: The research area is located in an arid environment with low precipitation, sparse anthropogenic activity, and almost no vegetation. These attributes allow the use of high-resolution X-band SAR data, which is sensitive to vegetation and to small changes at the pixel level. The low levels of ground disturbance allow high interferometric phase coherence, except following, e.g., sporadic local rain, flash floods, anthropogenic activity, or sinkhole collapses.
Annotation: The model’s ground truth is based on manually annotated subsidence, performed under the framework of the sinkhole monitoring system at the GSI [22]. While the manual annotations are supported by field inspections and LiDAR measurements, the manual process is inherently limited due to the amorphous nature of the subsidence boundaries. The resulting differences between annotated and unannotated signals are small, but may impact the model’s learning, testing, and validation procedures.
InSAR signal limitations: Subsiding areas can surround or intersect decorrelated areas. Rapid changes such as sinkhole collapses or flash floods reduce the InSAR phase coherence. In addition, due to the LOS observation, SAR shadow and layover along sharp topographic features mask the phase signal, leaving no-data areas that partly overlap the subsidence signal. These limitations complicate both manual mapping and model training and evaluation.
Segmentation: The segmentation learning task remains highly challenging due to several inherent factors, including the amorphous and noisy appearance of subsidence regions in phase images, their wide range of sizes, and the absence of meaningful contextual cues, all of which complicate their detection and differentiation from surrounding areas.
Non-Sinkhole related land deformation: Distinguishing sinkhole-related subsidence from other land movements, such as anthropogenic activities (e.g., gravel piles, road maintenance) or shoreline subsidence caused by soil compaction, is non-trivial. Since these processes were not manually mapped in the data, they were not included as a separate class, and the model occasionally identified them as subsidence. This suggests that their patterns partly resemble features learned from sinkhole-related subsidence. However, many of these phenomena are in fact quite distinct (e.g., long shoreline depletion zones) and occur in specific regions, thus it is straightforward to filter them out from predictions. Importantly, the current model also aids in re-annotating such cases and will support transfer learning in future iterations as new data becomes available.
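Since these non-sinkhole phenomena occur in known, specific regions, a simple post hoc filter can suppress them. A sketch assuming hypothetical bounding-box exclusion zones in lon/lat (the actual filtering criteria used operationally are not specified here):

```python
import numpy as np

def filter_known_zones(centroids, zones):
    """Drop predicted polygons whose centroids fall inside known
    non-sinkhole deformation zones.

    centroids: (N, 2) array of (lon, lat) polygon centroids;
    zones: list of (lon_min, lon_max, lat_min, lat_max) tuples --
    illustrative exclusion boxes, e.g., around shoreline compaction areas.
    """
    keep = np.ones(len(centroids), dtype=bool)
    for lon0, lon1, lat0, lat1 in zones:
        inside = ((centroids[:, 0] >= lon0) & (centroids[:, 0] <= lon1) &
                  (centroids[:, 1] >= lat0) & (centroids[:, 1] <= lat1))
        keep &= ~inside          # discard predictions inside this zone
    return centroids[keep]
```

In practice, more distinctive shapes (e.g., elongated shoreline depletion zones) could instead be screened by the geometric attributes discussed in Section 4.2.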
No spatial and temporal context: To examine DS subsidence as a general case study, we excluded spatial and temporal information that might be unique to the region (e.g., the characteristic shoreline trend line). This choice challenged the model because it had to rely solely on local patch information, while manual annotations may have implicitly incorporated broader spatial and temporal context.
Architectural Enhancements Constrained by Data Limitations: To better exploit the available data, we implemented two additional UNet variants [36,38] that incorporate distinct attention mechanisms at critical points in the architecture—specifically, the bottleneck and the skip connections (see Supplementary Materials S1). These designs aim to enhance the extraction of meaningful spatial features and suppress noise or irrelevant information. Despite their theoretical advantages, these variants did not yield measurable improvements over the baseline UNet. This suggests that given the current data limitations, the network architecture appears to fully exploit the available information in the data. Moreover, the attention-based implementations increase the number of model parameters, which may lead to overfitting given the limited dataset size. Nonetheless, the proposed attention-based extensions remain promising and will be revisited in the following studies, as the dataset is expanded and further refined.

5.2. Generalization Potential of the DL Model

The partition experiments highlight the model’s capacity to generalize across different degrees of novelty in the data.
In the Random by Patches scheme, RecallOL and PrecisionOL reach 0.98 under default Intersection Tolerance parameters (Table 1, part 1a; Figure 5a), nearly reproducing the ground-truth data. Although this reflects partial overfitting due to patch overlap, it also shows that the model can detect subsidence areas when they are shifted, fragmented, or embedded in altered contexts. Removing overlap lowers performance (RecallOL = 0.82, PrecisionOL = 0.87; Table 1, part 1b; Figure 5b), but softer parameters recover values above 0.9, indicating that the model identifies subsidence signals beyond direct memorization.
In the Random by Interferograms scheme, the model is evaluated on entirely unseen interferograms. Performance is comparable to the non-overlap patch case and again exceeds 0.9 under softer parameters. This demonstrates generalization across temporal variability—including atmospheric differences, ground-state changes, acquisition setups, and noise patterns—while still detecting subsidence areas with geometrical and pattern diversity.
The Spatial partition represents the most demanding generalization test, requiring transfer to regions with distinct geospatial characteristics. When trained on northern regions and tested south of 31.3°, the model attains RecallOL = 0.72 and PrecisionOL = 0.80 (Table 1, part 3; Figure 5d), the lowest scores among the schemes. This reduction reflects both genuine geospatial differences and the smaller, more scattered nature of southern subsidence areas, which are inherently harder for the model and may bias the scores. To mitigate this and test scalability, we expanded the dataset and raised the latitude boundary, improving RecallOL to 0.86 under softer parameters. Applying optimized geospatial augmentations further increased RecallOL to 0.86 under default parameters and 0.91 under softer ones, alongside corresponding Precision gains, demonstrating enhanced geographical transferability.
Overall, the results reveal a hierarchy of generalization: from recognizing partially seen subsidence objects under varied input arrangements in the same acquisitions, to detecting new objects in unseen acquisitions, and finally transferring to geospatially distinct regions with improved robustness when larger datasets and augmentations are used.

5.3. Full-Range Inference Reconstruction

The top panels of Figure 6 and Figure 7 demonstrate the model’s ability to reproduce the full-scene elongated subsidence strip extending for tens of kilometers along the DS shoreline.

5.3.1. False Positives and Inference Confidence

The annotation limitations discussed in Section 5.1 are amplified in full-range inference, where zero-mask patches farther from the subsidence trend increase sensitivity to missing annotations and to subsidence-like signals unrelated to sinkholes (e.g., anthropogenic deformation and clay or coastal compaction), artificially inflating false positives and reducing Precision. In addition, some subsidence areas may be obscured by decorrelation or noise but still partially detected by the model. We therefore consider RecallOL as a more reliable performance indication, as some predictions labeled as false positives may correspond to genuine yet undocumented subsidence.
To further support this point, Figure 9 presents model confidence maps on two unseen 11-day-interval southern and northern frames (March and May 2021) alongside their subsequent 77-day frames (June–September 2021), which serve as references for subsidence evolution. Three cases illustrate situations where manual annotations missed or misrepresented subsidence in the short-interval signals, while the model inferred it with varying confidence. The 77-day interferograms and confidence maps further support these findings by revealing phase patterns indicating the presence of subsidence, providing evidence for the model’s ability to identify obscured subsidence phase patterns beyond manual mapping detection.
In the southern example (top row), the model detects a ~50 m region with ~0.5 confidence. Although this area is largely obscured by strong decorrelation and is therefore absent from the ground truth, it is detected by the model in the short-interval interferogram. In the 77-day frame, this region evolves into a clear subsidence feature inferred with high confidence. In the first northern example (second row), two annotated areas ~120 m apart are merged by the model into a single object, with an additional connected region to the north; the 77-day frame confirms this merged structure and orientation. In the second northern example (third row), the model infers a developing subsidence zone with varying confidence around a single annotated object, later validated in the 77-day frame as a larger merged region.
In addition to potential true positives missed in annotation, false positives appear west of the mapped subsidence areas (Figure 6 and Figure 7, top row). These are typically small, scattered, low-confidence predictions, often in data-gap regions, that emerge under a low Reconstruction Confidence Threshold (Panels c) but disappear as the threshold increases. Raising the threshold, however, also reduces true positives (Panels e), illustrating the Recall–Precision trade-off shown in Table 2 and confirming that higher-confidence predictions better match manually mapped areas.

5.3.2. Polygon-Level Detection Analysis

Figure 8 indicates a slightly reduced detection performance for small subsidence areas, despite their high representation in the dataset, supporting the decision to base the default intersection tolerance parameters on such challenging cases. Detection performance shows limited sensitivity to polygon roundness, but a stronger dependence on textural features, as reflected by trends in normalized phase standard deviation and mean local phase gradient. The model performs less effectively for both low phase variation, associated with weak or ambiguous signals, and very high variation, likely dominated by noise, suggesting that learning relies more on texture than on geometric characteristics. The lower detection rates for smaller regions may therefore result from insufficient textural information. These cases also tend to lie in sparsely represented parts of the textural feature space, indicating that their difficulty is compounded by limited representation in the dataset. Detection performance in such cases could likely be improved by increasing dataset size and variability. Importantly, detection remains consistent for interferograms acquired outside the training period, demonstrating robust temporal generalization.

5.4. The Potential for an Automatic AI Model for the Dead Sea Monitoring System

For deploying the model in the GSI Dead Sea Sinkhole Monitoring System, the model will be trained on all available data patches across all time ranges. To achieve this, we will address missing annotations by adopting the Random by Patches scheme, followed by validation through inference on later, longer-span interferograms and manual inspection. The deployed model will be continuously updated with incoming data and annotations through a transfer learning process and dynamically integrated within the system.
Manual inspection will remain necessary for the highest confidence level following automatic interferogram processing. Nonetheless, as detailed in Section 4.3, manual inspection is expected to become significantly faster and more efficient due to the automation provided by the model.
Figure 10 presents the model’s inference on a new 44-day interval interferogram from October to November 2024, in the GSI monitoring system. The results are categorized into regions of true positive, false positive, and false negative inferences. This highlights the model’s ability to generalize to newly emerging subsidence patterns over time and acquisition intervals, while also emphasizing the importance of continuous updates with new data and annotations to further improve its performance.

6. Summary and Conclusions

In this work, we provided the first evaluation of a DL segmentation model for automatically mapping sinkhole-related subsidence from SAR interferograms in the Dead Sea. We examined the model’s ability to generalize across multiple levels of unseen data based on manually identified areas as ground truth. Using task-specific metrics, our results demonstrate strong capacity for completing partially mapped interferograms and reconstructing full interferograms from acquisition dates excluded from training. Notably, we observed considerable geographical generalization potential, with performance improving as the dataset increases in size and diversity, and through the introduction of geologically informed data augmentations. We further analyzed the model’s full-scale reconstruction outputs, identifying subsidence areas that were not included in the ground truth but are likely true positives, as verified using longer-span subsequent interferograms.
These findings lay the foundation for future research. Our next steps include expanding the annotation pool using the current model and new incoming data, integrating temporal information to improve and validate inference, analyzing the spatio-temporal evolution of sinkhole formation, and distinguishing sinkhole-related subsidence from land movements induced by other sources. We also plan to broaden the generalization study by evaluating the model on data from additional geographic regions worldwide.
Beyond the scientific scope, deploying the model within GSI’s operational monitoring system in the Dead Sea may enable automated inference of subsidence evolution. This, in turn, could provide additional lead time for planning and mitigating sinkhole hazards—reducing environmental risks and supporting safer land use for infrastructure, industry, agriculture, and recreation.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/rs18020211/s1.

Author Contributions

Conceptualization, G.D., R.N.N., R.S. and Y.R.; methodology, G.D. and R.S.; Data, R.N.N.; software, G.D.; validation, R.N.N., Y.R. and R.S.; writing—original draft preparation, G.D.; writing—review and editing, R.N.N., R.S. and Y.R.; visualization, G.D. and R.N.N.; supervision, Y.R. and R.N.N.; funding acquisition, Y.R. and R.N.N. All authors have read and agreed to the published version of the manuscript.

Funding

The data and APC of the project were funded by the Ministry of Prime Minister, Israel: GSI DS project 40702.

Data Availability Statement

TerraSAR-X SAR images are available from the ESA EO archive or any Airbus distributor. The data presented in this study are available upon request from the corresponding author. Restrictions may apply according to the GSI data sharing policy. Software is available upon request from the corresponding author. Training and evaluation code is publicly available at https://github.com/galidekel/sinkhole_training_test_code (accessed on 1 January 2026).

Acknowledgments

Y.R. acknowledges support from the Institute for Environmental Sustainability, Weizmann Institute. We wish to thank Michael Bernstein for the manual annotations of subsidence under the framework of the GSI sinkhole monitoring system which were used in this work. We wish to thank the editor and the anonymous reviewers for their comments and suggestions, leading to the improvement of the manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Galloway, D.; Jones, D.R.; Ingebritsen, S.E. Land Subsidence in the United States; U.S. Geological Survey: Reston, VA, USA, 1999.
  2. Martinez, J.; Johnson, K.; Neal, J. Sinkholes in Evaporite Rocks. Am. Sci. 1998, 86, 38. [Google Scholar] [CrossRef]
  3. Orhan, O.; Haghshenas Haghighi, M.; Demir, V.; Gökkaya, E.; Gutiérrez, F.; Al-Halbouni, D. Spatial and Temporal Patterns of Land Subsidence and Sinkhole Occurrence in the Konya Endorheic Basin, Turkey. Geosciences 2023, 14, 5. [Google Scholar] [CrossRef]
  4. Gutiérrez, F.; Parise, M.; De Waele, J.; Jourde, H. A Review on Natural and Human-Induced Geohazards and Impacts in Karst. Earth-Sci. Rev. 2014, 138, 61–88. [Google Scholar] [CrossRef]
  5. Waltham, T.; Bell, F.G.; Culshaw, M.G. Sinkholes and Subsidence: Karst and Cavernous Rocks in Engineering and Construction; Springer: Berlin/Heidelberg, Germany, 2005; ISBN 978-3-540-20725-2. [Google Scholar]
  6. Parise, M. Rock Failures in Karst. In Landslides and Engineered Slopes. From the Past to the Future; Chen, Z., Zhang, J., Li, Z., Wu, F., Ho, K., Eds.; CRC Press: Boca Raton, FL, USA, 2008; pp. 275–280. ISBN 978-0-415-41196-7. [Google Scholar]
  7. Intrieri, E.; Gigli, G.; Nocentini, M.; Lombardi, L.; Mugnai, F.; Fidolini, F.; Casagli, N. Sinkhole Monitoring and Early Warning: An Experimental and Successful GB-InSAR Application. Geomorphology 2015, 241, 304–314. [Google Scholar] [CrossRef]
  8. Jones, C.E.; Blom, R.G. Bayou Corne, Louisiana, Sinkhole: Precursory Deformation Measured by Radar Interferometry. Geology 2014, 42, 111–114. [Google Scholar] [CrossRef]
  9. Nof, R.N.; Baer, G.; Ziv, A.; Raz, E.; Atzori, S.; Salvi, S. Sinkhole Precursors along the Dead Sea, Israel, Revealed by SAR Interferometry. Geology 2013, 41, 1019–1022. [Google Scholar] [CrossRef]
  10. Nof, R.N.; Baer, G. Monitoring Gradual Subsidence and Changes along the Dead Sea Coast Using SAR Interferometry; Infrastructure Instability Along the Dead Sea: Final Report 2008–2011; Geological Survey of Israel: Jerusalem, Israel, 2012.
  11. Paine, J.G.; Buckley, S.M.; Collins, E.W.; Wilson, C.R. Assessing Collapse Risk in Evaporite Sinkhole-Prone Areas Using Microgravimetry and Radar Interferometry. JEEG 2012, 17, 75–87. [Google Scholar] [CrossRef]
  12. Baer, G.; Magen, Y.; Nof, R.N.; Raz, E.; Lyakhovsky, V.; Shalev, E. InSAR Measurements and Viscoelastic Modeling of Sinkhole Precursory Subsidence: Implications for Sinkhole Formation, Early Warning, and Sediment Properties. JGR Earth Surf. 2018, 123, 678–693. [Google Scholar] [CrossRef]
  13. Niemi, T.M.; Ben-Avraham, Z.; Gat, J.R. The Dead Sea: The Lake and Its Setting; Oxford monographs on geology and geophysics; Oxford University Press: New York, NY, USA, 1997; ISBN 978-0-19-508703-1. [Google Scholar]
  14. Baer, G.; Bernstein, M.; Yechieli, Y.; Nof, R.N.; Abelson, M.; Gavrieli, I. Elevation and Thickness of the 11–10 Kyr Old ‘Sinkholes Salt’ Layer in the Dead Sea: Clues to Past Limnology, Paleo-Bathymetry and Lake Levels. J. Paleolimnol. 2023, 70, 159–173. [Google Scholar] [CrossRef]
  15. Yechieli, Y.; Abelson, M.; Bein, A.; Crouvi, O.; Shtivelman, V. Sinkhole “Swarms” along the Dead Sea Coast: Reflection of Disturbance of Lake and Adjacent Groundwater Systems. Geol. Soc. Am. Bull. 2006, 118, 1075–1087. [Google Scholar] [CrossRef]
  16. Yechieli, Y.; Magaritz, M.; Levy, Y.; Weber, U.; Kafri, U.; Woelfli, W.; Bonani, G. Late Quaternary Geological History of the Dead Sea Area, Israel. Quat. Res. 1993, 39, 59–67. [Google Scholar] [CrossRef]
  17. Abelson, M.; Yechieli, Y.; Crouvi, O.; Baer, G.; Wachs, D.; Bein, A.; Shtivelman, V. Evolution of the Dead Sea Sinkholes. Geol. Soc. Am. Spec. Pap. 2006, 401, 241–253. [Google Scholar] [CrossRef]
  18. Baer, G.; Gavrieli, I.; Swaed, I.; Nof, R.N. Remote Sensing of Floodwater-Induced Subsurface Halite Dissolution in a Salt Karst System, with Implications for Landscape Evolution: The Western Shores of the Dead Sea. Remote Sens. 2024, 16, 3294. [Google Scholar] [CrossRef]
  19. Closson, D.; Karaki, N.A.; Milisavljevic, N.; Hallot, F.; Acheroy, M. Salt-Dissolution-Induced Subsidence in the Dead Sea Area Detected by Applying Interferometric Techniques to ALOS Palsar Synthetic Aperture Radar Images. Geodin. Acta 2010, 23, 65–78. [Google Scholar] [CrossRef]
  20. Frumkin, A.; Raz, E. Collapse and Subsidence Associated with Salt Karstification along the Dead Sea. Carbonates Evaporites 2001, 16, 117–130. [Google Scholar] [CrossRef]
  21. Baer, G.; Schattner, U.; Wachs, D.; Sandwell, D.; Wdowinski, S.; Frydman, S. The Lowest Place on Earth Is Subsiding—An InSAR (Interferometric Synthetic Aperture Radar) Perspective. Geol. Soc. Am. Bull. 2002, 114, 12–23. [Google Scholar] [CrossRef]
  22. Nof, R.N.; Abelson, M.; Raz, E.; Magen, Y.; Atzori, S.; Salvi, S.; Baer, G. SAR Interferometry for Sinkhole Early Warning and Susceptibility Assessment along the Dead Sea, Israel. Remote Sens. 2019, 11, 89. [Google Scholar] [CrossRef]
  23. Willner, S.E.; Lipchin, C.; Aloni, Z. Salt Storms, Sinkholes and Major Economic Losses: Can the Deteriorating Dead Sea Be Saved from the Looming Eco Crisis? Negev Dead Sea Arav. Stud. 2015, 7, 27–37. [Google Scholar]
Figure 1. The research area. (a) Sinkhole-related subsidence (red polygons) derived from the GSI monitoring database between August 2022 and August 2023. The white rectangle inset corresponds to the location of the panels on the right; (b) Hillshade LiDAR map based on 2022 annual measurements. Sinkhole-related subsidence areas and collapse sinkholes are marked in red and white polygons, respectively; (c) Interferogram for the time interval of 44 days between 10 July 2023 and 23 August 2023. The colors correspond to LOS displacement wrapped between 0 and 2π radians. Sinkhole-related subsidence is marked by white polygons. Examples of other sources of signals and noise are marked by Roman numerals: (I) Atmospheric signals, (II) Anthropogenic signals, (III) Coastal soil compaction processes, and (IV) Water body decorrelation noise.
Figure 2. (a) Evolution of sinkhole number (blue line) and area (red line) at the Dead Sea; (b) Evolution of sinkhole-related subsidence number of polygons (blue) and area (red). Values are calculated based on annually merged polygons. Data is derived from the GSI monitoring database, which holds manually marked polygons.
Figure 3. Data pipeline and U-Net architecture used in this work. The default input, ground-truth (GT), and output patch sizes are (200 × 100) pixels. These patches are extracted from the interferograms within the LiDAR mask region (outlined in magenta on the input interferogram) using a default stride of half a window in both directions to ensure overlap. The input patches (phase values) are fed into the U-Net, while the corresponding GT patches, derived from manual mapping (binary values 0/1), are used for loss computation during training and for evaluation during inference. The network consists of an encoder—a contracting path composed of double 3 × 3, stride-1 convolution blocks followed by 2 × 2 max-pooling layers leading to a bottleneck—and a decoder, a symmetrically expanding path composed of deconvolution (up-sampling) layers followed by double 3 × 3, stride-1 convolution blocks and a final Sigmoid layer that maps the decoded representation into a binary segmentation mask. Skip connections are implemented by concatenating each encoder output with its corresponding decoder input. The number of output channels at each stage is indicated above each block.
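The half-window-stride patch extraction described in the caption can be sketched in NumPy. This is a minimal illustration under our own assumptions: the function name is ours, and the LiDAR-mask filtering applied in the actual pipeline is omitted here.

```python
import numpy as np

def extract_patches(ifg, win=(200, 100)):
    """Extract overlapping patches from a 2-D wrapped-phase array,
    using a stride of half a window in both directions (50% overlap),
    as in the Figure 3 data pipeline (hypothetical helper)."""
    wy, wx = win
    sy, sx = wy // 2, wx // 2  # half-window stride in rows and columns
    patches, origins = [], []
    for y in range(0, ifg.shape[0] - wy + 1, sy):
        for x in range(0, ifg.shape[1] - wx + 1, sx):
            patches.append(ifg[y:y + wy, x:x + wx])
            origins.append((y, x))
    return np.stack(patches), origins

# Example: a 400 x 200 interferogram yields a 3 x 3 grid of 200 x 100 patches.
phase = np.random.uniform(-np.pi, np.pi, (400, 200))
patches, origins = extract_patches(phase)
```

The half-window stride guarantees that every interior pixel is seen by several patches, which later supports the confidence-map reconstruction.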
Figure 4. (a) Examples illustrating the variability of mapped subsidence areas in the interferograms (LOS phase maps, scale bar on the right, mapping polygons in white) and the inherent ambiguity in their annotation. (b) Illustration of the Object-Level (OL) metrics (RecallOL) and the Intersection-Tolerance scheme. The first panel shows an input interferogram patch of the default size (200 × 100 pixels) with the manually mapped subsidence polygons delineated on top. These polygons are converted into the corresponding ground-truth (GT) mask shown in the second panel. The third panel displays the predicted mask generated by the model. Each predicted object is expanded by the default buffer of b = 5 pixels to form the buffered prediction set. The fourth panel visualizes the OL-evaluation process: for each ground-truth polygon, the ratio between its intersection area with the buffered predicted polygons (light-blue region) and its total area (light-blue + light-red regions) is compared against the Intersection Threshold (ITh). Objects whose intersection ratio exceeds ITh are counted as detected, while the others are marked as undetected. The detected and undetected ground-truth objects for the default Intersection-Tolerance parameters are indicated accordingly. (c) A closer look at one of the smallest mapped subsidence regions (7 × 12 pixels) in a noisy environment illustrates the rationale behind the default intersection parameters. The mapped subsidence polygon is shown in white and a model prediction in black. Applying a 5-pixel buffer produces a substantially better alignment between the prediction and the mapped polygon. Even after buffering, a relative intersection threshold of 0.7 remains an appropriate criterion given the inherent ambiguity in the manually delineated subsidence boundaries.
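The Intersection-Tolerance evaluation in panel (b) can be sketched as follows. This is our own minimal implementation: the helper name and the use of SciPy's `binary_dilation` as the b-pixel buffering operator are assumptions, not code from the paper.

```python
import numpy as np
from scipy.ndimage import label, binary_dilation

def recall_ol(gt_mask, pred_mask, ith=0.7, b=5):
    """Object-level recall under the Intersection-Tolerance scheme:
    predictions are buffered by b pixels, and a ground-truth object
    counts as detected when the buffered prediction covers at least a
    fraction `ith` of its area (illustrative helper)."""
    buffered = binary_dilation(pred_mask.astype(bool), iterations=b)
    labels, n_obj = label(gt_mask.astype(bool))  # connected GT objects
    if n_obj == 0:
        return 1.0
    detected = 0
    for k in range(1, n_obj + 1):
        obj = labels == k
        ratio = np.logical_and(obj, buffered).sum() / obj.sum()
        if ratio >= ith:
            detected += 1
    return detected / n_obj

gt = np.zeros((50, 50), int)
gt[10:20, 10:20] = 1          # one ground-truth object
pred = np.zeros((50, 50), int)
pred[12:18, 12:18] = 1        # smaller, offset prediction
```

With the default buffer of 5 pixels, the undersized prediction above still fully covers the ground-truth object, so it counts as detected; without any prediction, the object is missed.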
Figure 5. Model results on example input patches from the test datasets of the four partition schemes. (a) Random by Patches with Overlap, (b) Random by Patches without Overlap, (c) Random by Interferograms, and (d) Spatial. For each scheme, four examples are shown, ordered by their OL metric scores in descending order (left to right, top to bottom). Each example includes the input patch (phase values between −π and π), the ground-truth mask (manual annotation), and the predicted mask generated by the model. In the mask images, yellow denotes subsidence (1), and dark purple denotes no subsidence (0).
Figure 6. Full-range inference reconstruction—Northern frame from 10 May 2021 to 21 May 2021. The top image displays a ~30 km × 8 km band around the subsidence trend line: (a1) the original input interferogram (phase values between −π and π), with the manually mapped subsidence areas overlaid in white; (b1) the corresponding ground-truth mask derived from the mapped polygons; (c1e1) reconstructed inferences showing the predicted masks obtained using Reconstruction Confidence Thresholds of 0.125, 0.25, and 0.5, respectively. On the right, the target area is shown on a physical map for geographic context. The panels in the bottom image, (a2e2) present a zoomed-in region (dashed black rectangle on the input panel) along the subsidence trend line. Panel (f2) displays the corresponding zoomed-in Confidence map used for reconstruction, on which the thresholds are applied to generate the final masks. Scale bars for the input interferogram and the Confidence map are shown on the right. In the ground-truth and predicted-mask panels, yellow denotes subsidence (1) and purple denotes no subsidence (0). Data-void regions in the input interferogram are shown in black.
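The caption does not specify how overlapping patch predictions are merged into the Confidence map; one plausible scheme (an assumption on our part, not stated in the paper) is to average the sigmoid outputs over all patches covering each pixel and then apply the Reconstruction Confidence Threshold (RTh):

```python
import numpy as np

def reconstruct_confidence(patch_probs, origins, full_shape):
    """Rebuild a full-frame confidence map by averaging the per-patch
    sigmoid outputs wherever patches overlap (assumed merging rule);
    applying a threshold RTh to the result yields the final mask."""
    acc = np.zeros(full_shape)
    cnt = np.zeros(full_shape)
    for prob, (y, x) in zip(patch_probs, origins):
        h, w = prob.shape
        acc[y:y + h, x:x + w] += prob
        cnt[y:y + h, x:x + w] += 1
    # Average where at least one patch contributed; zero elsewhere.
    return np.divide(acc, cnt, out=np.zeros(full_shape), where=cnt > 0)

# Two half-overlapping patches that disagree: confidence 0.5 in the overlap.
p1 = np.ones((4, 4))
p2 = np.zeros((4, 4))
conf = reconstruct_confidence([p1, p2], [(0, 0), (0, 2)], (4, 6))
mask = conf >= 0.25  # example Reconstruction Confidence Threshold
```

Lower RTh values (e.g., 0.125) keep pixels flagged by only a minority of overlapping patches, which matches the recall/precision trade-off reported in Table 2.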
Figure 7. Full-range inference reconstruction—Southern frame from 26 March 2021 to 6 April 2021. The top image displays a ~30 km × 8 km band around the subsidence trend line: (a1) the original input interferogram (phase values between −π and π), with the manually mapped subsidence areas overlaid in white; (b1) the corresponding ground-truth mask derived from the mapped subsidence polygons; (c1e1) reconstructed inferences showing the predicted masks obtained using Reconstruction Confidence Thresholds of 0.125, 0.25, and 0.5, respectively. On the right, the target area is shown on a physical map for geographic context. The panels in the bottom images, (a2e2,a3e3) present zoomed-in regions (dashed black rectangles on the input panel) along the subsidence trend line. Panels (f2,f3) display the corresponding zoom-ins of the Confidence map used for reconstruction, on which the thresholds are applied to generate the final masks. Scale bars for the input interferogram and the Confidence map are shown on the right. In the ground-truth and predicted-mask panels, yellow denotes subsidence (1) and purple denotes no subsidence (0). Data-void regions in the input interferogram are shown in black.
Figure 8. Detection analysis on the reconstructed inference of ten unseen interferograms, comprising 1309 mapped subsidence polygons. Stacked histograms show the counts of detected and undetected polygons with respect to area, roundness, normalized phase standard deviation, normalized phase mean local gradient, and acquisition year. The detection ratio, defined as the number of detected polygons over the total number of ground truth polygons in each bin, is overlaid as a dashed blue line, highlighting detection trends with respect to each feature. Ranges with reduced performance are marked by light-blue circles.
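The per-bin detection ratio overlaid in the figure can be sketched with NumPy histograms (the function name and the toy feature values below are illustrative only):

```python
import numpy as np

def detection_ratio(feature, detected, bins):
    """Detection ratio per feature bin, as in Figure 8: detected
    polygon count divided by total ground-truth polygon count in each
    bin (NaN where a bin is empty)."""
    total, edges = np.histogram(feature, bins=bins)
    hits, _ = np.histogram(feature[detected], bins=edges)
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(total > 0, hits / total, np.nan)
    return ratio, edges

# Hypothetical polygon areas and per-polygon detection flags.
area = np.array([5.0, 15.0, 25.0, 35.0])
det = np.array([True, False, True, True])
ratio, edges = detection_ratio(area, det, bins=[0, 20, 40])
```

The same helper applies unchanged to the other binned features (roundness, normalized phase statistics, acquisition year).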
Figure 9. Assessment of the model’s inference on two 11-day interval interferograms from the test set, using subsequent 77-day interval frames to provide additional context for potential subsidence evolution. Each row presents a different region in the interferogram. From left to right: original input (normalized values), ground truth mask, reconstructed inference confidence map (0–1), reconstructed inference confidence map of the 77-day interval frame, and the 77-day interval input.
Figure 10. An example of model results for a 44-day interval interferogram between 3 October and 16 November 2024, using the Random by Interferograms partition model. LOS phase difference in radians (color scale). (a) Manually mapped ground-truth subsidence areas related to sinkholes are marked by white polygons; (b) Polygons automatically predicted by the model are marked by white polygons. Zone (1) shows a good match between the mapped and predicted polygons, with minor differences but with over 70% overlap in area (True Positive). Zone (2) shows an area where mapped polygons were missed by the model (False Negative). Zone (3) shows an example of an unmapped, non-sinkhole-related subsidence area near the Dead Sea shores, predicted by the model as subsidence (False Positive). Zone (4) shows an example of anthropogenic activity, not manually mapped as subsidence, that was predicted by the model (False Positive). The dark area near zone (4) is missing data due to SAR shadow/layover conditions.
Table 1. Metric scores for the model performance across the three partition schemes and their variations. OL metrics are in bold.
(1) Partition 1: Random by Patches

a. With Overlap

| Dice Coeff. | Pixel-Level Recall | Pixel-Level Precision | RecallOL (ITh = 0.7, b = 5 pixels) | RecallOL (ITh = 0.9, b = 2 pixels) | PrecisionOL (ITh = 0.7, b = 5 pixels) | PrecisionOL (ITh = 0.9, b = 2 pixels) |
|---|---|---|---|---|---|---|
| 0.82 | 0.89 | 0.94 | 0.98 | 0.92 | 0.98 | 0.96 |

b. Without Overlap; (2) Partition 2: Random by Interferograms; (3) Partition 3: Spatial Partition

| Scheme | Dice Coeff. | Pixel-Level Recall | Pixel-Level Precision | RecallOL (ITh = 0.7, b = 5 pixels) | RecallOL (ITh = 0.5, b = 10 pixels) | PrecisionOL (ITh = 0.7, b = 5 pixels) | PrecisionOL (ITh = 0.5, b = 10 pixels) |
|---|---|---|---|---|---|---|---|
| (1b) Random by Patches, without overlap | 0.62 | 0.71 | 0.75 | 0.82 | 0.90 | 0.87 | 0.92 |
| (2) Random by Interferograms | 0.62 | 0.67 | 0.80 | 0.81 | 0.90 | 0.88 | 0.93 |
| (3) Spatial, small dataset (Lat. 31.3°) | 0.54 | 0.59 | 0.71 | 0.72 | 0.81 | 0.80 | 0.87 |
| (3) Spatial, large dataset (Lat. 31.4°) | 0.56 | 0.67 | 0.71 | 0.78 | 0.86 | 0.83 | 0.88 |
| (3) Spatial, large dataset (Lat. 31.4°), augmented | 0.60 | 0.77 | 0.65 | 0.86 | 0.91 | 0.86 | 0.90 |
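For reference, the pixel-level metrics reported in the table can be computed from binary masks as follows (a standard formulation of Dice, recall, and precision; the epsilon guard against empty masks is our own addition):

```python
import numpy as np

def pixel_metrics(gt, pred, eps=1e-8):
    """Pixel-level Dice coefficient, recall, and precision between a
    binary ground-truth mask and a binary predicted mask."""
    gt = gt.astype(bool)
    pred = pred.astype(bool)
    tp = np.logical_and(gt, pred).sum()          # true-positive pixels
    dice = 2 * tp / (gt.sum() + pred.sum() + eps)
    recall = tp / (gt.sum() + eps)
    precision = tp / (pred.sum() + eps)
    return dice, recall, precision

# Toy example: one pixel agrees out of two GT and two predicted pixels.
gt = np.array([[1, 1, 0, 0]])
pred = np.array([[1, 0, 1, 0]])
d, r, p = pixel_metrics(gt, pred)
```

The object-level columns, by contrast, use the buffered Intersection-Tolerance scheme of Figure 4 rather than per-pixel overlap.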
Table 2. Mean OL metrics for reconstructed inference on unseen interferograms from the Random by Interferograms partition, evaluated across three Reconstruction Confidence Thresholds and two Intersection Tolerance parameter sets.
| | RTh = 0.125 | RTh = 0.25 | RTh = 0.5 |
|---|---|---|---|
| ITh = 0.7, b = 5 pixels | RecallOL = 0.90, PrecisionOL = 0.73 | RecallOL = 0.82, PrecisionOL = 0.82 | RecallOL = 0.71, PrecisionOL = 0.89 |
| ITh = 0.5, b = 10 pixels | RecallOL = 0.94, PrecisionOL = 0.75 | RecallOL = 0.90, PrecisionOL = 0.84 | RecallOL = 0.80, PrecisionOL = 0.90 |

Share and Cite

MDPI and ACS Style

Dekel, G.; Nof, R.N.; Sarafian, R.; Rudich, Y. Deep Learning Applied to Spaceborne SAR Interferometry for Detecting Sinkhole-Induced Land Subsidence Along the Dead Sea. Remote Sens. 2026, 18, 211. https://doi.org/10.3390/rs18020211
