Remote Sensing
  • Article
  • Open Access

11 December 2025

Feasibility of Deep Learning-Based Iceberg Detection in Land-Fast Arctic Sea Ice Using YOLOv8 and SAR Imagery

Department of Physics, Lancaster University, Lancaster LA1 4YB, UK
Author to whom correspondence should be addressed.
This article belongs to the Special Issue Applications of SAR for Environment Observation Analysis

Highlights

What are the main findings?
  • A YOLOv8 convolutional neural network, combined with iDPolRAD-filtered Sentinel-1 SAR imagery, can reliably detect icebergs embedded in fast ice with high precision (0.81) and recall (0.68).
  • Iceberg detection performance is highest for large, bright targets, and the pipeline outperforms classical CFAR-based methods under challenging Arctic conditions.
What are the implications of the main findings?
  • The integrated CNN + iDPolRAD approach provides a feasible workflow for near-real-time iceberg monitoring in Arctic regions with limited optical coverage.
  • This methodology can support maritime safety and climate monitoring while serving as a foundation for scaling detection to broader regions and datasets once more labelled SAR data become available.

Abstract

Iceberg detection in Arctic sea-ice environments is essential for navigation safety and climate monitoring, yet remains challenging due to observational and environmental constraints. The scarcity of labelled data, limited optical coverage caused by cloud and polar night conditions, and the small, irregular signatures of icebergs in synthetic aperture radar (SAR) imagery make automated detection difficult. This study evaluates the environmental feasibility of applying a modern deep learning model for iceberg detection within land-fast sea ice. We adapt a YOLOv8 convolutional neural network within the Dual Polarisation Intensity Ratio Anomaly Detector (iDPolRAD) framework using dual-polarised Sentinel-1 SAR imagery from the Franz Josef Land region, validated against Sentinel-2 optical data. A total of 2344 icebergs were manually labelled to generate the training dataset. Results demonstrate that the network is capable of detecting icebergs embedded in fast ice with promising precision under highly constrained data conditions (precision = 0.81; recall = 0.68; F1 = 0.74; mAP = 0.78). These findings indicate that deep learning can function effectively within the physical and observational limitations of current Arctic monitoring, establishing a foundation for future large-scale applications once broader datasets become available.

1. Introduction

Icebergs in the Arctic are hazards to maritime operations. Iceberg detection with synthetic aperture radar (SAR) images is therefore of paramount importance to numerous industries, including shipping [1], insurance [2], oil and gas [3], fishing and tourism [4], as well as state-run ice services [5]. SAR is particularly suitable for iceberg monitoring as it is an active remote sensing system capable of operating day and night, through all weather and cloud conditions. The Arctic’s remoteness and harsh environment make in situ iceberg observation logistically difficult, further emphasising the importance of remote methods.
Beyond SAR, iceberg detection has been approached with radar altimetry and scatterometry, which provide useful large-scale detection and size/height characterisation [6,7]. Optical sensors (e.g., Sentinel-2) are also commonly used for validation under cloud-free conditions. SAR remains attractive due to all-weather, day-night capability and high spatial resolution; however, combining modalities often yields the most robust operational solutions.
Globally, Arctic shipping activity has tripled since 2010 [8]. Historical analyses of archival records indicate an average of roughly 2–3 ship–iceberg collisions per year over past centuries, but no recent consolidated global annual collision statistic is available [9]. Climate change has accelerated sea-ice decline, opening new shipping routes such as the Northwest Passage, while simultaneously increasing iceberg calving rates [10]. Around 30,000–40,000 medium-to-large icebergs are produced annually from Greenland’s marine-terminating glaciers [11,12].
Franz Josef Land (FJL), located in the Barents Sea, is an area where iceberg monitoring is especially needed due to high levels of shipping, fishing, and oil and gas activities. Icebergs in the Barents Sea originate primarily from marine-terminating glaciers in FJL, Svalbard, and Novaya Zemlya [13]. Typical iceberg sizes are 91 m ± 51 m × 64 m ± 37 m × 15 ± 7 m (length × width × height) [14]. The FJL region is characterised by the presence of fast ice—a stationary form of sea ice that remains attached to the coast or seabed [15]. This environment creates a shear zone between fast and drift ice, within which icebergs can become embedded for months until the fast ice melts sufficiently for them to move freely. Detecting icebergs in such conditions allows for the labelling of icebergs in both optical and SAR data acquired several hours apart, as performed in this study.
Traditional iceberg detection approaches have largely relied on adaptive threshold algorithms [16], such as Constant False Alarm Rate (CFAR) detection [17], which exploit contrasts in radar backscatter between icebergs and surrounding sea ice. Many variants have been proposed, including the Dual Intensity Polarisation Ratio Anomaly Detector (iDPolRAD) introduced by ref. [18] and subsequently applied in several studies [12,19]. The iDPolRAD algorithm leverages dual-polarisation Sentinel-1 Extra Wide Swath (EWS) imagery to enhance iceberg contrast by filtering and combining co-polarised (HH/VV) and cross-polarised (HV/VH) intensity images.
In recent years, deep learning methods have gained traction for iceberg detection and classification [20,21,22,23] due to the robustness of convolutional neural networks (CNNs). However, discriminating icebergs from sea ice remains challenging, as their SAR backscatter characteristics can be similar [24] and are affected by factors such as surface roughness, dielectric constant, incidence angle, and proximity to the sensor’s noise floor [25]. Sentinel-1’s spatial resolution in EWS mode (90 m) limits reliable detection to icebergs larger than ~100 m. Icebergs smaller than this threshold occupy only a few pixels and are difficult to detect reliably with Sentinel-1 EWS imagery. Optical Sentinel-2 imagery offers higher spatial resolution but suffers from cloud cover and seasonal darkness. Consequently, combining both datasets provides complementary information: SAR offers consistent temporal coverage, while optical imagery supports manual labelling and validation.
Several CNN architectures have been explored for general object detection, including ResNet [26], Faster R-CNN [27], Single Shot Detector (SSD) [28], and You Only Look Once (YOLO) [29]. Within iceberg detection, applications of YOLO remain limited, with [30] being the only known study employing YOLOv3 for iceberg–ship discrimination in Sentinel-1 imagery. While they achieved a reasonable F1 score (0.53), the authors highlighted the limited availability of labelled iceberg data as the main barrier to improved performance. This challenge persists across the field, where the scarcity of labelled data and environmental constraints restrict the application of standard model-optimisation practices.
Previous detection methods, such as classical thresholding and CFAR, often lack the precision required to discriminate icebergs within high-clutter fast-ice environments, and can be computationally intensive for near-real-time use. This raises an important question regarding environmental feasibility: Can a modern CNN (YOLOv8) feasibly detect icebergs embedded in land-fast ice within the physical and observational limitations of SAR imagery?
Accordingly, the aim of this proof-of-concept study is to evaluate the feasibility of applying a YOLOv8 deep learning model within an iDPolRAD filtering framework to detect icebergs in the fast-ice environments of Franz Josef Land. The study demonstrates that a modern deep learning model can be adapted for a niche, data-limited, and environmentally constrained application, establishing the foundation for future large-scale implementations.
We note that the present experiments are restricted to land-fast sea ice. This environment is particularly challenging, with stationary floes and subtle iceberg contrasts, making it ideal for testing detection methods. While studies of icebergs in open water are well-established in the literature, we focus on sea ice to demonstrate performance under more difficult conditions. Consequently, although our model is tailored to fast sea ice, we expect it to perform reliably in open-water scenarios, where iceberg-background contrast is generally higher. While our findings motivate potential maritime-safety applications, direct operational deployment in shipping lanes requires further validation under open-water and drifting-ice conditions. Expected effects and a plan for such validation are discussed in Section 5.
This paper is organised as follows: Section 2 describes the dataset, Section 3 outlines the methodology, and Section 4 presents the results. Section 5 summarises the findings and implications. Section 6 outlines conclusions.

2. Dataset

This study utilises Sentinel-1 Extra Wide Swath (EWS) mode imagery [31], which operates at C-band frequency in dual-polarisation (HH and HV) (Table 1). The data are provided as Ground Range Detected (GRD) products, where the nominal pixel spacing is 40 m, and the effective spatial resolution is approximately 90 m. This distinction is important, as pixel spacing describes the image sampling interval, whereas spatial resolution describes the smallest ground feature that can be reliably distinguished. In the Barents Sea region, this mode allows image acquisitions up to twice per day, making it ideal for tracking iceberg and sea-ice dynamics under varying conditions.
Table 1. Date and time (UTC) of the Sentinel-1 SAR EWS acquisitions, along with the imaged region, the file name, image dimensions (rows × columns), and the corresponding incidence angle range. This table summarises the SAR input data used in the study.
Standard preprocessing steps were applied, including thermal-noise removal and radiometric calibration to a normalised radar cross section (σ0) using the lookup tables provided in the accompanying XML metadata. All calibrated σ0 values were converted to the decibel (dB) scale for analysis. Terrain correction was performed using the Sentinel-1 toolbox orthorectification routine with a digital elevation model, ensuring geometric (not radiometric) accuracy; multi-looking was not applied to preserve the spatial detail of small iceberg targets. All images were georeferenced using the GDAL 3.8.3 software library and projected into the EPSG:32640 coordinate reference system (UTM Zone 40 N). The selected acquisitions had incidence angles between 18° and 46°, providing sufficient contrast between icebergs and the surrounding fast ice for the application of the iDPolRAD algorithm (Section 3.1). The two SAR acquisitions used in this study have the same overall incidence-angle range in the original metadata. While this removes variability between images at the acquisition stage, iceberg detectability may still vary locally within each swath due to incidence angle and orientation effects. We therefore continue to discuss this as a methodological limitation in Section 5.
Complementary Sentinel-2 MSI optical imagery (Table 2) was used for visual validation and manual iceberg labelling. Optical images were selected to have < 30% cloud cover and to be acquired outside the polar night period. Sentinel-2 provides up to 13 spectral bands covering the 440–2200 nm range; here, we used the visible bands (2, 3, 4) with a spatial resolution of 10 m to identify icebergs under clear-sky conditions. To facilitate co-registration and visual comparison, the SAR images were resampled to 10 m pixel size using bilinear interpolation.
Table 2. Date and time (UTC) of the Sentinel-2 MSI acquisitions, the imaged region, the file name, and the spatial resolution in metres. This table summarises the optical input data used for comparison with the SAR data.
Image pairs were chosen based on overlapping Sentinel-1 and Sentinel-2 acquisitions within the FJL region (Figure 1). Temporal variability in iceberg drift and sea-ice deformation limited the number of valid pairs, as only scenes with consistent iceberg positions could be used. Additionally, high-latitude orbit constraints reduced overlap frequency, so the dataset represents carefully selected dates with adequate dual-sensor coverage.
Figure 1. Location of Sentinel-1 SAR and Sentinel-2 optical acquisitions over the Franz Josef Land (FJL) region. Blue and orange polygons indicate Sentinel-1 image footprints, while green and purple boxes show the Sentinel-2 tiles (T40XEQ and T40XDQ) used for iceberg detection. The basemap shows the major islands of FJL with geographic coordinates in WGS84. The scale bar and north arrow are provided for reference.
For each selected region, SAR images were clipped to match the Sentinel-2 tiles (T40XEQ and T40XDQ; Table 3), producing subset images aligned across both datasets. To exclude static land areas, a land mask from the Polar Geospatial Centre (500 m spatial resolution) was applied. Manual labelling was performed on these subsets, resulting in 1128 individual icebergs identified in T40XEQ and 1216 in T40XDQ, producing a high-quality dataset for CNN training and validation. The top-left physical coordinates of each SAR image slice are provided in Supplementary Files S1 (tile T40XEQ) and S2 (tile T40XDQ).
Table 3. Sentinel-2 optical tiles with their corresponding Sentinel-1 SAR acquisition date and whether each image was assigned to training or testing. The final column lists the number of icebergs present in each tile, highlighting the dataset distribution.
Overall, this dataset reflects the practical constraints of iceberg detection in fast-ice environments: limited temporal overlap, environmental variability, and the need for manual labelling. Such conditions are central to evaluating the environmental feasibility of deep learning–based iceberg detection.

3. Materials and Methods

This study implements a deep learning–based detection workflow for icebergs embedded in land-fast sea ice (Figure 2). For an introduction to polarimetric SAR (PolSAR) concepts, the reader is referred to [32]. The methods proceed as follows:
Figure 2. Schematic overview of the iceberg detection pipeline. Sentinel-1 dual-polarisation SAR and Sentinel-2 optical images are used as inputs (blue). Image processing steps, including land masking, iDPolRAD filtering, and image slicing, are shown in green. Machine learning steps, including labelling icebergs, training the convolutional neural network, and detecting icebergs, are shown in purple. The final output is the labelled iceberg image (yellow). Colour coding highlights the different stages of the workflow.
  • iDPolRAD filtering to enhance iceberg contrast in SAR imagery (Section 3.1).
  • Manual labelling of icebergs by visually comparing SAR and optical imagery to create a training dataset (Section 3.2).
  • Convolutional neural network (CNN) architecture used for object detection (Section 3.3).
  • Model training and evaluation, including performance metrics (Section 3.4 and Section 3.5).
This workflow allows us to demonstrate the environmental feasibility of applying CNNs to detect icebergs under the physical and observational constraints of the Franz Josef Land region.

3.1. iDPolRAD Filter

The iDPolRAD filter was proposed by [18] and has been successfully applied in previous studies to separate and detect icebergs in sea-ice environments [19,33]. The filter exploits dual polarisation SAR data (in this case, HH and HV) to enhance contrast between icebergs and surrounding fast ice.
Icebergs exhibit distinct radar signatures due to volume scattering (from undulations, cracks, and crevasses) and surface scattering (from reflective ice surfaces or toppled ice bodies). In some cases, double-bounce scattering can occur when the radar pulse reflects off both the iceberg and the water surface. These mechanisms lead to a higher depolarisation ratio, defined as the ratio between the cross-polarised (HV) and co-polarised (HH) intensity images, which differentiates icebergs from surrounding sea ice.
To quantify this, two boxcar filters are applied over the HV and HH intensity images using a small testing window (1 × 1 pixels) within a larger training window (57 × 57 pixels). The detector can be written as
$$\Lambda = \frac{\langle HV^2 \rangle_{\mathrm{test}} - \langle HV^2 \rangle_{\mathrm{train}}}{\langle HH^2 \rangle_{\mathrm{train}}} > T_\Lambda \qquad (1)$$
where $\langle \cdot \rangle_{\mathrm{test}}$ and $\langle \cdot \rangle_{\mathrm{train}}$ denote spatial averages over the testing and training windows, and $T_\Lambda$ is a threshold. Traditionally, the threshold is set to optimise a probability density function as in CFAR detectors. Here, rather than applying a threshold at this stage, we generate iDPolRAD filtered images as input for the CNN, allowing the network to learn features directly from the processed data.
The training window is implemented as a 57 × 57 pixel region with a 2D Gaussian weighting (σ = 7) centred on the pixel of interest, following Soldal et al. (2019) [33], and the testing window is a single pixel (1 × 1). These choices were selected because comparison of different training σ values and test window sizes showed that this configuration gave the best separation between icebergs and background. The training window captures local background statistics while preserving small iceberg features, and the testing window corresponds to the pixel being evaluated in the HV image.
Because Λ is a ratio, the detector is scale-invariant, which can reduce intensity information. To restore this, we multiply Λ by the HV intensity:
$$I = \Lambda \cdot \langle HV^2 \rangle_{\mathrm{test}} \qquad (2)$$
A detection occurs when pixels in the testing window exhibit stronger volume or double-bounce scattering than the surrounding training window. Negative values indicate lower cross-polarisation than the local background, which can happen over open water patches.
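The filtering steps above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: the 2D Gaussian-weighted training window (σ = 7) is approximated with `scipy.ndimage.gaussian_filter`, and the 1 × 1 testing window means the test average is simply the pixel value itself.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def idpolrad(hv, hh, sigma=7.0, eps=1e-12):
    """Intensity-restored iDPolRAD (minimal sketch, Eqs. (1)-(2)).

    hv, hh : 2-D arrays of linear-scale backscatter intensity (HV^2, HH^2).
    The training window is a 2-D Gaussian weighting (sigma = 7 px, cf. the
    57 x 57 px window); the 1 x 1 testing window reduces the test average
    to the pixel itself.
    """
    hv_train = gaussian_filter(hv, sigma)       # <HV^2>_train
    hh_train = gaussian_filter(hh, sigma)       # <HH^2>_train
    lam = (hv - hv_train) / (hh_train + eps)    # Lambda, no threshold applied
    return lam * hv                             # I = Lambda * <HV^2>_test
```

A uniform scene yields values near zero, while a bright point target (strong cross-polarised return relative to the local background) produces a positive response, matching the detection behaviour described above.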

3.2. Manual Detection

Manual detection was performed to generate a high-quality training dataset for the CNN, based on both SAR and optical imagery. To facilitate this, the Sentinel-1 and Sentinel-2 images were divided into image slices of 549 × 549 pixels. Twenty slices per row were selected, producing a total of 400 slices for manual inspection. This tiling also reduced computational load. Assuming that the data is sampled with a 10 m pixel spacing, the physical coordinates of each slice’s top-left corner were recorded to enable conversion between pixel and real-world coordinates:
$$x_{\mathrm{physical}} = 10\,x_{\mathrm{pixel}} + x_{\mathrm{TL}}$$
$$y_{\mathrm{physical}} = y_{\mathrm{TL}} - 10\,y_{\mathrm{pixel}}$$
where $x_{\mathrm{TL}}$ and $y_{\mathrm{TL}}$ are the top-left physical x and y coordinates of each slice.
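The pixel-to-physical conversion can be written as a small helper function; the function name and signature are illustrative, not taken from the study's code. Note that the y term is subtracted because image row indices increase downwards while map northing increases upwards.

```python
def pixel_to_physical(x_pix, y_pix, x_tl, y_tl, pixel_size=10.0):
    """Convert slice pixel indices to physical (UTM) coordinates.

    (x_tl, y_tl) is the recorded top-left corner of the slice;
    pixel_size is the 10 m sampling assumed in the text.
    """
    x_phys = x_tl + pixel_size * x_pix
    y_phys = y_tl - pixel_size * y_pix
    return x_phys, y_phys
```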
SAR HH and HV slices were combined using the iDPolRAD filter proposed by [18] (Equations (1) and (2)) to enhance contrast between icebergs and fast ice. Training and testing windows of 57 and 1 pixels, respectively, were found to be optimal for distinguishing icebergs from clutter in the SAR images. Image display contrast was adjusted using the 5th and 95th percentile pixel values to improve visual identification. Preliminary RGB composites of SAR and optical images were used to aid manual detection (Figure 3).
Figure 3. Comparison of (a) Pauli RGB composite of Sentinel-1 dual-polarisation SAR data and (b) the iDPolRAD-filtered output. The iDPolRAD filter enhances iceberg visibility against the background, shown here as a negative for clarity. Each pixel represents 10 m. This preprocessing step highlights iceberg features prior to CNN-based detection.
Icebergs were selected if they satisfied the following criteria:
  • Visible in both SAR and optical imagery.
  • Bright relative to the surrounding clutter in SAR.
  • Exhibited a shadow in optical imagery.
  • Embedded in fast ice, reducing the likelihood of drift between image acquisitions.
Because pre-filtering excludes icebergs not visible in SAR, reported accuracy reflects the CNN’s ability to detect icebergs visible in SAR data, not the total number present. This ensures meaningful validation under Sentinel-1 resolution constraints while minimising false detections from ships, islands, or other sea ice features.
Land areas were excluded using a two-stage masking procedure. First, a coarse 500 m Polar Geospatial Centre land mask was applied to remove large-scale continental land. Second, a fine shoreline filter based on the DEM with a 13 m elevation threshold was applied to remove near-shore high-reflectance features and coastal clutter not excluded by the coarse mask. The 13 m threshold was chosen empirically by visual comparison with HH SAR images to best separate water/fast ice from land and represents a conservative elevation above typical tidal and shoreline variations in the study area. This approach reduced false positives at the coastline.
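The two-stage masking logic reduces to a simple boolean combination once both layers are resampled to the SAR grid. The sketch below is a hypothetical helper (names and signature are not from the study); it assumes the coarse mask and DEM are already co-registered arrays.

```python
import numpy as np

def combined_land_mask(coarse_mask, dem, elev_threshold=13.0):
    """Two-stage land mask: coarse land mask OR DEM pixels above 13 m.

    coarse_mask : boolean array, True over land (500 m PGC mask, resampled).
    dem         : elevation in metres on the same grid.
    Returns True where pixels are excluded from detection.
    """
    return coarse_mask | (dem > elev_threshold)
```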
For labelling, bounding boxes were manually drawn around each iceberg in SAR slices, validated against the corresponding optical image. All boxes were annotated by a single author. Objects were included only if they exhibited sufficient contrast with the surrounding ice; borderline or uncertain cases were excluded from the training set. Boxes were centred approximately on each iceberg, but not perfectly aligned, to prevent the model from overfitting to central positions. Box sizes varied with iceberg size; those near slice edges were clipped by 10 pixels to preserve spatial context (avoiding edge artifacts introduced by local filters and resampling). Bounding box coordinates (pixel-based) and class labels were exported using Label Studio 1.13.1 in a format compatible with the CNN, along with the corresponding image slices (PNG format).
This procedure produced a labelled dataset suitable for CNN training while maintaining high visual fidelity to both SAR and optical observations.

3.3. YOLOv8 Model

For iceberg detection, we employ YOLOv8, a state-of-the-art single-stage CNN known for high accuracy, speed, and ease of use [29]. YOLOv8 predicts bounding boxes and class probabilities for objects in an image and is trained on annotated datasets [34]. In this study, the model predicts the presence of icebergs in iDPolRAD-processed SAR slices.
The network consists of a backbone, a neck, and a head. The backbone uses a Cross Stage Partial-Darknet architecture to extract features at multiple scales, the neck merges the features to capture information across scales, and the head predicts bounding boxes, objectness scores, and class probabilities. YOLOv8 employs coarse-to-fine convolutional blocks and anchor-free detection, making it particularly suited for small, variable-brightness targets like icebergs. Unlike sliding-window approaches, the model processes the full image in a single forward pass, so pre-filtering with iDPolRAD ensures small icebergs remain detectable.
Key advantages of YOLOv8 for this application include the following:
  • Fast inference speed, enabling near-real-time monitoring.
  • Robustness to SAR backscatter variability through integrated augmentation strategies.
  • Improved localisation precision in cluttered fast-ice environments.
  • Detection of small objects with variable brightness, critical for icebergs.
We use the medium YOLOv8 model (yolov8m.pt) with 218 layers and 25.9 million parameters. The network employs three loss functions:
  • Centre distance intersection over union (CIoU) loss for bounding box geometry;
  • Varifocal loss for classification accuracy;
  • Distribution Focal Loss (DFL) for localisation precision.
These three components use the default loss-weighting scheme implemented in Ultralytics YOLOv8 (i.e., the framework applies its own internal weights to Lobj, Lcls, and Lbox), and no additional manual re-weighting was applied.
Bounding boxes are used instead of instance segmentation because icebergs are small relative to Sentinel-1 resolution, and bounding boxes reduce sensitivity to slight shape errors. Only icebergs visible in both SAR and optical imagery are labelled to ensure target purity.
This study is the first application of a CNN (YOLOv8) to iDPolRAD-processed Sentinel-1 data. Novel contributions include the following:
  • Replacing classical CFAR detection with a CNN.
  • A dual SAR/optical verification strategy to reduce false positives.
  • A pre-processing pipeline in which SAR images are terrain-corrected and geocoded, and Sentinel-2 images are resampled to the same projection, enabling dual SAR/optical validation without performing explicit image co-registration.
  • Benchmarking CNN performance against prior CFAR results in the same region.
The choice of YOLOv8 for this study is motivated by its combination of accuracy, speed, and ease of use, which makes it well-suited for a proof-of-concept evaluation under Arctic SAR constraints. Although two-stage detectors such as Faster R-CNN may offer higher small-object localisation precision, YOLOv8 provides a practical balance between detection accuracy and inference speed. Pre-filtering with iDPolRAD enhances small-object contrast, improving YOLOv8’s ability to detect icebergs embedded in fast ice while maintaining computational efficiency. Unlike previous approaches using CFAR or older YOLO versions, this study demonstrates that a modern CNN can be applied successfully to iDPolRAD-processed Sentinel-1 data and validated with Sentinel-2 imagery. The novelty lies not merely in replacing CFAR with a CNN, but in demonstrating that the YOLOv8-based pipeline is feasible in a challenging, high-latitude, cluttered fast-ice environment with limited labelled data. While YOLOv8 is a large model, the goal here is environmental feasibility rather than model optimisation; pre-filtering with iDPolRAD ensures that small icebergs are detectable despite the network’s size and limited training dataset. Finally, bounding boxes were chosen over polygons due to Sentinel-1’s spatial resolution, which preserves the key features for CNN detection while minimising errors. This approach validates the concept that CNN-based detection can function effectively with the combination of SAR and optical data under operationally realistic conditions.
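For reference, a run of this kind can be configured with only a few lines using the Ultralytics API. This is a run-configuration sketch, not the study's exact script: `icebergs.yaml` is a hypothetical dataset file pointing at the labelled iDPolRAD slices, and the confidence value follows the threshold reported later in Section 4.

```python
from ultralytics import YOLO

# Medium model (yolov8m.pt) with COCO-pretrained weights, as used here.
model = YOLO("yolov8m.pt")

# Train on the labelled slices; "icebergs.yaml" is a hypothetical dataset
# file listing the PNG slices and their YOLO-format bounding-box labels.
model.train(data="icebergs.yaml", epochs=50)

# Detect on unseen slices at the F1-optimal confidence threshold (0.31).
results = model.predict(source="test_slices/", conf=0.31)
```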

3.4. Evaluation Metrics

The labelled iDPolRAD image slices were used to train the YOLOv8 model. Training was performed using the standard medium model with pretrained weights to improve feature extraction, given the limited dataset. We used the default Ultralytics COCO-trained weights. The dataset was split into training (80%) and validation (20%) sets, ensuring that images from the same temporal acquisition were not split across sets to avoid data leakage.
To augment the limited SAR dataset, geometric flip augmentations were applied. Because iceberg shapes lack a preferred orientation in SAR, flips were used rather than rotations. Each image was augmented with a horizontal flip, a vertical flip, and a combined horizontal–vertical flip, yielding three additional training samples per original. These simulate variations in SAR backscatter, iceberg orientation, and ice state, improving the model’s ability to generalise to unseen images. Training was performed on a standard GPU setup, taking approximately 12 min, with validation completed in about 1 min.
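The three flip variants, with the corresponding bounding-box updates, can be sketched as follows. This is a minimal illustration assuming YOLO-style normalised box coordinates (x-centre, y-centre, width, height); a mirrored image simply maps a centre coordinate c to 1 − c.

```python
import numpy as np

def flip_augment(img, boxes):
    """Generate the three flip variants used for augmentation.

    img   : 2-D array (one image slice).
    boxes : (N, 4) array of normalised boxes (x_centre, y_centre, w, h).
    Returns [(image, boxes)] for horizontal, vertical, and combined flips.
    """
    def flip_boxes(b, fx, fy):
        b = b.copy()
        if fx:
            b[:, 0] = 1.0 - b[:, 0]   # mirror x-centres
        if fy:
            b[:, 1] = 1.0 - b[:, 1]   # mirror y-centres
        return b

    return [
        (np.fliplr(img), flip_boxes(boxes, True, False)),
        (np.flipud(img), flip_boxes(boxes, False, True)),
        (np.flipud(np.fliplr(img)), flip_boxes(boxes, True, True)),
    ]
```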
Evaluation metrics were calculated to quantify model performance, including Precision, Recall, F1 score, mean average precision (mAP), and CIoU. These are defined as follows:
$$\mathrm{Precision} = \frac{TP}{TP + FP}$$
$$\mathrm{Recall} = \frac{TP}{TP + FN}$$
$$F1 = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$$
where TP, FP, and FN are true positives, false positives, and false negatives, respectively. Precision measures the ratio of correctly detected icebergs to all detections, while recall measures the proportion of detectable icebergs successfully identified. F1 combines both to assess overall performance.
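These definitions translate directly into code. The counts in the usage note below are illustrative only (chosen to reproduce precision ≈ 0.81 and recall = 0.68); they are not the study's actual detection counts.

```python
def detection_scores(tp, fp, fn):
    """Precision, recall, and F1 from detection counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```

For example, hypothetical counts of 68 true positives, 16 false positives, and 32 false negatives give precision ≈ 0.81, recall = 0.68, and F1 ≈ 0.74, consistent with the values reported in the Abstract.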
To complement precision, recall, and F1, we also report mean Average Precision (mAP), defined as the mean of the area under the precision–recall curve across IoU thresholds (t). Specifically, mAP is computed as follows:
$$\mathrm{mAP} = \frac{1}{|\tau|} \sum_{t \in \tau} AP(t)$$
where $AP(t)$ is the integral of the precision–recall curve at IoU threshold $t$, defined as
$$AP(t) = \int_0^1 p(r, t)\, dr$$
where $p(r, t)$ is the precision–recall curve at a given IoU threshold.
For this study, we report mAP 50 and mAP 50–95, where 50 denotes average precision (AP) at IoU = 0.50, and 50–95 denotes the average AP across IoU thresholds from 0.50 to 0.95 in 0.05 increments (COCO-style mAP [0.5:0.95]). Although CIoU (centre distance IoU) was used as the bounding-box loss during training, evaluation metrics in Figure 4 are computed using standard AP measures rather than generalised intersection over union (GIoU). YOLOv8 model training metrics are reported in Supplementary File S3. The configuration parameters used for YOLOv8 training are listed in Supplementary File S4.
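A minimal numerical version of these two formulas is shown below; the integral of the precision–recall curve is approximated with the trapezoidal rule (real evaluation toolkits use interpolated variants, so this is a sketch rather than the exact COCO procedure).

```python
import numpy as np

def average_precision(precision, recall):
    """Trapezoidal area under the precision-recall curve for one IoU threshold.

    precision, recall : sequences sampled along decreasing confidence,
    so recall is non-decreasing.
    """
    p = np.asarray(precision, dtype=float)
    r = np.asarray(recall, dtype=float)
    return float(np.sum((r[1:] - r[:-1]) * (p[1:] + p[:-1]) / 2.0))

def mean_ap(ap_per_threshold):
    """mAP: mean of AP over the set of IoU thresholds (e.g. 0.50 to 0.95)."""
    return sum(ap_per_threshold) / len(ap_per_threshold)
```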
Figure 4. (a) Model performance metrics for the YOLOv8 iceberg detection model showing precision, recall, and mean average precision (mAP) scores. (b) Training and validation loss curves over 50 epochs. mAP@50 means the average precision at IoU of 0.50. mAP@50–95 means the average precision across IoU thresholds from 0.50 to 0.95 in 0.05 increments.

3.5. Comparison to Soldal et al. (2019) [33]

The iceberg detection workflow in this study uses Sentinel-1 data pre-processed with the iDPolRAD chain, which includes radiometric calibration, geometric correction, and speckle filtering. These processed backscatter tiles are then supplied to the YOLOv8 CNN for supervised training and testing.
While the iDPolRAD preprocessing itself is not novel, the key difference lies in the detection stage. Ref. [33] employed classical CFAR thresholding combined with a technique they referred to as “blob detection” to identify iceberg candidates. Here, “blob” simply denotes regions exceeding a backscatter threshold. In contrast, our approach replaces this rule-based stage with a deep learning–based CNN, enabling the model to directly learn iceberg signatures from labelled data, including subtle features in complex fast-ice environments.
This comparison highlights that the novelty of this study lies in the integration of a modern CNN detection framework with iDPolRAD-pre-processed SAR data, demonstrating the feasibility of automated iceberg detection under Arctic operational constraints.

4. Results

4.1. Training Evaluation

Model performance during training was evaluated using precision, recall, F1 score, and mean average precision (mAP) (Table 4). These scores were calculated from the validation dataset. The F1 score and mAP reached ~0.6 early in training and increased incrementally thereafter. At the end of training, the model achieved an F1 score of 0.74 and a mAP of 0.78, indicating good convergence and predictive capability.
Table 4. CNN training performance metrics at selected epochs, including precision, recall, F1 score, and mean average precision (mAP). This table shows how model accuracy evolves with training.
These metrics, calculated on a pre-filtered subset of icebergs visible in both SAR and optical imagery, likely represent an upper bound on operational performance (Section 5).

4.2. Training Loss

The evolution of the three loss components is shown in Figure 4. The overall loss decreased consistently for both the training and validation datasets, demonstrating that the model learned to minimise classification, DFL, and localisation errors. Specifically, DFL improved the model’s ability to accurately localise iceberg-like objects, the box classifier loss enhanced prediction of bounding box positions and object presence, and the classification loss improved iceberg identification. As expected, the training losses are lower than the validation losses since the model only observed training images during learning.
The loss trends indicate that the model has effectively learned patterns within the training dataset; however, given the relatively small number of scenes and limited dual-polarisation data, overfitting to local fast-ice conditions remains a possibility.
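During training, the three components are combined into a single weighted objective. The sketch below assumes the default Ultralytics gain values (box = 7.5, cls = 0.5, dfl = 1.5, as set in args.yaml); the actual gains used are recorded in Supplementary File S4.

```python
def yolo_total_loss(box_loss, cls_loss, dfl_loss,
                    box_gain=7.5, cls_gain=0.5, dfl_gain=1.5):
    # Weighted sum of the three components minimised during training.
    # Gain defaults mirror the Ultralytics YOLOv8 args.yaml settings.
    return box_gain * box_loss + cls_gain * cls_loss + dfl_gain * dfl_loss
```

The large box gain means localisation errors dominate the gradient signal, which suits small-target detection where precise box placement matters more than class confidence.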

4.3. Validation

Validation performance was quantified using precision, recall, and F1 score. Figure 5a shows the precision–recall (PR) curve, while mAP provides an overall measure of predictive performance. The model achieved a mAP of 0.78 on the validation dataset, indicating robust detection capabilities.
Figure 5. (a) Precision-recall curve for the YOLOv8 model on validation images, showing the trade-off between precision and recall, (b) F1 score curve for the YOLOv8 model on validation images, illustrating overall detection performance across thresholds.
Figure 5b presents the F1 score as a function of confidence threshold. The maximum F1 score of 0.74 occurred at a confidence threshold of 0.31, corresponding to the point on the PR curve that optimises the balance between precision and recall.
While the curves suggest good predictive performance, they do not account for seasonal or environmental variability, which may influence model effectiveness in other FJL regions or times of year.
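The F1-maximising confidence threshold in Figure 5b can be found by sweeping all candidate thresholds over the scored detections. The sketch below uses a toy detection list, not our validation data:

```python
def best_f1_threshold(detections, n_ground_truth):
    """Sweep confidence thresholds; return the (threshold, F1) pair maximising F1.

    detections -- list of (confidence, is_true_positive) pairs, one per prediction
    """
    best_thr, best_f1 = 0.0, 0.0
    for thr in sorted({conf for conf, _ in detections}):
        kept = [tp for conf, tp in detections if conf >= thr]
        tp = sum(kept)
        if tp == 0:
            continue
        precision = tp / len(kept)
        recall = tp / n_ground_truth
        f1 = 2 * precision * recall / (precision + recall)
        if f1 > best_f1:
            best_thr, best_f1 = thr, f1
    return best_thr, best_f1

# Toy example: three true detections and one false alarm against four labels.
thr, f1 = best_f1_threshold([(0.9, 1), (0.8, 1), (0.31, 1), (0.2, 0)], n_ground_truth=4)
```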

4.4. Model Testing

To assess generalisation, the model was applied to a previously unseen Sentinel-1 image and its corresponding Sentinel-2 image (image tile T40XEQ, Table 3) using the same parameters as training and validation. A qualitative assessment is shown in Figure 6, where red bounding boxes and black circles highlight predicted and labelled icebergs, respectively, in the SAR images. The detector performed best for large, bright icebergs, while smaller or less bright icebergs were predicted with lower confidence. Some false positives were observed, indicating areas for potential refinement.
Figure 6. Iceberg detection overlay for a Sentinel-1 SAR image (10 m pixel size). Red squares indicate predicted icebergs, while black circles show the manually labelled icebergs. This visualisation demonstrates the comparison between model predictions and reference labels used for training the CNN, highlighting individual icebergs against the ocean background.
Performance on previously unseen tiles highlights the influence of iceberg brightness and size, as well as fast-ice heterogeneity. Detection accuracy decreases for smaller or less reflective icebergs, underscoring limitations imposed by Sentinel-1 resolution and environmental variability.
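Before inference, each wide-swath scene is sliced into fixed-size tiles (the Supplementary Materials record the physical top-left coordinate of every slice). A sketch of the origin computation is shown below; the tile size and overlap values are illustrative, with overlap ensuring that an iceberg cut by one tile boundary appears whole in a neighbouring tile.

```python
def tile_origins(width, height, tile=640, overlap=64):
    """Top-left (x, y) pixel origins of overlapping tiles covering a scene."""
    stride = tile - overlap

    def axis(extent):
        starts = list(range(0, max(extent - tile, 0) + 1, stride))
        # Ensure the final tile reaches the scene edge.
        if starts[-1] + tile < extent:
            starts.append(extent - tile)
        return starts

    return [(x, y) for y in axis(height) for x in axis(width)]

origins = tile_origins(width=1500, height=700, tile=640, overlap=64)
```

Predictions from overlapping tiles are then mapped back to scene coordinates, where duplicate boxes on the overlaps can be merged by non-maximum suppression.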

4.5. Comparison and Summary

The YOLOv8 model was further compared to the iDPolRAD + CFAR pipeline used by [33] for the FJL region. As summarised in Table 5, those authors reported precisions of 1.5% and 5.8% and recalls of 50.8% and 43.1% for the blob and CFAR detection stages, respectively. In contrast, the YOLOv8 pipeline achieved a precision of 81.2%, a recall of 68%, and a mAP of 78%, with an inference speed of 9.3 ms per image.
Table 5. Comparison of detection performance between YOLOv8 and CFAR detection (Soldal et al., 2019 [33]) using SAR + iDPolRAD images. Blob detection is based on a threshold of 10⁻⁷, and CFAR detection is based on a threshold of 10⁻⁶. Metrics include precision, recall, F1 score, mean average precision (mAP), inference speed, and processing chain used.
Overall, the validation and testing results indicate that the YOLOv8 model can reliably detect icebergs embedded within fast ice, with a regional mAP of 78%. Detection performance is highest for large, bright icebergs, while icebergs near fast-ice boundaries are harder to detect, reflecting the limitations imposed by SAR spatial resolution and cluttered ice conditions. Although YOLOv8 substantially outperforms the CFAR-based pipeline in terms of precision and recall on this dataset, these results should be interpreted cautiously, as environmental factors, seasonal changes, and unresolved SAR limitations could affect real-world operational performance.

5. Discussion

In this work, we considered the feasibility of implementing the YOLOv8 CNN for iceberg detection in a fast sea ice environment. Below, we outline the major issues facing detection and modelling, and place our findings in context with previous work.

5.1. Comparison with Soldal Results

The precision, recall, and F1 scores reflect the performance of our model based on one primary approach to detect icebergs in the SAR iDPolRAD images. Bright iceberg-like objects exhibited higher prediction values, likely due to strong HV backscatter observed in the imagery. While HH polarisation can also produce strong returns, HV generally provides better contrast between icebergs and fast ice in our dataset. These observations are directly derived from the Sentinel-1 imagery used in this study and are consistent with previous studies [18,33].
Ref. [33] applied a blob detector followed by a traditional CFAR approach. The blob detector first identified candidate bright regions in the SAR images based on local intensity variations, and CFAR then filtered these candidates based on local background statistics to maintain a constant false-alarm rate. Their metrics depended heavily on the CFAR Probability of False Alarm (PF) threshold, while YOLOv8 outputs both precision and recall at a single confidence threshold and aggregated mAP. Ref. [33] often experienced high false positives relative to true detections, whereas YOLOv8 achieved higher detection rates with fewer false alarms. However, since our training labels were pre-filtered to include only targets visible in Sentinel-2 imagery, the metrics likely represent an upper bound for visually verifiable icebergs. Difficulties remain in discriminating icebergs from fast ice features such as hummocks and ridges.
It is important to note that our workflow does not employ CFAR. The iDPolRAD preprocessing used here is non-CFAR-based, and detection is performed solely by the YOLOv8 model. CFAR is discussed only in the context of the baseline provided by [33]. Parameter tuning of CFAR is therefore not applicable to our method.

5.2. Alternative Baselines and Future Work

Besides CFAR and blob detection, a range of non-deep learning methods have been used for SAR target detection, including matched-filter/template matching [35], morphological blob detectors [17], texture-based classification (e.g., GLCM features + Support Vector Machine) [36], Hough-transform methods for geometric features [37], and other local-statistics anomaly detectors. Lightweight deep learning models (for example, compact CNNs, or small single-stage detectors such as YOLOv8n/YOLOv8s or MobileNet-SSD [38]) also provide attractive baselines when training data are limited. A comprehensive comparison of these approaches against the YOLOv8 pipeline (including parameter optimisation for classical detectors and the design and training of a simple CNN baseline) is outside the scope of the present proof-of-concept, but we recognise its importance and plan it as immediate future work to quantify trade-offs in accuracy, false-alarm characteristics, and computational cost.

5.3. Failure Case Analysis

Although overall detection performance is high, several systematic failure modes were observed during visual inspection of the validation results. We show a representative example of a false positive and a false negative in Figure 7.
Figure 7. Representative failure cases from the validation dataset. (a,b) False-positive detection: a bright patch on a sea ice ridge was kept as background in (a), but the CNN predicted it as an iceberg in (b). (c,d) False-negative detection: a low-contrast iceberg labelled in (c) was missed by the model in (d). These examples illustrate limitations in distinguishing subtle or ambiguous SAR backscatter features from icebergs, highlighting areas for future model improvement.
False positives were primarily associated with sea-ice ridges and rough fast-ice zones, which often exhibit strong HH and HV backscatter and therefore resemble iceberg signatures in iDPolRAD-filtered imagery. In several scenes, the model responded to elongated ridge features that produce bright, compact highlights at 90 m resolution, making them indistinguishable from small tabular icebergs. A second false-positive category consisted of bright brash-ice patches or refrozen fracture zones, which form small high-backscatter clusters similar in size and texture to iceberg fragments. Occasional false responses to SAR side-lobe or multi-path artefacts were also observed, although these were infrequent.
False negatives were typically linked to icebergs with very low contrast relative to the surrounding fast ice, for example, where surface snow cover reduces HV volume scattering or where the surrounding ice was unusually bright. Several missed targets were close to or below the effective resolution limit of Sentinel-1 EWS mode (≈90 m), making them only 1–2 resolution cells in extent. In such cases, even manual identification was uncertain. A small number of icebergs embedded within high-backscatter ridge complexes were also missed, as the local background suppressed the contrast required for detection.
These patterns indicate that most failure cases arise from SAR backscatter physics and resolution constraints, rather than from model architecture limitations. Improved performance would likely require (i) training data that explicitly includes ridge-dominated and low-contrast conditions, (ii) incidence-angle-normalised inputs, or (iii) higher-resolution sensors such as Sentinel-1 Single Look Complex (SLC) or commercial SAR.

5.4. Study Limitations

Dataset and domain limitations: The study is based on a limited number of Sentinel-1 and Sentinel-2 scenes covering specific dates and regions in the FJL archipelago. Seasonal variability in iceberg production, fast-ice extent, and environmental conditions (e.g., wind, sea state, and snow cover) was not included, potentially limiting generalisability. Detection performance may also vary seasonally: in summer or melt-season conditions, melt ponds and other surface features can produce low-backscatter areas resembling open water, reducing the contrast between icebergs and surrounding sea ice. The current model has only been evaluated on fast-ice conditions; performance under summer conditions remains to be assessed in future work. Regional factors such as glacier terminus geometry, local bathymetry, and fast-ice topography may also influence backscatter and detection performance, requiring future evaluation. The model is trained exclusively on Franz Josef Land data, and performance in other Arctic regions may differ due to variations in sea ice dynamics, snow cover, salinity, and other environmental factors affecting SAR backscatter. Additional region-specific training or fine-tuning would be required to extend the approach to other Arctic areas.
Radar data characteristics: Sentinel-1 GRD data were radiometrically calibrated and terrain-corrected, but no additional normalisation for incidence angle variation was applied. Incidence angles ranged from 18 to 46°, which may affect backscatter intensity and iceberg detectability, particularly for low-contrast targets [25]. Detection performance may therefore vary with iceberg orientation and size. Future work could incorporate incidence-angle correction or gamma nought normalisation to mitigate this variability.
Polarimetric limitations: Only dual-polarisation (HH/HV) data were available. Fully polarimetric SAR could improve discrimination between icebergs and fast ice by exploiting additional scattering channels, although such data are typically available only during Announcement of Opportunity periods and have narrower swath widths.
Detection approach: Object detection was prioritised over instance segmentation due to the resolution of Sentinel-1 data. Icebergs < 120 m are unlikely to be reliably detected, and bounding boxes provide a practical compromise, capturing both the iceberg and its immediate context. Pre-filtering to include only icebergs visible in both Sentinel-1 and Sentinel-2 likely inflates the reported metrics, and results should be interpreted as an upper bound.
Training data constraints: There is a lack of available labelled iceberg datasets. Manual labelling is time-consuming, and iceberg populations change annually, meaning that comprehensive datasets across Greenland, Svalbard, and FJL remain unavailable.
Applicability to open water: In open-water/drifting-ice environments, the contrast between iceberg and background can change due to vessel-induced wakes, ocean surface roughness and larger Doppler effects. iDPolRAD preprocessing may respond differently where the background is dynamic rather than stationary. We therefore expect (a) increased false positives from transient low-backscatter features (e.g., wind-roughened water, breaking waves), and (b) potential detection of smaller icebergs due to higher contrast against open water, but with different false-alarm characteristics. We plan to evaluate performance on Sentinel-1 scenes with documented drifting ice and AIS-correlated iceberg observations in future work.
Despite dataset and environmental constraints, this study demonstrates that YOLOv8 CNNs, when combined with iDPolRAD filtering, are feasible for automated detection of icebergs in fast ice environments. The approach could be scaled to other Arctic regions with similar environmental characteristics, particularly where overlapping SAR and optical imagery are available. Operational deployment would benefit maritime safety by providing near-real-time monitoring of iceberg hazards and supporting climate monitoring by enabling consistent iceberg tracking under challenging polar conditions.

6. Conclusions

In this proof-of-concept study, we proposed an automated deep learning approach for detecting icebergs using ESA Sentinel-1 satellite imagery around Franz Josef Land (FJL) in the Arctic. Our work demonstrates the capabilities of the YOLOv8 CNN framework while highlighting key implementation challenges.
The YOLOv8 model achieved an F1 score of 0.74 and a mAP50 of 0.78 for detecting icebergs embedded in fast ice. While this is the first study to combine a CNN with an iDPolRAD filter for iceberg detection, these results should be interpreted with caution. Model performance under more variable conditions, such as open water environments, remains unknown. Additionally, instance segmentation was not performed, so the detection of smaller icebergs may be limited by spatial resolution and polarisation modes. Future work could explore combining object detection and instance segmentation with higher-resolution, quad-polarimetric SAR imagery.
A major limitation remains the lack of extensive labelled datasets, driven by the dynamic environment around glacier tongues. Expanding training datasets through additional overlapping acquisitions could help improve model robustness. Although icebergs generally exhibit strong cross-polarised backscatter, some icebergs remain undetectable in SAR imagery compared to optical imagery. Despite these limitations, our results indicate that integrating YOLOv8 CNNs with iDPolRAD filtering offers a promising approach for automated iceberg detection, with potential benefits for maritime safety and climate change research.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/rs17243998/s1, Supplementary File S1: combined_physical_coord_df_top_left_T40XEQ.csv, containing physical top-left coordinates for each SAR image slice of tile T40XEQ; Supplementary File S2: combined_physical_coord_df_top_left_T40XDQ, containing physical top-left coordinates for each SAR image slice of tile T40XDQ; Supplementary File S3: training_results, containing YOLOv8 model training metrics; Supplementary File S4: args.yaml, listing model configuration parameters.

Author Contributions

Conceptualization, J.B. and J.S.; methodology, J.B. and J.S.; software, J.S. and J.B.; validation, J.B.; formal analysis, J.B.; investigation, J.B.; resources, J.S.; data curation, J.B. and J.S.; writing—original draft preparation, J.B.; writing—review and editing, J.B. and J.S.; visualisation, J.B.; supervision, J.S.; project administration, J.B.; funding acquisition, J.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Science and Technology Facilities Council, Grant: ST/Y509899/1; the Lloyd’s Register Foundation, Grant Sg3\100030; and a NERC Cross-Disciplinary Discovery Science grant NE/X018482/1.

Data Availability Statement

Raw data SAR and MSI images are freely available at https://browser.dataspace.copernicus.eu/ (accessed on 1 February 2024). Processed data, including image slice coordinate references and YOLOv8 training results, are provided in the Supplementary Materials. The source code is available at https://github.com/drsonny/iceberg-image-tutorial (accessed on 24 November 2025). Additional data are available from the corresponding author upon reasonable request.

Acknowledgments

The authors would like to thank Matthew Chan, Ryan Cooper, James Edholm, Heather Wade, Sam Ward and Florence Wragg for their contribution to this project. The authors would like to thank Armando Marino for useful conversations. The authors have reviewed and edited the output and take full responsibility for the content of this publication.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
SAR: Synthetic Aperture Radar
iDPolRAD: Dual Polarisation Intensity Ratio Anomaly Detector
FJL: Franz Josef Land
EWS: Extra Wide Swath
HH: Horizontal Transmit, Horizontal Receive
HV: Horizontal Transmit, Vertical Receive
VV: Vertical Transmit, Vertical Receive
VH: Vertical Transmit, Horizontal Receive
CNN: Convolutional Neural Network
SSD: Single Shot Detector
SVM: Support Vector Machine
YOLO: You Only Look Once
GRD: Ground Range Detected
PolSAR: Polarimetric SAR
mAP: Mean Average Precision
DFL: Distribution Focal Loss
CIoU: Complete Intersection over Union
TP: True Positive
FP: False Positive
FN: False Negative
PR: Precision–Recall
AP: Average Precision
SLC: Single Look Complex

References

  1. Chen, J.-L.; Kang, S.-C.; Guo, J.-M.; Xu, M.; Zhang, Z.-M. Variation of sea ice and perspectives of the Northwest Passage in the Arctic Ocean. Adv. Clim. Change Res. 2021, 12, 447–455. [Google Scholar] [CrossRef]
  2. Johannsdottir, L.; Cook, D.; Arruda, G.M. Systemic risk of cruise ship incidents from an Arctic and insurance perspective. Elem. Sci. Anth. 2021, 9, 00009. [Google Scholar] [CrossRef]
  3. Tiffin, S.; Pilkington, R.; Hill, C.; Debicki, M.; McGonigal, D.; Jolles, W. A decision-support system for ice/iceberg surveillance, advisory and management activities in offshore petroleum operations. In Proceedings of the OTC Arctic Technology Conference, Houston, TX, USA, 10–12 February 2014; p. OTC–24657–MS. [Google Scholar]
  4. Heiselberg, P.; Heiselberg, H. Ship-Iceberg discrimination in Sentinel-2 multispectral imagery by supervised classification. Remote Sens. 2017, 9, 1156. [Google Scholar] [CrossRef]
  5. Scardilli, A.S.; Salvó, C.S.; Saez, L.G. Southern Ocean ice charts at the Argentine Naval Hydrographic Service and their impact on safety of navigation. Front. Mar. Sci. 2022, 9, 971894. [Google Scholar] [PubMed]
  6. Long, D.G. Polar applications of spaceborne scatterometers. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 10, 2307–2320. [Google Scholar] [CrossRef] [PubMed]
  7. Tournadre, J.; Whitmer, K.; Girard-Ardhuin, F. Iceberg detection in open water by altimeter waveform analysis. J. Geophys. Res. Ocean. 2008, 113. [Google Scholar] [CrossRef]
  8. Berkman, P.; Fiske, G.; Lorenzini, D.; Young, O.; Pletnikoff, K.; Grebmeier, J.; Fernandez, L.; Divine, L.; Causey, D.; Kapsar, K. Satellite Record of Pan-Arctic Maritime Ship Traffic; National Oceanic and Atmospheric Administration: Silver Spring, MD, USA, 2022. [Google Scholar]
  9. Hill, B.T. Ship collision with iceberg database. In Proceedings of the SNAME International Conference and Exhibition on Performance of Ships and Structures in Ice, Banff, AB, Canada, 16–19 July 2006; p. D031S008R002. [Google Scholar]
  10. Mackie, S.; Smith, I.J.; Ridley, J.K.; Stevens, D.P.; Langhorne, P.J. Climate response to increasing Antarctic iceberg and ice shelf melt. J. Clim. 2020, 33, 8917–8938. [Google Scholar] [CrossRef]
  11. Frost, A.; Ressel, R.; Lehner, S. Automated iceberg detection using high-resolution X-band SAR images. Can. J. Remote Sens. 2016, 42, 354–366. [Google Scholar] [CrossRef]
  12. Bailey, J.; Akbari, V.; Liu, T.; Lauknes, T.; Marino, A. Iceberg detection with RADARSAT-2 quad-polarimetric (quad-pol) C-band SAR in Kongsfjorden, Svalbard–comparison with a ground-based radar. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 5790–5803. [Google Scholar] [CrossRef]
  13. Sandven, S.; Babiker, M.; Kloster, K. Iceberg observations in the Barents Sea by radar and optical satellite images. In Proceedings of the ENVISAT Symposium, Montreux, Switzerland, 23–27 April 2007; pp. 23–27. [Google Scholar]
  14. Bigg, G.R. Icebergs: Their Science and Links to Global Change; Cambridge University Press: Cambridge, UK, 2015. [Google Scholar]
  15. National Snow and Ice Data Center (NSIDC). Cryosphere Glossary: Fast Ice. Available online: https://nsidc.org/learn/cryosphere-glossary (accessed on 20 December 2024).
  16. Tonboe, R.T.; Eastwood, S.; Lavergne, T.; Sørensen, A.M.; Rathmann, N.; Dybkjær, G.; Pedersen, L.T.; Høyer, J.L.; Kern, S. The EUMETSAT sea ice concentration climate data record. Cryosphere 2016, 10, 2275–2290. [Google Scholar] [CrossRef]
  17. Oliver, C.; Quegan, S. Understanding Synthetic Aperture Radar Images; SciTech Publishing: St. Louis, MO, USA, 2004. [Google Scholar]
  18. Marino, A.; Dierking, W.; Wesche, C. A depolarization ratio anomaly detector to identify icebergs in sea ice using dual-polarization SAR images. IEEE Trans. Geosci. Remote Sens. 2016, 54, 5602–5615. [Google Scholar] [CrossRef]
  19. Bailey, J.; Marino, A.; Akbari, V. Comparison of Target Detectors to Identify Icebergs in Quad-Polarimetric L-Band Synthetic Aperture Radar Data. Remote Sens. 2021, 13, 1753. [Google Scholar] [CrossRef]
  20. Shi, Z.; Lu, X.; Hou, L. Convolutional neural network for Iceberg Classifier. J. Phys. Conf. Ser. 2019, 1213, 042014. [Google Scholar] [CrossRef]
  21. Pandey, K.S.; Tirthkar, S.; Gaikwad, S. Deep Learning for Iceberg Detection in Satellite Images. Int. Res. J. Eng. Technol. 2021, 8, 2395. [Google Scholar]
  22. Sivapriya, M.; Mohamed Fathimal, P. Ice Berg Detection in SAR Images Using Mask R-CNN. In Emergent Converging Technologies and Biomedical Systems: Select Proceedings of ETBS 2021; Springer: Berlin/Heidelberg, Germany, 2022; pp. 625–633. [Google Scholar]
  23. Krishnan, R.; Thangavelu, A.; Panneer, P.; Devulapalli, S.; Misra, A.; Putrevu, D. Iceberg detection and tracking using two-level feature extraction methodology on Antarctica Ocean. Acta Geophys. 2022, 70, 2953–2963. [Google Scholar] [CrossRef]
  24. Santamaria, C.; Greidanus, H.; Fournier, M.; Eriksen, T.; Vespe, M.; Alvarez, M.; Arguedas, V.F.; Delaney, C.; Argentieri, P. Sentinel-1 contribution to monitoring maritime activity in the Arctic. In Proceedings of the ESA Living Planet Symposium, Prague, Czech Republic, 9–13 May 2016; pp. 9–13. [Google Scholar]
  25. Færch, L.; Dierking, W.; Hughes, N.; Doulgeris, A.P. Mapping icebergs in sea ice: An analysis of seasonal SAR backscatter at C-and L-band. Remote Sens. Environ. 2024, 304, 114074. [Google Scholar] [CrossRef]
  26. Szegedy, C.; Ioffe, S.; Vanhoucke, V.; Alemi, A. Inception-v4, inception-resnet and the impact of residual connections on learning. In Proceedings of the AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017. [Google Scholar]
  27. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 39, 1137–1149. [Google Scholar] [CrossRef]
  28. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.-Y.; Berg, A.C. Ssd: Single shot multibox detector. In Proceedings of the Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; pp. 21–37. [Google Scholar]
  29. Varghese, R.; Sambath, M. YOLOv8: A Novel Object Detection Algorithm with Enhanced Performance and Robustness. In Proceedings of the 2024 International Conference on Advances in Data Engineering and Intelligent Computing Systems (ADICS), Chennai, India, 18–19 April 2024; pp. 1–6. [Google Scholar]
  30. Hass, F.S.; Jokar Arsanjani, J. Deep learning for detecting and classifying ocean objects: Application of YoloV3 for iceberg–ship discrimination. ISPRS Int. J. Geo-Inf. 2020, 9, 758. [Google Scholar] [CrossRef]
  31. CLS. Sentinel-1 Product Specification; European Space Agency (ESA): Paris, France, 2023; p. 219. [Google Scholar]
  32. López-Martínez, C.; Pottier, E. Basic principles of SAR polarimetry. In Polarimetric Synthetic Aperture Radar: Principles and Application; Springer International Publishing: Cham, Switzerland, 2021; pp. 1–58. [Google Scholar]
  33. Soldal, I.H.; Dierking, W.; Korosov, A.; Marino, A. Automatic Detection of Small Icebergs in Fast Ice Using Satellite Wide-Swath SAR Images. Remote Sens. 2019, 11, 806. [Google Scholar] [CrossRef]
  34. Ultralytics. YOLOv8 Documentation. Available online: https://docs.ultralytics.com/models/yolov8/ (accessed on 27 October 2025).
  35. Novak, L.M.; Burl, M.C. Optimal speckle reduction in POL-SAR imagery and its effect on target detection. In Proceedings of the Millimeter Wave and Synthetic Aperture Radar, Orlando, FL, USA, 27–28 March 1989; pp. 84–115. [Google Scholar]
  36. Dekker, R.J. Texture analysis and classification of ERS SAR images for map updating of urban areas in the Netherlands. IEEE Trans. Geosci. Remote Sens. 2003, 41, 1950–1958. [Google Scholar] [CrossRef]
  37. Mukhopadhyay, P.; Chaudhuri, B.B. A survey of Hough Transform. Pattern Recognit. 2015, 48, 993–1010. [Google Scholar] [CrossRef]
  38. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
