Article

Seasonally Robust Offshore Wind Turbine Detection in Sentinel-2 Imagery Using Imaging Geometry-Aware Deep Learning

Xike Song and Ziyang Li
1 School of Informatics, Xiamen University, Xiamen 361102, China
2 Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(14), 2482; https://doi.org/10.3390/rs17142482
Submission received: 12 June 2025 / Revised: 8 July 2025 / Accepted: 11 July 2025 / Published: 17 July 2025

Abstract

Remote sensing has emerged as a promising technology for large-scale detection and updating of global wind turbine databases. High-resolution imagery (e.g., Google Earth) facilitates the identification of offshore wind turbines (OWTs) but offers limited offshore coverage due to the high cost of capturing vast ocean areas. In contrast, medium-resolution imagery, such as 10 m Sentinel-2, provides broad ocean coverage but depicts turbines only as small bright spots and shadows, making accurate detection challenging. To address these limitations, we propose a novel deep learning approach that captures the variability in OWT appearance and shadows caused by changes in solar illumination and satellite viewing geometry. Our method learns intrinsic, imaging geometry-invariant features of OWTs, enabling robust detection across multi-seasonal Sentinel-2 imagery. The approach is implemented using Faster R-CNN as the baseline, with three enhanced extensions: (1) direct integration of imaging parameters, where Geowise-Net incorporates solar and viewing angular information from satellite metadata to improve geometric awareness; (2) implicit geometry learning, where Contrast-Net employs contrastive learning on seasonal image pairs to capture the variability in turbine appearance and shadows caused by changes in solar and viewing geometry; and (3) a Composite model that integrates the two geometry-aware models to exploit their complementary strengths. All four models were evaluated using Sentinel-2 imagery from offshore regions of China. The ablation experiments showed a progressive improvement in detection performance in the order Faster R-CNN < Geowise-Net < Contrast-Net < Composite. Seasonal tests demonstrated that, unlike the baseline, the proposed models maintained high performance on summer images, in which turbine shadows are significantly shorter than in winter scenes. The Composite model, in particular, showed only a 0.8% difference in F1 score between the two seasons, compared with up to 3.7% for the baseline, indicating strong robustness to seasonal variation. By applying our approach to 887 Sentinel-2 scenes acquired over China's offshore regions between January 2023 and March 2025, we built the China Offshore Wind Turbine Dataset, mapping 7369 turbines as of March 2025.


1. Introduction

Global wind power has been a key clean energy solution, with offshore wind power capacity projected to expand from 35 GW in 2020 to 382 GW by 2030 [1,2]. Updating the locations of newly installed offshore wind turbines (OWTs) is essential for marine governance, ecological protection, and energy planning [3,4]. However, current databases face challenges: commercial sources (e.g., 4C Offshore) have restricted access, while open platforms (e.g., OpenStreetMap) often lack completeness and precision [5].
Satellite remote sensing has emerged as a key approach for rapid detection of OWTs, owing to its extensive spatial coverage, short revisit cycle, and low cost. Existing remote sensing methods can be broadly categorized into two technological phases. Early efforts primarily employed machine learning-based image classification, using backscatter features from radar data or spectral features from optical imagery to classify turbine targets at the pixel level [3,6]. In synthetic aperture radar (SAR) imagery, OWTs, particularly their metallic components, exhibit backscattering characteristics that differ markedly from the surrounding sea surface [7], although confusion with other objects such as ships or offshore platforms remains a challenge. Adaptive thresholding methods that account for these differences have been developed to identify OWTs from Sentinel-1 imagery [3,6,8]. For optical imagery, such as Sentinel-2 or Landsat, multi-spectral features of OWTs have commonly been used as inputs to machine learning algorithms, such as random forests, for turbine recognition [9]. Optical data are more susceptible to cloud cover and precipitation, but under favorable conditions they offer visual cues not captured by SAR, including narrow, elongated shadows and occasionally blurred turbine blades during calm sea states. However, these pixel-level approaches often produce numerous false positives and require substantial post-processing: related pixels must be clustered into coherent OWT objects, and additional filtering is necessary to exclude non-target features such as moving vessels, small islands, rocky reefs, and offshore platforms [3].
More recently, deep learning-based object detection approaches, particularly those using convolutional neural networks (CNNs), have significantly improved the detection of OWTs by capturing both turbine-specific features and contextual information from the surrounding environment [5,8,10]. These advances have demonstrated strong performance in various object detection tasks, such as vehicle identification [11], ship recognition [12,13,14], and wind turbine detection [15,16,17,18,19]. Studies on wind turbine detection often focus on sub-meter high-resolution imagery, where turbine structures and their shadows are clearly visible owing to the turbines' large physical size. In particular, the elongated shadows have received considerable attention as important auxiliary cues for turbine recognition [17,19]. For example, LinkNet integrates a shadow detector and a hub detector within an a contrario framework to identify turbines in airborne orthophotos [20]. WTYolo extends the channel dimension in the head of the YOLOv5 model to support detection of turbines at large, medium, and small scales in Google Earth imagery [15]. RSWDet, also built on a YOLOv5 backbone and applied to GF-2 images, replaces the standard bounding box head with a point-set detection head that shifts attention from the bounding box to the turbine itself [18].
Although significant progress has been made in wind turbine detection or defect detection using high-resolution imagery, such as Google Earth, GF-2, and even UAV datasets [15,18,21], these efforts have predominantly focused on land-based or onshore turbines. This is largely because high-resolution imagery is typically acquired over densely populated and economically developed regions, whereas offshore areas receive less imaging attention due to the high cost of data acquisition and limited spatial coverage. In contrast, medium-resolution imagery, such as 10 m Sentinel-2 data, has gained increasing attention for OWT detection. Sentinel-2 provides Military Grid Reference System (MGRS)-based tiles covering 100 km × 100 km areas with a revisit cycle of approximately three days, offering extensive spatial coverage and frequent observations over offshore environments. However, in Sentinel-2 imagery, OWTs are represented by small and compact clusters of bright pixels along with elongated shadow patterns. The lack of structural clarity compared to high-resolution imagery poses additional challenges for accurate detection.
This study aims to develop an advanced deep learning approach for OWT detection, specifically designed for medium-resolution Sentinel-2 imagery. The proposed method seeks to enhance detection performance, even when turbine visual features are limited or ambiguous, by incorporating imaging geometry, including the spatial relationships among the turbines, solar incidence angles, and satellite viewing geometry, into the model. Accordingly, we used Faster R-CNN as the baseline model to develop an imaging geometry-aware detection framework, built from three perspectives: (1) explicitly incorporating the solar incidence and satellite viewing angles of satellite metadata at the time of image acquisition; (2) implicitly learning the variability in turbine appearance and shadow lines caused by changes in the solar and viewing geometry using contrastive learning on pairs of satellite images captured at the same location but on different dates; and (3) integrating both explicit and implicit representations of imaging geometry into a unified model to improve the geometric awareness of the detector. The proposed methods were first evaluated against the Faster R-CNN baseline and subsequently applied to 887 Sentinel-2 scenes acquired between January 2023 and March 2025 over offshore regions of China, resulting in the establishment of the China Offshore Wind Turbine Dataset.

2. Study Area and Datasets

2.1. Study Area

China’s offshore areas span from approximately 104° to 123°E in longitude and 19° to 51°N in latitude, covering a coastline of about 18,000 km (Figure 1). The area comprises the South China Sea, East China Sea, and Yellow Sea, transitioning from subtropical to temperate climate zones along a south-to-north gradient.

2.2. Datasets

2.2.1. Offshore Wind Turbines

The advancements in OWT technology have enabled the deployment of turbines in deeper waters and farther offshore to access areas with higher wind energy potential. These developments have been accompanied by the construction of large-scale but cost-effective turbines, with typical hub heights ranging from 79 to 114 m and blade lengths between 40 and 110 m [2]. Additionally, the turbines are commonly coated in white paint to reduce overheating by reflecting sunlight [20].

2.2.2. Sentinel-2 Imagery

Sentinel-2 imagery is organized according to the MGRS within the Universal Transverse Mercator (UTM) coordinate framework. Each scene covers an area of 110 km × 110 km, with a 10 km overlap between adjacent tiles to ensure spatial continuity. In the offshore region of China, a total of 56 MGRS grid cells are required to provide full spatial coverage (Figure 1). Accordingly, 887 Sentinel-2 scenes were collected from the European Space Agency (ESA) Copernicus platform. All scenes were acquired between January 2023 and March 2025 with cloud cover less than 5%.
The visual characteristics of OWTs in satellite imagery are fundamentally governed by the geometric relationships among the satellite’s orbital position, the geographic location of the OWTs, and solar incidence vectors [17,19]. This spatial configuration (Figure 2) leads to considerable variability in OWT appearance across different image acquisitions, influencing their detectability, apparent scale, and radiometric properties. Crucially, the discernible geometric shape of an OWT is determined by the viewing angle between the satellite and the turbines, while the color rendition is shaped by surface reflectance from both the turbine structure and the surrounding sea. These reflectance characteristics are largely influenced by the sun’s position relative to the location and time of image capture.
The spatial resolution of satellite imagery determines the level of detail that can be captured. High-resolution imagery (e.g., Google Earth; Figure 3a) reveals that satellite viewing angles, defined by azimuth and zenith, influence the apparent orientation and visible surfaces of wind turbines. Simultaneously, solar angles, including azimuth and zenith, affect the direction and length of shadows, which vary with season and latitude [22,23]. In contrast, turbines in medium-resolution imagery, such as Sentinel-2 (Figure 3b), typically appear as small bright pixel clusters accompanied by elongated shadow traces. Although structural details are less discernible at this resolution, geometric cues, such as the relative alignment between turbines and their shadows, as well as brightness contrast, can still support reliable detection if appropriately modeled.

2.2.3. Imaging Geometry Metadata

Each Sentinel-2 image is accompanied by auxiliary metadata containing imaging geometry parameters, specifically the solar and satellite viewing angles at the time of acquisition. These angular measurements are provided in a gridded format aligned with the image, typically sampled every 5 km. The angular data show considerable spatial variability within a single scene and temporal variability across acquisition dates and geographic locations.
As illustrated in Figure 4, Sentinel-2 images acquired over the same MGRS grid cells (as shown in Figure 1) but on different dates reveal pronounced seasonal variations in solar zenith angles. For example, in the northern Caofeidian area (~39°N), seasonal differences can reach up to 40°, while in the southern Zhanjiang area (~20°N), the variation is around 20°. Even within the same season, solar zenith angles can differ by more than 20° between scenes captured at northern and southern latitudes. Other geometric parameters, such as satellite viewing angles, also exhibit substantial spatial variation within individual image scenes and across scenes captured at different locations. These variations further affect the visual appearance of offshore wind turbines.
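For illustration, the following sketch (Python/NumPy) shows one way a coarse per-scene angle grid sampled every 5 km could be bilinearly interpolated to the position of an individual turbine in the 10 m image; the function name, grid dimensions, and example values are assumptions, not details taken from the paper.

```python
import numpy as np

def angle_at_pixel(angle_grid, row, col, grid_step_m=5000.0, pixel_size_m=10.0):
    """Bilinearly interpolate a coarse Sentinel-2 angle grid (e.g., solar zenith,
    sampled roughly every 5 km) to a pixel position in the 10 m image.

    angle_grid : 2D array of angles in degrees covering the tile
    row, col   : pixel coordinates in the 10 m image
    """
    # Convert the pixel position to fractional coordinates on the angle grid.
    gy = row * pixel_size_m / grid_step_m
    gx = col * pixel_size_m / grid_step_m
    y0, x0 = int(np.floor(gy)), int(np.floor(gx))
    y1 = min(y0 + 1, angle_grid.shape[0] - 1)
    x1 = min(x0 + 1, angle_grid.shape[1] - 1)
    wy, wx = gy - y0, gx - x0
    # Standard bilinear weighting of the four surrounding grid nodes.
    return ((1 - wy) * (1 - wx) * angle_grid[y0, x0]
            + (1 - wy) * wx * angle_grid[y0, x1]
            + wy * (1 - wx) * angle_grid[y1, x0]
            + wy * wx * angle_grid[y1, x1])

# Example: a synthetic 23 x 23 solar-zenith grid queried at one pixel location.
grid = np.linspace(30.0, 32.0, 23 * 23).reshape(23, 23)
print(angle_at_pixel(grid, row=5480, col=5480))
```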

2.2.4. OWT Samples

We utilized the Global Offshore Wind Turbine Dataset as the training data, which is a globally distributed inventory of OWTs originally derived from Sentinel-1 synthetic aperture radar (SAR) time-series data acquired between 2015 and 2019 [3]. The dataset provides georeferenced OWT installation points, defined by longitude and latitude, which were used to generate bounding boxes for model training and to extract corresponding Sentinel-2 imagery at those locations.
To ensure that the training samples accurately represent actual OWT installations and reflect recent developments, we manually updated turbine locations within 15 MGRS grid cells in China’s offshore region using human interpretation of recent Sentinel-2 optical imagery. In total, 2471 OWT samples were identified and extracted for use in our study.

3. Methods

3.1. Model Design

A deep learning approach is proposed to enhance the detection of OWTs specifically from Sentinel-2 imagery, where turbine structures and their shadows often degrade into only a few bright and dark pixels. The method aims to improve detection performance by enabling the model to understand the imaging geometric relationship between turbine features, their shadows, and the corresponding solar illumination and satellite viewing conditions.
To achieve this, the approach incorporates imaging geometry awareness through two complementary strategies: (1) explicit integration of solar and satellite viewing angular information from satellite metadata at the time of acquisition, and (2) implicit learning of variations in turbine and shadow appearances caused by changes in imaging geometry, achieved through contrastive learning using image pairs acquired at the same location but on different dates.
We adopted the widely used Faster R-CNN as the baseline model and extended it to evaluate the effectiveness of our proposed imaging geometry-aware approach. Faster R-CNN is a state-of-the-art (SOTA) two-stage object detection framework [24], comprising four main components: a backbone, a neck, a region proposal network (RPN), and a head (Figure 5). The backbone employs the ResNet-50 architecture for feature extraction. The neck generates multi-scale features in the form of a pyramid using a feature pyramid network (FPN). The RPN then generates candidate object regions from these feature maps [25]. Finally, the head, based on the Fast R-CNN module, performs region-wise classification and bounding box regression to refine and identify the objects.
Based on this architecture, we designed and implemented three extended network modules to assess the individual and combined contributions of the proposed strategies: (1) GeoWise-Net, which explicitly incorporates imaging metadata; (2) Contrast-Net, which implicitly models geometric variations via contrastive learning; and (3) Composite-Net, which integrates both strategies for synergistic performance. These extensions collectively serve as an ablation study to quantify the impact of explicit and implicit geometric modeling on OWT detection under variable imaging conditions.

3.1.1. Geowise-Net

GeoWise-Net, which serves as a substitute for the FPN in Faster R-CNN, is designed to enhance the representation of OWT features by integrating multi-scale feature maps generated by the FPN with solar incidence and satellite viewing angle information from satellite metadata. This integration generates imaging geometry-aware multi-scale feature representations that are more informative and discriminative for OWT detection than the original FPN outputs. The network architecture comprises three key components: a geo-encoder, a feature-wise linear modulation (FiLM), and a residual addition (Figure 6).
The geo-encoder is a two-layer multilayer perceptron (MLP) with a FC–ReLU–FC structure. The first fully connected (FC) layer performs a linear transformation, followed by rectified linear unit (ReLU) activation to introduce non-linearity. The second FC layer generates the final outputs: channel-wise scaling (γ) and bias (β) parameters, which are used to modulate the multi-scale feature maps produced by the neck.
The feature modulation stream incorporates a feature-wise linear modulation (FiLM) mechanism, in which the scaling and shifting parameters (γ, β) modulate the feature maps at levels P2 to P5 of the FPN. This modulation enhances key semantic responses by capturing the geometric relationship between the geospatial angular information and the visual characteristics of turbines and their shadows. Among the multi-scale feature maps P1 through P5, P1 retains the highest spatial resolution and richest detail and is forwarded without modification, while the modulated levels P2 to P5 support the detection of targets from small to large scales. The modulation is performed as follows:
FusedFeature = γ⋅OriginalFeature + β
A residual connection is then added to preserve useful information learned in the original features and prevent degradation caused by modulation. This balances new and original features, stabilizes gradient flow, and improves training convergence. The final form is as follows:
FusedFeature += OriginalFeature
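A minimal sketch of this geo-encoder and FiLM modulation is given below (PyTorch). The module name, hidden width, channel count, and the four-angle input vector are illustrative assumptions; only the FC-ReLU-FC structure, the per-channel (γ, β) modulation, and the residual addition follow the description above.

```python
import torch
import torch.nn as nn

class GeoFiLM(nn.Module):
    """Sketch of the geo-encoder + FiLM modulation: an angle vector (e.g., solar
    zenith/azimuth and view zenith/azimuth) is mapped by a two-layer MLP to
    per-channel scale (gamma) and bias (beta), which modulate one FPN feature
    map; a residual addition preserves the original features."""
    def __init__(self, num_angles=4, channels=256):
        super().__init__()
        self.geo_encoder = nn.Sequential(
            nn.Linear(num_angles, 128),
            nn.ReLU(inplace=True),
            nn.Linear(128, 2 * channels),  # produces gamma and beta, one pair per channel
        )

    def forward(self, feature, angles):
        # feature: (B, C, H, W) FPN level; angles: (B, num_angles)
        gamma, beta = self.geo_encoder(angles).chunk(2, dim=1)
        gamma = gamma.unsqueeze(-1).unsqueeze(-1)   # (B, C, 1, 1)
        beta = beta.unsqueeze(-1).unsqueeze(-1)
        fused = gamma * feature + beta              # FiLM modulation
        return fused + feature                      # residual addition

# Example usage on one FPN level with illustrative angle values.
film = GeoFiLM()
p3 = torch.randn(2, 256, 64, 64)
angles = torch.tensor([[35.0, 150.0, 8.0, 102.0], [62.0, 165.0, 7.5, 285.0]])
out = film(p3, angles)
```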

3.1.2. Contrast-Net

Contrast-Net, proposed as an additional contrastive loss applied to the ResNet-50 backbone during the training phase, is designed to learn the variations in turbine and shadow appearances caused by changes in imaging geometry using image pairs acquired at the same location but on different dates. Assuming that no substantial changes occur in the physical scene over short time intervals, the primary differences between such image pairs are attributed to variations in solar illumination, satellite viewing angles, and environmental or atmospheric conditions, while the underlying OWT signals remain unchanged.
By training the model to minimize the representational distance between corresponding features of these temporally separated image pairs, Contrast-Net learns to suppress imaging geometry-induced appearance differences. As a result, it emphasizes and preserves the intrinsic, geometry-invariant characteristics of OWTs. This facilitates more robust and generalizable OWT detection under diverse observation conditions.
The proposed Contrast-Net also focuses on enhancing the multi-scale feature maps generated by the FPN module. Its contrastive learning process comprises three key steps:
  • Positive and negative image pair selection: Two images from the same geographic location but at different times are treated as positive pairs, while images from nearby but distinct locations are treated as negative pairs. This setup enables the model to learn turbine representations that are invariant to seasonal, illumination, and shadow-related changes by maximizing feature similarity in positive pairs and dissimilarity in negative pairs.
  • Feature extraction and comparison: After passing the image pairs through the backbone and FPN, contrastive learning is applied to the feature maps at levels P2 to P5, which correspond to small, medium, and large scales. The highest-resolution layer, P1, is excluded because it is dominated by low-level texture and detail, which may hinder temporal feature alignment. For each selected level, the similarity between the feature representations of an image pair, whether positive or negative, is computed to guide the learning process.
  • Contrastive loss computation: Contrastive loss is a loss function used in deep learning to learn representations that pull positive pairs (different views of the same location) closer together while pushing negative pairs apart in the embedding space. The Information Noise-Contrastive Estimation (InfoNCE) loss from the Simple Framework for Contrastive Learning of Visual Representations (SimCLR) is adopted as follows:
  • Given an image pair $(x_i, x_j)$ with features $(z_i, z_j)$, the cosine similarity of the two features at layer $l$ is calculated as

    $$\mathrm{sim}\left(z_i^{(l)}, z_j^{(l)}\right) = \frac{z_i^{(l)} \cdot z_j^{(l)}}{\left\lVert z_i^{(l)} \right\rVert \left\lVert z_j^{(l)} \right\rVert}$$

    where $z_i^{(l)}$ denotes the feature at level $l$, with $l \in \{2, 3, 4, 5\}$.
Based on the similarity calculated over all positive and negative pairs, the InfoNCE loss is computed as follows:
$$L_{\mathrm{cont}}^{(l)} = -\frac{1}{M} \sum_{k=1}^{M} \log \frac{\exp\!\left(\frac{1}{P}\sum_{i=1}^{P} \mathrm{sim}\!\left(z_{m_k}^{(l)}, z_{p_{i,k}}^{(l)}\right) / \tau\right)}{\exp\!\left(\frac{1}{P}\sum_{i=1}^{P} \mathrm{sim}\!\left(z_{m_k}^{(l)}, z_{p_{i,k}}^{(l)}\right) / \tau\right) + \exp\!\left(\frac{1}{N}\sum_{j=1}^{N} \mathrm{sim}\!\left(z_{m_k}^{(l)}, z_{n_{j,k}}^{(l)}\right) / \tau\right)}$$
where $\tau$ is the temperature parameter controlling the smoothness of the similarity distribution; $z_{m_k}^{(l)}$ is the feature of the $k$th anchor sample at level $l$; $z_{p_{i,k}}^{(l)}$ is the level-$l$ feature of its $i$th positive sample and $z_{n_{j,k}}^{(l)}$ that of its $j$th negative sample; $M$ is the number of Sentinel-2 images, $P$ is the number of positive pairs, and $N$ is the number of negative pairs.
Total loss based on all positive and negative pairs is as follows:
$$L_{\mathrm{total}}^{\mathrm{cont}} = \sum_{l=2}^{5} w_l \, L_{\mathrm{cont}}^{(l)}$$
where $w_l$ is the weighting coefficient for the $l$th level, set to (0.2, 0.3, 0.3, 0.2) by default.
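The following sketch (PyTorch) illustrates the per-level InfoNCE-style loss and the weighted sum over levels P2 to P5 described above. Pooling each feature map into a single vector and the batch handling are simplifying assumptions made for compactness.

```python
import torch
import torch.nn.functional as F

def level_infonce(anchor, positives, negatives, tau=0.1):
    """Per-level loss for one anchor image.
    anchor:    (D,) pooled feature of the anchor image at one FPN level
    positives: (P, D) features of same-location, different-date images
    negatives: (N, D) features of nearby but different locations
    Positive/negative similarities are averaged before the softmax-style ratio,
    following the formulation in the text."""
    pos_sim = F.cosine_similarity(anchor.unsqueeze(0), positives, dim=1).mean()
    neg_sim = F.cosine_similarity(anchor.unsqueeze(0), negatives, dim=1).mean()
    pos_term = torch.exp(pos_sim / tau)
    neg_term = torch.exp(neg_sim / tau)
    return -torch.log(pos_term / (pos_term + neg_term))

def total_contrastive_loss(per_level_batches, weights=(0.2, 0.3, 0.3, 0.2), tau=0.1):
    """Weighted sum over FPN levels P2-P5; each entry of per_level_batches is a
    list of (anchor, positives, negatives) tuples for that level."""
    total = 0.0
    for w, batch in zip(weights, per_level_batches):
        level_loss = torch.stack([level_infonce(a, p, n, tau) for a, p, n in batch]).mean()
        total = total + w * level_loss
    return total

# Toy example with random 256-dimensional features.
a, pos, neg = torch.randn(256), torch.randn(3, 256), torch.randn(5, 256)
loss = total_contrastive_loss([[(a, pos, neg)]] * 4)
```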

3.1.3. Composite Model

The Composite model integrates Geowise-Net and Contrast-Net to take advantage of the complementary strengths of both explicit and implicit imaging geometry-aware methods. By combining these two strategies, the Composite model directly incorporates imaging geometry information, such as solar incidence and satellite viewing angles of Sentinel-2 metadata. At the same time, it learns imaging geometry-invariant representations through contrastive learning applied to image pairs acquired at the same geographic location but across different seasons.
This dual-pathway design enables the model to generate more robust and semantically consistent multi-scale feature maps, reducing the influence of seasonal, latitudinal, and environmental variations. As a result, the Composite model improves the reliability and generalizability of OWT detection under diverse imaging conditions.

3.2. Experimental Design

3.2.1. Construction of OWT Training and Testing Samples

The full set of 2471 OWT samples was divided into two subsets: 1819 samples from 10 MGRS grid cells were used for model training, while the remaining 652 samples from 5 distinct grid cells were reserved for testing (Figure 7). In total, 887 Sentinel-2 scenes covering 56 MGRS grid cells were used for OWT mapping across the offshore region of China.
Originally, these OWT samples were represented as point-based vector data. To facilitate deep learning model training, which commonly relies on intersection-over-union (IoU) metrics to evaluate the similarity between predicted and ground-truth objects, each point was converted into a rectangular bounding box measuring 200 m in width and 300 m in height. This bounding box size was selected to fully encompass both the bright turbine structures and their corresponding shadow regions, as observed in Sentinel-2 imagery.
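As an illustration, the conversion from point samples to 200 m × 300 m boxes could be sketched as follows (Python). Centring the box on the point and the tile-origin convention are assumptions, not details given in the paper.

```python
def point_to_bbox(x_m, y_m, width_m=200.0, height_m=300.0):
    """Convert a georeferenced OWT point (projected UTM metres) into a
    200 m x 300 m rectangular training box, returned as (xmin, ymin, xmax, ymax)
    in the same projected coordinates."""
    return (x_m - width_m / 2.0, y_m - height_m / 2.0,
            x_m + width_m / 2.0, y_m + height_m / 2.0)

def bbox_to_pixels(bbox, origin_x, origin_y, pixel_size=10.0):
    """Convert a projected bbox to pixel coordinates of a 10 m Sentinel-2 tile,
    assuming (origin_x, origin_y) is the upper-left corner of the tile."""
    xmin, ymin, xmax, ymax = bbox
    col_min = (xmin - origin_x) / pixel_size
    col_max = (xmax - origin_x) / pixel_size
    row_min = (origin_y - ymax) / pixel_size  # image rows increase downward
    row_max = (origin_y - ymin) / pixel_size
    return col_min, row_min, col_max, row_max
```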

3.2.2. Model Training and Evaluation

All four models utilized the same pre-trained Faster R-CNN backbone architecture (faster_rcnn_r50-caffe_fpn_1x_coco) and were fine-tuned on an identical sample dataset to ensure fair and consistent comparisons. Implementation was conducted using the open-source MMDetection 3.3.0 framework (OpenMMLab).
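A minimal usage sketch of the MMDetection 3.x inference API with the cited pre-trained configuration is shown below; the file paths and tile image name are placeholders, and the exact config filename may differ between MMDetection releases.

```python
from mmdet.apis import init_detector, inference_detector

# Illustrative paths; adjust to the local MMDetection installation and checkpoints.
config_file = 'configs/faster_rcnn/faster-rcnn_r50-caffe_fpn_1x_coco.py'
checkpoint_file = 'checkpoints/faster_rcnn_r50_caffe_fpn_1x_coco.pth'

model = init_detector(config_file, checkpoint_file, device='cuda:0')
result = inference_detector(model, 'sentinel2_tile.png')  # one 10 m image tile
```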
During model training, the total loss used to monitor convergence primarily comprised four components: classification losses and bounding box regression losses for both the RPN module and the region of interest (ROI) head. Classification losses were computed using cross-entropy, while bounding box regression losses were calculated using Smooth L1 loss.
During model testing, the standard intersection-over-union (IoU) metric was used to evaluate detection performance. The evaluation followed the COCO-style protocol, which applies ten IoU thresholds ranging from 0.50 to 0.95 in increments of 0.05 (denoted as IoU = 0.50:0.95). A predicted object was considered a correct detection if its IoU with the ground truth bounding box exceeded the respective threshold. The final performance was assessed by averaging the results across all ten IoU thresholds.
In addition, the detection results were evaluated using point-based OWT samples from single-date Sentinel-2 imagery, while the merged detection results from all Sentinel-2 scenes across the entire China offshore area were assessed for overall mapping accuracy.

4. Results

4.1. Quantitative Evaluation

4.1.1. Evaluation with Bounding Boxes of OWT Samples

All four models in this study were trained for 18 epochs using the same training set and test set, with training loss converging around the 12th epoch. The proposed models consistently achieved lower training losses than the baseline Faster R-CNN on all components (Figure 8a). On the test set, evaluation was performed using IoU metrics applied to the predicted bounding boxes, with particular emphasis on small object sizes. The results, reported in terms of average precision (AP), average recall (AR), and F1 score, demonstrated that the proposed models outperformed the baseline Faster R-CNN. Geowise-Net, Contrast-Net, and the composite model achieved progressive improvements of approximately 1.0%, 1.5%, and 2.0%, respectively (Figure 8b).

4.1.2. Evaluation with OWT Sample Points

In addition to the above bounding box-based evaluation, a sample point-based assessment was conducted on the test set. Predicted bounding boxes of OWTs were converted into point representations by applying the averaged spatial shifts calculated between the bottom-right corner of each predicted bounding box and the ground truth location of the corresponding OWT. According to statistics from the test set, the nearest distances between OWTs ranged from 300 to 600 m. A prediction was considered a true positive (TP) if its converted point fell within a 50 m threshold of a ground truth point; otherwise, it was classified as a false positive (FP). Ground truth points with no corresponding predictions within the threshold were counted as false negatives (FNs).
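The point-based evaluation can be sketched as follows (Python/NumPy); greedy nearest-neighbour matching within the 50 m threshold is an assumption about the exact matching protocol.

```python
import numpy as np

def match_points(pred_pts, gt_pts, thresh_m=50.0):
    """Count TP/FP/FN for point-based evaluation: a prediction is a true positive
    if it falls within thresh_m of a not-yet-matched ground-truth point.
    pred_pts, gt_pts: arrays of shape (N, 2) in projected metres."""
    pred_pts, gt_pts = np.asarray(pred_pts, float), np.asarray(gt_pts, float)
    matched_gt = set()
    tp = 0
    for p in pred_pts:
        d = np.linalg.norm(gt_pts - p, axis=1)
        for idx in np.argsort(d):
            if d[idx] > thresh_m:
                break
            if idx not in matched_gt:
                matched_gt.add(idx)
                tp += 1
                break
    fp = len(pred_pts) - tp
    fn = len(gt_pts) - len(matched_gt)
    return tp, fp, fn

# Example usage with two predictions and two ground-truth turbines.
tp, fp, fn = match_points([[10, 10], [400, 400]], [[0, 0], [395, 410]])
precision = tp / (tp + fp)
recall = tp / (tp + fn)
```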
As shown in Table 1, all three proposed models consistently outperformed the baseline Faster R-CNN in terms of precision, recall, and F1 score. The model performances were ranked in the following order: Faster R-CNN < Geowise-Net < Contrast-Net < Composite, with the proposed models exhibiting improvements of approximately 1.1%, 1.4%, and 1.9% on the full test set.
The results across total training loss, IoU-based evaluation, and point-based assessment consistently demonstrate the superior performance of the proposed models. Among them, the Composite model, which explicitly incorporates Sentinel-2 angular metadata and implicitly captures imaging geometry through contrastive learning, achieved the best overall performance. These findings suggest that the satellite imaging geometry-aware models significantly enhance the accuracy of OWT detection.

4.2. The Impact of Seasonal Variation on OWT Detection

Assuming that an ideal detection model has effectively learned the intrinsic imaging geometry of satellite data, it should be capable of maintaining consistent detection performance across seasonal imagery, despite significant differences in turbine shadow orientation and length caused by seasonal changes or geographic variability.
In China's offshore latitudes, the solar zenith angle reaches its maximum in December, producing the longest shadows, and its minimum in June, producing the shortest shadows. Based on this seasonal variation, the twelve calendar months were grouped into two periods: the cold season (January to March and October to December), characterized by large solar zenith angles and long shadows (winter-like conditions), and the hot season (April to September), associated with small solar zenith angles and short shadows (summer-like conditions).
Accordingly, the performance of all four models based on the point samples was evaluated separately for the cold (Table 2) and hot (Table 3) seasons by comparing their OWT detections against ground truth data within each seasonal group. Across both seasons, these models followed the same F1 score ranking as observed in Table 1, with the proposed models consistently outperforming the benchmark Faster R-CNN.
Moreover, it is also evident that all models achieved higher detection accuracy during the cold season (Table 2), likely due to longer shadows that enhance the visibility of OWT features compared to the hot season (Table 3). However, the proposed models exhibited much smaller differences in the F1 score between the two seasonal groups than the benchmark model, with the differences as follows: Faster R-CNN (3.7%) > Geowise-Net (1.9%) > Contrast-Net (1.5%) > Composite (0.8%). This statistic effectively demonstrates the robustness of the proposed models in capturing the variability in turbine appearance under diverse imaging conditions. Notably, the Composite model showed minimal sensitivity to seasonal variation, highlighting its strong generalization capability.

4.3. Grad-CAM Highlight of OWT Features

Gradient-weighted class activation mapping (Grad-CAM) was employed to visualize the hotspot maps of OWTs based on the multiscale feature representations learned by the four models [26]. Grad-CAM computed the gradient of the OWT prediction score with respect to each feature map to determine its relative importance (i.e., weight). A weighted sum of these feature maps was then passed through a ReLU activation function to generate heatmaps, highlighting the image regions that most strongly influenced the model’s predictions.
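A minimal sketch of the Grad-CAM computation for a single feature map is shown below (PyTorch); the random feature map and surrogate score in the example are placeholders for the model's actual activations and OWT prediction score.

```python
import torch
import torch.nn.functional as F

def grad_cam(feature_map, score):
    """Grad-CAM for one feature map: gradients of the prediction score are
    spatially averaged to weight each channel, the weighted sum is passed
    through ReLU, and the result is normalised to [0, 1].
    feature_map: (C, H, W) tensor that `score` depends on.
    score: scalar tensor (e.g., the classification score of a detected OWT)."""
    grads = torch.autograd.grad(score, feature_map, retain_graph=True)[0]  # (C, H, W)
    weights = grads.mean(dim=(1, 2))                  # per-channel importance
    cam = torch.einsum('c,chw->hw', weights, feature_map)
    cam = F.relu(cam)
    return cam / (cam.max() + 1e-8)

# Toy example: a random "feature map" and a surrogate score derived from it.
fmap = torch.randn(256, 32, 32, requires_grad=True)
score = (fmap * torch.randn(256, 32, 32)).sum()
heatmap = grad_cam(fmap, score)
```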
As illustrated in Figure 9, all four models exhibited similar Grad-CAM heatmap patterns, characterized by low activation in background areas, such as islands and open water, and high activation in turbine regions. However, Geowise-Net, Contrast-Net, and the Composite model showed more concentrated activations, spatially aligned with OWT locations, compared to the baseline Faster R-CNN. Notably, the brightness and contrast of activation intensity for an OWT within its bounding box increased in the following order: Faster R-CNN < Geowise-Net < Contrast-Net < Composite. This progression indicates that our proposed models more effectively localize turbine features, suggesting that perturbations in these focused regions would significantly influence the final classification scores.

4.4. Multi-Temporal OWT Mapping

4.4.1. Visualization of Single-Scene Detection

A total of 887 Sentinel-2 scenes covering 56 MGRS grid cells were used for OWT detection and mapping. Each scene was divided into 64 tiles to meet the input size requirements of the proposed models. The models were then applied to each tile individually, and the resulting detections were reassembled to reconstruct the full scene. This process generated four sets of detection results per scene, corresponding to the outputs of the four different models.
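The tile-and-reassemble workflow can be sketched as follows (Python); the 8 × 8 grid follows the 64-tile description above, while the detector interface, the absence of tile overlap, and padding handling are assumptions.

```python
def tile_and_detect(scene, detector, grid=8):
    """Split a Sentinel-2 scene array into a grid x grid set of tiles, run the
    detector on each tile, and shift the resulting boxes back into full-scene
    pixel coordinates.
    `detector` is assumed to return a list of (xmin, ymin, xmax, ymax, score)
    tuples in tile coordinates."""
    h, w = scene.shape[:2]
    th, tw = h // grid, w // grid
    detections = []
    for i in range(grid):
        for j in range(grid):
            tile = scene[i * th:(i + 1) * th, j * tw:(j + 1) * tw]
            for xmin, ymin, xmax, ymax, score in detector(tile):
                # Offset tile-local boxes by the tile's position in the scene.
                detections.append((xmin + j * tw, ymin + i * th,
                                   xmax + j * tw, ymax + i * th, score))
    return detections
```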
Visual inspection confirmed that all four models achieved satisfactory results in two representative offshore contexts: (1) OWTs situated in proximity to coastal island areas (Figure 10a), and (2) OWTs deployed in open-water areas without landmasses (Figure 10b). Even under challenging conditions, such as variable lighting, atmospheric disturbances, and differences in surrounding context, the models demonstrated a consistent ability to detect OWTs, despite their small size in medium-resolution imagery.
The contrastive learning approach appeared to perform better over open-water areas than over island or coastal regions. For open-water locations, both positive and negative sample pairs often share similar sea-surface conditions and nearly identical backgrounds, unlike pairs from island areas, so they differ primarily in the presence and position of OWTs. This visual distinctiveness of OWT features enables the model to learn and distinguish them from the surrounding open-water surface more effectively.
In addition, the same OWT instances identified by Contrast-Net were assigned higher confidence scores in most cases compared to those detected by Geowise-Net. This observation may indicate that the implicit geometric features captured through contrastive learning on seasonal image pairs encode more discriminative spatial patterns than the explicit imaging angular information provided by the satellite metadata. However, further controlled experiments would be necessary to isolate and quantify the specific contributions of each component.

4.4.2. Generation of Unified OWT Dataset from Multi-Seasonal Sentinel-2 Imagery

China’s offshore area is covered by 56 MGRS grid cells, each of which contains multiple Sentinel-2 scenes acquired on different dates. To generate a unified detection dataset for each grid cell, post-processing was required to merge duplicated OWT detections across overlapping scenes. This step ensured that turbines detected multiple times in different images were consolidated into a single, consistent set of geolocated observations per grid cell. Finally, the merged results from all 56 MGRS cells were assembled to form the latest China offshore wind turbine dataset as of March 2025.
The four OWT detection datasets covering China’s entire offshore area were evaluated using the point samples from the test set (Table 4). All four models achieved precision values exceeding 0.95, and their performance rankings remained consistent with previous evaluations: Faster R-CNN < Geowise-Net < Contrast-Net < Composite. Compared to the results from single-image evaluations (Table 1), overall accuracy showed varying degrees of improvement. This highlights the effectiveness of the post-processing approach in reducing detection errors from individual imagery by leveraging multi-seasonal imagery [9]. Moreover, compared to the benchmark Faster R-CNN, all three proposed models showed improvements across all three metrics, with the F1 scores increasing by 1.0%, 1.1%, and 1.3%.

4.4.3. Mapping of China Offshore Wind Turbine Dataset

All four OWT detection datasets demonstrated good accuracy. To leverage their complementary strengths, we integrated them to generate a more precise OWT dataset. Specifically, OWTs detected by a greater number of models were considered more reliable, while those identified by fewer models were treated as potential outliers.
We first merged the four OWT point datasets into a single combined dataset. Density-based spatial clustering was then applied to group points representing the same physical turbine, with each resulting cluster considered as one unique OWT. To ensure the accuracy of uncertain detections, clusters containing only one or two points were manually verified by overlaying them on Sentinel-2 imagery.
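A minimal sketch of this merging step using density-based clustering (here DBSCAN from scikit-learn, as one possible implementation) is given below; the clustering radius is an illustrative assumption chosen well below the 300–600 m nearest-turbine spacing reported for the test set.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def merge_detections(points_m, eps_m=100.0, min_samples=1):
    """Cluster detections of the same physical turbine from different scenes and
    models, and collapse each cluster to its centroid.
    points_m: (N, 2) array of detection coordinates in projected metres.
    Returns a list of (centroid, support_count) pairs; clusters supported by only
    one or two points can be flagged for manual verification."""
    points_m = np.asarray(points_m, float)
    labels = DBSCAN(eps=eps_m, min_samples=min_samples).fit_predict(points_m)
    merged = []
    for lab in np.unique(labels):
        members = points_m[labels == lab]
        merged.append((members.mean(axis=0), len(members)))
    return merged
```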
As a result, a total of 7369 OWTs were identified across China’s offshore areas by March 2025 (Figure 11). These OWTs are distributed along approximately 18,000 km of China’s coastline, with the majority located in the southeastern provinces of Zhejiang, Fujian, and Guangdong, which are important drivers of the country’s economic growth.

5. Discussion

The proposed approach, which implements a satellite imaging geometry-aware deep learning model for OWT detection, effectively improves detection accuracy and enhances model robustness across different seasons. In the current method, these improvements are applied broadly to the multi-scale feature maps generated by the feature pyramid network (FPN). Future work will focus on refining this strategy by concentrating on enhancements specifically in the candidate regions proposed by the region proposal network (RPN). This targeted approach is expected to increase the model’s attention to object-specific areas, thereby improving its ability to distinguish turbines from surrounding structures with greater precision.
Moreover, this study did not thoroughly examine how satellite viewing angles affect the accurate localization of OWT installations. In Sentinel-2 imagery, wind turbines typically appear as clusters of bright pixels corresponding to the turbine tower and blades. However, due to oblique viewing angles during satellite acquisition, the centroid of these bright regions does not necessarily align with the turbine’s true installation position. In future work, we aim to design new network modules that explicitly model the relationship between the observed turbine pixels, their associated shadows, the satellite viewing geometry, and the actual turbine locations. This effort seeks to improve the precision of bounding box placement for offshore wind turbines.

6. Conclusions

To harness the extensive ocean coverage and high revisit frequency of Sentinel-2 imagery for large-scale offshore wind turbine detection and database updating, this study developed a novel deep learning approach that learns the intrinsic imaging geometry of satellite data. This enables the model to capture variations in turbine appearance under diverse imaging conditions across seasons and geographic locations, ensuring robust performance across temporal and spatial domains.
Our approach builds upon Faster R-CNN as the baseline and introduces three synergistic enhancements: (1) Geowise-Net, which explicitly incorporates solar and satellite angular metadata to improve geometric awareness; (2) Contrast-Net, which employs contrastive learning on seasonal image pairs to implicitly capture appearance and shadow variability caused by changing imaging geometries; and (3) a Composite model that integrates both methods to leverage their complementary strengths in explicit and implicit geometry-aware learning.
The evaluation results demonstrate that all proposed models outperformed the baseline on single-date imagery, with consistent F1 score improvements, ranked in the following order: Faster R-CNN (0.947) < Geowise-Net (0.958) < Contrast-Net (0.961) < Composite (0.966). Moreover, the approach significantly enhanced robustness and generalizability across seasonal and latitudinal variations, as shown by the reduced difference in performance between summer and winter imagery, ranked in the following order: Faster R-CNN (3.7%) > Geowise-Net (1.9%) > Contrast-Net (1.5%) > Composite (0.8%). By further applying the approach to 887 Sentinel-2 scenes with less than 5% cloud cover, acquired between January 2023 and March 2025, we detected 7369 OWTs along China’s coastlines, resulting in an up-to-date OWT dataset.

Author Contributions

Conceptualization, Z.L. and X.S.; Methodology, Z.L. and X.S.; Software, X.S.; Validation, X.S.; Formal analysis, X.S.; Investigation, Z.L. and X.S.; Resources, Z.L. and X.S.; Writing—original draft, X.S.; Writing—review and editing, Z.L. and X.S.; Visualization, X.S.; Supervision, Z.L.; Project administration, Z.L.; Funding acquisition, Z.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was financially supported by the National Key Research and Development Program of China (No. 2021YFC3000300).

Data Availability Statement

This manuscript includes information on the original imagery data sources used in the study. The derived OWT data will be made available on request.

Acknowledgments

The authors would like to thank the initiative of the European Union (EU)–Copernicus Programme (https://dataspace.copernicus.eu/, accessed on 10 March 2025) for its contribution in making the Sentinel-2 data accessible.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
OWT: Offshore Wind Turbine
GW: Gigawatt
ESA: European Space Agency
MGRS: Military Grid Reference System
UTM: Universal Transverse Mercator
MLP: Multilayer Perceptron
FC: Fully Connected
CNN: Convolutional Neural Network
ReLU: Rectified Linear Unit
FPN: Feature Pyramid Network
RPN: Region Proposal Network
ROI: Region of Interest
FiLM: Feature-wise Linear Modulation
Grad-CAM: Gradient-weighted Class Activation Mapping
InfoNCE: Information Noise-Contrastive Estimation
SimCLR: Simple Framework for Contrastive Learning of Visual Representations
IoU: Intersection over Union
AP: Average Precision
AR: Average Recall
TP: True Positive
FP: False Positive
FN: False Negative
SOTA: State of the Art

References

  1. Bilgili, M.; Alphan, H. Technological and dimensional improvements in onshore commercial large-scale wind turbines in the world and Turkey. Clean Technol. Environ. Policy 2023, 25, 3303–3317.
  2. Bilgili, M.; Alphan, H. Global growth in offshore wind turbine technology. Clean Technol. Environ. Policy 2022, 24, 2215–2227.
  3. Zhang, T.; Tian, B.; Sengupta, D.; Zhang, L.; Si, Y. Global offshore wind turbine dataset. Sci. Data 2021, 8, 191.
  4. Dunnett, S.; Sorichetta, A.; Taylor, G.; Eigenbrod, F. Harmonised global datasets of wind and solar farm locations and power. Sci. Data 2020, 7, 130.
  5. Hoeser, T.; Feuerstein, S.; Kuenzer, C. DeepOWT: A global offshore wind turbine data set derived with deep learning from Sentinel-1 data. Earth Syst. Sci. Data 2022, 14, 4251–4270.
  6. Wang, K.; Xiao, W.; He, T.; Zhang, M. Remote sensing unveils the explosive growth of global offshore wind turbines. Renew. Sustain. Energy Rev. 2024, 191, 114186.
  7. Marino, A.; Velotto, D.; Nunziata, F. Offshore Metallic Platforms Observation Using Dual-Polarimetric TS-X/TD-X Satellite Imagery: A Case Study in the Gulf of Mexico. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 4376–4386.
  8. Wong, B.A.; Thomas, C.; Halpin, P. Automating offshore infrastructure extractions using synthetic aperture radar & Google Earth Engine. Remote Sens. Environ. 2019, 233, 111412.
  9. He, T.; Hu, Y.; Li, F.; Chen, Y.; Zhang, M.; Zheng, Q.; Jin, Y.; Ren, H. Mapping land- and offshore-based wind turbines in China in 2023 with Sentinel-2 satellite data. Renew. Sustain. Energy Rev. 2025, 214, 115566.
  10. Kang, M.; Ji, K.; Leng, X.; Lin, Z. Contextual Region-Based Convolutional Neural Network with Multilayer Fusion for SAR Ship Detection. Remote Sens. 2017, 9, 860.
  11. Berwo, M.A.; Khan, A.; Fang, Y.; Fahim, H.; Javaid, S.; Mahmood, J.; Abideen, Z.U.; Syam, M.S. Deep Learning Techniques for Vehicle Detection and Classification from Images/Videos: A Survey. Sensors 2023, 23, 4832.
  12. Yildirim, E.; Kavzoglu, T. Deep convolutional neural networks for ship detection using refined DOTA and TGRS-HRRSD high-resolution image datasets. Adv. Space Res. 2025, 75, 1871–1887.
  13. Liu, S.; Kong, W.; Chen, X.; Xu, M.; Yasir, M.; Zhao, L.; Li, J. Multi-Scale Ship Detection Algorithm Based on a Lightweight Neural Network for Spaceborne SAR Images. Remote Sens. 2022, 14, 1149.
  14. Lu, Z.; Wang, P.; Li, Y.; Ding, B. A New Deep Neural Network Based on SwinT-FRM-ShipNet for SAR Ship Detection in Complex Near-Shore and Offshore Environments. Remote Sens. 2023, 15, 5780.
  15. Zhai, Y.; Chen, X.; Cao, X.; Cui, X. Identifying wind turbines from multiresolution and multibackground remote sensing imagery. Int. J. Appl. Earth Obs. Geoinf. 2024, 126, 103613.
  16. Liu, L.; Wu, M.; Zhao, J.; Bing, L.; Zheng, L.; Luan, S.; Mao, Y.; Xue, M.; Liu, J.; Liu, B. Deep learning-based monitoring of offshore wind turbines in Shandong Sea of China and their location analysis. J. Clean. Prod. 2024, 434, 140415.
  17. Manso-Callejo, M.-Á.; Cira, C.-I.; Alcarria, R.; Arranz-Justel, J.-J. Optimizing the Recognition and Feature Extraction of Wind Turbines through Hybrid Semantic Segmentation Architectures. Remote Sens. 2020, 12, 3743.
  18. Xie, J.; Tian, T.; Hu, R.; Yang, X.; Xu, Y.; Zan, L. A Novel Detector for Wind Turbines in Wide-Ranging, Multiscene Remote Sensing Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 17725–17738.
  19. Sun, G.; Huang, H.; Weng, Q.; Zhang, A.; Jia, X.; Ren, J.; Sun, L.; Chen, X. Combinational shadow index for building shadow extraction in urban areas from Sentinel-2A MSI imagery. Int. J. Appl. Earth Obs. Geoinf. 2019, 78, 53–65.
  20. Mandroux, N.; Dagobert, T.; Drouyer, S.; von Gioi, R.G. Single Date Wind Turbine Detection on Sentinel-2 Optical Images. Image Process. Line 2022, 12, 198–217.
  21. Zhang, S.; He, Y.; Gu, Y.; He, Y.; Wang, H.; Wang, H.; Yang, R.; Chady, T.; Zhou, B. UAV based defect detection and fault diagnosis for static and rotating wind turbine blade: A review. Nondestruct. Test. Eval. 2024, 40, 1691–1729.
  22. Zhou, G.; Song, C.; Simmers, J.; Cheng, P. Urban 3D GIS From LiDAR and digital aerial images. Comput. Geosci. 2004, 30, 345–353.
  23. Fraser, C.S. Network Design Considerations for Non-Metric Cameras. Photogramm. Eng. Remote Sens. 1984, 50, 1115–1126.
  24. Deng, Z.; Sun, H.; Zhou, S.; Zhao, J.; Lei, L.; Zou, H. Multi-scale object detection in remote sensing imagery with convolutional neural networks. ISPRS J. Photogramm. Remote Sens. 2018, 145, 3–22.
  25. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. arXiv 2015, arXiv:1506.01497.
  26. Sammani, F.; Joukovsky, B.; Deligiannis, N. Visualizing and Understanding Contrastive Learning. arXiv 2022, arXiv:2206.09753v3.
Figure 1. MGRS grid showing Sentinel-2 coverage along the offshore region of China. OWT visibility is compared using two types of imagery: 10 m Sentinel-2 (Caofeidian and Zhanjiang) and 0.15 m Google Earth (Diaoyu Port and Pingtan). The north-to-south distribution of these four locations highlights how OWT appearance varies with solar incidence, influenced by latitude and image acquisition time, as well as differences in image resolution.
Figure 2. Imaging geometry illustrating a 3D scenario in which the appearance of a wind turbine and its shadow varies with solar illumination conditions, while the satellite sensor captures a 2D projection of the scene.
Figure 3. Variations in OWT visual characteristics resulting from oblique viewing angles, solar illumination conditions, and differences in spatial resolution across various locations and acquisition times. (a) Google Earth imagery with a spatial resolution of 0.15 m; (b) Sentinel-2 imagery with a spatial resolution of 10 m.
Figure 4. Angular grids of sun incidence and satellite viewing angles spaced at 5 km intervals in Sentinel-2 imagery. These angular variations result from differences in sensing time, latitude, and the satellite’s orbital position.
Figure 5. Architecture of the Faster R-CNN model, consisting of four main components: a ResNet-50 backbone for feature extraction, a feature pyramid network (FPN) neck for multi-scale feature fusion, a region proposal network (RPN) for generating candidate object regions, and a region of interest (RoI) head for final object classification and bounding box regression.
Figure 6. Diagram of the proposed Geowise-Net, which enhances the feature maps of the original FPN by incorporating solar incidence and satellite view angles using FiLM. This module extends the FPN neck within the Faster R-CNN architecture.
Figure 7. Sentinel-2 imagery segmented by MGRS grids over the offshore region of China, with training and testing sets marked for the OWT detection experiment.
Figure 8. Performance comparison between the proposed models and the benchmark Faster R-CNN. (a) Training loss curves for the RPN, RoI head, and contrastive learning module throughout the optimization process. These curves reflect the model’s ability to fit the training data, where decreasing trends indicate effective learning and convergence. (b) IoU-based evaluation metrics computed periodically on the testing set to assess detection performance on unseen data. Increasing trends indicate improved generalization capability and robustness of the model.
Figure 9. Heatmap of gradient-weighted class activation mapping (Grad-CAM) over feature maps, with OWT bounding boxes shown in purple. White indicates high activation intensity, while dark gray represents low activation intensity.
Figure 10. The detection of OWTs under two distinct marine environments. (a) Island areas and (b) open-water areas. The performance of four models varied across these two scenarios, highlighting differences in detection accuracy in relation to the surrounding context.
Figure 11. Distribution of OWTs in the offshore region of China as of March 2025. A total of 7369 OWTs were detected from 887 Sentinel-2 scenes covering approximately 18,000 km of coastline. Numbered areas 1–12 correspond to zoomed-in regions.
Table 1. Averaged accuracy of wind turbine detection on all Sentinel-2 imagery.
Model          Precision    Recall    F1 Score
Faster R-CNN   0.955        0.944     0.947
Geowise-Net    0.971        0.947     0.958
Contrast-Net   0.976        0.949     0.961
Composite      0.981        0.953     0.966
Table 2. Averaged accuracy of wind turbine detection on Sentinel-2 imagery during the cold season.
Model          Precision    Recall    F1 Score
Faster R-CNN   0.970        0.948     0.957
Geowise-Net    0.978        0.951     0.963
Contrast-Net   0.981        0.952     0.965
Composite      0.984        0.954     0.968
Table 3. Averaged accuracy of wind turbine detection on Sentinel-2 imagery during the hot season.
Model          Precision    Recall    F1 Score
Faster R-CNN   0.918        0.932     0.920
Geowise-Net    0.953        0.938     0.944
Contrast-Net   0.961        0.943     0.950
Composite      0.974        0.948     0.960
Table 4. Accuracy of wind turbine detection on test set.
Model          Precision    Recall    F1 Score
Faster R-CNN   0.977        0.925     0.951
Geowise-Net    0.990        0.933     0.961
Contrast-Net   0.992        0.935     0.962
Composite      0.994        0.936     0.964

