Article

A Novel Nighttime Sea Fog Detection Method Based on Generative Adversarial Networks

College of Meteorology and Oceanography, National University of Defense Technology, Changsha 410073, China
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(19), 3285; https://doi.org/10.3390/rs17193285
Submission received: 2 August 2025 / Revised: 11 September 2025 / Accepted: 23 September 2025 / Published: 24 September 2025

Highlights

What are the main findings?
  • A new nighttime sea fog detection dataset was constructed based on VIIRS/DNB imagery data.
  • The proposed SEGAN trained on the VIIRS/DNB dataset effectively mitigates false alarms that often affect traditional threshold-based algorithms.
What are the implications of the main findings?
  • SEGAN emphasizes semantic consistency in its output, endowing it with enhanced robustness across varying sea fog concentrations.
  • SEGAN achieves high detection accuracy with relatively low computational cost.

Abstract

Nighttime sea fog exhibits high frequency and prolonged duration, posing significant risks to maritime navigation safety. Current detection methods primarily rely on the dual-infrared channel brightness temperature difference technique, which faces challenges such as threshold selection difficulties and a tendency toward overestimation. In contrast, the VIIRS Day/Night Band (DNB) offers exceptional nighttime visible-like cloud imaging capabilities, providing a new way to alleviate the overestimation inherent in infrared detection algorithms. Recent advances in artificial intelligence have further addressed the threshold selection problem in traditional detection methods. Leveraging these developments, this study proposes a novel generative adversarial network model incorporating attention mechanisms (SEGAN) to achieve accurate nighttime sea fog detection using DNB data. Experimental results demonstrate that SEGAN achieves satisfactory performance, with probability of detection, false alarm rate, and critical success index reaching 0.8708, 0.0266, and 0.7395, respectively. Compared with the operational infrared detection algorithm, these metrics improve by 0.0632, 0.0287, and 0.1587, respectively. Notably, SEGAN excels at detecting sea fog obscured by thin cloud cover, a scenario where conventional infrared detection algorithms typically fail. SEGAN emphasizes semantic consistency in its output, endowing it with enhanced robustness across varying sea fog concentrations.

1. Introduction

Sea fog is a common hazardous weather phenomenon, typically occurring in the atmosphere near the sea surface. During the warm season, sea fog tends to form over sea surface temperature (SST) minima in shallow water areas, under conditions of a stable atmosphere [1]. In regions affected by sea fog, horizontal visibility is reduced to less than 1 km [2], posing serious threats to the safety of coastal transportation and maritime navigation. The atmosphere near the sea surface is more stable at night than during daytime, and the absence of solar radiation facilitates both the formation and prolonged persistence of sea fog. Therefore, research on nighttime sea fog detection methods is of great significance. Observational data show that sea fog occurs most frequently in the northwestern Pacific during summer. Notably, the Bohai Sea and the Yellow Sea experience frequent sea fog events during spring and summer [1,3], significantly impacting maritime safety in China’s coastal waters. Traditional sea fog detection relies on in situ observational data from limited coastal stations, ships, buoys, and other platforms [1,2,4,5]. These data are spatially scattered and insufficient for large-scale sea fog detection requirements. Advances in satellite remote sensing technology have driven the development of satellite-based sea fog detection methods, which have now become the primary means of monitoring sea fog.
Traditional nighttime sea fog detection methods primarily utilize the dual-channel brightness temperature difference technique (DCD). Hunt [6] first demonstrated in 1973 that fog and low clouds exhibit lower emissivity in the Mid-Infrared (MIR) band than in the Thermal Infrared (TIR) band, laying the foundation for satellite-based remote sensing of sea fog. Eyre et al. pioneered a nighttime fog detection method using 3.7 μm and 10.8 μm infrared band data from the Advanced Very High Resolution Radiometer (AVHRR). Subsequently, Wu et al. [7] validated the effectiveness of the dual-infrared channel brightness temperature difference method using Moderate Resolution Imaging Spectroradiometer (MODIS) satellite data. However, DCD-based methods depend on manual threshold selection, and the high variability of sea fog makes it challenging to determine appropriate thresholds. Ellrod [8] pointed out the poor detection performance of the DCD method for thin fog and stratus. Subsequent researchers have proposed various refinements. Cermak et al. [9] and Chaurasia et al. [10] incorporated the local standard deviation of the TIR band into the infrared brightness temperature difference method, further enhancing the algorithm’s performance. Amani et al. [11] added the difference between TIR brightness temperature and sea surface temperature (SST), improving the accuracy of nighttime sea fog detection. Despite these substantial improvements, traditional infrared detection methods still face difficulties with manual threshold selection, which remains a major constraint on further advancing detection accuracy.
On the other hand, Miller et al. [12] utilized Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) and Visible Infrared Imaging Radiometer Suite (VIIRS)/Day-Night Band (DNB) data to demonstrate that traditional DCD-based methods are prone to false alarms for low clouds/fog under certain conditions. Specifically, in the presence of a temperature inversion near the sea surface (characterized by cooler sea surface temperatures overlain by a warm, moist atmospheric layer), DCD-based methods tend to overestimate sea fog coverage. This issue is prevalent in coastal upwelling zones, river estuaries, and oceanic frontal regions, significantly undermining the reliability of DCD-based methods [12]. The VIIRS/DNB sensor, with its high spatial resolution and accurate radiometric calibration, provides visible-like imagery during nighttime, offering new opportunities for improving sea fog detection. Miller et al. [12] found that incorporating VIIRS/DNB data can help mitigate false alarms in traditional infrared detection methods during temperature inversion events. Given these advantages, several researchers have incorporated DNB data into fog detection, developing multi-channel threshold detection algorithms [13,14,15]. However, the fog cases selected in these studies were relatively limited, and the focus has been predominantly on land fog rather than sea fog. Furthermore, the highest average probability of detection achieved was merely 0.86 [14]. In summary, the current state of traditional nighttime sea fog detection research can be outlined as follows: (1) Infrared data-based sea fog detection methods have achieved considerable success through years of development. Furthermore, compared to VIIRS/DNB data, infrared data offer broader data availability and higher temporal resolution. However, their accuracy is considerably compromised in the presence of sea surface temperature inversions. (2) The VIIRS/DNB’s unique capability to provide visible-like imagery at night shows great potential in mitigating false alarms associated with DCD techniques, offering significant advantages for sea fog detection. Nevertheless, related research remains limited, indicating substantial research significance and potential. This study aims to develop a novel nighttime sea fog detection method leveraging VIIRS/DNB data to address these existing gaps.
In recent years, the rapid advancement of artificial intelligence (AI) technology has provided new opportunities for improving sea fog detection. Its outstanding nonlinear fitting capabilities offer a promising approach to address the threshold selection problem inherent in DCD-based methods and increase sea fog detection accuracy. In both infrared and visible satellite imagery, sea fog typically presents as a smooth, homogeneous gray-white cloud layer that AI models can effectively learn to achieve accurate identification. Several studies have begun exploring AI-based approaches for sea fog detection. Hu et al. [16] utilized Himawari-8 satellite data and neural networks to achieve effective classification of sea fog and clouds in China’s Bohai Sea and Yellow Sea regions, although their method could not effectively detect the spatial extent of sea fog. Yi et al. [17] applied a fully convolutional neural network to FY-4A infrared data for sea fog detection during the dawn period, but the label data were still derived from the DCD algorithm, potentially decreasing detection accuracy. Considering the impact of label data on sea fog detection accuracy, some researchers have adopted weakly supervised or unsupervised AI methods to reduce reliance on pixel-level labels. Among these, Shin and Kim [18] implemented sea fog identification using the Expectation-Maximization algorithm, based on infrared channel data from the Communication, Ocean and Meteorological Satellite (COMS) and the Operational Sea Surface Temperature and Sea Ice Analysis (OSTIA) sea surface temperature dataset. Huang et al. [19] proposed a novel sea fog detection method using Class Activation Mapping with Himawari-8 data, effectively alleviating the labor-intensive manual labeling requirements of traditional supervised methods. These studies demonstrate that AI methods can effectively eliminate the dependence on fixed thresholds for sea fog detection. Such methods have achieved excellent results in infrared data-based sea fog detection, significantly improving detection accuracy. Considering the distinct textural characteristics of sea fog in DNB imagery, this study introduces an AI approach for nighttime sea fog detection using VIIRS/DNB data.
This study aims to develop a novel deep learning model based on Generative Adversarial Networks (GANs) for detecting nighttime sea fog using VIIRS/DNB data. The VIIRS/DNB exhibits high radiometric calibration accuracy, enabling exceptional capabilities in generating visible-light cloud imagery under low-illumination conditions such as nighttime. This capability effectively mitigates false alarms caused by sea surface temperature inversion in traditional infrared detection algorithms [12]. Furthermore, the VIIRS/DNB channel offers unique advantages in detecting sea fog beneath thin clouds. Thin cirrus clouds are optically thick in the thermal infrared, preventing satellite infrared channels from detecting underlying sea fog. In contrast, lunar reflectance exhibits stronger scattering characteristics within the visible light spectrum. Consequently, the VIIRS/DNB channel can effectively penetrate semi-transparent thin clouds for sea fog detection [20], highlighting its significant potential for sea fog detection applications. Unlike other AI methods (e.g., convolutional neural networks, Vision Transformer) that utilize pixel-wise loss function calculations, GANs emphasize structural and textural consistency between generated outputs and ground truth labels. This leads to segmentation results with improved structural integrity and semantic coherence. Sea fog, composed of spatially continuous distributed micro-droplets, exhibits more homogeneous grayscale and textural characteristics in VIIRS/DNB imagery. These distinctive characteristics make GANs particularly suitable for image segmentation tasks in sea fog detection. In summary, this study proposes training a GAN model on VIIRS/DNB data to achieve effective detection of nighttime sea fog.
The remainder of this paper is organized as follows. Section 2 introduces the study area, satellite data, and proposed method. Section 3 presents the experimental results and provides a detailed analysis. Section 4 discusses relevant aspects of this research. Finally, Section 5 summarizes the findings of this study.

2. Data and Methods

2.1. Study Area

The northwestern Pacific region exhibits one of the highest frequencies of sea fog occurrence globally, with the Sea of Japan and the Yellow Sea representing the two regions with the highest frequency [1,2]. This study focuses on the coastal waters of China and the Sea of Japan, corresponding to the areas marked with yellow circles in Figure 1. Within this area, sea fog over the Yellow Sea and East China Sea predominantly manifests as advection fog. Continental shelf areas cooled by tidal mixing generate advection fog when warm air masses flow over the Yellow Sea and adjacent waters during summer, leading to the frequent occurrence of sea fog in this region [21].

2.2. The Visible Infrared Imaging Radiometer Suite

The Visible Infrared Imaging Radiometer Suite is a scanning imaging radiometer developed by the National Oceanic and Atmospheric Administration (NOAA). It is one of five instruments aboard the Suomi National Polar-orbiting Partnership (Suomi NPP) satellite platform, launched on 28 October 2011. Suomi NPP serves as a transitional satellite between the Earth Observing System (EOS) satellites and the next-generation Joint Polar Satellite System (JPSS) series, a collaborative program between NASA and NOAA. Currently, three VIIRS instruments operate in orbit, aboard Suomi NPP, NOAA-20, and NOAA-21. The VIIRS sensor represents an extension and improvement over its predecessors: the Advanced Very High Resolution Radiometer (AVHRR), the Moderate Resolution Imaging Spectroradiometer (MODIS), and the Sea-viewing Wide Field-of-view Sensor (SeaWiFS). It collects visible and infrared imagery and radiometric data across land, atmosphere, and ocean domains to measure cloud and aerosol properties, ocean color, sea and land surface temperatures, and Earth’s albedo. Designed as a whisk-broom scanning radiometer, VIIRS comprises 22 spectral bands covering wavelengths from 0.41 μm to 12.01 μm (as detailed in Table 1), spanning the visible, near-infrared, shortwave infrared, and thermal infrared regions. Among these, five are high-resolution imagery bands (I-bands), while 16 are moderate-resolution bands (M-bands). VIIRS also features a unique panchromatic Day/Night Band (DNB) with a spatial resolution of 742 m and high radiometric calibration accuracy. This enables visible-like cloud imaging during dawn, dusk, and nighttime conditions. The DNB provides a reliable data source for nighttime sea fog detection, and this study utilizes these data for its research objectives.
In this study, we utilized Sensor Data Records (SDRs) provided by NOAA, encompassing the DNB, M12, and M15 channels. The DNB channel data were employed to train our proposed SEGAN model. The DNB channel provides measured top-of-atmosphere radiance, quality flags, and gain information; the M12 and M15 channels provide measured brightness temperatures, radiances, and gain information. The corresponding geolocation files provide essential information such as time, latitude/longitude, solar/lunar zenith angles, and solar/lunar azimuth angles [22].

2.3. Sea Fog Dataset Construction

For the construction of the dataset, sea fog events were first identified based on records from The Marine Weather Review [23], and the corresponding VIIRS/DNB data were collected. These processed data were then submitted to meteorological experts for manual annotation of the sea fog extent. The Marine Weather Review documents sea fog events occurring in China’s coastal waters from 2017 to the present; all events recorded between 2017 and 2024 were collected for this study. Corresponding VIIRS satellite data were collected and processed according to the user guide. During data processing, satellite data from different channels were spatiotemporally matched using a criterion of at most a two-minute time difference and aligned via nearest-neighbor interpolation. Subsequently, the satellite data underwent rigorous quality control: samples with storage anomalies or complete obscuration by high-level clouds were removed. It is particularly important to emphasize that when the lunar illumination fraction is low, the limited instrument sensitivity and low lunar radiance cause the top-of-atmosphere reflectance to be nearly indistinguishable from noise, making cloud identification difficult. As shown in Figure 2a–c, image quality degrades significantly under insufficient illumination, and below a lunar illumination fraction of 70% the degradation renders effective identification of cloud features unattainable. Therefore, only cases with a lunar illumination fraction of at least 70% were retained. After applying these filters, a total of 452 distinct sea fog cases were selected from the study regions for subsequent analysis.
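To make the screening rules concrete, the following is a hypothetical Python sketch of the quality-control filters described above. The granule attributes (`start_time`, `lunar_fraction`, `has_fill_values`, `fully_obscured`) are placeholder names for illustration, not fields of the actual VIIRS SDR files.

```python
from datetime import timedelta

MAX_TIME_DIFF = timedelta(minutes=2)   # spatiotemporal matching criterion
MIN_LUNAR_FRACTION = 70.0              # percent; below this, DNB imagery is noise-like

def keep_case(dnb_granule, ir_granule) -> bool:
    """Return True if a candidate sea fog case passes the screening rules."""
    # Channels must be observed within two minutes of each other.
    if abs(dnb_granule.start_time - ir_granule.start_time) > MAX_TIME_DIFF:
        return False
    # Discard cases with insufficient moonlight.
    if dnb_granule.lunar_fraction < MIN_LUNAR_FRACTION:
        return False
    # Discard storage anomalies or scenes fully covered by high-level cloud.
    if dnb_granule.has_fill_values or dnb_granule.fully_obscured:
        return False
    return True
```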
Finally, the screened data were submitted to meteorological experts for manual annotation, following this procedure. (1) Preliminary sea fog labels were generated through an inversion process using VIIRS infrared channel data and the latest operational threshold algorithm [11]; this algorithm has been quantitatively validated and provides a reliable starting point for label generation. (2) Three meteorological experts independently refined the initial labels, each annotating sea fog coverage based on VIIRS/DNB imagery and professional experience, with partial validation against meteorological station data. (3) Final sea fog labels were determined through cross-validation and voting among the three experts, ensuring the consistency and reliability of the labels.

2.4. SEGAN Method

In this study, we propose a generative adversarial network (SEGAN) enhanced by incorporating an attention mechanism. SEGAN comprises a generator and a discriminator. The generator receives DNB image data as input and produces a sea fog mask of the same size as the input image. The discriminator, on the other hand, takes both the generated sea fog mask and the annotated sea fog label as input to determine whether the mask originates from human expert annotation or is generated by the generator. While the discriminator strives to differentiate between the labeled image and the generator-produced image, the generator attempts to generate outputs as realistic as the labeled image, thereby making it difficult for the discriminator to distinguish between them. The overall architecture of the proposed SEGAN in this study is illustrated in Figure 3.
For the generator, we adopted the U-Net architecture [24] with skip connections and incorporated a spatial attention mechanism to enhance feature extraction. The skip connection structure preserves substantial low-level semantic information during encoding and facilitates accurate boundary feature identification during decoding, effectively improving sea fog detection performance in this study. The spatial attention module operates as follows. Given an input feature map $F \in \mathbb{R}^{H \times W \times C}$, three separate convolutional layers produce new features $B, C, D \in \mathbb{R}^{H \times W \times C}$. These three feature maps are reshaped to size $C \times N$, where $N = H \times W$. The transpose of $C$ is multiplied by $B$, and a spatial attention map $S \in \mathbb{R}^{N \times N}$ is obtained by applying a SoftMax function. Matrix $D$ is then multiplied by the transpose of $S$, and the resulting matrix is scaled by a coefficient $\alpha$ and reshaped back to size $H \times W \times C$. Finally, the scaled result is added to the original feature map $F$ to produce the final output $E$. The coefficient $\alpha$ is initialized to 0 and gradually learns to assign appropriate weights. Each element $S_{ji}$ of the spatial attention map, calculated as shown in Equation (1), represents the impact of position $i$ on position $j$.
$$S_{ji} = \frac{\exp(B_i \cdot C_j)}{\sum_{i=1}^{N} \exp(B_i \cdot C_j)} \qquad (1)$$
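To make the module concrete, the following is a minimal PyTorch sketch of the spatial attention mechanism described above. It is an illustrative reconstruction from Equation (1), not the authors’ released code; the 1 × 1 kernel size and layer names (`conv_b`, `conv_c`, `conv_d`) are our assumptions.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Spatial (position) attention over an H x W x C feature map."""
    def __init__(self, channels: int):
        super().__init__()
        # Three convolutions produce the feature maps B, C and D.
        self.conv_b = nn.Conv2d(channels, channels, kernel_size=1)
        self.conv_c = nn.Conv2d(channels, channels, kernel_size=1)
        self.conv_d = nn.Conv2d(channels, channels, kernel_size=1)
        self.alpha = nn.Parameter(torch.zeros(1))  # scaling coefficient, initialized to 0
        self.softmax = nn.Softmax(dim=-1)

    def forward(self, f: torch.Tensor) -> torch.Tensor:
        n, c, h, w = f.shape
        num = h * w  # N = H x W
        b = self.conv_b(f).view(n, c, num)       # B reshaped to C x N
        c_feat = self.conv_c(f).view(n, c, num)  # C reshaped to C x N
        d = self.conv_d(f).view(n, c, num)       # D reshaped to C x N
        # Attention map S (N x N): softmax over the product C^T B, as in Eq. (1).
        s = self.softmax(torch.bmm(c_feat.permute(0, 2, 1), b))
        # Weight D by S^T, reshape back, scale by alpha, add the input F.
        out = torch.bmm(d, s.permute(0, 2, 1)).view(n, c, h, w)
        return self.alpha * out + f
```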
Within the main structure of the generator (as shown in Figure 4), we introduced a lightweight channel attention module, the Squeeze-and-Excitation Network (SE-Net) [25]. This module explicitly models inter-channel dependencies within convolutional features. During training, it automatically learns to transform these dependencies into feature weights, enabling adaptive recalibration of feature channel importance and thereby strengthening the generator’s feature extraction capability. SE-Net first performs a squeeze operation: it compresses the input feature map $U \in \mathbb{R}^{H \times W \times C}$ along the spatial dimensions using global average pooling, yielding a feature vector $Z \in \mathbb{R}^{1 \times C}$. Global average pooling collapses each two-dimensional feature channel into a single real number, which implicitly captures global contextual information. Subsequently, the excitation operation is performed: two fully connected (FC) layers and a nonlinear activation function learn weights for each channel, thereby recalibrating the feature map $U$. In practice, the feature dimension is first reduced to 1/16 of the input channels, activated by a ReLU function, and then increased back to the original dimension $C$ via another FC layer, producing the final channel-wise weight vector $Z^{*} \in \mathbb{R}^{1 \times C}$. The input feature map $U$ is then recalibrated using $Z^{*}$ to produce the final output feature map $U^{*} \in \mathbb{R}^{H \times W \times C}$. This design offers advantages over directly using a single FC layer: (1) it incorporates greater nonlinearity, enabling better modeling of complex interdependencies among channels, and (2) it reduces parameter count and computational cost. Through training, the SE module adaptively selects and emphasizes informative features, helping the network focus on significant feature channels and enhancing its feature discriminability.
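For reference, the following is a minimal PyTorch sketch of the SE block [25] as described above. The reduction ratio of 16 follows the text; everything else (names, defaults) is an illustrative assumption.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation channel attention."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)           # global average pooling
        self.excite = nn.Sequential(
            nn.Linear(channels, channels // reduction),  # reduce to C/16
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),  # restore to C
            nn.Sigmoid(),                                # channel weights in (0, 1)
        )

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        n, c, _, _ = u.shape
        z = self.squeeze(u).view(n, c)          # squeeze: H x W x C -> 1 x C
        w = self.excite(z).view(n, c, 1, 1)     # excitation: channel-wise weights Z*
        return u * w                            # recalibrate the input feature map
```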
The discriminator consists of five convolutional blocks, one average pooling layer, and one fully connected layer. Each convolutional block contains one convolutional layer with a 3 × 3 kernel, one BatchNorm layer, one ReLU activation layer, and one MaxPool layer of size 2 × 2. The final fully connected layer is followed by a Sigmoid activation function, outputting the ultimate discrimination result. Let the generator $G$ learn the mapping from the input DNB image data $x$ to the mask label data $y$. The discriminator $D$ then maps a pair of inputs $\{x, y\}$ to a binary classification label $\{0, 1\}$, where 0 and 1 indicate whether $y$ is generated by the generator or human-annotated, respectively. The objective function of the generative adversarial network can be expressed by Equation (2).
$$\mathcal{L}_{GAN}(G, D) = \mathbb{E}_{x, y \sim p_{data}(x, y)}[\log D(x, y)] + \mathbb{E}_{x \sim p_{data}(x)}[\log(1 - D(x, G(x)))] \qquad (2)$$
The adversarial nature of the GAN framework dictates that the discriminator seeks to maximize this objective, assigning high scores $D(x, y)$ to expert-annotated labels and low scores $D(x, G(x))$ to generated masks. The generator, in contrast, seeks to minimize the objective, which drives $D(x, G(x))$ toward 1 and thus pushes its outputs to be indistinguishable from real labels. The resulting minimax game is defined by Equation (3).
$$G^{*} = \arg\min_{G} \max_{D} \; \mathbb{E}_{x, y \sim p_{data}(x, y)}[\log D(x, y)] + \mathbb{E}_{x \sim p_{data}(x)}[\log(1 - D(x, G(x)))] \qquad (3)$$
Inspired by Son et al. [26] and considering sea fog monitoring as an image segmentation task, we incorporated a binary cross-entropy segmentation loss $\mathcal{L}_{SEG}$ into the loss function to quantify the discrepancy between the generated mask and the ground truth label. The final objective can be expressed by Equation (4), where $\lambda$ is a weighting parameter used to balance the two terms; following Son et al. [26], it was set to 10.
$$G^{*} = \arg\min_{G} \max_{D} \; \mathcal{L}_{GAN}(G, D) + \lambda \mathcal{L}_{SEG}(G) \qquad (4)$$
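As a concrete illustration, the following is a hedged PyTorch sketch of the discriminator described above and one alternating optimization step for Equation (4). Only the block structure (five convolutional blocks, average pooling, fully connected layer with Sigmoid), the BCE segmentation term, and λ = 10 follow the text; the channel widths, the concatenation of image and mask as discriminator input, and the batch handling are our assumptions.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Five conv blocks (3x3 conv + BatchNorm + ReLU + 2x2 MaxPool),
    average pooling, and a fully connected layer with Sigmoid."""
    def __init__(self, in_ch: int = 2, widths=(32, 64, 128, 256, 512)):
        super().__init__()
        layers, prev = [], in_ch
        for w in widths:
            layers += [nn.Conv2d(prev, w, 3, padding=1), nn.BatchNorm2d(w),
                       nn.ReLU(inplace=True), nn.MaxPool2d(2)]
            prev = w
        self.features = nn.Sequential(*layers)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(prev, 1), nn.Sigmoid())

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # x: DNB image; y: sea fog mask (expert label or generator output).
        return self.head(self.features(torch.cat([x, y], dim=1)))

bce = nn.BCELoss()
lam = 10.0  # lambda in Equation (4), following Son et al. [26]

def train_step(G, D, opt_g, opt_d, x, y):
    real = torch.ones(x.size(0), 1)
    fake = torch.zeros(x.size(0), 1)

    # Discriminator: maximize log D(x, y) + log(1 - D(x, G(x))).
    opt_d.zero_grad()
    d_loss = bce(D(x, y), real) + bce(D(x, G(x).detach()), fake)
    d_loss.backward()
    opt_d.step()

    # Generator: fool D, plus the lambda-weighted segmentation loss L_SEG.
    opt_g.zero_grad()
    mask = G(x)
    g_loss = bce(D(x, mask), real) + lam * bce(mask, y)
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```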

3. Experiments and Results

Based on the constructed sea fog detection dataset, this study proposes a novel generative adversarial network model. To comprehensively evaluate its performance, we reproduced the latest operational infrared detection algorithm for comparison. Because sea fog detection can also be considered a specialized image segmentation task, we further selected several widely adopted deep learning models from related fields for comparative analysis, including R2U-Net [27], Attention R2U-Net, and DA-TransUNet [28]. Among these, Attention R2U-Net is a variant of R2U-Net [27] that incorporates an Attention Gate [29] into its skip connections. DA-TransUNet [28] employs a hybrid architecture integrating convolutional neural networks with the Vision Transformer [30], which has achieved remarkable performance in image segmentation. Additionally, to examine the contribution of individual modules within the proposed method, we designed three corresponding ablation experiments. This section provides a detailed description of the experimental setup and presents a specific analysis of selected sea fog case study results.

3.1. Experimental Setup

This study first partitioned the dataset, which contains sea fog data collected from 2017 to 2024. Given that the sea fog cases acquired in 2023 are more comprehensive and temporally cover the peak seasons of sea fog occurrence (spring and summer), thus being more representative, we designated the 2023 sea fog data as the test set, with the remaining data serving as the training set. Random flipping was applied for image augmentation during the training process.
The SEGAN model was implemented with the PyTorch (torch 2.4.0 + cuda 11.8) library and trained and tested on the aforementioned sea fog dataset. During training, the generator and discriminator were trained alternately for multiple epochs until convergence. In practice, the Adam optimizer with an initial learning rate of 0.0029 was used. All other deep learning models were likewise trained within the PyTorch framework, with consistent training environment configurations to ensure a fair comparison with SEGAN. To evaluate the performance of SEGAN, this study reproduced the state-of-the-art operational infrared detection algorithm, employing Amani et al.’s [11] method (Auto-DCD) as the benchmark. Notably, while the GOES-16 infrared channel data used by Amani et al. has a spatial resolution of 2 km, the data used in this study has a spatial resolution of 750 m. To adapt the Auto-DCD method to the spatial resolution of the VIIRS channels, we enlarged the 3 × 3 kernel used by Amani et al. to 9 × 9. Infrared data from the VIIRS M12 and M15 bands corresponding to a total of 162 sea fog cases were statistically processed to derive the final Auto-DCD test results. Fully connected Conditional Random Fields (Dense CRFs) are a statistical modeling method employed for pixel-level prediction tasks. To enhance the accuracy of the proposed model’s output, this study incorporated Dense CRFs for post-processing, using the approximate inference algorithm proposed by Krähenbühl and Koltun [31] to accelerate inference. Specifically, the confidence of the initial segmentation result was set to 0.9, and 12 iterations of inference were performed to obtain the final output.
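For reference, the following is a minimal sketch of this post-processing step using the pydensecrf package, which implements the inference algorithm of [31]. Only the label confidence (0.9) and the 12 inference iterations come from the text; the pairwise kernel parameters and the rendering of the single-channel DNB image as a 3-channel uint8 array are illustrative assumptions.

```python
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_labels

def refine_mask(binary_mask: np.ndarray, dnb_rgb: np.ndarray) -> np.ndarray:
    """binary_mask: initial SEGAN output (0/1); dnb_rgb: uint8 HxWx3 rendering."""
    h, w = binary_mask.shape
    crf = dcrf.DenseCRF2D(w, h, 2)  # two classes: background / sea fog
    # Unary term from the initial segmentation, with confidence 0.9.
    unary = unary_from_labels(binary_mask.astype(np.int32), 2,
                              gt_prob=0.9, zero_unsure=False)
    crf.setUnaryEnergy(unary)
    crf.addPairwiseGaussian(sxy=3, compat=3)        # spatial smoothness kernel
    crf.addPairwiseBilateral(sxy=80, srgb=13,       # appearance kernel
                             rgbim=np.ascontiguousarray(dnb_rgb), compat=10)
    q = crf.inference(12)                           # 12 iterations of inference
    return np.argmax(np.array(q).reshape(2, h, w), axis=0).astype(np.uint8)
```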

3.2. Evaluation Metrics

To quantitatively evaluate the detection performance of the proposed method, we employed six widely adopted evaluation metrics for quantitative analysis of the test results: probability of detection (POD), false alarm rate (FAR), Precision, F1-score, critical success index (CSI), and mean Intersection over Union (mIoU). These metrics assess the detection results from different perspectives, and their corresponding calculation formulas are as follows.
$$POD = \frac{TP}{TP + FN}$$

$$FAR = \frac{FP}{TN + FP}$$

$$Precision = \frac{TP}{TP + FP}$$

$$F1 = \frac{2 \times Precision \times POD}{Precision + POD}$$

$$CSI = \frac{TP}{TP + FP + FN}$$

$$mIoU = \frac{CSI + backIoU}{2}, \qquad backIoU = \frac{TN}{TN + FP + FN}$$
True Positives (TP), False Negatives (FN), False Positives (FP), and True Negatives (TN) are determined from the test results and defined as follows (a computational sketch is given after the list).
(1) TP: Sea fog occurs and is detected.
(2) FN: Sea fog occurs but is not detected.
(3) FP: Sea fog does not occur but is detected.
(4) TN: Sea fog does not occur and is not detected.
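As a minimal Python sketch, the six metrics can be computed from binary prediction and label arrays (1 = sea fog, 0 = no fog); the array names are illustrative, and degenerate cases (e.g., scenes with no fog pixels) are not handled.

```python
import numpy as np

def evaluate(pred: np.ndarray, label: np.ndarray) -> dict:
    tp = np.sum((pred == 1) & (label == 1))
    fn = np.sum((pred == 0) & (label == 1))
    fp = np.sum((pred == 1) & (label == 0))
    tn = np.sum((pred == 0) & (label == 0))

    pod = tp / (tp + fn)
    far = fp / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * pod / (precision + pod)
    csi = tp / (tp + fp + fn)
    back_iou = tn / (tn + fp + fn)   # IoU of the background class
    miou = (csi + back_iou) / 2      # sea fog IoU equals CSI
    return {"POD": pod, "FAR": far, "Precision": precision,
            "F1": f1, "CSI": csi, "mIoU": miou}
```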

3.3. Experimental Results

This section presents a comparative analysis between the proposed SEGAN model, the conventional infrared threshold-based method (Auto-DCD), and several established image segmentation models (including R2U-Net, Attention R2U-Net, and DA-TransUNet). The evaluation encompasses comprehensive statistical results across all selected metrics, along with detailed case studies of specific sea fog events. Through both quantitative and qualitative analyses, a comprehensive assessment of the superior performance of the proposed method is conducted. Specific test results are shown in Table 2.
We constructed a novel generative adversarial network model that effectively leverages the textural features of sea fog within VIIRS/DNB visible-light imagery data to accurately identify sea fog. Table 2 presents a comparison between SEGAN and the latest operational infrared threshold-based detection method (Auto-DCD), as well as three other deep learning models from related fields. Comprehensive evaluation metrics demonstrate that our proposed SEGAN model significantly outperforms other comparative methods. Regarding individual metrics, SEGAN improves the POD by 0.0632 and Precision by 0.1658 compared to Auto-DCD, while reducing the false alarm rate by 0.0287. In terms of composite metrics, SEGAN attained an F1-score and mIoU both exceeding 0.83, and a CSI surpassing 0.73, highlighting its superior performance. Collectively, these metrics demonstrate that the proposed method achieves a significant enhancement in overall detection accuracy compared to conventional infrared-based approaches.
In addition, comparative analyses were conducted with several widely adopted deep learning models from the image segmentation domain. As shown in Table 2, SEGAN outperforms the other three deep learning models across three comprehensive evaluation metrics (F1-score, CSI, and mIoU) as well as in POD, which indicates that our specifically designed SEGAN model is better suited for sea fog detection tasks than general-purpose models. Moreover, SEGAN surpasses DA-TransUNet in detection performance while achieving high-precision sea fog detection with a lightweight network architecture, fully exhibiting its capabilities. To comprehensively evaluate the detection performance of SEGAN, we now present a detailed analysis of selected sea fog case studies.
Figure 5 illustrates a sea fog event recorded by the Suomi NPP satellite on 10 March 2023. The DNB image data reveal that portions of the sea fog were obscured by thin cloud cover. Conventional infrared threshold-based methods cannot identify such sub-cloud sea fog, leading to partial detection omissions. This limitation arises because clouds exhibit high optical thickness in infrared bands, preventing the detection of underlying sea fog. Conversely, moonlight reflectance possesses stronger scattering properties in the visible spectrum. Consequently, partial sea fog information beneath thin clouds is observable in DNB imagery (as seen in the raw image of Figure 5). Leveraging this inherent advantage of DNB data, SEGAN effectively learns the continuous textural features of extensive sea fog within visible-light imagery, accurately identifying its spatial distribution. This enhances detection accuracy under thin cloud cover conditions. Notably, SEGAN also demonstrates effective detection of thin fog in the lower portion of the Figure 5 case study. These findings fully demonstrate that, compared to traditional infrared threshold methods, SEGAN’s utilization of DNB imagery significantly enhances sea fog detection accuracy in the presence of thin cloud cover.
As illustrated in Figure 5, both SEGAN and the other three deep learning models exhibit varying degrees of missed detection in this case study. Specifically, both R2U-Net and Attention R2U-Net demonstrate a POD notably lower than 0.7 (as shown in Table 3). Closer examination reveals that the missed detections predominantly occur in the upper-left region of the case, where the sea fog concentration is relatively low. This phenomenon may be attributed to the low fog concentration in these areas, which yields less distinct grayscale features in the imagery and consequently impairs model detection performance. Despite these challenges, SEGAN outperforms all other models across three comprehensive evaluation metrics (F1-score, CSI, and mIoU), achieving relatively superior detection results. Unlike conventional models that optimize pixel-wise loss functions, SEGAN’s adversarial training paradigm emphasizes semantic coherence in its output, rendering it particularly suitable for sea fog detection tasks and endowing it with enhanced robustness across varying sea fog concentrations.
Figure 6 presents a sea fog event recorded by the NOAA-20 satellite on 4 March 2023. The Auto-DCD method exhibited partial detection omissions in this case. This is likely attributable to suboptimal threshold selection within the Auto-DCD algorithm, leading to missed detections in certain areas and consequently reduced sea fog detection accuracy. In contrast, SEGAN accurately identified most of the sea fog extent. Despite SEGAN’s commendable performance, our analysis revealed remaining limitations. Within the regions marked by red boxes in Figure 6 and Figure 7, the SEGAN model demonstrated lower detection probability for spatially limited, isolated sea fog patches compared to large, continuous sea fog areas. Conversely, the Auto-DCD method performed better than SEGAN in these regions. This suggests that the infrared brightness temperature difference method possesses unique advantages in such scenarios. Additionally, the two cases in Figure 6 and Figure 7 indicate that SEGAN underperforms Attention R2U-Net in detecting scattered patches marked by rectangular boxes. This suggests that while SEGAN prioritizes global semantic coherence, it remains less effective in capturing localized features within small, fragmented regions of the imagery data. Analysis indicates that the proposed SEGAN model effectively learns the textural features of sea fog within visible-light imagery, enabling accurate identification. Its AI-based approach eliminates the need for manual threshold selection, thereby improving overall sea fog detection accuracy. Crucially, SEGAN’s utilization of DNB imagery significantly enhances detection accuracy under partial thin cloud cover conditions. Although SEGAN yields promising results, its detection efficacy for certain isolated, small-scale sea fog patches requires further improvement. Future improvements will focus on two key aspects: (1) optimizing the model architecture to enhance its feature extraction capacity, (2) incorporating infrared brightness temperature difference as auxiliary input data to enrich the model’s informational context and improve detection accuracy.

3.4. Ablation Experimental Results

To comprehensively evaluate the performance of the proposed model, we designed three ablation experiments. This subsection provides a detailed description of the experimental specifics. First, the settings for ablation experiment 1 are introduced. The generator of SEGAN is proposed based on U-Net [24]. Its skip connection structure preserves sufficient low-level features for the image segmentation task within the decoder path. However, preserving excessive low-level features may lead to information redundancy, potentially impairing the model’s overall performance. To fully exploit the model’s superior capabilities, we first designed ablation experiment 1 to investigate the number of skip connections. Specifically, while the original generator incorporates four skip connections, we incrementally reduced this number for testing, ultimately selecting the optimal parameter configuration. Given that high-level features contain richer semantic information and should be retained, we sequentially removed the skip connection modules closest to the input layer. Each configuration was trained until convergence, and the best test results were selected for comparison. The experimental results are presented in Table 4.
A comprehensive analysis of the evaluation metrics in Table 4 reveals that the model performance reached its optimal level when the number of skip connections was set to 2. At this configuration, the model also maintained a relatively moderate parameter count. Consequently, a skip connection count of 2 was ultimately selected. Further analysis of Table 4 indicates that model test performance was nearly identical when the skip connection count was less than 3, with particularly similar results observed between configurations of 2 and 3 skip connections. However, a relatively significant decline in POD occurred when the skip connection count was increased to 4. This decline may be attributed to the increased model parameter size coupled with information redundancy, collectively impairing model performance. Therefore, within this study, the generative adversarial model configuration with 2 skip connections delivered the best performance. As illustrated in Figure 8, selected test cases were analyzed to qualitatively assess model performance. The figure visually demonstrates the output results of the different model configurations. While the test results were generally similar across models, the configuration with 2 skip connections yielded the best overall detection effectiveness, consistent with the conclusions drawn from the quantitative evaluation metrics.
To enhance feature information fusion, we incorporated a spatial attention mechanism into the skip connection structure. To evaluate the efficacy of the spatial attention mechanism within SEGAN, we conducted a comparative experiment using the Hybrid Attention Mechanism (HAM) [32], a widely adopted approach in the neural network domain. The experimental results are presented in Table 5.
A comprehensive analysis of Table 5 reveals that the generative adversarial model incorporating HAM exhibited a certain degree of reduction in the false alarm rate (FAR) and an improvement in Precision compared to SEGAN. However, its probability of detection (POD) showed a significant decline, indicating a marked decrease in sea fog detection performance. Furthermore, based on composite metrics such as the critical success index (CSI), F1-score, and mean Intersection over Union (mIoU), the detection performance of the proposed SEGAN model is demonstrably superior to the variant incorporating HAM. Figure 9 presents results from selected test cases, clearly demonstrating that the variant with HAM exhibited partial detection omissions, whereas SEGAN achieved more accurate detection of sea fog.
To enhance the model’s feature extraction capability, this study introduced the SE-Net module into the generator. Ablation experiment 3 was designed to validate the contribution of the SE-Net module. Specifically, we trained both SEGAN and a variant excluding the SE-Net module, ensuring both reached convergence. The best test results corresponding to their respective optimal hyperparameters were selected for comparison. The results are presented in Table 6.
SEGAN surpassed the variant without the SE-Net module across key metrics including probability of detection (POD), F1-score, critical success index (CSI), and mean Intersection over Union (mIoU). Notably, POD increased by 0.0557, demonstrating that the SE-Net module effectively enhances the model’s feature extraction capability. As shown in Figure 10, selected sea fog cases reveal that SEGAN’s sea fog identification performance is superior to that of the variant without the SE-Net module, particularly in regions characterized by thin fog patches. The SE-Net module improves feature extraction by modeling dependencies among feature channels, thus explaining SEGAN’s enhanced sea fog detection effectiveness.

4. Discussion

This study proposes a novel generative adversarial network model (SEGAN) that utilizes VIIRS/DNB data to detect nighttime sea fog. Compared to conventional operational infrared detection methods, the proposed approach eliminates the need for complex threshold selection processes while improving detection accuracy. Although SEGAN demonstrates favorable detection performance, certain limitations remain. These limitations will be discussed in detail in this section.

4.1. Limitations of VIIRS Data

VIIRS/DNB data possess exceptional capability for nighttime visible-light cloud imaging. However, the actual imaging performance is influenced by lunar phase conditions. Under a low lunar illumination fraction, the limited intensity of the lunar radiation source results in detected reflectance values that are indistinguishable from noise, preventing effective cloud discrimination. Therefore, the proposed method is applicable only when the lunar illumination fraction is ≥70%. Additionally, since the DNB measures top-of-atmosphere reflectance, it cannot detect features beneath cloud layers. The DNB operates within a wavelength range of 0.5 to 0.9 μm, and thick clouds significantly attenuate its optical signals, limiting the detection capability primarily to radiance reflected from cloud tops. Consequently, sea fog completely obscured by high-level clouds remains undetectable.

4.2. Limited Applicability to Chinese Offshore Waters and the Sea of Japan

The SEGAN method demonstrates effective performance in the target maritime regions (China’s coastal waters and the Sea of Japan). However, its generalizability to other sea regions remains unverified due to insufficient data. Limited by the scarcity of meteorological records for sea fog events, this study collected sea fog occurrences only from China’s coastal waters and the Sea of Japan between 2017 and 2024. Furthermore, sea fog exhibits dynamic complexity and significant spatial heterogeneity: fundamental sea surface conditions (such as sea surface temperature and coastal topography), atmospheric environmental factors (such as surface wind conditions and humidity profiles), and regional meteorological regimes all influence the formation and persistence of sea fog. These variations may result in distinct sea fog characteristics across different sea regions, consequently limiting the generalization capability of the proposed model. To comprehensively evaluate the detection performance of the model, future work should involve collecting additional sea fog data from other maritime regions (e.g., the Grand Banks of Newfoundland) for further validation.

4.3. Missed Detection in Certain Areas

As demonstrated by the test cases in Figure 6 and Figure 7, SEGAN exhibits detection omissions for fragmented sea fog patches. The detection probability of SEGAN in these small-scale areas is statistically lower than in extensive sea fog regions, indicating inadequate learning of textural features characterizing small-scale sea fog. The texture patterns of small-scale sea fog in visible-light imagery lack the homogeneity observed in large-scale counterparts. This inherent complexity, further compounded by the limited dataset volume, likely prevented SEGAN from effectively learning the discriminative textural features of small-scale sea fog.

4.4. Comparison of Average Inference Time

To evaluate the inference performance of the proposed model, we recorded the average time required by each method to process an individual sea fog case (Table 7). All deep learning models performed inference on a single NVIDIA GeForce RTX 4090 GPU with 24 GB VRAM, while the traditional infrared threshold-based method (Auto-DCD) was computed on a single CPU (Intel Core i9-14900HX). As evidenced in Table 7, the average inference time of the deep learning models is less than 5 s, a substantial reduction compared to Auto-DCD’s 1199.0961 s. Furthermore, SEGAN achieves inference speeds comparable to R2U-Net and its variants while reducing inference time by 1.1779 s compared to DA-TransUNet. This demonstrates that the lightweight architectural design effectively minimizes computational overhead and accelerates inference, requiring relatively few computational resources. This efficiency enables SEGAN to be integrated with satellite data for rapid detection in practical scenarios, delivering timely and reliable sea fog information to support ship navigation.

4.5. Future Research Direction

In future work, we will enhance the sea fog detection performance of the proposed methodology in the following ways. (1) Dataset Expansion: The sea fog dataset will be extended by acquiring global sea fog records and collecting the corresponding VIIRS/DNB satellite data, improving the model’s generalization capability. (2) Integration of Multi-source Meteorological Data: Diverse meteorological data sources, such as infrared channel data and lidar data, will be incorporated to enhance applicability across diverse scenarios. As SEGAN currently utilizes only single-channel satellite imagery, detecting sea fog beneath cloud layers remains challenging. Lidar, providing vertically resolved high-resolution water vapor profiles throughout the atmospheric column, can effectively identify low clouds and sea fog. Future efforts could explore leveraging lidar data as auxiliary information to guide the proposed model toward detecting sub-cloud sea fog. (3) Incorporation of Sea Surface Temperature (SST): SST information will be considered to further boost detection capability. Given sea fog’s proximity to the sea surface, its temperature closely resembles the SST; utilizing the brightness temperature difference between the top of the atmosphere and the SST could aid in distinguishing high-level clouds, thereby improving sea fog detection. (4) Model Architecture Optimization: The model structure will be optimized to enhance feature learning capacity, specifically targeting improved detection of small-scale sea fog patches.

5. Conclusions

Sea fog is a relatively common hazardous weather phenomenon that significantly impacts ship navigation and carrier-based aircraft takeoff and landing. Sea fog persists longer within the nighttime marine boundary layer, resulting in greater impacts than daytime occurrences. However, existing detection methods face challenges in threshold selection and suffer from false alarms concerning spatial extent. To address these issues, this study developed a novel generative adversarial network model (SEGAN) by integrating the SE-Net module [25] and a spatial attention mechanism into the U-Net architecture [24]. SEGAN was trained on the constructed VIIRS/DNB dataset. To validate the model’s performance, the latest operational infrared sea fog detection algorithm (Auto-DCD) and three widely adopted image segmentation models (R2U-Net, Attention R2U-Net, and DA-TransUNet) were selected for comparison. The results demonstrate that SEGAN achieved a probability of detection of 0.8708, an improvement of 0.0632 over the Auto-DCD method; simultaneously, the false alarm rate decreased by 0.0287 and the CSI increased by 0.1587, fully illustrating SEGAN’s superior detection performance. Furthermore, SEGAN reduces inference time by 1.1779 s compared to DA-TransUNet while surpassing it in detection performance, achieving high-precision sea fog detection with a lightweight network architecture. Several case studies were selected for qualitative analysis, confirming that SEGAN’s detection performance is overall better than that of the comparative methods. Notably, in cases where sea fog is partially obscured by thin cloud layers, SEGAN exhibits more reliable detection than Auto-DCD, effectively improving detection accuracy. Unlike conventional models that rely on pixel-wise loss optimization, SEGAN’s adversarial training paradigm emphasizes semantic coherence in its output, rendering it particularly suitable for sea fog detection tasks and endowing it with enhanced robustness across varying sea fog concentrations.
SEGAN holds significant importance for ensuring the safety of ship navigation and carrier-based aircraft operations. By accurately identifying the spatial extent and distribution of sea fog, it provides reliable support for ship route planning, enabling vessels to avoid fog-affected areas effectively. Furthermore, sea fog can be utilized to conceal the movements of aircraft carriers. From a scientific perspective, sea fog is a crucial component of boundary layer clouds and exerts a significant influence on the global radiation budget. Nighttime sea fog events exhibit relatively high frequency and prolonged duration, exerting considerable influence on radiative processes. Accurate spatial distribution information of sea fog can provide reliable data for theoretical research on the global radiation balance, thereby contributing to the advancement of studies on global climate change.
Although SEGAN has achieved promising results, further analysis reveals certain limitations. First, the quality of DNB data is affected by lunar phase conditions; SEGAN can only be used to detect sea fog when the lunar illumination fraction is 70% or higher. Second, the collected sea fog data are currently limited to China’s offshore waters and the Sea of Japan, and the lack of data from other sea areas may constrain the model’s generalization capability. Third, the detection of scattered small-scale sea fog regions remains problematic, with the current detection probability for such areas being relatively low. To address these issues, we plan to implement improvements in the following aspects: (1) collecting global sea fog data to expand the dataset and enhance the model’s generalization ability; (2) incorporating multi-source input data such as lidar, satellite infrared channels, and sea surface temperature reanalysis data to broaden the applicability of the proposed method to diverse scenarios; and (3) optimizing the model structure to improve its feature extraction capabilities.

Author Contributions

Conceptualization, W.Q., X.C. and S.M.; methodology, W.Q., X.C. and S.M.; software, W.Q. and S.M.; validation, X.C. and S.M.; formal analysis, W.Q., X.C. and S.M.; investigation, W.Q.; resources, X.C. and S.M.; data curation, W.Q. and S.M.; writing—original draft preparation, W.Q.; writing—review and editing, X.C. and S.M.; visualization, W.Q.; supervision, X.C. and S.M.; project administration, X.C. and S.M.; funding acquisition, X.C. and S.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data available on request from the authors. We will release all data after the follow-up work is complete.

Acknowledgments

Thanks to NOAA for providing data support. All data used in this study can be downloaded from the following URL: https://www.aev.class.noaa.gov/saa/products/welcome (accessed on 2 June 2024).

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
VIIRS: Visible Infrared Imaging Radiometer Suite
DNB: Day/Night Band
SDRs: Sensor Data Records
NOAA: National Oceanic and Atmospheric Administration
MIR: Mid-infrared
TIR: Thermal infrared
AVHRR: Advanced Very High Resolution Radiometer
MODIS: Moderate Resolution Imaging Spectroradiometer
Suomi NPP: Suomi National Polar-orbiting Partnership
EOS: Earth Observing System
POD: Probability of detection
FAR: False alarm rate
CSI: Critical success index
mIoU: Mean Intersection over Union
SST: Sea surface temperature
DCD: Dual-channel brightness temperature difference technique
CALIPSO: Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations
AI: Artificial intelligence
COMS: Communication, Ocean and Meteorological Satellite
OSTIA: Operational Sea Surface Temperature and Sea Ice Analysis
JPSS: Joint Polar Satellite System
Dense CRFs: Fully connected Conditional Random Fields

References

  1. Dorman, C.E.; Mejia, J.; Koračin, D.; McEvoy, D. World Marine Fog Analysis Based on 58 Years of Ship Observations. Int. J. Climatol. 2020, 40, 145–168.
  2. Marine Fog: Challenges and Advancements in Observations, Modeling, and Forecasting; Koračin, D., Dorman, C.E., Eds.; Springer International Publishing: Cham, Switzerland, 2017; pp. 153–244; ISBN 978-3-319-45229-6.
  3. Fu, G.; Guo, J.; Pendergrass, A.; Li, P. An Analysis and Modeling Study of a Sea Fog Event over the Yellow and Bohai Seas. J. Ocean Univ. China 2008, 7, 27–34.
  4. Wagh, S.; Krishnamurthy, R.; Wainwright, C.; Wang, S.; Dorman, C.E.; Fernando, H.J.S.; Gultepe, I. Study of Stratus-Lowering Marine-Fog Events Observed During C-FOG. Bound.-Layer Meteorol. 2021, 181, 317–344.
  5. Freeman, E.; Woodruff, S.D.; Worley, S.J.; Lubker, S.J.; Kent, E.C.; Angel, W.E.; Berry, D.I.; Brohan, P.; Eastman, R.; Gates, L.; et al. ICOADS Release 3.0: A Major Update to the Historical Marine Climate Record. Int. J. Climatol. 2017, 37, 2211–2232.
  6. Hunt, G.E. Radiative Properties of Terrestrial Clouds at Visible and Infra-red Thermal Window Wavelengths. Q. J. R. Meteorol. Soc. 1973, 99, 346–369.
  7. Wu, X.; Li, S. Automatic Sea Fog Detection over Chinese Adjacent Oceans Using Terra/MODIS Data. Int. J. Remote Sens. 2014, 35, 7430–7457.
  8. Ellrod, G.P. Advances in the Detection and Analysis of Fog at Night Using GOES Multispectral Infrared Imagery. Weather Forecast. 1995, 10, 606–619.
  9. Cermak, J.; Bendix, J. Dynamical Nighttime Fog/Low Stratus Detection Based on Meteosat SEVIRI Data: A Feasibility Study. Pure Appl. Geophys. 2007, 164, 1179–1192.
  10. Chaurasia, S.; Jenamani, R.K. Detection of Fog Using Temporally Consistent Algorithm with INSAT-3D Imager Data over India. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 5307–5313.
  11. Amani, M.; Mahdavi, S.; Bullock, T.; Beale, S. Automatic Nighttime Sea Fog Detection Using GOES-16 Imagery. Atmos. Res. 2020, 238, 104712.
  12. Miller, S.D.; Noh, Y.; Grasso, L.D.; Seaman, C.J.; Ignatov, A.; Heidinger, A.K.; Nam, S.; Line, W.E.; Petrenko, B. A Physical Basis for the Overstatement of Low Clouds at Night by Conventional Satellite Infrared-Based Imaging Radiometer Bi-Spectral Techniques. Earth Space Sci. 2022, 9, e2021EA002137.
  13. Jiang, J.; Yan, W.; Ma, S.; Jie, Y.; Zhang, X.; Hu, S.; Fan, L.; Xia, L. Three Cases of a New Multichannel Threshold Technique to Detect Fog/Low Stratus during Nighttime Using SNPP Data. Weather Forecast. 2015, 30, 1763–1780.
  14. Jiang, J.; Yao, Z.; Liu, Y. Nighttime Fog and Low Stratus Detection under Multi-Scene and All Lunar Phase Conditions Using S-NPP/VIIRS Visible and Infrared Channels. ISPRS J. Photogramm. Remote Sens. 2024, 218, 102–113.
  15. Hu, S.; Ma, S.; Yan, W.; Jiang, J.; Huang, Y. A New Multichannel Threshold Algorithm Based on Radiative Transfer Characteristics for Detecting Fog/Low Stratus Using Night-Time NPP/VIIRS Data. Int. J. Remote Sens. 2017, 38, 5919–5933.
  16. Hu, T.; Jin, Z.; Yao, W.; Lv, J.; Jin, W. Cloud Image Retrieval for Sea Fog Recognition (CIR-SFR) Using Double Branch Residual Neural Network. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 3174–3186.
  17. Yi, L.; Li, M.; Liu, S.; Shi, X.; Li, K.-F.; Bendix, J. Detection of Dawn Sea Fog/Low Stratus Using Geostationary Satellite Imagery. Remote Sens. Environ. 2023, 294, 113622.
  18. Shin, D.; Kim, J.-H. A New Application of Unsupervised Learning to Nighttime Sea Fog Detection. Asia-Pac. J. Atmos. Sci. 2018, 54, 527–544.
  19. Huang, Y.; Wu, M.; Jiang, X.; Li, J.; Xu, M.; Zhang, C.; Guo, J. Weakly Supervised Sea Fog Detection in Remote Sensing Images via Prototype Learning. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–13.
  20. Miller, S.; Straka, W.; Mills, S.; Elvidge, C.; Lee, T.; Solbrig, J.; Walther, A.; Heidinger, A.; Weiss, S. Illuminating the Capabilities of the Suomi National Polar-Orbiting Partnership (NPP) Visible Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band. Remote Sens. 2013, 5, 6717–6766.
  21. Cho, Y.-K.; Kim, M.-O.; Kim, B.-C. Sea Fog around the Korean Peninsula. J. Appl. Meteorol. 2000, 39, 2473–2479.
  22. Visible Infrared Imaging Radiometer Suite (VIIRS) Sensor Data Record (SDR) User’s Guide Version 1.3. Available online: https://ncc.nesdis.noaa.gov/documents/documentation/viirs-users-guide-tech-report-142a-v1.3.pdf (accessed on 2 March 2017).
  23. Wang, H.; Wang, H.; Yang, Z.; Yin, J.; Gao, S. Summer 2017 Marine Weather Review. J. Shandong Meteorol. 2017, 37, 75–84.
  24. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015; Springer: Cham, Switzerland, 2015; Volume 9351, pp. 234–241.
  25. Hu, J.; Shen, L.; Sun, G. Squeeze-and-Excitation Networks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 7132–7141.
  26. Son, J.; Park, S.; Jung, K.-H. Retinal Vessel Segmentation in Fundoscopic Images with Generative Adversarial Networks. arXiv 2017, arXiv:1706.09318.
  27. Alom, M.Z.; Hasan, M.; Yakopcic, C.; Taha, T.M.; Asari, V.K. Recurrent Residual Convolutional Neural Network Based on U-Net (R2U-Net) for Medical Image Segmentation. arXiv 2018, arXiv:1802.06955.
  28. Sun, G.; Pan, Y.; Kong, W.; Xu, Z.; Ma, J.; Racharak, T.; Nguyen, L.-M.; Xin, J. DA-TransUNet: Integrating Spatial and Channel Dual Attention with Transformer U-Net for Medical Image Segmentation. Front. Bioeng. Biotechnol. 2024, 12, 1398237.
  29. Oktay, O.; Schlemper, J.; Folgoc, L.L.; Lee, M.J.; Heinrich, M.; Misawa, K.; Mori, K.; McDonagh, S.G.; Hammerla, N.Y.; Kainz, B.; et al. Attention U-Net: Learning Where to Look for the Pancreas. arXiv 2018, arXiv:1804.03999.
  30. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An Image Is Worth 16×16 Words: Transformers for Image Recognition at Scale. arXiv 2020, arXiv:2010.11929.
  31. Krähenbühl, P.; Koltun, V. Efficient Inference in Fully Connected CRFs with Gaussian Edge Potentials. arXiv 2012, arXiv:1210.5644.
  32. Li, G.; Fang, Q.; Zha, L.; Gao, X.; Zheng, N. HAM: Hybrid Attention Module in Deep Convolutional Neural Networks for Image Classification. Pattern Recognit. 2022, 129, 108785.
Figure 1. Schematic Diagram of the Study Areas. (The area delineated by the yellow box represents the target maritime region of this study).
Figure 2. DNB imagery data under different lunar phases. (a) Lunar illumination fraction: 64.27. (b) Lunar illumination fraction: 64.43. (c) Lunar illumination fraction: 52.34. (d) Lunar illumination fraction: 73.84. (e) Lunar illumination fraction: 95.24. (f) Lunar illumination fraction: 99.28.
Figure 3. SEGAN method.
Figure 4. Structural diagram of the generator of SEGAN.
Figure 5. Results of sea fog case 1.
Figure 6. Results of sea fog case 2.
Figure 7. Results of sea fog case 3.
Figure 8. Results of ablation experiment 1.
Figure 9. Results of ablation experiment 2.
Figure 10. Results of ablation experiment 3.
Table 1. Specific information about all VIIRS bands.

| Band | Wavelength/μm | Nadir Resolution/km | Edge Resolution/km |
|------|---------------|---------------------|--------------------|
| M1 | 0.402–0.422 | 0.75 | 1.6 |
| M2 | 0.436–0.454 | 0.75 | 1.6 |
| M3 | 0.478–0.498 | 0.75 | 1.6 |
| M4 | 0.545–0.565 | 0.75 | 1.6 |
| M5 | 0.662–0.682 | 0.75 | 1.6 |
| M6 | 0.739–0.754 | 0.75 | 1.6 |
| M7 | 0.846–0.885 | 0.75 | 1.6 |
| M8 | 1.230–1.250 | 0.75 | 1.6 |
| M9 | 1.371–1.386 | 0.75 | 1.6 |
| M10 | 1.580–1.640 | 0.75 | 1.6 |
| M11 | 2.225–2.275 | 0.75 | 1.6 |
| M12 | 3.660–3.840 | 0.75 | 1.6 |
| M13 | 3.973–4.128 | 0.75 | 1.6 |
| M14 | 8.400–8.700 | 0.75 | 1.6 |
| M15 | 10.263–11.263 | 0.75 | 1.6 |
| M16 | 11.538–12.488 | 0.75 | 1.6 |
| I1 | 0.600–0.680 | 0.375 | 0.8 |
| I2 | 0.846–0.885 | 0.375 | 0.8 |
| I3 | 1.580–1.640 | 0.375 | 0.8 |
| I4 | 3.550–3.930 | 0.375 | 0.8 |
| I5 | 10.500–12.400 | 0.375 | 0.8 |
| DNB | 0.5–0.9 | 0.75 | 0.75 |
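
Of the bands in Table 1, only the DNB radiances serve as SEGAN input. The following is a minimal sketch of reading a DNB granule with h5py; the HDF5 dataset path follows our reading of the VIIRS SDR layout described in the SDR User's Guide [22], the file name is a hypothetical example, and the log-scaling is a common normalization choice rather than the paper's documented preprocessing.

```python
import h5py
import numpy as np

# Hypothetical SDR granule name; real SVDNB files carry orbit/time
# stamps, e.g. SVDNB_npp_dYYYYMMDD_t..._e..._b..._c..._*.h5
GRANULE = "SVDNB_npp_example.h5"

with h5py.File(GRANULE, "r") as f:
    # Dataset path per the VIIRS SDR layout (an assumption here;
    # verify against the SDR User's Guide for your product version)
    dnb = f["All_Data/VIIRS-DNB-SDR_All/Radiance"][:].astype(np.float32)

# Nighttime DNB radiances span several orders of magnitude, so a
# log-scaling is a common choice before display or model training
dnb_log = np.log10(np.clip(dnb, 1e-11, None))
```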
Table 2. Sea fog detection results.

| Method | POD | FAR | Precision | F1-Score | CSI | mIoU |
|--------|-----|-----|-----------|----------|-----|------|
| R2U-Net | 0.8512 | 0.0335 | 0.8131 | 0.8124 | 0.7017 | 0.8246 |
| Attention R2U-Net | 0.8571 | 0.0281 | 0.8453 | 0.8327 | 0.7321 | 0.8437 |
| DA-TransUNet | 0.8638 | 0.0254 | 0.8405 | 0.8374 | 0.7356 | 0.8468 |
| Auto-DCD | 0.8076 | 0.0553 | 0.6739 | 0.7184 | 0.5808 | 0.7535 |
| SEGAN | 0.8708 | 0.0266 | 0.8397 | 0.8381 | 0.7395 | 0.8498 |
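
For reference, the scores in Tables 2–6 derive from the pixel-wise confusion matrix between a detection mask and its label. The sketch below is a minimal Python illustration under our own assumptions: FAR is taken as the false alarm rate FP/(FP + TN), consistent with the magnitudes reported above, and mIoU as the mean IoU of the fog and non-fog classes; scores are aggregated over all pixels, whereas the paper's averaging over scenes may differ.

```python
import numpy as np

def fog_metrics(pred, truth):
    """Pixel-wise detection metrics from binary fog masks (True = fog)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.sum(pred & truth)    # fog correctly detected
    fp = np.sum(pred & ~truth)   # clear pixels flagged as fog
    fn = np.sum(~pred & truth)   # fog pixels missed
    tn = np.sum(~pred & ~truth)  # clear pixels correctly rejected

    pod = tp / (tp + fn)                      # probability of detection
    far = fp / (fp + tn)                      # false alarm rate (assumed form)
    precision = tp / (tp + fp)
    f1 = 2 * precision * pod / (precision + pod)
    csi = tp / (tp + fp + fn)                 # critical success index
    # mean IoU over the fog and non-fog classes
    miou = 0.5 * (tp / (tp + fp + fn) + tn / (tn + fn + fp))
    return {"POD": pod, "FAR": far, "Precision": precision,
            "F1-Score": f1, "CSI": csi, "mIoU": miou}
```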
Table 3. Detection results of sea fog case 1.

| Method | POD | FAR | Precision | F1-Score | CSI | mIoU |
|--------|-----|-----|-----------|----------|-----|------|
| R2U-Net | 0.6831 | 0.0035 | 0.9655 | 0.8001 | 0.6668 | 0.8101 |
| Attention R2U-Net | 0.5158 | 0.0007 | 0.9911 | 0.6785 | 0.5134 | 0.7241 |
| DA-TransUNet | 0.7344 | 0.0024 | 0.9751 | 0.8378 | 0.7208 | 0.8426 |
| Auto-DCD | 0.4125 | 0.0345 | 0.6093 | 0.4919 | 0.3262 | 0.6115 |
| SEGAN | 0.7516 | 0.0018 | 0.9821 | 0.8515 | 0.7414 | 0.8543 |
Table 4. Ablation experiment 1 results.

| Method | POD | FAR | Precision | F1-Score | CSI | mIoU |
|--------|-----|-----|-----------|----------|-----|------|
| SEGAN (skip-connection = 1) | 0.8613 | 0.0270 | 0.8380 | 0.8295 | 0.7274 | 0.8433 |
| SEGAN (skip-connection = 2) | 0.8708 | 0.0266 | 0.8397 | 0.8381 | 0.7395 | 0.8498 |
| SEGAN (skip-connection = 3) | 0.8606 | 0.0243 | 0.8474 | 0.8365 | 0.7327 | 0.8472 |
| SEGAN (skip-connection = 4) | 0.8323 | 0.0240 | 0.8401 | 0.8203 | 0.7112 | 0.8336 |
Table 5. Ablation experiment 2 results.

| Method | POD | FAR | Precision | F1-Score | CSI | mIoU |
|--------|-----|-----|-----------|----------|-----|------|
| SEGAN | 0.8708 | 0.0266 | 0.8397 | 0.8381 | 0.7395 | 0.8498 |
| SEGAN (with HAM [32]) | 0.8152 | 0.0217 | 0.8557 | 0.8142 | 0.7040 | 0.8300 |
Table 6. Ablation experiment 3 results.

| Method | POD | FAR | Precision | F1-Score | CSI | mIoU |
|--------|-----|-----|-----------|----------|-----|------|
| SEGAN | 0.8708 | 0.0266 | 0.8397 | 0.8381 | 0.7395 | 0.8498 |
| SEGAN (without SE-Net [25]) | 0.8151 | 0.0220 | 0.8614 | 0.8108 | 0.7079 | 0.8331 |
Table 7. Comparison of average inference time across methods.

| Method | Average Inference Time without Dense CRFs (s) | Average Inference Time with Dense CRFs (s) |
|--------|-----------------------------------------------|--------------------------------------------|
| R2U-Net | 3.3563 | 3.5739 |
| Attention R2U-Net | 3.5744 | 3.7904 |
| DA-TransUNet | 4.5673 | 4.7269 |
| Auto-DCD | 1199.10 | -- |
| SEGAN | 3.3894 | 3.4723 |
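
Table 7 reports runtimes with and without the Dense CRF post-processing step [31] used to refine the raw segmentation. As an illustration only, the sketch below shows how such a refinement could be applied with the third-party pydensecrf package; the kernel widths (sxy, srgb), compatibility weights, and iteration count are illustrative guesses, not the settings used in the paper.

```python
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def refine_with_dense_crf(prob_fog, guide_image, iters=5):
    """Refine a fog-probability map with a fully connected CRF [31].

    prob_fog    : (H, W) float array of fog probabilities.
    guide_image : (H, W, 3) uint8 array driving the appearance kernel,
                  e.g. a DNB radiance image replicated to three channels.
    """
    h, w = prob_fog.shape
    p = np.clip(prob_fog, 1e-6, 1.0 - 1e-6)
    probs = np.stack([1.0 - p, p]).astype(np.float32)  # (2, H, W): clear, fog

    d = dcrf.DenseCRF2D(w, h, 2)
    d.setUnaryEnergy(unary_from_softmax(probs))
    # Smoothness kernel: neighboring pixels prefer the same label
    d.addPairwiseGaussian(sxy=3, compat=3)
    # Appearance kernel: pixels with similar brightness group together
    d.addPairwiseBilateral(sxy=80, srgb=13,
                           rgbim=np.ascontiguousarray(guide_image),
                           compat=10)
    q = d.inference(iters)
    return np.argmax(q, axis=0).reshape(h, w)  # 1 = fog, 0 = clear
```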