Article

Monitoring Mesoscale Convective System Using Swin-Unet Network Based on Daytime True Color Composite Images of Fengyun-4B

1 School of Remote Sensing and Geomatics Engineering, Nanjing University of Information Science and Technology, Nanjing 210044, China
2 Laboratory for Regional Oceanography and Numerical Modeling, Qingdao National Laboratory for Marine Science and Technology, Qingdao 266237, China
3 Technology Innovation Center for Integration Applications in Remote Sensing and Navigation, Ministry of Natural Resources, Nanjing 210044, China
4 Jiangsu Engineering Center for Collaborative Precise Navigation/Positioning and Smart Applications, Nanjing 210044, China
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(23), 5572; https://doi.org/10.3390/rs15235572
Submission received: 16 October 2023 / Revised: 25 November 2023 / Accepted: 26 November 2023 / Published: 30 November 2023
(This article belongs to the Special Issue Advances in Remote Sensing and Atmospheric Optics)

Abstract
The monitoring of mesoscale convective systems (MCS) is typically based on satellite infrared data, and there is currently limited research on identifying MCS from true color composite cloud imagery. In this study, an MCS dataset was created from the true color composite cloud imagery of the Fengyun-4B geostationary meteorological satellite, and an MCS identification model was developed based on the Swin-Unet network. The dataset was categorized into continental MCS and oceanic MCS, and the model's performance on these two types was examined. Experimental results indicated that the model achieved a recall of 83.3% for continental MCS and 86.1% for oceanic MCS, performing better over the ocean. These results suggest that using true color composite cloud imagery for MCS monitoring is feasible and that the Swin-Unet network outperforms traditional convolutional neural networks. We also find that oceanic MCS occur more frequently, span a wider distribution range, and cover larger areas than continental MCS, with some regions exhibiting stronger convection. This study provides a novel approach for satellite remote-sensing-based MCS monitoring.

Graphical Abstract

1. Introduction

Severe convective weather refers to atmospheric conditions characterized by the rapid ascent of unstable air masses, often accompanied by strong vertical wind shear. Atmospheric instability can lead to brief yet intense extreme weather such as heavy rainfall, thunderstorms, hail, and strong winds [1,2,3,4]. The occurrence of severe convective weather is often associated with mesoscale convective systems (MCS), making the prediction and monitoring of MCS a focal point of research in the meteorological community [2,3,5,6]. The forecasting and monitoring of the convective initiation and development of MCS can rely on numerical weather prediction (NWP) [7,8], Doppler weather radars, and satellite observation [9,10]. Doppler weather radars primarily rely on the Z-R relationship to identify convective initiation and MCS: regions with reflectivity values exceeding 35 dBZ are considered areas of active convection, and this threshold is used as an indicator of convection initiation [11,12]. Satellite remote sensing primarily relies on specific satellite channels designed for monitoring MCS [13,14], making it one of the most commonly used methods for studying MCS in current research.
Compared to polar-orbiting meteorological satellites, geostationary meteorological satellites offer the advantage of high spatiotemporal resolution for forecasting [15], monitoring [16,17,18,19], and tracking MCS [20,21]. Satellite remote sensing for monitoring severe convective weather primarily relies on satellite cloud imagery to identify MCS. MCS exhibit different characteristics in various spectral bands: in the visible spectrum, they appear as regions of high reflectivity, while in the infrared spectrum, they are characterized by lower brightness temperatures. Currently, MCS identification in research is primarily based on the infrared spectrum (around 10.7 μm) because this wavelength provides a reliable physical basis. Based on survey results, MCS are often classified into MαCS and MβCS according to brightness temperature and morphology criteria [2,22,23]. Methods for monitoring MCS using satellite remote sensing include single-channel threshold methods (e.g., 241 K, 221 K) [24,25,26] and the water vapor-infrared channel approach [27,28], as well as deep-learning techniques such as the deep belief nets (DBN) employed by Zheng [29]. Geostationary meteorological satellites typically carry a range of spectral channels, including reflectance, near-infrared, shortwave infrared, and longwave infrared channels. MCS exhibit distinctive texture and morphological features in the reflectance channels that differentiate them from clear sky, cloud layers, cirrus clouds, and thin clouds. However, few algorithms can directly identify MCS from true color composite cloud imagery, which typically relies on visual interpretation and makes it challenging to accurately extract MCS information. Many satellites carry visible-light channels, which primarily measure the reflectivity/albedo of atmospheric targets.
True color composite images resemble what the human eye observes more closely than infrared imagery does, and they provide information on object contours and internal texture. Limited by the imaging characteristics of these bands, MCS monitoring from true color composite cloud images requires emerging computer vision techniques. Semantic segmentation is one of the current research hotspots, yet few scholars have directly identified MCS from true color composite cloud images. Direct identification of MCS from such imagery is therefore worth exploring: it would enable many non-specialized satellites to be used for meteorological monitoring and improve the meteorological community's ability to monitor and forecast MCS. Investigating MCS identification based on true color composite cloud imagery is thus of significant importance, as it could enable non-meteorological satellites to monitor MCS as well.
Due to the influence of the monsoon climate, the northwest Pacific region experiences frequent convective weather during the summer season. Depending on the underlying surface characteristics, convective cloud systems in this region can be classified into continental convective systems and oceanic convective systems. Yang et al. [30] used data from FY-2E and CloudSat/CALIPSO to study MCS in terms of non-penetrative (DCwo)/penetrative convection (CO), infrared cloud top brightness temperature, cloud cluster characteristics, cloud cluster area, eccentricity, and other aspects. They pointed out that there are significant differences between continental MCS and oceanic MCS.
The Transformer model [31], initially developed for natural language processing, has found extensive applications in the field of remote-sensing image processing. Many researchers have proposed derivative models such as ViT [32], Swin-Transformer [33], and TransUNet [34] based on this architecture. Unlike convolutional neural networks (CNNs), which struggle to capture contextual information effectively, Transformer networks with attention mechanisms have proven to be highly effective in addressing this issue. In this study, we began by creating true color composite cloud imagery using data from the Fengyun-4B geostationary meteorological satellite. We also supplemented this with infrared and water vapor band data to construct a comprehensive true color composite cloud imagery MCS dataset. We used the Swin-Unet network [35] to build a model for monitoring MCS in true color composite cloud imagery. Additionally, we compared the performance of this model with three traditional convolutional neural networks (FCN-8s, SegNet, and Unet). Furthermore, we conducted an evaluation and analysis of MCS, distinguishing between continental MCS and oceanic MCS. The technical workflow of this study is depicted in Figure 1.
This article is divided into seven sections. Section 1 provides an introduction to the research background. Section 2 introduces the satellite data and study area. Section 3 elucidates the research methodology and evaluation metrics. Section 4 explains the predictions of the model and comparatively analyzes the continental MCS and oceanic MCS. Section 5 selects a case study and compares it with longwave infrared monitoring results. Section 6 is a discussion of our study. Finally, Section 7 is the conclusion of this article.

2. Data and Study Area

2.1. Fengyun-4B Geostationary Meteorological Satellite AGRI and GPM Precipitation Data

China's geostationary meteorological satellites include the Fengyun-2 series and the Fengyun-4 series. The Fengyun-4 series represents the new generation of geostationary meteorological satellites [36,37], comprising Fengyun-4A and Fengyun-4B. Fengyun-4A serves as an experimental satellite, while Fengyun-4B is the operational satellite responsible for high-frequency observations of the atmosphere and cloud layers. These satellites utilize a three-axis stable attitude control method, enhancing data stability compared to the spin-stabilized Fengyun-2 series. The improved temporal and spatial resolution of the Fengyun-4 satellites allows for full-disk observations every 15 min and rapid regional observations over China every 5 min. The spatial resolution of the visible/near-infrared channels is around 0.5–1 km, and that of the infrared channels is approximately 2–4 km. This enables the observation of meteorological elements such as typhoons, severe convective weather, and sea fog [15,38,39,40,41].
The successful launch of Fengyun-4B on 3 June 2021 positioned it in geostationary orbit over the equator at 133°E longitude. This advanced satellite is equipped with the next-generation AGRI (Advanced Geostationary Radiation Imager), boasting an extensive spectral range spanning from 0.4 μm to 13.8 μm and covering visible and near-infrared bands (C01, C02, C03), shortwave infrared bands (C04, C05, C06), midwave infrared bands (C07, C08), water vapor bands (C09, C10, C11), and longwave infrared bands (C12, C13, C14, C15). Notably, Fengyun-4B introduces the C11 channel, centered at 7.42 μm, enabling the observation of lower-level water vapor (Table 1). Complementing its capabilities are additional payloads, including SEP (Space Environment Monitoring Instrument Package), GHI (Geo High-speed Imager), and GIIRS (Geostationary Interferometric Infrared Sounder), facilitating space weather monitoring, rapid cloud imaging, and high-frequency three-dimensional atmospheric observations. Fengyun-4B's data have been available to users since 1 June 2022.
Global Precipitation Measurement (GPM) products were also utilized in this study. The research made use of the GPM_3IMERGHH data available at https://disc.gsfc.nasa.gov/datasets/GPM_3IMERGHH_07/summary (accessed on 10 November 2023). This product belongs to the IMERG Level-3 category with a spatial resolution of 0.1° × 0.1° and a temporal resolution of 30 min. Level-3 data for GPM are generated based on GPM IMERG (Integrated Multi-satellite Retrievals for GPM), which is mainly used to compare the consistency of MCS locations with precipitation data.

2.2. Study Area

The research focuses on the Northwestern Pacific region (90°–155°E, 0°–55°N), which includes countries and regions such as China, Japan, the Philippines, the Indochinese Peninsula, and the Korean Peninsula (Figure 2). This area falls within the coverage of the AGRI sensor and encompasses both continental and oceanic areas. Satellite observations have indicated an increasing frequency and intensity of mesoscale convective systems (MCS) in this region [42,43]. Influenced by the monsoon climate, this area is one of the globally high-frequency MCS occurrence regions. Over land, particularly in the middle and lower reaches of the Yangtze River plain, MCS formation is primarily associated with the large-scale circulation patterns during the East Asian monsoon season [44]. It is often linked to the mei-yu front, where warm and moist air is lifted along the front in the presence of low-level wind shear, leading to the initiation of MCS. MCS events in this region are frequently embedded within the mei-yu front or manifest as isolated thunderstorm cells [45]. In contrast, over the ocean, strong convective activity appears in the form of tropical cloud clusters. These clusters are commonly associated with the development of tropical disturbances and easterly waves in the tropical convergence zone. Relative to continental MCS, oceanic MCS typically exhibit a more uniform structure with vigorous convection development [46,47].

3. Method and Evaluation Metrics

3.1. The Characteristics of MCS (Mesoscale Convective Systems) in Different Spectral Bands

The discussion of MCS characteristics in this context is primarily based on two-dimensional remote-sensing imagery and does not involve the three-dimensional structural features of MCS. Reflectance-band remote-sensing images primarily depict MCS reflectivity; MCS predominantly appear as high-reflectance features, appearing white or gray in the images. Developing MCS exhibit loose, granular texture characteristics, while mature MCS exhibit compact and uniform textures. Due to vigorous convective development, their central region is close to the top of the troposphere, showing pronounced vertical extension with low cloud bases and high cloud tops. This structural feature also produces distinct shadow effects, with relatively dark shadow areas typically found beneath MCS. Influenced by internal particle collision and merging, cloud droplets and ice crystals within MCS typically have larger particle radii, a characteristic that can be effectively detected in optical bands, making it suitable for identifying optical thickness.
As shown in Figure 3, infrared band imagery characterizes the brightness temperature of MCS, with areas of vigorous convection displaying noticeably lower brightness temperatures than the surrounding cloud regions. There is also a distinct low-value area within the MCS, typically representing deep convective regions. In research, a specific threshold is often applied to infrared cloud imagery to identify MCS [48]. The grayscale distribution in the water vapor channel provides information about the water vapor distribution and concentration, and MCS are well represented in water vapor imagery, with features similar to the infrared band. Additionally, to enhance the visualization of MCS, enhanced infrared cloud imagery is commonly used to depict MCS cloud clusters (Figure 3d). In enhanced infrared cloud maps, meteorologists generally partition the brightness temperature range of the original grayscale image and apply pseudo-color compositing, making it easy to distinguish the convective intensity, extent, and cloud-top height of MCS areas.
Satellite remote-sensing cloud imagery provides both cloud pixel characteristics and cloud system features for MCS. In cases of severe convective weather, such as MCS development, temporal changes are rapid, necessitating continuous observational data. During the initial formation of MCS, several thunderstorm cells may develop over small-scale terrain features and convergence zones, giving rise to convective weather. In the mature stage of the MCS, with the influx of moist and unstable lower-level air, the moist layer thickens, leading to vigorous convective updrafts, and the thunderstorm weather transitions into heavy rainfall.

3.2. Mesoscale Convective Systems (MCS) Label Dataset

The creation of the dataset involves the initial step of synthesizing true color images based on the characteristics of convective cloud clusters in visible light imagery. The Advanced Geostationary Radiation Imager (AGRI) was first deployed on FY-4A, and it has spectral response functions that differ from FY-4B AGRI (Figure 4). Since there is no green channel, a simulation of the green channel is required during the true color composite, following commonly used algorithms for the true color composite from geostationary meteorological satellite data [49,50].
Due to the absence of a dedicated green channel in AGRI, it is necessary to construct three channels, namely NewB, NewG, and NewR, for generating true color images. The red channel, centered at 0.65 μm, has a relatively wide spectral range that absorbs some information from the green and near-infrared bands. This spectral characteristic can lead to a reddish bias in the synthesized images. In contrast, the FY-4B AGRI spectral response function has its peak in the red spectral band, somewhat attenuating the energy in the green band. Therefore, adjustments to the coefficients of the three originally constructed channels are required. After applying zenith angle correction to the original reflectance images, NewB, NewG, and NewR are constructed using the following method, and the data are then normalized to a range of 0 to 255. Equations (1)–(3) adopt the method proposed by Yan et al. [51], with coefficients adjusted according to the Fengyun-4B spectral response functions to generate a simulated "green" channel, where C_AGRI01 represents the reflectance value of the first channel of Fengyun-4B AGRI (i.e., the blue channel after radiometric calibration), and similarly, C_AGRI02 and C_AGRI03 represent the reflectance values of the second and third channels, respectively.
NewB = C_AGRI01  (1)
NewG = 0.5 × C_AGRI01 + 0.35 × C_AGRI02 + 0.2 × C_AGRI03  (2)
NewR = C_AGRI02 + 0.1 × C_AGRI03  (3)
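Equations (1)–(3) can be sketched in Python as follows, assuming the zenith-angle correction has already been applied; the min–max normalization to 0–255 is one illustrative choice, not the paper's exact normalization:

```python
import numpy as np

def compose_true_color(c01, c02, c03):
    """Build pseudo true-color channels from FY-4B AGRI reflectances.

    c01, c02, c03: zenith-angle-corrected reflectance arrays for the blue
    (0.47 um), red (0.65 um), and near-infrared (0.825 um) channels.
    Coefficients follow Equations (1)-(3).
    """
    new_b = c01
    new_g = 0.5 * c01 + 0.35 * c02 + 0.2 * c03   # simulated green channel
    new_r = c02 + 0.1 * c03
    rgb = np.stack([new_r, new_g, new_b], axis=-1)
    # Illustrative min-max normalization to the 0-255 display range
    span = max(rgb.max() - rgb.min(), 1e-12)
    rgb = (rgb - rgb.min()) / span * 255.0
    return np.rint(rgb).astype(np.uint8)
```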
To enhance the contrast of the true color composite cloud imagery, a color image stretching process is applied. The pixel values are mapped from the input range [0, 30, 60, 120, 190, 255] to the output range [0, 110, 160, 210, 240, 255]. Additionally, a cubic spline interpolation is performed. This stretching process is applied to the remote-sensing images in the range of 0 to 255, resulting in an enhanced image. Figure 5 shows the effect before and after stretching.
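The stretch can be implemented as a lookup table built from a cubic spline through the listed control points; a minimal sketch using SciPy:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Control points of the contrast stretch described in the text
_IN = np.array([0, 30, 60, 120, 190, 255], dtype=float)
_OUT = np.array([0, 110, 160, 210, 240, 255], dtype=float)

def stretch(image):
    """Contrast-stretch an 8-bit image via a cubic-spline lookup table."""
    spline = CubicSpline(_IN, _OUT)
    # Evaluate the spline at every 8-bit level and clip to the valid range
    lut = np.clip(np.rint(spline(np.arange(256))), 0, 255).astype(np.uint8)
    return lut[image]
```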
Daytime cloud imagery is created using satellite images captured at smaller solar zenith angles (i.e., under daytime illumination). Nighttime data in the reflectance channels have values of 0 and are not considered in this experiment. It is important to note that during label dataset creation, efforts should be made to mark the edges of MCS while minimizing the influence of other cloud systems such as stratocumulus and cirrus. The labeled MCS categories include isolated MCS, linear MCS, and composite MCS composed of multiple isolated MCSs. In our study, Fengyun-4B satellite data from June to August 2022 were selected to create the MCS dataset. The data from the last week of each month were used to create the test dataset, while data from the other time periods were used for the training and validation datasets. The temporal division of the dataset is shown in Table 2. This partitioning helps avoid interference from MCS with similar morphologies in adjacent data frames.
The dataset was generated as follows:
(1) For the Fengyun-4B imagery at a given moment, the data are first preprocessed and synthesized into a true color composite image (0.47/0.65/0.825 μm) according to the scheme above, and the water vapor channel (6.25 μm) and the longwave infrared channel (10.8 μm) are then extracted.
(2) Based on the calculation of the water vapor-LWIR channel difference, a dynamic threshold is set to obtain the rough extraction results of the MCS, and morphological processing is performed.
(3) Compare against the true color composite image, correct errors in the rough extraction results using GIS software (QGIS Desktop 3.22.14), obtain the finely extracted MCS, and complete the label production.
(4) Pair the true color composite image with the label to obtain the image and label of the corresponding moment.
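Step (2) above can be sketched as follows; the fixed threshold of 0 K and the 3 × 3 structuring elements are illustrative stand-ins for the dynamic threshold and the morphological processing described in the text:

```python
import numpy as np
from scipy import ndimage

def rough_mcs_mask(bt_wv, bt_ir, threshold=0.0):
    """Rough MCS extraction from the water vapor-LWIR brightness
    temperature difference.

    bt_wv: 6.25 um brightness temperature (K); bt_ir: 10.8 um (K).
    Deep convection pushes cloud tops toward the tropopause, so the
    WV-IR difference approaches or exceeds zero there. `threshold` is a
    fixed placeholder for the paper's dynamic threshold.
    """
    mask = (bt_wv - bt_ir) >= threshold
    # Morphological clean-up: opening removes speckle, closing fills gaps
    mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
    mask = ndimage.binary_closing(mask, structure=np.ones((3, 3)))
    return mask.astype(np.uint8)
```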
Finally, all the image sizes were set to 512 × 512 pixels. During the model training process, both continental MCS and oceanic MCS were considered. It is important to note that for validating the model’s ability to recognize continental and oceanic MCS, the test dataset was divided into continental MCS and oceanic MCS categories.

3.3. Swin-Unet Model and Experimental Environment

3.3.1. Swin-Unet

The existing semantic segmentation models primarily rely on fully convolutional neural networks (FCNs) and architectures such as Unet. These networks are characterized by a symmetric encoder–decoder structure with ‘skip’ connections. In the encoder, continuous convolution and pooling operations, along with downsampling, are employed to capture deep features with an extended receptive field. Subsequently, the decoder up-samples the extracted deep features to the original resolution for pixel-level predictions. The skip connections primarily serve to fuse high-resolution features from different scales in the encoder, mitigating the spatial information loss incurred during the downsampling process. The architecture of the Swin-Unet network model is illustrated in Figure 6. Differing from traditional convolutional neural networks (CNNs), Swin-Unet is based on a pure Transformer network. Its structure resembles that of the Unet, comprising four main components: encoder, bottleneck, decoder, and skip connections. The encoder, bottleneck, and decoder all consist of fundamental Swin-Transformer units. In the initial phase of the network, images are transformed into sequential inputs. If each patch size is set to 4 × 4 and considering the three channels (R, G, B), each patch, after undergoing linear embedding, results in a feature vector of dimensions 4 × 4 × 3 = 48.
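The patch partition and linear embedding step can be sketched as follows; the random projection matrix stands in for the learned embedding, and the embedding dimension of 96 is an illustrative choice rather than a value stated above:

```python
import numpy as np

def patch_embed(image, patch=4, embed_dim=96, seed=0):
    """Partition an HxWxC image into non-overlapping 4x4 patches and apply
    a linear embedding. Each RGB patch flattens to 4 * 4 * 3 = 48 values
    before projection; the random matrix stands in for the learned one."""
    h, w, c = image.shape
    # Split into (h/p, w/p) grid of p x p x c patches, then flatten each
    patches = image.reshape(h // patch, patch, w // patch, patch, c)
    patches = patches.transpose(0, 2, 1, 3, 4).reshape(-1, patch * patch * c)
    rng = np.random.default_rng(seed)
    weight = rng.standard_normal((patch * patch * c, embed_dim))
    return patches @ weight   # (num_patches, embed_dim) token sequence
```

For a 512 × 512 RGB input this yields 128 × 128 = 16,384 tokens, each of dimension `embed_dim`.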
Swin-Unet is organized as a four-level hierarchical structure, where each layer in the encoder comprises a Swin-Transformer module and a patch merging module, facilitating image downsampling. The Swin-Transformer module preserves the dimensions of the image after passing through it. To recognize MCS features at different scales, traditional convolutional neural networks (CNNs) usually employ convolution or pooling (max pooling, average pooling) for feature map downsampling. In the encoder, patch merging accomplishes a similar operation, reducing the image size by half and doubling the number of channels with each patch merging operation. In contrast, the decoder reverses this operation through patch expanding, performing image upsampling. The patch expanding module restores the downsampled feature maps to their original size and reduces the number of channels until the feature map matches the input image's dimensions. During upsampling, each Swin-Transformer block simultaneously receives inputs from low-resolution features and skip connections, complementing multi-scale features between the encoder and the decoder. This effectively mitigates the spatial position information lost during downsampling.
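Patch merging (halving resolution, doubling channels) can be sketched as follows; the random reduction matrix stands in for the learned linear layer:

```python
import numpy as np

def patch_merging(tokens, h, w, seed=0):
    """Swin-style patch merging: halve the token grid resolution and
    double the channel count. tokens: (h*w, C) grid of tokens.
    Each 2x2 neighborhood is concatenated (-> 4C), then linearly
    projected to 2C."""
    c = tokens.shape[1]
    x = tokens.reshape(h, w, c)
    merged = np.concatenate(
        [x[0::2, 0::2], x[1::2, 0::2], x[0::2, 1::2], x[1::2, 1::2]],
        axis=-1,
    ).reshape(-1, 4 * c)                       # (h/2 * w/2, 4C)
    rng = np.random.default_rng(seed)
    reduction = rng.standard_normal((4 * c, 2 * c))
    return merged @ reduction                  # (h/2 * w/2, 2C)
```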
As shown in Figure 7, the Swin-Transformer consists of two concatenated modules similar to the Transformer encoder module in Vision Transformer (ViT). In the Swin-Transformer, the multi-head self-attention (MSA) is replaced with window-based MSA (W-MSA) in the first structure and shifted window-based MSA (SW-MSA) in the second structure. SW-MSA achieves feature fusion between patches and different regions of patches by shifting the position of the patches. The shifting step size is typically half the size of a single window, allowing patches to interact with different neighboring regions for global feature fusion.
In the Swin-Unet network, a deep neural network with an encoder–decoder structure similar to Unet is constructed from Swin-Transformer blocks. In the encoder, a local-to-global self-attention mechanism is implemented, as defined by the self-attention formula in Equation (4). Finally, in the decoder, global features are upsampled to the input resolution, which corresponds to the size of the input true color composite images (512 × 512), enabling pixel-wise segmentation predictions. The Swin-Unet network incorporates patch expanding layers, avoiding traditional convolution or interpolation methods for upsampling and feature dimension augmentation. Experimental results demonstrate that the skip connection structure, similar to that in the Unet network, is also effective for Transformer networks, enabling accurate segmentation of MCS from input true color composite images.
Attention(Q, K, V) = SoftMax(QKᵀ/√d + B)V  (4)
In the formula, Q, K, and V represent the query, key, and value matrices, respectively, and d denotes the feature dimension. The dot product of the query and key, after SoftMax processing, yields a weight for each value; these weights multiply the corresponding values, which are then summed to obtain the final output. Leveraging the attention mechanism, the model can effectively comprehend MCS features within the input image, extract pivotal features, and enhance the precision of semantic segmentation.
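Equation (4) can be written out directly; in this sketch the relative position bias B is passed in as a precomputed matrix, and batching/multi-head splitting is omitted for clarity:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def window_attention(q, k, v, bias):
    """Equation (4): Attention(Q, K, V) = SoftMax(QK^T / sqrt(d) + B) V.

    q, k, v: (n_tokens, d) matrices for one window;
    bias: (n_tokens, n_tokens) relative position bias B.
    """
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d) + bias  # (n_tokens, n_tokens)
    return softmax(scores) @ v            # (n_tokens, d)
```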

3.3.2. Experimental Environment

The computer hardware used in the study includes an Intel Core i9-12900K CPU and an NVIDIA RTX A4000 GPU. The software stack consists of Python 3.8, PyTorch 1.13.1, and CUDA 11.7. The hyperparameters were configured with a batch size of 10, an initial learning rate of 0.001, and a total of 100 iterations.

3.4. Evaluation Metrics

The model's predictions and ground truth results are represented as binary images, where regions with a value of 1 represent MCS (mesoscale convective systems) and regions with a value of 0 represent non-MCS areas. TP (true positive) indicates cases where both the model and ground truth classify an area as MCS. FN (false negative) represents cases where the model classifies an area as non-MCS but the ground truth is MCS, indicating a missed detection. FP (false positive) represents cases where the model classifies an area as MCS but the ground truth is non-MCS, indicating a false alarm. TN (true negative) indicates cases where both the model and ground truth classify an area as non-MCS (Table 3).
Based on the definitions of TP, FN, FP, and TN, the model’s performance is evaluated using the following Equations (5)–(9).

3.4.1. Recall

The recall rate measures the model’s ability to correctly identify MCS and represents the model’s sensitivity in recognizing severe convective cloud clusters. It is defined in binary classification as the ratio of the number of correctly predicted positive samples by the model to the total number of actual positive samples:
Recall = TP / (TP + FN)  (5)

3.4.2. F1

The F1 score provides a comprehensive assessment of the model’s performance by balancing both recall and precision. It is calculated as the harmonic mean of recall and precision, aiming to strike a balance between the model’s sensitivity and accuracy in MCS prediction.
F1 = 2 × (Precision × Recall) / (Precision + Recall)  (6)
The formula to calculate precision is as follows:
Precision = TP / (TP + FP)  (7)

3.4.3. IoU

Intersection over union (IoU) is one of the commonly used evaluation metrics in deep-learning models. It calculates the overlap between the predicted region and the ground truth region. When the prediction and the ground truth are identical, the IoU value is 1.
IoU = TP / (TP + FP + FN)  (8)

3.4.4. FAR

The false alarm rate (FAR) refers to the ratio of the area in which the model predicts MCS but there are no actual MCS to the total predicted area.
FAR = FP / (TP + FP)  (9)
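Equations (5)–(9) can be computed together from a pair of binary masks; a minimal sketch:

```python
import numpy as np

def mcs_metrics(pred, truth):
    """Pixel-wise evaluation of binary MCS masks, Equations (5)-(9).

    pred, truth: arrays of 0/1 values, where 1 marks MCS pixels.
    Assumes at least one true positive and one predicted positive,
    so no denominator is zero.
    """
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)    # hit
    fp = np.sum(pred & ~truth)   # false alarm
    fn = np.sum(~pred & truth)   # miss
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    iou = tp / (tp + fp + fn)
    far = fp / (tp + fp)
    return {"recall": recall, "precision": precision,
            "f1": f1, "iou": iou, "far": far}
```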

4. Results

4.1. Swin-Unet Model Prediction Results for Continental MCS

To demonstrate the model's monitoring capability for MCS, we selected eight typical convective weather processes from the test dataset for model testing, including four continental MCS and four oceanic MCS. These satellite images were not involved in model training, allowing an objective evaluation of the model's ability to monitor MCS. In addition to showcasing the predictive capabilities of the Swin-Unet network, we selected FCN-8s, SegNet, and Unet as reference comparisons. After synthesizing the original data into true color images and processing them into the format required for model input, the output tiles were concatenated to obtain the predicted result images.
Figure 8 illustrates the monitoring capabilities of deep neural networks for continental MCS. The focus of the figure is on the East Asian region, particularly influenced by the summer monsoon, encompassing mainland China, Mongolia, select regions of India, and the northern part of the Indochina Peninsula. In the context of the four selected representative convective processes, Swin-Unet, SegNet, and Unet networks consistently capture the MCS positions accurately. Although FCN-8s is capable of detecting MCS, its precision is somewhat lower than the other three models, leading to more instances of both missed and misidentified MCS. In Figure 8a, FCN-8s fails to recognize two MCS in the Central Plains region and one convective cell in the South China region. SegNet and Unet exhibit a better recognition accuracy than FCN-8s but introduce misjudgments in the southwestern region. Notably, the Swin-Unet network accurately captures all four MCS in the mainland area. Moving to Figure 8b, FCN-8s, SegNet, and Unet make erroneous identifications of high-level clouds over the Shandong Peninsula. Swin-Unet, however, avoids this issue. In Figure 8c, Swin-Unet consistently outperforms the other three networks, particularly in correctly identifying the absence of MCS in high-latitude rainbands. Nevertheless, it exhibits misidentifications within the Sichuan province. Lastly, in Figure 8d, Swin-Unet provides predictions consistent with labels for sparsely distributed and relatively small-area MCS. In contrast, FCN-8s, SegNet, and Unet networks inaccurately identify MCS in the South China region.
Table 4 presents the evaluation metrics for these models, aiming to assess their ability to monitor MCS. For binary classification networks, IoU stands out as a crucial metric. Among the four models, Swin-Unet achieves the highest IoU at 57.46%, significantly outperforming the FCN-8s network. In terms of the recall metric, Swin-Unet and Unet exhibit a similar performance, both surpassing 0.83, indicating a high level of capability in MCS monitoring. The F1 score, which integrates both accuracy and recall, highlights Swin-Unet’s superiority with a considerably higher F1 score compared to the other three networks. Additionally, Swin-Unet attains the lowest FAR. Taken together, the comprehensive analysis suggests that Swin-Unet excels in the detection of continental MCS compared to the other three network architectures.

4.2. Swin-Unet Model Prediction Results for Oceanic MCS

Continental and oceanic MCS develop over different underlying surfaces, leading to inherent mechanistic differences. The figures primarily showcase the tropical ocean east of the Philippines, situated in a tropical convergence zone known for frequent tropical cloud clusters. To explore the models' monitoring capabilities for oceanic MCS, three random time instances and a satellite image featuring the powerful typhoon "Hinnamnor" were selected.
From the satellite remote-sensing images, it is evident that cloud systems over the ocean are more continuous and cover larger areas compared to land regions. In Figure 9a,b, Swin-Unet, Unet, SegNet, and FCN-8s all accurately monitor oceanic MCS. However, in Figure 9b, FCN-8s detects a smaller area of MCS in the eastern waters of Mindanao Island, Philippines, while the other three models accurately identify it. In Figure 9d, all four models recognize the super typhoon “Hinnamnor”, but FCN-8s and Unet fail to identify the convective cell on the southern side of the typhoon. In summary, except for the false alarm rate (FAR) metric, the Swin-Unet network, as employed in our study, achieves the best performance in oceanic MCS identification.
Comparing Table 4 and Table 5, it is evident that all models show a notable improvement in the recall, F1, and IoU metrics for oceanic MCS recognition, indicating a stronger capability in identifying oceanic MCS than continental MCS; in the binary classification scenario, the IoU rises from 57.46% to 71.65%. Unlike Swin-Unet, the FCN-8s, SegNet, and Unet networks are all based on convolutional neural networks (CNNs). A CNN's convolution kernels extract only local information and lack an understanding of the global context of the target object; together with the translation invariance of convolutions, this contributes to the inferior performance of the CNN-based networks in MCS detection. Swin-Unet, built on the Transformer architecture, models global relationships and captures connections between objects across the satellite image, and its attention mechanism allows a more effective extraction of MCS features. Consequently, the Swin-Unet network outperforms the CNN-based networks in recognizing both continental and oceanic MCS, showcasing the advantages of the Transformer structure for this task.
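The key ingredient that gives Swin-Unet this global modeling ability at manageable cost is window-based self-attention with a cyclic shift between successive blocks. The following numpy sketch is illustrative only (function names are ours, not the authors' implementation): attention tokens are formed within non-overlapping windows, and shifting the feature map lets information cross window boundaries in the next block.

```python
import numpy as np

def window_partition(feature_map, window_size):
    """Split an (H, W, C) feature map into non-overlapping windows of
    window_size x window_size tokens; self-attention is then computed
    independently inside each window."""
    H, W, C = feature_map.shape
    ws = window_size
    x = feature_map.reshape(H // ws, ws, W // ws, ws, C)
    x = x.transpose(0, 2, 1, 3, 4)        # group each window's rows and columns
    return x.reshape(-1, ws * ws, C)      # (num_windows, tokens_per_window, C)

def shift_windows(feature_map, window_size):
    """Cyclic shift applied before the second attention block of a Swin stage,
    so that the new window partition straddles the old window boundaries."""
    s = window_size // 2
    return np.roll(feature_map, shift=(-s, -s), axis=(0, 1))
```

Alternating plain and shifted windows is what lets the hierarchical Transformer connect distant cloud regions that a fixed local convolution kernel cannot relate directly.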

4.3. Comparative Analysis of Continental MCS and Oceanic MCS for the Test Dataset

According to the test dataset division described in Section 3.2, with Fengyun-4B’s time resolution of 15 min and a total of 785 full-disk images, the data were interpolated to the study area. We then fed the data into the pre-trained Swin-Unet network, resulting in the model’s predicted MCS outcomes. Applying the MCS classification criteria [23], a total of 168 continental MCS samples and 2702 oceanic MCS samples were identified. After processing the MCS result data into binary images, we computed the MCS occurrence frequency in the test dataset (Figure 10). The study reveals that MCS are widely distributed in the East Asia region, with oceanic MCS showing higher occurrence frequencies compared to continental MCS. High-frequency areas for oceanic MCS are mainly located in the maritime region between 0° and 25°N. This includes the western Pacific, the Philippines, the South China Sea, and the waters east of Malaysia. Continental MCS, on the other hand, are primarily found in southern and central China, corresponding to the monsoon belt, but their frequency of occurrence is lower than that of oceanic MCS. Additionally, continental MCS are also prevalent in the Indochina Peninsula, northwestern India, and parts of Pakistan.
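The occurrence-frequency map in Figure 10 can be obtained by averaging the binary prediction masks over time, and the intervals of Figure 11 by binning that frequency. A minimal sketch (helper names and the exact binning call are our assumptions; the 5-30% edges follow the intervals used in this section):

```python
import numpy as np

def occurrence_frequency(mask_stack):
    """Per-pixel MCS occurrence frequency from a (T, H, W) stack of binary
    masks (1 where MCS was detected): the fraction of images in which each
    pixel was covered by an MCS."""
    return mask_stack.mean(axis=0)

def frequency_bands(freq, edges=(0.05, 0.10, 0.20, 0.30)):
    """Assign each pixel to a frequency interval: 0 = below 5%, 1 = 5-10%,
    2 = 10-20%, 3 = 20-30%, 4 = 30% and above."""
    return np.digitize(freq, edges)
```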
As shown in Figure 11, the study divides MCS occurrence frequencies into different intervals for comparative analysis: regions with occurrence frequencies above 30% (Figure 11a), from 20% to 30% (Figure 11b), from 10% to 20% (Figure 11c), and from 5% to 10% (Figure 11d). During the study period, the southern Bay of Bengal is identified as a high-frequency MCS occurrence region. Additionally, areas east and west of the Philippines exhibit MCS occurrence frequencies exceeding 20%, making them significant regions for future research on oceanic MCS. In Figure 11c, oceanic MCS are distributed along tropical convergence regions in an approximately equidistant pattern, and northern India, Pakistan, and northern Vietnam have MCS occurrence frequencies similar to those of oceanic MCS. Because the analysis requires that MCS actually occur in the mapped areas, a minimum frequency threshold of 5% is applied. Figure 11d shows that oceanic MCS have a wider distribution than continental MCS. Continental MCS are predominantly located on the northern side of the subtropical high-pressure system, forming a belt extending from Chongqing to Jiangsu Province in China; MCS activity is also evident in the North Korean region.
We conducted a statistical analysis of the continental MCS and oceanic MCS cases, focusing on two main physical properties: the number of pixels in each individual MCS and the average value of its coldest 25% of pixels [52]. Figure 12 compares these two properties between continental MCS and oceanic MCS. In terms of pixel count, oceanic MCS typically cover a larger area than continental MCS: the upper quartile range of pixel counts for oceanic MCS spans 620 to 1100 pixels (the Fengyun-4B spatial resolution is 4 km), compared with 500 to 710 pixels for continental MCS, and the median pixel counts are 466 and 380 pixels, respectively. Representing MCS intensity by the average value of the coldest 25% of pixels is a recommended practice; we calculated the brightness temperature for each MCS using AGRI Channel 13. As shown in Figure 12b, continental MCS and oceanic MCS in the test dataset exhibit comparable intensities, with median values of 228.51 K and 227.18 K, respectively. Owing to the stronger latent heat release over oceanic areas, the minimum brightness temperature in the test dataset is 190.9 K for oceanic MCS versus 197 K for continental MCS; in some cases, oceanic MCS thus reach lower brightness temperatures than continental MCS.
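The intensity proxy used above, the mean of the coldest 25% of an MCS's pixels, can be computed as follows (an illustrative sketch; the function name and the rounding of non-divisible pixel counts are our assumptions):

```python
import numpy as np

def coldest_quartile_mean(bt_pixels):
    """Mean brightness temperature (K) of the coldest 25% of pixels inside a
    single MCS; `bt_pixels` is a 1-D array of Channel-13 brightness
    temperatures for that cloud cluster."""
    n = max(1, int(np.ceil(0.25 * bt_pixels.size)))  # at least one pixel
    return float(np.sort(bt_pixels)[:n].mean())
```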

5. Case Study

To verify the model’s practical applicability in MCS monitoring, we selected the continuous time period from 00:00 to 02:00 UTC on 24 June 2022, to assess the model’s monitoring capability for the movement of MCS cloud clusters. Figure 13 displays the positions of MCS during this continuous time period, including the labels, Swin-Unet’s prediction results, and the results obtained using the longwave infrared brightness temperature threshold method with a brightness temperature threshold of 241 K. During the verification process, we determined the MCS extent from the visible light synthetic cloud images and calculated relevant features based on the 10.8 μm brightness temperature values. These features included the average brightness temperature of the coldest 25% of pixels within individual MCS cloud clusters and the centroids of the cloud clusters. The results indicate that the model’s prediction, the labels, and the results obtained from the longwave infrared brightness temperature threshold method accurately identify the MCS positions within the cloud system. When observing the true color composite cloud image, it is evident that this convective system is embedded within a continuous cloud cover. In the visible light image, it displays a unique texture structure and exhibits distinct hierarchical relationships with the surrounding cloud regions.
In this study, we conducted parameter statistical analyses only on the target MCS. During the movement of the MCS, we calculated the centroids of the MCS regions monitored by the three methods. The centroid is calculated using Formula (10), where $x_0$ and $y_0$ represent the longitude and latitude coordinates of the centroid within the MCS, and $T_i$ represents the brightness temperature at $(x_i, y_i)$ for Channel 13. The results indicated that the MCS exhibited a northwest-to-southeast movement trend. The conclusions obtained from the three methods were similar, and after fitting the path data, all three methods achieved an $R^2$ value greater than 0.95 (Figure 14).
$$x_0 = \frac{\sum_{i=1}^{n} x_i T_i}{\sum_{i=1}^{n} T_i}, \qquad y_0 = \frac{\sum_{i=1}^{n} y_i T_i}{\sum_{i=1}^{n} T_i} \quad (10)$$
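Formula (10) can be implemented directly. This sketch (the function name is ours) weights each pixel's longitude and latitude by its Channel-13 brightness temperature, exactly as the formula is stated in the text:

```python
import numpy as np

def bt_weighted_centroid(lon, lat, bt):
    """Brightness-temperature-weighted centroid of an MCS region per
    Formula (10): x0 = sum(x_i * T_i) / sum(T_i), and y0 likewise, where
    `lon`, `lat`, `bt` are 1-D arrays over the pixels inside the MCS mask."""
    w = bt.sum()
    return float((lon * bt).sum() / w), float((lat * bt).sum() / w)
```

Computing this centroid at each time step for the Label, Swin-Unet, and threshold-method masks yields the movement paths fitted in Figure 14.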
Cloud area and the brightness temperature values inside the cloud can be used to measure the size and intensity of an individual MCS. In this study, we computed the area and the average brightness temperature of the coldest 25% of pixels within the MCS for eight time steps; given that the Fengyun-4B AGRI data have a resolution of 4 km, the resulting changes are shown in Figure 15 and Figure 16. In the area analysis, the choice of brightness temperature threshold in the infrared threshold method affects the recognized MCS area. With a threshold of 241 K, the resulting area is larger than the Label and Swin-Unet results at every time step, and it increases continuously before decreasing only at the final time step. In contrast, the Label and Swin-Unet results both indicate that the cloud area underwent a process of maintenance, weakening, strengthening, and weakening during development. It is important to note a limitation of the infrared threshold method: a higher threshold yields a larger area and may include warmer parts of some high cloud types, so selecting an appropriate threshold for MCS monitoring is a crucial aspect of future research. Regarding the change in the average brightness temperature of the coldest 25% of pixels, all three methods led to similar conclusions: the brightness temperature increased in the initial stages, indicating a weakening of MCS intensity, and decreased at the final time step, indicating a strengthening of MCS intensity.
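Given AGRI's 4 km grid, the pixel counts above convert to physical areas with a one-line helper (the function name is ours):

```python
def mcs_area_km2(pixel_count, resolution_km=4.0):
    """Convert an MCS pixel count to area (km^2) on AGRI's 4 km grid."""
    return pixel_count * resolution_km ** 2
```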
The GPM 3IMERGHH data were also used for precipitation analysis of this MCS activity. This product provides precipitation intensity in mm/h, offering valuable verification for identifying areas dominated by convective precipitation. In Figure 17, the MCS positions at 00:15, 00:45, 01:15, and 01:45 UTC are overlaid on the GPM product. The figure shows that this MCS movement brought a precipitation event, and the MCS positions agree strongly with the GPM 3IMERGHH data: within the target cloud clusters, the GPM data show precipitation intensities exceeding 10 mm/h, with a very small region exceeding 25 mm/h. This indicates that the convective system within this MCS contributed to precipitation in the area.

6. Discussion

It is worth noting that MCS identification based on true color composite cloud imagery can be extended to other satellites and sensors. Firstly, because only about one year of Fengyun-4B observations is currently available, it is not yet possible to train the Swin-Unet network on a large-scale dataset; further data collection and sample dataset creation will be needed to achieve more accurate MCS identification. Secondly, this study has focused on qualitative analysis, primarily investigating whether MCS can be identified from true color composite cloud imagery; the current results indicate that the method identifies MCS in areas of vigorous convection, influenced by the labeling process. Additionally, to ensure the reliability of the results, more retrieval data and comparisons with ground-based weather radar and precipitation data are required. This will help assess the applicability and accuracy of the method in MCS-monitoring research and provide more reliable data for future studies.
In this paper, we employed the Swin-Unet network to investigate the performance of Transformer models in identifying MCS from true color composite cloud imagery. After training with labels defined on a summer dataset, the Swin-Unet model achieved good results. However, for operational MCS monitoring, algorithms developed for the infrared wavelengths remain essential, since they offer both day and night observation capability; applying Transformer models to infrared wavelengths and to different wavelength combinations is therefore a worthwhile direction for future work. Additionally, Transformer models may perform only on par with CNNs when sample sizes are small; with more data provided by Fengyun-4B, we can build a larger MCS sample dataset to train the Swin-Unet network. Since the Fengyun-4B Geostationary Hyperspectral Imager (GHI) lacks a water vapor band, MCS monitoring with it currently relies on longwave infrared data, and the method proposed in this paper can provide a reference for GHI-based MCS monitoring.

7. Conclusions

In this study, we utilized remote-sensing data from Fengyun-4B, China’s latest generation of geostationary meteorological satellite, to create a manually annotated MCS dataset with higher accuracy. We employed the Swin-Unet network, comparing it with other convolutional neural networks, and found that the Swin-Unet network achieved the best results in MCS recognition.
Computer pattern recognition methods for monitoring MCS from true color composite cloud imagery have been lacking, and there is ongoing debate on whether MCS should be differentiated into continental and oceanic MCS. Training the Swin-Unet network on our dataset, we achieved an IoU of 57.46% for continental MCS and 71.65% for oceanic MCS. The better performance for oceanic MCS is likely due to their relatively simpler structure, whereas continental MCS are often embedded within layered clouds, leading to a lower IoU. We also analyzed the distribution frequency of continental and oceanic MCS in the test set and found that oceanic MCS have both a wider distribution range and a higher occurrence frequency than continental MCS. In the statistical analysis, oceanic MCS are more likely to reach a larger area, and some oceanic MCS reach colder brightness temperatures, i.e., stronger convection. In the case study, we tracked a single MCS, calculating the cloud cluster area and the average brightness temperature of the coldest 25% of pixels at each time step, and found good agreement between the MCS locations predicted by the Swin-Unet network and the GPM precipitation data. The results indicate that the Swin-Unet network can recognize MCS from true color composite cloud imagery, providing a new approach for MCS monitoring using visible light data when longwave infrared data are unavailable. In future research, we hope to develop a faster MCS-monitoring method based on the Fengyun-4B Geostationary Hyperspectral Imager (GHI) sensor.

Author Contributions

Methodology, R.X. and T.X.; software, T.X.; validation, S.B. and C.W.; formal analysis, J.L.; data curation, X.Z.; writing—original draft preparation, M.W.; writing—review and editing, J.L.; project administration, R.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key Research and Development Program (No. 2022YFC3004200), the National Natural Science Foundation of China under Grant 42176180, and the National Key Research and Development Program (No. 2022YFC3104900).

Data Availability Statement

Publicly available datasets were analyzed in this study. These data can be found here: (1) Fengyun-4B satellite data provided by the China Meteorological Administration, National Satellite Meteorological Center, available at http://satellite.nsmc.org.cn/PortalSite/Data/DataView.aspx?currentculture=zh-CN, accessed on 1 June 2023. (2) GPM 3IMERGHH data provided by the National Aeronautics and Space Administration, available at https://disc.gsfc.nasa.gov/datasets/GPM_3IMERGHH_07/summary, accessed on 10 November 2023.

Acknowledgments

The authors would like to thank the China Meteorological Administration and the National Satellite Meteorological Centre for the Fengyun-4B AGRI L1 data, and NASA and JAXA for the GPM 3IMERGHH data.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Maddox, R.A. Mesoscale convective complexes. Bull. Am. Meteorol. Soc. 1980, 61, 1374–1387. [Google Scholar] [CrossRef]
  2. Anderson, C.J.; Arritt, R.W. Mesoscale convective complexes and persistent elongated convective systems over the United States during 1992 and 1993. Mon. Weather. Rev. 1998, 126, 578–599. [Google Scholar] [CrossRef]
  3. Jirak, I.L.; Cotton, W.R.; McAnelly, R.L. Satellite and radar survey of mesoscale convective system development. Mon. Weather. Rev. 2003, 131, 2428–2449. [Google Scholar] [CrossRef]
  4. Schumacher, R.S.; Rasmussen, K.L. The formation, character and changing nature of mesoscale convective systems. Nat. Rev. Earth Environ. 2020, 1, 300–314. [Google Scholar] [CrossRef]
  5. Augustine, J.A.; Howard, K.W. Mesoscale convective complexes over the United States during 1986 and 1987. Mon. Weather. Rev. 1991, 119, 1575–1589. [Google Scholar] [CrossRef]
  6. Feng, Z.; Leung, L.R.; Liu, N.; Wang, J.; Houze Jr, R.A.; Li, J.; Hardin, J.C.; Chen, D.; Guo, J. A global high-resolution mesoscale convective system database using satellite-derived cloud tops, surface precipitation, and tracking. J. Geophys. Res. Atmos. 2021, 126, e2020JD034202. [Google Scholar] [CrossRef]
  7. Wang, D.; Giangrande, S.E.; Feng, Z.; Hardin, J.C.; Prein, A.F. Updraft and downdraft core size and intensity as revealed by radar wind profilers: MCS observations and idealized model comparisons. J. Geophys. Res. Atmos. 2020, 125, e2019JD031774. [Google Scholar] [CrossRef]
  8. Yanase, W.; Shimada, U.; Kitabatake, N.; Tochimoto, E. Tropical Transition of Tropical Storm Kirogi (2012) over the Western North Pacific: Synoptic Analysis and Mesoscale Simulation. Mon. Weather. Rev. 2023, 151, 2549–2572. [Google Scholar] [CrossRef]
  9. Ryzhkov, A.; Zrnic, D. Observations of a MCS with a dual-polarization radar. In Proceedings of IGARSS '94—1994 IEEE International Geoscience and Remote Sensing Symposium, Pasadena, CA, USA, 8–12 August 1994; pp. 375–377. [Google Scholar]
  10. Hagen, M.; Schiesser, H.-H.; Dorninger, M. Monitoring of mesoscale precipitation systems in the Alps and the northern Alpine foreland by radar and rain gauges. Meteorol. Atmos. Phys. 2000, 72, 87–100. [Google Scholar] [CrossRef]
  11. Roberts, R.D.; Rutledge, S. Nowcasting storm initiation and growth using GOES-8 and WSR-88D data. Weather. Forecast. 2003, 18, 562–584. [Google Scholar] [CrossRef]
  12. Roberts, R.D.; Anderson, A.R.; Nelson, E.; Brown, B.G.; Wilson, J.W.; Pocernich, M.; Saxen, T. Impacts of forecaster involvement on convective storm initiation and evolution nowcasting. Weather. Forecast. 2012, 27, 1061–1089. [Google Scholar] [CrossRef]
  13. Walker, J.R.; MacKenzie, W.M.; Mecikalski, J.R.; Jewett, C.P. An Enhanced Geostationary Satellite–Based Convective Initiation Algorithm for 0–2-h Nowcasting with Object Tracking. J. Appl. Meteorol. Climatol. 2012, 51, 1931–1949. [Google Scholar] [CrossRef]
  14. Zhuge, X.; Zou, X. Summertime convective initiation nowcasting over southeastern China based on Advanced Himawari Imager observations. J. Meteorol. Soc. Japan Ser. II 2018, 96, 337–353. [Google Scholar] [CrossRef]
  15. Sun, H.; Wang, H.; Yang, J.; Zeng, Y.; Zhang, Q.; Liu, Y.; Gu, J.; Huang, S. Improving Forecast of Severe Oceanic Mesoscale Convective Systems Using FY-4A Lightning Data Assimilation with WRF-FDDA. Remote Sens. 2022, 14, 1965. [Google Scholar] [CrossRef]
  16. Zhang, X.; Shen, W.; Zhuge, X.; Yang, S.; Chen, Y.; Wang, Y.; Chen, T.; Zhang, S. Statistical characteristics of mesoscale convective systems initiated over the Tibetan Plateau in summer by Fengyun satellite and precipitation estimates. Remote Sens. 2021, 13, 1652. [Google Scholar] [CrossRef]
  17. Hui, W.; Guo, Q. Preliminary characteristics of measurements from Fengyun-4A Lightning Mapping Imager. Int. J. Remote Sens. 2021, 42, 4922–4941. [Google Scholar] [CrossRef]
  18. Hidayat, A.; Efendi, U.; Rahmadini, H.; Nugraheni, I. The Characteristics of squall line over Indonesia and its vicinity based on Himawari-8 satellite imagery and radar data interpretation. Proc. IOP Conf. Ser. Earth Environ. Sci. 2019, 303, 012059. [Google Scholar] [CrossRef]
  19. Chen, D.; Guo, J.; Yao, D.; Lin, Y.; Zhao, C.; Min, M.; Xu, H.; Liu, L.; Huang, X.; Chen, T. Mesoscale convective systems in the Asian monsoon region from Advanced Himawari Imager: Algorithms and preliminary results. J. Geophys. Res. Atmos. 2019, 124, 2210–2234. [Google Scholar] [CrossRef]
  20. Vila, D.A.; Machado, L.A.T.; Laurent, H.; Velasco, I. Forecast and Tracking the Evolution of Cloud Clusters (ForTraCC) using satellite infrared imagery: Methodology and validation. Weather. Forecast. 2008, 23, 233–245. [Google Scholar] [CrossRef]
  21. Song, F.; Feng, Z.; Leung, L.R.; Pokharel, B.; Wang, S.Y.S.; Chen, X.; Sakaguchi, K.; Wang, C.c. Crucial roles of eastward propagating environments in the summer MCS initiation over the US Great Plains. J. Geophys. Res. Atmos. 2021, 126, e2021JD034991. [Google Scholar] [CrossRef]
  22. Chen, S.-J.; Lee, D.-K.; Tao, Z.-Y.; Kuo, Y.-H. Mesoscale convective system over the yellow sea–a numerical case study. Meteorol. Atmos. Phys. 1999, 70, 185–199. [Google Scholar] [CrossRef]
  23. Zengping, F.; Yongguang, Z.; Yan, Z.; Hongqing, W. MCS census and modification of MCS definition based on geostationary satellite infrared imagery. J. Appl. Meteorol. Sci. 2008, 19, 82–90. [Google Scholar]
  24. Murakami, M. Analysis of the deep convective activity over the western Pacific and southeast Asia Part I: Diurnal variation. J. Meteorol. Soc. Japan. Ser. II 1983, 61, 60–76. [Google Scholar] [CrossRef]
  25. Fu, R.; Del Genio, A.D.; Rossow, W.B. Behavior of deep convective clouds in the tropical Pacific deduced from ISCCP radiances. J. Clim. 1990, 3, 1129–1152. [Google Scholar] [CrossRef]
  26. Hall, T.J.; Haar, T.H.V. The diurnal cycle of west Pacific deep convection and its relation to the spatial and temporal variation of tropical MCSs. J. Atmos. Sci. 1999, 56, 3401–3415. [Google Scholar] [CrossRef]
  27. Schmetz, J.; Tjemkes, S.; Gube, M.; Van de Berg, L. Monitoring deep convection and convective overshooting with METEOSAT. Adv. Space Res. 1997, 19, 433–441. [Google Scholar] [CrossRef]
  28. Setvák, M.; Rabin, R.M.; Wang, P.K. Contribution of the MODIS instrument to observations of deep convective storms and stratospheric moisture detection in GOES and MSG imagery. Atmos. Res. 2007, 83, 505–518. [Google Scholar] [CrossRef]
  29. Zheng, Y.; Yang, X.; Li, Z. Detection of severe convective cloud over sea surface from geostationary meteorological satellite images based on deep learning. J. Remote Sens. (Chin.) 2020, 24, 97–106. [Google Scholar] [CrossRef]
  30. Yang, Y.; Wu, X.; Wang, X. The sea-land characteristics of deep convections and convective overshootings over China sea and surrounding areas based on the CloudSat and FY-2E datasets. Acta Meteorol. Sin. 2019, 77, 256–267. [Google Scholar]
  31. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. arXiv 2017, arXiv:1706.03762. [Google Scholar]
  32. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929. [Google Scholar]
  33. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 10012–10022. [Google Scholar]
  34. Chen, J.; Lu, Y.; Yu, Q.; Luo, X.; Adeli, E.; Wang, Y.; Lu, L.; Yuille, A.L.; Zhou, Y. Transunet: Transformers make strong encoders for medical image segmentation. arXiv 2021, arXiv:2102.04306. [Google Scholar]
  35. Cao, H.; Wang, Y.; Chen, J.; Jiang, D.; Zhang, X.; Tian, Q.; Wang, M. Swin-unet: Unet-like pure transformer for medical image segmentation. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; pp. 205–218. [Google Scholar]
  36. Min, M.; Wu, C.; Li, C.; Liu, H.; Xu, N.; Wu, X.; Chen, L.; Wang, F.; Sun, F.; Qin, D. Developing the science product algorithm testbed for Chinese next-generation geostationary meteorological satellites: Fengyun-4 series. J. Meteorol. Res. 2017, 31, 708–719. [Google Scholar] [CrossRef]
  37. Sun, F.; Li, B.; Min, M.; Qin, D. Deep Learning-Based Radar Composite Reflectivity Factor Estimations from Fengyun-4A Geostationary Satellite Observations. Remote Sens. 2021, 13, 2229. [Google Scholar] [CrossRef]
  38. Zhang, X.; Xu, D.; Liu, R.; Shen, F. Impacts of FY-4A AGRI Radiance Data Assimilation on the Forecast of the Super Typhoon “In-Fa” (2021). Remote Sens. 2022, 14, 4718. [Google Scholar]
  39. Kong, X.; Jiang, Z.; Ma, M.; Chen, N.; Chen, J.; Shen, X.; Bai, C. The temporal and spatial distribution of sea fog in offshore of China based on FY-4A satellite data. Proc. J. Phys. Conf. Ser. 2023, 2486, 012015. [Google Scholar] [CrossRef]
  40. Wang, W.; Qu, P.; Liu, Z. Mesoscale Characteristics Analysis of a Fog Case with Complete Weather Information Derived from FengYun-4A Data. Meteorol. Environ. Res. 2022, 13, 10–27. [Google Scholar]
  41. Yi, L.; Li, M.; Liu, S.; Shi, X.; Li, K.-F.; Bendix, J. Detection of dawn sea fog/low stratus using geostationary satellite imagery. Remote Sens. Environ. 2023, 294, 113622. [Google Scholar] [CrossRef]
  42. Sun, J.; Zhao, S.; Xu, G.; Meng, Q. Study on a mesoscale convective vortex causing heavy rainfall during the Mei-yu season in 2003. Adv. Atmos. Sci. 2010, 27, 1193–1209. [Google Scholar] [CrossRef]
  43. Zhang, T.; Lin, W.; Lin, Y.; Zhang, M.; Yu, H.; Cao, K.; Xue, W. Prediction of tropical cyclone genesis from mesoscale convective systems using machine learning. Weather. Forecast. 2019, 34, 1035–1049. [Google Scholar] [CrossRef]
  44. Yihui, D.; Chan, J.C. The East Asian summer monsoon: An overview. Meteorol. Atmos. Phys. 2005, 89, 117–142. [Google Scholar] [CrossRef]
  45. Xu, X.; Xue, M.; Wang, Y.; Huang, H. Mechanisms of secondary convection within a Mei-Yu frontal mesoscale convective system in eastern China. J. Geophys. Res. Atmos. 2017, 122, 47–64. [Google Scholar] [CrossRef]
  46. May, P.T.; Mather, J.H.; Vaughan, G.; Jakob, C. Characterizing oceanic convective cloud systems: The tropical warm pool international cloud experiment. Bull. Am. Meteorol. Soc. 2008, 89, 153–155. [Google Scholar] [CrossRef]
  47. Wang, K.; Chen, G.; Bi, X.; Shi, D.; Chen, K. Comparison of convective and stratiform precipitation properties in developing and nondeveloping tropical disturbances observed by the Global Precipitation Measurement over the western North Pacific. J. Meteorol. Soc. Japan. Ser. II 2020, 98, 1051–1067. [Google Scholar] [CrossRef]
  48. Jun, L.; Bin, W.; Dong-Hai, W. The characteristics of mesoscale convective systems (MCSs) over East Asia in warm seasons. Atmos. Ocean. Sci. Lett. 2012, 5, 102–107. [Google Scholar] [CrossRef]
  49. Gumley, L.; Descloitres, J.; Schmaltz, J. Creating Reprojected True Color MODIS Images: A Tutorial; University of Wisconsin–Madison: Madison, WI, USA, 2003; p. 19. [Google Scholar]
  50. Zhuge, X.-Y.; Zou, X.; Wang, Y. A fast cloud detection algorithm applicable to monitoring and nowcasting of daytime cloud systems. IEEE Trans. Geosci. Remote Sens. 2017, 55, 6111–6119. [Google Scholar] [CrossRef]
  51. Yan, J.; Qu, J.; Zhang, F.; Guo, X.; Wang, Y. Study on multi-dimensional dynamic hybrid imaging technology based on FY-4A/AGRI. J. Meteorol. Environ. 2022, 38, 98–105. [Google Scholar] [CrossRef]
  52. Huang, Y.; Zhang, M. Contrasting Mesoscale Convective System Features of Two Successive Warm-Sector Rainfall Episodes in Southeastern China: A Satellite Perspective. Remote Sens. 2022, 14, 5434. [Google Scholar] [CrossRef]
Figure 1. Research methodology.
Figure 2. Study area, including both land and ocean regions.
Figure 3. Characteristics and annotation illustration of MCS in different spectral bands, with the red lines indicating the MCS boundary range. (a) True color composite image, (b) longwave infrared channel image, (c) water vapor channel image, (d) enhanced longwave infrared image.
Figure 4. Comparison of spectral response functions for Fengyun-4A/B AGRI Channel 01, Channel 02, and Channel 03.
Figure 5. Comparison of full-disk images from Fengyun-4B AGRI sensor. (a) Before stretching. (b) After stretching.
Figure 6. Swin-Unet network model architecture.
Figure 7. The Swin-Transformer architecture.
Figure 8. The results of different neural network models for continental MCS recognition. Red pixels represent label values, while blue pixels represent the predictions of FCN-8s, SegNet, Unet, and Swin-Unet. (a) 26 June 2022 02:30 UTC, (b) 26 July 2022 02:15 UTC, (c) 27 July 2022 07:00 UTC, (d) 29 July 2022 05:45 UTC.
Figure 9. The recognition results of different neural network models for oceanic MCS: (a) 23 June 2022 00:45 UTC, (b) 26 June 2022 01:15 UTC, (c) 24 July 2022 01:15 UTC, (d) 29 August 2022 03:15 UTC. In the figures, red pixels represent the ground truth (label values), while blue pixels represent the predicted values by FCN-8s, SegNet, Unet, and Swin-Unet.
Figure 10. Frequency distribution of MCS occurrence in the study area.
Figure 11. Distribution of MCS in different frequency intervals. (a) Occurrence frequency ≥ 30%. (b) 30% > Occurrence frequency ≥ 20%. (c) 20% > Occurrence frequency ≥ 10%. (d) Occurrence frequency < 10%.
Figure 12. Box plots for continental and oceanic MCS: (a) pixel counts; (b) mean brightness temperature of the coldest 25% of pixels.
Figure 13. The MCS-monitoring situation from 24 June 2022 00:00 to 24 June 2022 02:00 (UTC). In the figure: (a) Label: Red boundaries represent the MCS extent. (b) Swin-Unet predictions: Blue boundaries represent the MCS extent. (c) Brightness temperature threshold method at 241 K: Purple boundaries represent the MCS extent.
Figure 14. The movement path of the cloud cluster’s centroid calculated using the Label, Swin-Unet, and the brightness temperature threshold method.
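A centroid path like the one in Figure 14 can be computed by averaging the pixel coordinates of the detected MCS mask at each 15-min time step. The sketch below assumes a single cloud cluster per frame; a full tracker would additionally label connected components and match clusters across consecutive frames:

```python
import numpy as np

def mcs_centroid(mask):
    """Return the (row, col) centroid of a binary MCS mask."""
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

def centroid_path(masks):
    """Centroid at each time step, in temporal order, for a mask sequence."""
    return [mcs_centroid(m) for m in masks]
```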
Figure 15. Variation in the area of a specific MCS during its development from 00:00 to 02:00 UTC on 24 June 2022.
Figure 16. Evolution of the mean brightness temperature of the coldest 25% of pixels during the development of a specific MCS from 00:00 to 02:00 UTC on 24 June 2022.
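The statistic tracked in Figure 16 (and summarized in Figure 12b) — the mean brightness temperature of the coldest 25% of MCS pixels — can be computed as below. Taking the floor of n/4 pixels (with a minimum of one) is an assumption of this sketch, since the paper's exact quartile handling is not shown in this excerpt:

```python
import numpy as np

def coldest_quartile_mean(bt, mask):
    """Mean brightness temperature (K) of the coldest 25% of pixels in a mask.

    `bt` is a 2-D brightness-temperature field; `mask` is the boolean MCS
    footprint. The floor(n/4) quartile size is an illustrative assumption.
    """
    vals = np.sort(bt[mask])          # ascending: coldest pixels first
    n = max(1, vals.size // 4)        # number of pixels in the coldest quartile
    return float(vals[:n].mean())
```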
Figure 17. Comparison with GPM precipitation data within 15 min before and after each observation: (a) 24 June 2022 00:15 UTC, (b) 24 June 2022 00:45 UTC, (c) 24 June 2022 01:15 UTC, (d) 24 June 2022 01:45 UTC.
Table 1. Comparison of Fengyun-4A/B AGRI sensor band configurations.
| Spectrum | FY-4A Channel | FY-4A Central Wavelength (μm) | FY-4B Channel | FY-4B Central Wavelength (μm) | Main Application |
|---|---|---|---|---|---|
| VIS/NIR | 1 | 0.47 | 1 | 0.47 | Aerosols, true color synthesis |
| | 2 | 0.65 | 2 | 0.65 | True color synthesis |
| | 3 | 0.825 | 3 | 0.825 | True color synthesis |
| SWIR | 4 | 1.375 | 4 | 1.379 | Cirrus |
| | 5 | 1.61 | 5 | 1.61 | Distinguishing low clouds and snow; cloud phase separation |
| | 6 | 2.25 | 6 | 2.25 | Cirrus, aerosols |
| MIR | 7 | 3.75 (high) | 7 | 3.75 | High-albedo targets, fire points |
| | 8 | 3.75 (low) | 8 | 3.75 | Low-albedo targets, surface |
| Water Vapor | 9 | 6.25 | 9 | 6.25 | High-level water vapor |
| | 10 | 7.1 | 10 | 6.95 | Middle-level water vapor |
| | — | — | 11 | 7.42 | Low-level water vapor |
| LWIR | 11 | 8.5 | 12 | 8.55 | Clouds |
| | 12 | 10.7 | 13 | 10.8 | Clouds, LST |
| | 13 | 12.0 | 14 | 12.0 | Clouds, water vapor content, LST |
| | 14 | 13.5 | 15 | 13.3 | Clouds, water vapor |
Table 2. Time division of the dataset.
| Train/Valid Dataset | Train Clip Number | Test Dataset |
|---|---|---|
| 1–22 June 2022 | 18,076 clips (512 × 512 pixels, continental/oceanic MCS) | 23–30 June 2022 |
| 1–23 July 2022 | | 24–31 July 2022 |
| 1–22 August 2022 | | 23–31 August 2022 |
Table 3. Confusion matrix for MCS prediction results.
| | Prediction MCS | Prediction Non-MCS |
|---|---|---|
| Label MCS | TP (true positive) | FN (false negative) |
| Label non-MCS | FP (false positive) | TN (true negative) |
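From the confusion-matrix counts above, the evaluation metrics reported in Tables 4 and 5 can be computed as follows. FAR is taken here as the false alarm ratio FP / (TP + FP); this is an assumption, since the paper's exact definition is not shown in this excerpt:

```python
def mcs_metrics(tp, fp, fn):
    """Pixel-level segmentation metrics from confusion-matrix counts.

    TN is not needed for these four metrics. FAR is computed as the
    false alarm ratio FP / (TP + FP), an assumed definition.
    """
    recall = tp / (tp + fn)                        # fraction of labeled MCS found
    precision = tp / (tp + fp)                     # fraction of predictions correct
    f1 = 2 * precision * recall / (precision + recall)
    iou = tp / (tp + fp + fn)                      # intersection over union
    far = fp / (tp + fp)                           # false alarm ratio (assumed)
    return {"Recall": recall, "F1": f1, "IoU": iou, "FAR": far}
```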
Table 4. The performance comparison of the Swin-Unet, Unet, SegNet, and FCN-8s networks for continental MCS.
| | Recall | F1 | IoU | FAR |
|---|---|---|---|---|
| Swin-Unet | 83.37% | 72.90% | 57.46% | 33.96% |
| Unet | 83.19% | 68.43% | 52.09% | 37.11% |
| SegNet | 81.94% | 62.39% | 45.48% | 45.24% |
| FCN-8s | 59.93% | 51.68% | 36.37% | 36.81% |
Table 5. The performance comparison of the Swin-Unet, Unet, SegNet, and FCN-8s networks for oceanic MCS.
| | Recall | F1 | IoU | FAR |
|---|---|---|---|---|
| Swin-Unet | 86.10% | 83.47% | 71.65% | 17.14% |
| Unet | 81.74% | 82.14% | 69.69% | 17.31% |
| SegNet | 80.46% | 82.76% | 70.63% | 14.73% |
| FCN-8s | 73.29% | 77.48% | 64.18% | 12.87% |

Xiang, R.; Xie, T.; Bai, S.; Zhang, X.; Li, J.; Wang, M.; Wang, C. Monitoring Mesoscale Convective System Using Swin-Unet Network Based on Daytime True Color Composite Images of Fengyun-4B. Remote Sens. 2023, 15, 5572. https://doi.org/10.3390/rs15235572

