Article
Peer-Review Record

Forecasting the June Ridge Line of the Western Pacific Subtropical High with a Machine Learning Method

Atmosphere 2022, 13(5), 660; https://doi.org/10.3390/atmos13050660
by Cunyong Sun, Xiangjun Shi *, Huiping Yan, Qixiao Jiang and Yuxi Zeng
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 27 February 2022 / Revised: 4 April 2022 / Accepted: 20 April 2022 / Published: 21 April 2022
(This article belongs to the Section Climatology)

Round 1

Reviewer 1 Report

Review of “Forecasting the June Ridge Line of the Western Pacific Subtropical High with a Machine Learning Method” by Sun et al.

The submitted manuscript proposes a new method to forecast the western Pacific subtropical high ridge line in June using the preceding autumn/winter SST anomaly patterns, mainly over the Indian and Pacific Oceans, based on a neural network (NN) model. The authors demonstrated that the predicted WPSHRL agrees well with the observed one. This topic is quite interesting, and I like the idea behind seasonal climate prediction using a machine learning technique. The manuscript is well-organized and well-written. I would suggest a minor revision before possible publication. Please find my comments below.

1. Line 60: Why did the authors choose such a coarse resolution of SST data? Are the results similar if using other SST datasets with finer resolutions? In other words, will an increased SST resolution further improve the forecast skills?

2. Lines 63-69: The authors used the monthly WPSHRL index obtained from NCC (1961-2016) and manually defined using reanalysis data (if so, you need to specify which reanalysis dataset was used) after 2016. Can you validate the consistency between these two indexes, that is, what’s the correlation between the self-defined index and the NCC’s index during 1961-2016? It would be better to show readers whether the self-defined WPSHRL index matches well with that from NCC before you merged them.

3. Line 79: An alternative and better way to show their linear relationship is simply to give the regression/correlation maps, as the authors aimed to show the background knowledge on the *linear* relationships.

4. Lines 83-84: I’m struggling a bit with the selection of these 40 composite samples. There should be 41 samples after removing the 21-year running average, how to make 40 composite samples? What is the actual range of the random value, e.g., [0.95,1.05]? Are SST and WPSHRL magnified or reduced by the same factor? The authors need to clarify more on how they obtained these samples.

5. I understand that it is difficult to physically interpret the relationship between preceding SST anomaly patterns and the June WPSHRL in a machine learning framework. But I still wonder if there is any way to discuss a little more on this aspect from the atmospheric perspective, e.g., the sensitivity of geopotential height and u wind over the WPSH region?

6. Figure 1: Can you add the statistical significance test?

Author Response

Response to Reviewer 1

We thank the reviewer for the time spent evaluating our study and for the valuable comments and suggestions, which helped us to improve the manuscript. We hope that the revised manuscript and our response to the comments are satisfactory. The reviewer’s comments are in italics, and our responses are in standard font below.

 

General Assessment:

The submitted manuscript proposes a new method to forecast the western Pacific subtropical high ridge line in June using the preceding autumn/winter SST anomaly patterns, mainly over the Indian and Pacific Oceans, based on a neural network (NN) model. The authors demonstrated that the predicted WPSHRL agrees well with the observed one. This topic is quite interesting, and I like the idea behind seasonal climate prediction using a machine learning technique. The manuscript is well-organized and well-written. I would suggest a minor revision before possible publication. Please find my comments below.

Reply: We do appreciate the positive comment.

 

Comments:

1.) Line 60: Why did the authors choose such a coarse resolution of SST data? Are the results similar if using other SST datasets with finer resolutions? In other words, will an increased SST resolution further improve the forecast skills?

Reply: Although SST datasets with finer resolutions could provide more information at finer scales, that information hardly contributes to the prediction of the WPSHRL at lead times of more than one season. Furthermore, the SST data from two adjacent grids are very similar; in other words, the degrees of freedom of the SST field are far fewer than the number of grid points. Therefore, only the N (N ≤ 20, a tunable parameter) leading principal components of the SST field are used as predictors in this study.
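The reduction of the SST field to its N leading principal components described in the reply can be sketched as below. The data here are random stand-ins for a detrended SST anomaly field; the grid size, the 61-year record length, and the random seed are illustrative assumptions, not values from the manuscript.

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, n_grid = 61, 72 * 36               # hypothetical global grid, 61 years of data
sst_anom = rng.standard_normal((n_years, n_grid))  # stand-in for SST anomaly fields

# EOF decomposition via SVD of the centered data matrix (rows: years, cols: grid points)
centered = sst_anom - sst_anom.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)

N = 20                                       # tunable number of leading modes
pcs = u[:, :N] * s[:N]                       # principal-component time series, shape (61, N)
print(pcs.shape)
```

Each row of `pcs` then serves as the low-dimensional predictor vector for one year, which is far smaller than the raw grid and avoids the near-duplicate information carried by adjacent grid points.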

 

2.) Lines 63-69: The authors used the monthly WPSHRL index obtained from NCC (1961-2016) and manually defined using reanalysis data (if so, you need to specify which reanalysis dataset was used) after 2016. Can you validate the consistency between these two indexes, that is, what’s the correlation between the self-defined index and the NCC’s index during 1961-2016? It would be better to show readers whether the self-defined WPSHRL index matches well with that from NCC before you merged them.

Reply: Thank you for the comments. The WPSHRL index is defined as the averaged zero line of ∂gh/∂y over the area (10°N~45°N, 110°E~150°E), based on the monthly mean 500 hPa geopotential height (gh) field (Liu et al., 2012). We consulted a person who worked at NCC about diagnosing the WPSHRL index (see the Acknowledgments); the definition and diagnosis method were not changed after 2016. The self-diagnosed WPSHRL indexes (2017-2021) are based on the widely used ERA5 reanalysis data, and the corresponding figures have been archived in a public repository (see the Data Availability Statement). The ERA5 reanalysis data are available from 1979 onward, so we checked the consistency between the NCC WPSHRL index (1979 to 2016) and the corresponding ERA5 data (gh and wind fields). These figures (1979 to 2021) were uploaded as a supplement (please see the attachment, WPSHRL_index_basedon_ERA5.pdf, or atmosphere-1636338-coverletter). Overall, the consistency is clearly acceptable.

Reference

Liu, Y.; Li, W.; Ai, W.; Li, Q. Reconstruction and application of the monthly Western Pacific Subtropical High indices. J. Appl. Meteor. Sci. (in Chinese) 2012, 23, 414-423.

 

3.) Line 79: An alternative and better way to show their linear relationship is simply to give the regression/correlation maps, as the authors aimed to show the background knowledge on the *linear* relationships.

Reply: Thank you for the comments. The “linear relationship” refers to the relationship obtained from traditional linear statistical methods. Previous studies have shown that the low composite and the high composite are almost opposite, and that the WPSHRL index tends to be low/high when the pattern of the preceding SST anomalies resembles the low/high composite (Chen 1982; Huang and Sun, 1994; Ying and Sun, 2000; Ai and Chen, 2000; Yao and Yan, 2008; Zeng et al., 2010; Ong-Hua and Feng, 2011; Xue and Zhao, 2017). These conclusions are used as climatological background knowledge in this study. In the revised manuscript, the corresponding sentences were rewritten.

Reference

Chen, L. Interaction between the subtropical high over the north Pacific and the sea surface temperature of the eastern equatorial Pacific. Chin. J. Atmos. Sci. (in Chinese) 1982, 6, 148-156.

Huang, R.; Sun, F. Impacts of the Thermal State and the Convective Activities in the Tropical Western Warm Pool on the Summer Climate Anomalies in East Asia. Chin. J. Atmos. Sci. (in Chinese) 1994, 18, 141-151.

Ying, M.; Sun, S. A Study on the Response of the Subtropical High over the Western Pacific to the SST Anomaly. Chin. J. Atmos. Sci. (in Chinese) 2000, 24, 193-206.

Ai, Y.; Chen, X. Analysis of the correlation between the Subtropical High over Western Pacific in Summer and SST. J. Trop. Meteor. (in Chinese) 2000, 16, 1-8.

Yao, Y.; Yan, H. Relationship between proceeding pacific sea surface temperature and Subtropical High indexes of main raining seasons. J. Trop. Meteor. (in Chinese) 2008, 24, 483-489.

Zeng, G.; Sun, Z.; Lin, Z.; Ni, D. Numerical Simulation of Impacts of Sea Surface Temperature Anomaly upon the Interdecadal Variation in the Northwestern Pacific Subtropical High. Chin. J. Atmos. Sci. (in Chinese) 2010, 34, 307-322.

Ong-Hua, S.; Feng, X. Two northward jumps of the summertime western pacific subtropical high and their associations with the tropical SST anomalies. Atmos. Ocean. Sci. Lett. 2011, 4, 98-102.

Xue, F.; Zhao, J. Intraseasonal variation of the East Asian summer monsoon in La Niña years. Atmos. Ocean. Sci. Lett. 2017, 10, 156-161.

 

4.) Lines 83-84: I’m struggling a bit with the selection of these 40 composite samples. There should be 41 samples after removing the 21-year running average, how to make 40 composite samples? What is the actual range of the random value, e.g., [0.95,1.05]? Are SST and WPSHRL magnified or reduced by the same factor? The authors need to clarify more on how they obtained these samples.

Reply: We are sorry about this confusion. In the revised manuscript, the “composite samples” were renamed “extended samples”, and an example was added to show how they are produced. Twenty extended samples are produced by magnifying or reducing the low-composite data (the SST field and the corresponding WPSHRL index; upper panel of Figure 1). For example, multiplying both the SST field and the corresponding WPSHRL index (i.e., −0.75) by 1.1 (or another value near 1.0) produces one extended sample. Similarly, another 20 extended samples are produced by magnifying or reducing the high-composite data (the SST field and the corresponding WPSHRL index; lower panel of Figure 1). A total of 40 extended samples are used in the forecasting system.
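As a sketch, the sample-extension step described in the reply might look like the following. The SST patterns are random placeholders, and the high-composite index value (+0.75) and scaling range [0.9, 1.1] are assumptions for illustration; the reply only states that the factors lie near 1.0 and that the low-composite index is −0.75.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical composite data: a flattened SST anomaly pattern and its WPSHRL index
low_sst = rng.standard_normal(100)    # low-composite SST anomaly pattern (placeholder)
low_index = -0.75                     # low-composite WPSHRL index (stated in the reply)
high_sst = rng.standard_normal(100)   # high-composite pattern (placeholder)
high_index = 0.75                     # assumed value for illustration

def extend(sst, index, n=20, lo=0.9, hi=1.1):
    """Scale the SST field and its index by the SAME random factor near 1.0."""
    factors = rng.uniform(lo, hi, size=n)
    return [(f * sst, f * index) for f in factors]

samples = extend(low_sst, low_index) + extend(high_sst, high_index)
print(len(samples))  # 40 extended samples in total
```

The key point the reviewer asked about is captured by `extend`: each extended sample scales the SST field and the WPSHRL index by one shared factor, so the pattern-index relationship is preserved.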

 

5.) I understand that it is difficult to physically interpret the relationship between preceding SST anomaly patterns and the June WPSHRL in a machine learning framework. But I still wonder if there is any way to discuss a little more on this aspect from the atmospheric perspective, e.g., the sensitivity of geopotential height and u wind over the WPSH region?

Reply: Thank you for the comments. Although it is difficult to illustrate the physical interpretability of an NN model with several hidden layers, a perturbation-based method can be used to check the relation between the input predictors and the target predictand of the NN model (Toms et al., 2020; Yuan et al., 2020). Thus, the interpretability of the forecast system is shown in Section 3.3. Based on the comments, more physical interpretation from the atmospheric perspective was added to this section. The SST anomalies of the preceding autumn and winter may persist into summer (Ying and Sun, 2000; Chen et al., 2016). In summer, increasing SST in the western Pacific warm pool would lead to a higher (more northerly) WPSHRL index (Tsuyoshi 1987; Huang and Li, 1988; Qian et al., 2021).

Reference

Toms, B.A.; Barnes, E.A.; Ebert-Uphoff, I. Physically interpretable neural networks for the geosciences: Applications to Earth system variability. arXiv 2020, arXiv:1912.01752.

Yuan, H.; Yu, H.; Gui, S.; Ji, S. Explainability in Graph Neural Networks: A Taxonomic Survey. arXiv 2020, arXiv:2012.15445.

Ying, M.; Sun, S. A Study on the Response of the Subtropical High over the Western Pacific to the SST Anomaly. Chin. J. Atmos. Sci. (in Chinese) 2000, 24, 193-206.

Chen, D.; Gao, S.; Chen, J.; Gao, S. The synergistic effect of SSTA between the equatorial eastern Pacific and the Indian-South China Sea warm pool region influence on the western Pacific subtropical high. Haiyang Xuebao. (in Chinese) 2016, 38, 1-15.

Tsuyoshi, N. Convective Activities in the Tropical Western Pacific and Their Impact on the Northern Hemisphere Summer Circulation. J. Meteor. Soc. Japan. Ser. II. 1987, 65, 373-390.

Huang, R.; Li, W. Influence of heat source anomaly over the western tropical Pacific on the subtropical high over East Asia and its physical mechanism. Chin. J. Atmos. Sci. (in Chinese) 1988, 12, 107-116.

Qian, Q.; Liang, P.; Qi, L. Advances in the Study of Intraseasonal Activity and Variation of Western Pacific Subtropical High. Meteor. Environ. Sci. 2021, 44, 93-101.
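A minimal sketch of a perturbation-based sensitivity check, in the spirit of the method cited in the reply, is given below. The "model" here is a toy placeholder (a fixed tanh-linear map), not the authors' trained NN, and the predictor dimension matches the assumed 20 leading SST principal components.

```python
import numpy as np

rng = np.random.default_rng(1)
weights = rng.standard_normal(20)        # stand-in parameters; NOT a trained NN

def model(x):
    """Toy placeholder for the trained forecast model."""
    return float(weights @ np.tanh(x))

def perturbation_sensitivity(model, x, eps=0.1):
    """Perturb each predictor in turn and record the change in the model output."""
    base = model(x)
    sens = np.empty_like(x)
    for i in range(x.size):
        xp = x.copy()
        xp[i] += eps                     # nudge one predictor
        sens[i] = (model(xp) - base) / eps
    return sens

x = rng.standard_normal(20)              # e.g., one year's leading SST PCs
sens = perturbation_sensitivity(model, x)
print(sens.shape)
```

Large-magnitude entries of `sens` flag the predictors (and, by projection, the SST regions) to which the predicted WPSHRL index is most sensitive, which is the kind of diagnosis the reply attributes to Section 3.3.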

 

6.) Figure 1: Can you add the statistical significance test?

Reply: In order to extend the samples used for training the NN model, some artificial samples (i.e., the extended samples introduced above) are produced based on composite analysis (the SST field and corresponding WPSHRL index; Figure 1). Because the area with statistically significant SST values is relatively small, the extended samples do not work well if they are produced only from the statistically significant SST regions.

Author Response File: Author Response.pdf

Reviewer 2 Report

The paper is well structured and presents novel work of interest for understanding the rainfall mechanism. The authors address a very interesting topic: forecasting the June Ridge Line of the Western Pacific Subtropical High with a machine learning method. I have read through the paper and have suggested some points to be addressed before it is considered for publication. I recommend that the paper be accepted after major revision.

Major comments:

  1. The introduction is very poorly written. It lacks deep clarification and does not focus on the research problem. For example, there is not enough background information. What are the contribution/novelty of this study and its major findings for understanding this type of exploration in other parts of the world? What kind of local and global knowledge do the authors want to improve?
  2. L36-37: “One major purpose of this study is to investigate whether the machine learning method can provide a skillful prediction of the WPSHRL for lead times longer than one season.” It is very early to highlight the objective before giving any background information.
  3. Separate the “discussion” section from the conclusion. There are plenty of works already conducted on this issue over many regions across the world, but none of these works have been discussed. Thus, it is strongly suggested to provide a discussion of your results in comparison with other studies.

Minor comments:

  1. L35: Is it "indexes" or "indices"? Please confirm.
  2. L128-129: "we actually care about". It is not a scientific way of writing. Please write it in a different approach.
  3. Sentence is unclear. Which training set?
  4. L164: What is "leave-one-out approach". Please explain it for general readers.

Author Response

Response to Reviewer 2

We do appreciate the thorough review and constructive comments, which have helped us to substantially improve the quality of the manuscript. We hope that the modified manuscript and our responses to the comments are satisfactory. The reviewer’s comments are in italics, and our responses are in standard font below.

 

General Assessment:

The paper is well structured and presents novel work of interest for understanding the rainfall mechanism. The authors address a very interesting topic: forecasting the June Ridge Line of the Western Pacific Subtropical High with a machine learning method. I have read through the paper and have suggested some points to be addressed before it is considered for publication. I recommend that the paper be accepted after major revision.

Reply: We do appreciate the positive comment.

 

Major comments:

1.) The introduction is very poorly written. It lacks deep clarification and does not focus on the research problem. For example, there is not enough background information. What are the contribution/novelty of this study and its major findings for understanding this type of exploration in other parts of the world? What kind of local and global knowledge do the authors want to improve?

Reply: Thank you for this comment. In the revised manuscript, the introduction section was rewritten. We clearly pointed out that, at lead times longer than three months (i.e., one season), no study has yet reported an acceptable forecast skill (>0.5). The major purpose of this study is to develop an acceptable forecast system with a lead time of three months. Correspondingly, the contribution/novelty of this study was also clearly pointed out in the conclusion section.

2.) L36-37: “One major purpose of this study is to investigate whether the machine learning method can provide a skillful prediction of the WPSHRL for lead times longer than one season.” It is very early to highlight the objective before giving any background information.

Reply: We thank the reviewer for pointing this out. As mentioned above, the introduction section was rewritten in the revised manuscript. “One major purpose of this study is to investigate whether the machine learning method can provide a skillful prediction of the WPSHRL for lead times longer than one season.” was replaced by “The major purpose of this study is to develop an acceptable forecast system for the June WPSHRL index at a lead time of three months. To achieve this, we must try some new methods (e.g., machine learning)”.

3.) Separate the “discussion” section from the conclusion. There are plenty of works already conducted on this issue over many regions across the world, but none of these works have been discussed. Thus, it is strongly suggested to provide a discussion of your results in comparison with other studies.

Reply: Thank you for this comment. In the revised manuscript, the introduction and conclusion sections were rewritten, and a discussion section was added.

Firstly, in the introduction section of the revised manuscript, we clearly pointed out the contribution/novelty of this study as follows: “Previous studies have shown that the monthly (i.e., at a lead time of one month) forecast skill can reach 0.5 or higher. However, at lead times longer than three months (i.e., one season), no study has yet reported an acceptable forecast skill (>0.5). Note that a prediction with a lead time of three months has more application value than one with a lead time of one month. Therefore, the major purpose of this study is to develop an acceptable forecast system for the June WPSHRL index at a lead time of three months”.

Secondly, in the conclusion section of the revised manuscript, we also clearly pointed out the contribution/novelty of this study as follows: “The forecast system is valuable in a real application sense. This successful forecast system suggests that the prediction of the atmospheric circulation indexes beyond one season might be improved by non-linear statistical methods (e.g., the NN model)”.

Finally, a discussion section was added to the revised manuscript. This section provides a discussion of our experiences with comparison to other studies. With regard to how to deal with the small sample size problem and how to take advantage of climatological background knowledge, this section discussed the experiences from our study and previous studies.

Minor comments:

1.) L35: Is it "indexes" or "indices"? Please confirm.

Reply: The sentence “There are few studies about the prediction of general atmospheric circulation indexes” was removed in the revised manuscript.

2.) L128-129: "we actually care about". It is not a scientific way of writing. Please write it in a different approach.

Reply: Thanks. “what we actually care about is the performance on new unseen examples” was rewritten as “the NN model is actually evaluated by its performance on new unseen examples”.

3.) Sentence is unclear. Which training set?

Reply: Thanks for this suggestion. In the revised manuscript, we clearly point out that the “training set” indicates all or part of the already existing samples.

4.) L164: What is "leave-one-out approach". Please explain it for general readers.

Reply: Cross-validation, a widely used approach in machine learning, allows one to estimate the generalization error even when the dataset is small (Goodfellow et al., 2016). By definition, leave-one-out cross-validation is a special case of cross-validation in which the number of folds equals the number of instances in the data set; the learning algorithm is thus applied once for each instance, using all other instances as the training set and the selected instance as a single-item test set (Volpe et al., 2011). In the revised manuscript, the "leave-one-out approach" is introduced explicitly in terms of the forecast system developed in this study: “The whole observed data set (e.g., 1961~2021) is divided into 61 subsets, each containing one sample. Firstly, the model is trained on the 2nd~61st samples, and the 1st prediction (i.e., the predicted 1961 WPSHRL index) is calculated with this model using the SST data of the 1st sample. Subsequently, the model is trained again on the 1st and 3rd~61st samples, and the 2nd prediction (i.e., the predicted 1962 WPSHRL index) is calculated with this new model using the SST data of the 2nd sample. This procedure is repeated until the 61st prediction (i.e., the predicted 2021 WPSHRL index) has been calculated”.

Reference

Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, USA, 2016; pp. 89-208.

Volpe, V.; Manzoni, S.; Marani, M. Leave-One-Out Cross-Validation. Encyclopedia of Machine Learning; Sammut, C., Webb, G.I., Eds.; Springer: Boston, MA, USA, 2011; pp. 24-45.
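The leave-one-out procedure quoted in the reply above can be sketched as follows. Ordinary least squares stands in for the NN model, and the predictors and index values are random placeholders; only the hold-one-out loop structure reflects the described method.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((61, 20))   # hypothetical predictors (e.g., SST PCs), 1961~2021
y = rng.standard_normal(61)         # hypothetical June WPSHRL index values

predictions = np.empty(61)
for i in range(61):                 # hold out one year at a time
    mask = np.arange(61) != i       # all samples except the i-th form the training set
    coef, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
    predictions[i] = X[i] @ coef    # predict the held-out year with the refitted model
print(predictions.shape)
```

Each of the 61 predictions is made by a model that never saw the corresponding year, so comparing `predictions` against `y` estimates out-of-sample skill despite the small data set.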

Round 2

Reviewer 1 Report

I thank the authors for addressing my comments. I'm satisfied with their replies and have no further comments.

Author Response

Response to Reviewer 1

We thank the reviewer for the positive comments. The reviewer’s comments are in italics and our responses are in standard font below.

 

General Assessment:

I thank the authors for addressing my comments. I'm satisfied with their replies and have no further comments.

 

Reply: Thanks.

Reviewer 2 Report

The modified version addressed all the issues I raised and the authors satisfactorily incorporated all my comments. So I recommend this modified version for publication after a minor revision. 

Conclusion should come after discussion section. 

Author Response

Response to Reviewer 2

We thank the reviewer for the comments, which helped improve this manuscript. The reviewer’s comments are in italics and our responses are in standard font below.

 

General Assessment:

The modified version addressed all the issues I raised and the authors satisfactorily incorporated all my comments. So I recommend this modified version for publication after a minor revision.

Conclusion should come after discussion section.

 

Reply: Thanks. In the revised manuscript, the conclusion section comes after the discussion section. Correspondingly, the order of the references was changed and a transition sentence was rewritten.
