Article

Time Series and Non-Time Series Models of Earthquake Prediction Based on AETA Data: 16-Week Real Case Study

1
The Key Laboratory of Integrated Microsystems, Peking University Shenzhen Graduate School, Shenzhen 518055, China
2
Faculty of Engineering, Shenzhen MSU-BIT University, Shenzhen 518172, China
*
Authors to whom correspondence should be addressed.
Appl. Sci. 2022, 12(17), 8536; https://doi.org/10.3390/app12178536
Submission received: 1 August 2022 / Revised: 22 August 2022 / Accepted: 23 August 2022 / Published: 26 August 2022

Abstract
The Key Laboratory of Integrated Microsystems (IMS) of Peking University Shenzhen Graduate School has deployed its self-developed acoustic and electromagnetics to artificial intelligence (AETA) system at large scale and high density across China to comprehensively monitor and collect the precursor anomaly signals that occur before earthquakes for seismic prediction. This paper constructs several classic time series and non-time series prediction models and compares and analyzes them in order to find the most suitable earthquake-prediction model among them. The long short-term memory (LSTM) neural network, which achieves the best results in earthquake prediction based on AETA data extracted from the precursor anomaly signals, is selected for real-earthquake prediction over 16 consecutive weeks.

1. Introduction

Earthquakes can cause huge damage to the natural environment and human society. If the three elements of an earthquake (time, epicenter, and magnitude) can be accurately predicted, the damage can be largely reduced.
Acoustic and electromagnetics to artificial intelligence (AETA) is a multi-component earthquake-monitoring and prediction system developed by the Key Laboratory of Integrated Microsystems of Peking University Shenzhen Graduate School, which is used for earthquake prediction by collecting underground earthquake precursor anomaly signals, including electromagnetic (EM) signals and geoacoustic (GA) signals [1]. The importance of EM signals and GA signals to earthquake prediction has been revealed by numerous studies during the last forty years [2,3,4,5,6,7,8,9,10,11,12,13,14,15,16]. The AETA team has completed a large-scale deployment in China and has collected over 50 TB of data which laid a good foundation for the establishment of earthquake-prediction models.
Previously, many scholars have attempted to apply algorithmic models to earthquake prediction in the field of seismology. Some of these researchers applied non-time series models. In non-time series models, the temporal data are converted to non-temporal data, i.e., the two-dimensional temporal feature matrix is reduced and compressed into a one-dimensional feature vector, and the extracted temporal features are then appended to construct a one-dimensional sample set. Zhang, Y. et al. proposed an earthquake disaster image information anomaly detection model based on scale-invariant feature transform (SIFT) features and support vector machine (SVM) classification in 2019 [17]. Jozinovic et al. proposed a method based on deep convolutional neural networks (CNN) to predict the degree of ground shaking when an earthquake occurs in 2020 [18]. Xiong, P. et al. used a light gradient boosting machine (LightGBM) in a five-fold cross-validation test on benchmark datasets in 2020, demonstrating a strong capability to discriminate electromagnetic pre-earthquake perturbations [19]. Wang, L. et al. introduced LightGBM into an efficient seismic slope stability analysis of a hypothetical embankment in 2021 [20]. Saad constructed a random forest (RF) model to detect the location of an upcoming earthquake in 2022 [21]. The non-time series models developed in this paper include the neural network (NN), SVM, and the gradient boosting decision tree (GBDT), RF, and LightGBM models represented by ensemble trees.
Other experts use time series models for earthquake forecasting. Compared with non-time series models, time series models preserve the richness of the data and can learn the gradual changes of features on the time scale, which helps detect earthquake precursor anomaly signals. Kanarachos et al. introduced a signal-processing algorithm combining wavelets, neural networks, and the Hilbert transform to predict earthquake activity in 2017 [22]. Zhou, Y. et al. combined convolutional and recurrent neural networks to pick phases from archived continuous waveforms in 2019 [23]. In 2019, Titos et al. used the gated recurrent unit (GRU) model to exploit temporal and frequency information from continuous seismic data to detect and classify continuous sequences of volcanic seismic events at the Deception Island Volcano, Antarctica [24]. In 2020, Jena et al. developed a recurrent neural network (RNN) model to create an earthquake probability map for the eastern region of India, including the coastal state of Odisha [25]. Xu, Y. et al. proposed a framework based on a long short-term memory (LSTM) neural network architecture for real-time regional seismic damage assessment in 2021 [26]. In 2021, Yan, X. et al. utilized two LSTM models to simulate and forecast hydrological variations based on a hydrological time series from a monitoring site to identify possible precursors to the Lijiang earthquake [27]. In 2021, Huang, Y. et al. introduced a moving-steps strategy and established three recurrent neural network models, simple-RNN, LSTM, and GRU, for the prediction of slope dynamic response [28]. In 2022, Xue, J. et al. applied deep learning to extract SESs and developed a novel deep learning network based on geoelectric field characteristics by combining the LSTM [29]. The main time series models established in this paper include the LSTM prediction model, the GRU prediction model, and the CNN+GRU stacking prediction model.
This paper analyzes and compares various non-time series models and time series models in order to find a more suitable and simpler model for earthquake prediction based on AETA data.
In Section 2, this paper introduces the AETA system developed by the IMS lab and the process of constructing the dataset. In Section 3, the construction of the non-time series and time series models is introduced. Section 4 presents the results of each prediction model. In Section 5, the prediction results of the non-time series and time series models are compared and analyzed; the LSTM time series model achieves the best results in earthquake prediction based on AETA data. Finally, the research work of this paper is summarized in the conclusion.

2. AETA System

2.1. AETA Devices and Data Acquisition

AETA, the multi-component earthquake-monitoring and prediction system, was developed by the Key Laboratory of Integrated Microsystems of Peking University Shenzhen Graduate School for earthquake prediction. The system consists of two sensors, a terminal, a cloud platform, and a data-analysis system. The two sensors, an electromagnetic sensor and a geoacoustic sensor, collect electromagnetic and geoacoustic signals, respectively. The sensors transmit data through cables to the data-processing terminal, where the data undergo sampling, compression, and filtering; the terminal finally uploads the data through the network to the cloud server for subsequent data analysis, the capture of earthquake precursor anomaly signals, and research on seismic-related activities [30].
The electromagnetic sensor mainly monitors the electromagnetic signal band covering very low frequency (VLF) and ultra-low frequency (ULF), with a frequency range of 0.1 Hz to 10 kHz, an amplitude range of 0.1 to 1000 nT, a sensitivity of >20 mV/nT @ 0.1 Hz to 10 kHz, a resolution of 18 bits, and a sampling rate of 500 Hz at low frequency and 30 kHz at full frequency [31]. The frequency range of the geoacoustic signals monitored by the geoacoustic sensor is 0.1 Hz to 50 kHz, with a resolution of 18 bits, a sensitivity of 3 LSB/Pa @ 0.1 Hz to 50 kHz, and a sampling rate of 500 Hz at low frequency and 150 kHz at full frequency [32].
The AETA sensor system is low-cost and convenient for large-scale, high-density deployment. Up to now, nearly 300 sets of AETA devices have been deployed in some of the most seismically active regions in China, such as Sichuan, Yunnan, and the surrounding provinces. The AETA system has accumulated over 50 TB of data over 5 years of observation. The rich observation data has laid a good foundation for the establishment and comparative analysis of earthquake-prediction models.

2.2. Data Set Construction

AETA raw data contain electromagnetic signals and geoacoustic signals. Most anomalies in the AETA data occur within n days before an earthquake, where n is 27 for electromagnetic signals and 10 for geoacoustic signals; the choice of n is justified in Section 3. This paper selects the feature data of the preceding n days as the input sample and the following 7 days as the output sample. With the current time denoted $T_0$, the preceding n days $T_{-n}$, and the following 7 days $T_{+7}$, the earthquake-prediction tasks are defined as follows:

$$\tilde{y}_1 = F_1(x_{T_{-n}}, x_{T_{-n+1}}, \ldots, x_{T_{-1}}), \quad \tilde{y}_1 \in [0, 1]$$

$$\tilde{y}_2 = F_2(x_{T_{-n}}, x_{T_{-n+1}}, \ldots, x_{T_{-1}})$$

$$\tilde{y}_2 = [\tilde{y}_{mag}, \tilde{y}_{lat}, \tilde{y}_{lon}], \quad \tilde{y}_{mag} \in [3.5, 8], \; \tilde{y}_{lat} \in [22, 34], \; \tilde{y}_{lon} \in [98, 107],$$

where $\tilde{y}_1$ is the result of earthquake prediction and $F_1$ is the earthquake-prediction model; $\tilde{y}_2$ is the prediction result of magnitude and epicenter location and $F_2$ is the corresponding prediction model; $x$ represents the input feature data; and $\tilde{y}_{mag}$, $\tilde{y}_{lat}$, $\tilde{y}_{lon}$ represent the predicted magnitude, epicenter latitude, and epicenter longitude, respectively.
In terms of the extraction range of the dataset, a typical earthquake-prone region in China was selected for earthquake prediction in this paper. Specifically, this is the Sichuan–Yunnan region (22.00° N~34.00° N, 98.00° E~107.00° E), which suffered a total of 206 earthquakes with magnitude larger than 3.5 from January 2017 to January 2021, according to the China Earthquake Network (http://www.ceic.ac.cn/history, accessed on 10 March 2021). Figure 1 shows the distribution of stations and of earthquakes above Ms3.5 in the Sichuan–Yunnan region over 5 years. Moreover, the epicenter distribution shows a typical regional clustering effect.
In order to improve the accuracy of prediction, the Sichuan–Yunnan region is divided into three areas according to the clustering of the earthquake distribution. The three areas are marked with differently colored boxes in Figure 1: area1 in green, area2 in brown, and area3 in pink. The ranges of the areas are expressed in Equations (4)–(7); the overlap of $\Delta$ reduces the effect of random deviation introduced by the division and allows station data in the overlapping regions to be trained on both regional models.

$$\mathrm{area}_1 = (30° - \Delta \sim 34° + \Delta\ \mathrm{N},\ 102° - \Delta \sim 106° + \Delta\ \mathrm{E})$$

$$\mathrm{area}_2 = (26° - \Delta \sim 30° + \Delta\ \mathrm{N},\ 102° - \Delta \sim 106° + \Delta\ \mathrm{E})$$

$$\mathrm{area}_3 = (24° - \Delta \sim 28° + \Delta\ \mathrm{N},\ 98° - \Delta \sim 102° + \Delta\ \mathrm{E})$$

$$\Delta = 1°$$
Earthquake-prediction models are built for each of the three areas using sample matrices obtained by sliding windows [33]. The step of the sliding window is set to 1 day, all features are arranged in time sequence by day, and the sequence sample set is generated along the time dimension by this sliding window. To enhance the robustness of the data, this paper checks the data loss rate of each sliding-window matrix and discards the sample if the loss rate exceeds 0.3. This threshold was determined experimentally: thresholds from 0.1 to 0.5 were tried with a step size of 0.05, and 0.3 retained a sufficient amount of data while yielding a good validation set. Figure 2 shows the regional sample composition process.
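The windowing and loss-rate check described above can be sketched as follows. This is a minimal illustration, not the paper's actual code; the function name `build_samples` and the convention of marking missing values as NaN are assumptions.

```python
import numpy as np

def build_samples(features, window=27, step=1, max_loss_rate=0.3):
    """Slide a window along the day axis of a (days, n_features) matrix,
    discarding windows whose fraction of missing values (NaN) exceeds
    the loss-rate threshold (0.3 in the paper)."""
    samples = []
    for start in range(0, features.shape[0] - window + 1, step):
        win = features[start:start + window]
        if np.isnan(win).mean() <= max_loss_rate:
            samples.append(win)
    return np.array(samples)

# Toy example: 40 days of 5 features, with one heavily corrupted stretch.
rng = np.random.default_rng(0)
data = rng.normal(size=(40, 5))
data[10:25] = np.nan          # heavy data loss in the middle
windows = build_samples(data, window=10)
```

Windows overlapping the corrupted stretch by more than 30% are dropped; the rest form the sequence sample set.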
This paper builds the AETA feature library with a total of 95 kinds of featured data whose validity has been verified [34]. The detailed information is shown in Appendix A.
In addition, because earthquakes occur infrequently, the imbalance of positive and negative samples must be considered when constructing the prediction model. Otherwise, it will greatly affect the training of the model, making the model more likely to predict no earthquake. To address the scarcity of positive samples, the synthetic minority oversampling technique (SMOTE) algorithm is applied when constructing the sample set for the non-time series prediction model.
To generate new samples by algorithmic synthesis, SMOTE iterates through the sparse minority samples, calculates the distance of each sample $x$ from all remaining samples, and obtains the $k$ samples closest to it. Then, according to the imbalance ratio of positive and negative samples, some samples $\hat{x}$ are randomly selected from the obtained $k$ neighbors. Finally, a new sample is generated from each selected sample $\hat{x}$ and the original sample $x$ [35]. The equation is as follows:

$$x_{\mathrm{new}} = x + \mathrm{rand}(0, 1) \times (\hat{x} - x).$$
In contrast, the SMOTE algorithm cannot be used directly for the time series prediction model because its samples are two-dimensional matrices. Therefore, this paper designs a two-dimensional-matrix SMOTE algorithm to generate new two-dimensional samples: two-dimensional samples are compressed into one-dimensional samples along the time sequence, processed by the standard SMOTE algorithm, and finally reconstituted into two-dimensional samples in time order. The SMOTE algorithm enables more balanced and productive model training, with a ratio of positive to negative samples between 1:1 and 1:1.5. Figure 3 shows the two-dimensional-matrix SMOTE algorithm.
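A minimal sketch of this flatten–SMOTE–reshape idea, using plain nearest-neighbor interpolation on complete (NaN-free) minority samples; the function name, parameters, and sample shapes are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def smote_2d(samples, n_new, k=5, seed=0):
    """Sketch of a two-dimensional-matrix SMOTE: flatten each
    (days, features) minority sample to a vector, apply the classic
    SMOTE interpolation x_new = x + rand(0,1) * (x_hat - x),
    then reshape the synthetic vectors back to 2-D."""
    rng = np.random.default_rng(seed)
    n, d, f = samples.shape
    flat = samples.reshape(n, d * f)
    new = []
    for _ in range(n_new):
        i = rng.integers(n)
        # distances from sample i to all other minority samples
        dist = np.linalg.norm(flat - flat[i], axis=1)
        neighbors = np.argsort(dist)[1:k + 1]   # k nearest, excluding i itself
        j = rng.choice(neighbors)
        gap = rng.random()                      # rand(0, 1)
        new.append(flat[i] + gap * (flat[j] - flat[i]))
    return np.array(new).reshape(n_new, d, f)

# Toy minority set: 20 samples of 27 days x 95 features (AETA-like shape).
minority = np.random.default_rng(1).normal(size=(20, 27, 95))
synthetic = smote_2d(minority, n_new=10)
```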
For each station, with its own installation time and total number of samples, 85% of the samples are taken as the training set and 15% as the validation set to build the earthquake prediction model and the magnitude and epicenter prediction model, respectively.

3. Model Construction

The AETA raw signals, electromagnetic and geoacoustic, are both time series signals. However, it has not been verified whether the feature data obtained after feature extraction retain temporal information [34]. Therefore, this paper constructs both time series and non-time series models for earthquake prediction.

3.1. Non-Time Series Prediction Model

AETA feature data contain rich information about earthquake precursor anomaly signals and have a high correlation with earthquakes. Therefore, robust non-time series models can be used to identify hidden outliers in the feature data for earthquake prediction.

3.1.1. LightGBM

For processing non-time series data, GBDT is often harnessed by researchers because of its distributed-training feasibility and low memory consumption. Various extended models now exist, including categorical boosting (CatBoost) and extreme gradient boosting (XGBoost) [36], but their efficiency and scalability are not ideal when the amount of data is large. LightGBM is more suitable for seismic prediction based on large amounts of data in terms of speed and scalability because it uses gradient-based one-side sampling (GOSS) and exclusive feature bundling (EFB) [37].

3.1.2. NN

An NN is a complex network consisting of multiple layers of neurons, fully connected in parallel. A neural network can be divided into three kinds of layers: the first layer is the input layer, the last layer is the output layer, and all intermediate layers are hidden layers. Training data are fed into the input layer, propagated forward through the network, and the loss is calculated at the output layer. Structurally, the layers are fully connected: a given neuron is connected to all neurons in the preceding layer and all neurons in the following layer [38]. Each neuron is a perceptron consisting of a linear expression $z = \sum_i w_i x_i + b$ followed by an activation function $\sigma(z)$. The back propagation (BP) algorithm is the most basic algorithm for training a neural network; it updates the neuron weight matrix $w$ and the bias vector $b$ by the gradient descent (GD) algorithm.
The commonly used activation functions are sigmoid, tanh, relu, and elu. The general loss function includes cross-entropy, mean absolute deviation, and mean square deviation. Among them, cross-entropy is often used in classification tasks, whereas mean absolute deviation and mean square deviation are commonly used in regression tasks.
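The single-neuron computation described above can be illustrated directly (a minimal sketch; names and values are illustrative, not tied to the paper's models):

```python
import numpy as np

# A single perceptron as described: z = sum_i(w_i * x_i) + b, then sigma(z).
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    return np.maximum(0.0, z)

def neuron(x, w, b, activation=sigmoid):
    z = np.dot(w, x) + b          # linear expression
    return activation(z)          # nonlinearity

x = np.array([0.5, -1.0, 2.0])    # inputs
w = np.array([0.1, 0.2, 0.3])     # weights
out = neuron(x, w, b=0.05, activation=sigmoid)   # z = 0.5, sigmoid(0.5) ≈ 0.622
```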

3.1.3. Other Models

In addition to the two models mentioned above, other well-performing algorithms in machine learning tasks include the SVM and RF. SVM is mainly used for binary classification and seeks to maximize the classification margin, in both linear and nonlinear forms [39]; RF is a bagging ensemble of trees that makes decisions by majority voting; it is not prone to overfitting and has strong generalization ability [40].

3.2. Time-Series Prediction Models

Time-series prediction models contain the LSTM prediction model, the GRU prediction model, and the CNN+GRU stacking prediction model. These models are fully trained on the sample set to predict the three elements of earthquakes. This paper selects the most suitable model among them and the optimal parameter set for seismic prediction by grid search and a five-fold cross-validation method.

3.2.1. LSTM

LSTM is a time series network that takes the previous output and the current input as the input at the next time step, allowing the model to take the historical information of the data into account when performing prediction tasks. Compared with the traditional time series model RNN, LSTM incorporates the concept of "gates" to solve the problems of retaining long sequences of historical information and of long-range gradient transfer. Specifically, LSTM introduces cell states, input gates, forget gates, and output gates, which are used for storing information about historical data, admitting current and historical information, controlling the information that needs to be forgotten, and producing the predicted output, respectively. The schematic diagram of the LSTM network architecture is shown in Figure 4.
The input and output of the LSTM are controlled by gates at each moment. The parameter matrix is gradient-updated by the loss of the final loss function, which affects the weights of the gates. Through the feedback regulation of the three gates, important information can be selectively retained, redundant information forgotten, and then beneficial information passed on. The data are processed as shown in the following equations:
$$f_t = \sigma(W_f \cdot [h_{t-1}, x_t] + b_f)$$

$$i_t = \sigma(W_i \cdot [h_{t-1}, x_t] + b_i)$$

$$\tilde{C}_t = \tanh(W_C \cdot [h_{t-1}, x_t] + b_C)$$

$$h_t = \sigma(W_o \cdot [h_{t-1}, x_t] + b_o) * \tanh(C_t)$$

$$C_t = f_t * C_{t-1} + i_t * \tilde{C}_t,$$
where $x_t$ is the input, $W$ is a weight matrix, $b$ is a bias vector, $\sigma(z)$ is the activation function, $C_{t-1}$ refers to the previously stored information, $h_{t-1}$ is the output at the previous moment, and $h_t$ is the output.
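The gate equations above can be exercised with a minimal single-step implementation. This is a plain-NumPy sketch; the dictionary-of-weights layout and the 27-day/95-feature shapes are assumptions based on the paper's setup, not its actual code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, C_prev, W, b):
    """One LSTM time step following the gate equations; W and b hold the
    parameter matrices/vectors for the f, i, C-tilde, and o gates, each
    acting on the concatenation [h_{t-1}, x_t]."""
    hx = np.concatenate([h_prev, x_t])
    f_t = sigmoid(W["f"] @ hx + b["f"])          # forget gate
    i_t = sigmoid(W["i"] @ hx + b["i"])          # input gate
    C_tilde = np.tanh(W["C"] @ hx + b["C"])      # candidate cell state
    C_t = f_t * C_prev + i_t * C_tilde           # new cell state
    o_t = sigmoid(W["o"] @ hx + b["o"])          # output gate
    h_t = o_t * np.tanh(C_t)                     # new hidden state
    return h_t, C_t

rng = np.random.default_rng(0)
hidden, n_feat = 32, 95                          # 32 neurons, 95 AETA features
W = {g: rng.normal(scale=0.1, size=(hidden, hidden + n_feat)) for g in "fiCo"}
b = {g: np.zeros(hidden) for g in "fiCo"}
h, C = np.zeros(hidden), np.zeros(hidden)
for t in range(27):                              # a 27-day electromagnetic window
    h, C = lstm_step(rng.normal(size=n_feat), h, C, W, b)
```

Because $h_t = o_t * \tanh(C_t)$ with $o_t \in (0,1)$, the hidden state stays bounded in $(-1, 1)$.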
Figure 5 shows the training structure of the LSTM model in this paper, which mainly contains three LSTM layers with 32 neurons and one fully connected layer with the suitable number of neurons. The LSTM model extracts the anomaly information from input data for n consecutive days and uses it for time sequence analysis.

3.2.2. GRU

GRU has only two gates: the reset gate and the update gate [41]. Because the number of gates is reduced by one, its structure is correspondingly simpler. The update gate of GRU combines the roles of the forget gate and the input gate in LSTM, and its principle is similar to that of LSTM, so the two models perform comparably on many tasks. The GRU network architecture schematic is shown in Figure 6.
The data are processed as shown in the following equations:
$$z_t = \sigma(W_z \cdot [h_{t-1}, x_t])$$

$$r_t = \sigma(W_r \cdot [h_{t-1}, x_t])$$

$$\tilde{h}_t = \tanh(W \cdot [r_t * h_{t-1}, x_t])$$

$$h_t = (1 - z_t) * h_{t-1} + z_t * \tilde{h}_t,$$
where $x_t$ is the input, $W$ is a weight matrix, $\sigma(z)$ is the activation function, $h_{t-1}$ is the output at the previous moment, and $h_t$ is the output.

3.2.3. CNN+GRU

CNN uses convolutional kernels to effectively grasp local information and perceive the whole from the local by convolutional operation, which has an excellent performance in multidimensional image processing.
This paper proposes a CNN+GRU stacking method for earthquake prediction. First, one-dimensional convolution kernels perform convolution along the time dimension to extract anomaly information from the two-dimensional feature matrix covering n consecutive days; one dimension of this matrix corresponds to the n days and the other to the number of features.
After that, the data extracted by multiple convolutional kernels are stitched horizontally to form a new two-dimensional anomaly data matrix. Finally, it is input to the GRU temporal network for classification or regression tasks. The overall architecture of the model is shown in Figure 7.

3.3. Model Parameters

As for the non-time series models, the two-dimensional feature matrix is compressed into a one-dimensional dense vector along the time dimension by the principal component analysis (PCA) algorithm, which reduces the dimensionality while retaining the main components of the feature data [42]. Then the time series features extracted by the Tsfresh algorithm are added to the non-time series model input [43]. Since the temporality of the compressed data is lost, these additional temporal features are added to the feature matrix as compensation.
In addition, the cross-validation method is used to increase the robustness of the model [44], which can make the results more convincing by dividing the total sample into k mutually exclusive subsets and being trained accordingly. Figure 8 shows the schematic diagram of the five-fold cross-validation.
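The k-mutually-exclusive-subset split can be sketched as follows (an illustrative helper only; for the time series models the paper additionally guards against leakage with sliding windows, which this generic split does not address):

```python
import numpy as np

def k_fold_indices(n_samples, k=5, seed=0):
    """Split sample indices into k mutually exclusive folds; each fold
    serves once as the validation set while the remaining k-1 folds
    form the training set."""
    idx = np.random.default_rng(seed).permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val

# Five-fold split over 100 samples, as in Figure 8.
splits = list(k_fold_indices(100, k=5))
```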
For the time series model, the two-dimensional time series matrix sample needs to be constructed by the time-dimensional sliding window. The optimal model is preserved by observing the change in the loss function, through cross-validation. In terms of the cross-validation of time series data, the problem of data leakage needs to be considered, which needs to be addressed by sliding windows. The quality of the data directly affects the model prediction effect, and the amount of information contained is different in different time scales of the sliding window. If the time scale is too short, the amount of information obtained becomes insufficient. On the other hand, if the time scale is too long, it means that too much useless information must be obtained. Therefore, choosing a suitable sliding window is vital to the final results of the model.
In terms of earthquake prediction, a problem is that there are usually far more no-earthquake samples than earthquake samples. As a result, the prediction model tends to classify borderline samples as no-earthquake samples in order to obtain higher overall accuracy. Therefore, it is necessary to introduce the receiver operating characteristic (ROC) curve and the area under the ROC curve (AUC) to make the evaluation of the prediction results more reliable. The ROC curve plots the true positive rate (TPR) on the vertical axis against the false positive rate (FPR) on the horizontal axis, and the AUC is the area under the ROC curve; the closer the AUC is to 1, the better the prediction model [45,46,47]. TPR, FPR, and AUC are defined as follows:
$$\mathrm{TPR} = \frac{TP}{TP + FN}$$

$$\mathrm{FPR} = \frac{FP}{TN + FP}$$

$$\mathrm{AUC} = \frac{1}{2} \sum_{i=1}^{m-1} (x_{i+1} - x_i) \cdot (y_i + y_{i+1}),$$
where TP is true positive, FP is false positive, TN is true negative, FN is false negative, and $x_{i+1}$, $x_i$, $y_i$, $y_{i+1}$ are the FPR and TPR values at successive thresholds on the ROC curve.
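The trapezoidal AUC formula above can be computed directly from ROC points (a minimal sketch; variable names are illustrative):

```python
import numpy as np

def auc_trapezoid(fpr, tpr):
    """AUC via the trapezoidal rule: sum over adjacent ROC points of
    (x_{i+1} - x_i) * (y_i + y_{i+1}) / 2."""
    fpr, tpr = np.asarray(fpr, float), np.asarray(tpr, float)
    return 0.5 * np.sum((fpr[1:] - fpr[:-1]) * (tpr[:-1] + tpr[1:]))

# ROC points run from (0, 0) to (1, 1).
perfect = auc_trapezoid([0.0, 0.0, 1.0], [0.0, 1.0, 1.0])  # ideal classifier
chance = auc_trapezoid([0.0, 1.0], [0.0, 1.0])             # random guessing
```

A perfect classifier gives AUC = 1 and random guessing gives AUC = 0.5, matching the interpretation in the text.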
Taking the processing of electromagnetic signals as an example, eight control experiments were performed on the sliding-window time-scale parameter with an interval of 5 days each time, verifying the effect of the model by counting the number of stations with an AUC index greater than 0.65. The effect of the model first increases and then decreases as the time scale of the sliding window grows, achieving its best in the range of [24 days, 30 days]; the sliding window size is therefore set to 27 days for electromagnetic signals according to the results in Figure 9a. Similarly, the sliding window size is determined to be 10 days for the geoacoustic signals, as shown in Figure 9b.

3.4. Softmax-AUC Index Weighting Method

The same model yields different prediction effects at different stations. Thus, this paper proposes a multi-station softmax-AUC index weighting method, which integrates the information of all stations for seismic prediction. The AUC index on the validation set measures the prediction effect of each station, so the softmax-AUC index weighting method selects suitable stations with AUC above 0.65. The prediction result of each station is 0 or 1, where 0 means no earthquake is predicted and 1 means an earthquake is predicted, and the station weight is normalized via $e^{\mathrm{AUC}_i}$. The threshold is set to 0.5: if the risk value (risk_value) is greater than the threshold (threshold_value), the region is identified as having an earthquake, and vice versa. The equations are defined as follows:
$$\mathrm{station\_risk\_value}_i = \frac{e^{\mathrm{AUC}_i}}{\sum_j e^{\mathrm{AUC}_j}} \cdot \mathrm{pred}_i$$

$$\mathrm{risk\_value} = \sum_i \mathrm{station\_risk\_value}_i$$

$$\mathrm{area\_pred} = \begin{cases} 1 & \text{if } \mathrm{risk\_value} > \mathrm{threshold\_value} \\ 0 & \text{if } \mathrm{risk\_value} \le \mathrm{threshold\_value}, \end{cases}$$
where $\mathrm{station\_risk\_value}_i$ is the risk index of station $i$, $\mathrm{pred}_i$ is the prediction result of station $i$, threshold_value is the threshold (set to 0.5), and area_pred is the final regional prediction result.
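The weighting scheme can be sketched as follows, assuming, as stated above, that only stations with validation AUC above 0.65 participate (function and argument names are illustrative):

```python
import numpy as np

def area_prediction(aucs, preds, auc_floor=0.65, threshold=0.5):
    """Softmax-AUC weighted vote: each qualifying station contributes its
    0/1 prediction weighted by e^AUC normalized over qualifying stations."""
    aucs = np.asarray(aucs, float)
    preds = np.asarray(preds, float)
    keep = aucs > auc_floor                  # stations with AUC above the floor
    w = np.exp(aucs[keep])
    w /= w.sum()                             # softmax normalization
    risk_value = np.sum(w * preds[keep])
    return int(risk_value > threshold), risk_value

# Four stations; the 0.60-AUC station is excluded from the vote.
pred, risk = area_prediction([0.70, 0.80, 0.60, 0.75], [1, 1, 1, 0])
```

Here two of the three qualifying stations predict an earthquake, so the weighted risk value exceeds 0.5 and the region is flagged.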

3.5. Model Evaluation Indicators

This paper focuses not only on the accuracy of earthquake prediction but also on missed and false predictions. Therefore, the indicators PA, PP, RP, PN, and RN are used to evaluate the model; the closer these indicators are to 1, the better the prediction effect. The threshold is set to 0.5, and a region is identified as having an earthquake if its risk value exceeds the threshold. Based on this condition, the numbers of predicted earthquakes and predicted no-earthquakes can be counted, and the indicators computed from the prediction results illustrate the effect of the model. The indicators are explained below.
Precision all (PA) is the total accuracy: the number of correct predictions divided by the total number of predictions.

$$\mathrm{PA} = \frac{TP + TN}{TP + FN + FP + TN}$$

Precision positive (PP) is the accuracy of predicting earthquakes: the number of correctly predicted earthquakes divided by the number of predicted earthquakes.

$$\mathrm{PP} = \frac{TP}{TP + FP}$$

Recall positive (RP) is the recall rate of earthquakes: the number of correctly predicted earthquakes divided by the total number of actual earthquakes.

$$\mathrm{RP} = \frac{TP}{TP + FN}$$

Precision negative (PN) is the accuracy of predicting no-earthquakes: the number of correctly predicted no-earthquakes divided by the total number of predicted no-earthquakes.

$$\mathrm{PN} = \frac{TN}{FN + TN}$$

Recall negative (RN) is the recall rate of no-earthquakes: the number of correctly predicted no-earthquakes divided by the total number of actual no-earthquakes.

$$\mathrm{RN} = \frac{TN}{FP + TN},$$

where TP is true positive, FP is false positive, TN is true negative, and FN is false negative.
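The five indicators can be computed together from the four confusion counts (an illustrative helper with made-up counts, not results from the paper):

```python
def prediction_metrics(tp, fp, tn, fn):
    """Compute the paper's five evaluation indicators from confusion counts."""
    return {
        "PA": (tp + tn) / (tp + fn + fp + tn),   # overall accuracy
        "PP": tp / (tp + fp),                    # precision on earthquakes
        "RP": tp / (tp + fn),                    # recall on earthquakes
        "PN": tn / (fn + tn),                    # precision on no-earthquakes
        "RN": tn / (fp + tn),                    # recall on no-earthquakes
    }

# Hypothetical counts for one station's validation set.
m = prediction_metrics(tp=8, fp=2, tn=80, fn=10)
```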
The magnitude prediction and the epicenter latitude and longitude prediction together form a three-output regression task. This paper uses two indicators, the magnitude absolute mean deviation (mag_mae) and the distance average deviation (distance_average), to evaluate the model; they measure the absolute deviation of the magnitude prediction and the distance deviation of the epicenter prediction, respectively. The two indicators are explained below:
Magnitude of absolute mean deviation (mag_mae) represents the average of the absolute deviation between the predicted magnitude and the actual magnitude. It can reflect the accuracy of the magnitude prediction. The calculation method is defined as follows:
$$\mathrm{mag\_mae} = \frac{1}{m} \sum_{i=1}^{m} |y_i - \hat{y}_i|,$$
where $y_i$ represents the actual magnitude and $\hat{y}_i$ represents the predicted magnitude.
Distance average deviation (distance_average) represents the mean value of the difference between the actual and predicted positions of the epicenter. It can reflect the accuracy of the epicenter prediction. The calculation method is defined as follows:
$$\mathrm{distance\_average} = \frac{1}{m} \sum_{i=1}^{m} \mathrm{geodesic}\big((y_{i\_lat},\, y_{i\_lon}),\ (\hat{y}_{i\_lat},\, \hat{y}_{i\_lon})\big),$$
where $y_{i\_lat}$ and $y_{i\_lon}$ represent the actual epicenter latitude and longitude, respectively, and $\hat{y}_{i\_lat}$ and $\hat{y}_{i\_lon}$ represent the predicted epicenter latitude and longitude, respectively.
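A sketch of distance_average using the haversine great-circle formula; the paper's exact geodesic routine is not specified, so haversine on a spherical Earth is used here as an approximation.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points
    on a spherical Earth of radius 6371 km."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def distance_average(actual, predicted):
    """Mean epicenter deviation over paired (lat, lon) tuples."""
    return sum(haversine_km(a[0], a[1], p[0], p[1])
               for a, p in zip(actual, predicted)) / len(actual)

# One degree of longitude at 30° N is roughly 96 km.
dev = distance_average([(30.0, 103.0)], [(30.0, 104.0)])
```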

4. Results

This paper compares the models on four indicators: AUC, RP, distance_average, and mag_mae. Specifically, the AUC metric represents the overall performance of the model for earthquake prediction, covering correct, false, and missed predictions. RP measures the recall of earthquake prediction, and its expression is the same as that of TPR. The distance_average and mag_mae evaluate the deviation of the epicenter and magnitude predictions, respectively. For each indicator, the more stations that meet the criterion, the better the model.

4.1. Prediction Results of Non-Time Series Models

The results of the non-time series models were verified in the Sichuan–Yunnan region, which are shown in Table 1. The data in the table represent the number of corresponding stations.
The results of the five non-time series prediction models show that the LightGBM model and the NN model have better prediction results than the other models. The two indicators of the LightGBM model, AUC and distance_average, are better than those of the NN model, but the RP and mag_mae are worse than those of the NN model.

4.2. Prediction Results of the Time Series Models

The results of the time series models are verified in the Sichuan–Yunnan region, which are shown in Table 2. The data in the table represent the number of corresponding stations.
Overall, all three time series prediction models achieve good results. The AUC and distance_average indicators of the LSTM model are better than those of the other models. In the next section, the prediction results of the above eight models are compared in detail.

5. Discussion

5.1. Comparison of Non-Time Series Models and Time Series Models

To determine a more suitable earthquake prediction model for the AETA data, this paper compares the effects of three time series models and five non-time series models on the validation set, as shown in Figure 10. The first five are non-time series prediction models, namely LightGBM, NN, SVM, GBDT, RF, and the last three are time series prediction models, namely LSTM, GRU, and CNN+GRU.
The input to a time series prediction model is two-dimensional time series feature data, which preserves the information in the raw AETA signals to the maximum extent. In contrast, the input to a non-time series prediction model is compressed and no longer retains the temporal ordering of the raw data, so such a model struggles to learn changes along the time dimension.
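The contrast between the two input formats can be made concrete. The feature count of 95 matches the feature library totals in Appendix A (51 EM + 44 GA features); the window length and the mean/std summary are illustrative assumptions, not the paper's exact preprocessing:

```python
import numpy as np

# One station's feature window: T time steps x F features (T assumed, F = 51 + 44).
T, F = 168, 95
window = np.random.default_rng(1).standard_normal((T, F))

# A time series model consumes the full 2-D window, keeping temporal order...
ts_input = window  # shape (168, 95)

# ...while a non-time series model sees a compressed summary vector; here the
# per-feature mean and standard deviation over time (an illustrative choice).
flat_input = np.concatenate([window.mean(axis=0), window.std(axis=0)])  # shape (190,)
```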
According to the bar chart, the three time series prediction models clearly outperform the five non-time series prediction models on all four indicators, and the LSTM model performs best overall. The per-station prediction results of the LSTM model are given in Appendix B, and its per-station magnitude and epicenter results in Appendix C.

5.2. Real-Earthquake Prediction

LSTM, the model with the best performance among these prediction models, was chosen to predict earthquakes with magnitude larger than 3.5 in the Sichuan–Yunnan region (22.00° N–34.00° N, 98.00° E–107.00° E) from April 2021 to July 2021, for 16 consecutive weeks. A prediction was issued every Sunday: a Y/N forecast for the next 7 days in the target region and, for a Y prediction, the expected epicenter and magnitude.
The whole prediction process is shown in Figure 11. To predict the next week's earthquake, the features of all stations in each region are first fed into that region's model to obtain a risk_value. If the risk_value is below the region's threshold_value, no earthquake is expected and the prediction ends. Otherwise, the magnitude and the epicenter latitude and longitude are predicted.
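The decision flow described above can be sketched as follows; the model objects and their `risk`/`mag`/`loc` callables are hypothetical stand-ins for the per-region models, not the AETA system's actual API:

```python
def predict_week(region_models, region_features, thresholds):
    """Sketch of the weekly decision flow: compare each region's risk_value to
    its threshold; only risky regions get magnitude and epicenter estimates."""
    predictions = {}
    for region, model in region_models.items():
        feats = region_features[region]
        risk_value = model["risk"](feats)
        if risk_value < thresholds[region]:
            predictions[region] = "N"  # below threshold: no earthquake expected
        else:
            # Risky region: also estimate magnitude and epicenter (lat, lon).
            predictions[region] = {
                "magnitude": model["mag"](feats),
                "epicenter": model["loc"](feats),
            }
    return predictions

# Toy usage with constant stand-in models for two regions.
models = {
    "area1": {"risk": lambda f: 0.9, "mag": lambda f: 4.0, "loc": lambda f: (25.6, 100.0)},
    "area2": {"risk": lambda f: 0.1, "mag": lambda f: 0.0, "loc": lambda f: (0.0, 0.0)},
}
weekly = predict_week(models, {"area1": None, "area2": None},
                      {"area1": 0.5, "area2": 0.5})
```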
The LSTM model was used for real-earthquake prediction for 16 consecutive weeks based on the multi-station decision mechanism. Most of the large prediction deviations occurred in the first month, when it was difficult to set a reasonable regional risk threshold. To reduce these deviations, we ran several historical back-tests, adjusted the regional risk thresholds, and optimized the model, which improved the prediction results. AETA also continuously collects new data: every two weeks, the newly collected data are added to the sample set and the model is updated incrementally; every month, the training and testing sets are regenerated from all the data and the whole model is retrained. The model with the best performance on the validation set is saved.
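The stated maintenance cadence (incremental updates every two weeks, a full retrain every month) can be sketched as a simple schedule; mapping "monthly" to every fourth week is an assumption made for illustration:

```python
def maintenance_action(week_index):
    """Return the model-maintenance step for a given week, per the stated
    cadence: full retrain every 4 weeks (assumed to approximate 'monthly'),
    incremental update on the remaining even weeks, otherwise no update."""
    if week_index % 4 == 0:
        return "full retrain"
    if week_index % 2 == 0:
        return "incremental update"
    return "no update"
```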
This paper focuses on the earthquake with the largest magnitude within each week. Table 3 shows the results of the real-earthquake prediction for 16 consecutive weeks; the letter "N" indicates that no earthquake occurred or that none was predicted.
The model made 10 correct predictions, 2 false predictions, and 4 missed predictions, giving an accuracy of 0.625 (the number of correct predictions divided by the total number of weeks). Some predictions deviate only slightly, such as those in the eighth, eleventh, twelfth, thirteenth, and sixteenth weeks. In addition, the model placed in the top three of the second AETA earthquake prediction competition (https://competition.aeta.io/, accessed on 1 September 2021), which further supports its effectiveness.
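The reported accuracy follows directly from the tallies:

```python
# Tallies from Table 3: correct, false, and missed predictions over 16 weeks.
correct, false_pred, missed = 10, 2, 4
total_weeks = 16
assert correct + false_pred + missed == total_weeks
accuracy = correct / total_weeks  # 10 / 16 = 0.625
```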
However, some actual earthquakes were not predicted, and predictions near Yunnan were sometimes inaccurate. After investigation and discussion, we attribute this to station distribution: the stations in Sichuan province are densely and evenly installed and can capture anomaly signals in time, whereas the stations in Yunnan province are mostly scattered, leaving a shortage of monitoring data and anomaly signals. In the future, the number of stations installed near Yunnan, or the weight of Yunnan in the model, should therefore be increased.

6. Conclusions

AETA, a multi-component earthquake detection and prediction system, was developed independently by the Key Laboratory of Integrated Microsystems (IMS) of Peking University Shenzhen Graduate School. After five years of acquisition, a large amount of data has been accumulated. This paper constructs several non-time series and time series models for earthquake prediction based on the feature data extracted from the AETA raw signals. After comparison and analysis, the LSTM model is confirmed to achieve the best results for earthquake prediction and is selected for real-earthquake prediction over 16 consecutive weeks. In addition, this paper proposes a multi-station softmax-AUC method with index weighting to handle the fact that the same model performs differently at different stations. This model placed in the top three of the second AETA earthquake prediction competition.

Author Contributions

Conceptualization, C.W. and S.Y.; methodology, C.W. and C.Y.; software, C.W. and C.L.; validation, C.W., S.Y. and X.W.; formal analysis, C.W.; investigation, C.W. and C.L.; resources, S.Y. and X.W.; data curation, C.W., C.L. and C.Y.; writing—original draft preparation, C.W.; writing—review and editing, S.Y. and X.W.; visualization, C.W. and C.L.; supervision, S.Y. and X.W.; project administration, S.Y. and X.W.; funding acquisition, S.Y. and X.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Shenzhen Science and Technology Program (grant number JCYJ20200109120404043) and the Youth Innovation Talent Project of Guangdong Province Universities (grant number 2021KQNCX112).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available from the authors upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. AETA Feature Library.
| Type | Feature | Meaning | Number of EM Features | Number of GA Features |
| Time domain features | abs_mean | Mean of absolute value | 2 | 2 |
| | var | Variance | 2 | 1 |
| | power | Power | 2 | 1 |
| | skew | Skewness | 2 | 1 |
| | kurt | Kurtosis | 2 | 1 |
| | abs_max | Maximum absolute value | 2 | 1 |
| | abs_top_x | Absolute maximum x% of position | 4 | 2 |
| | energy_sstd | Standard deviation of short-time energy | 2 | 1 |
| | energy_smax | Short-time maximum energy | 2 | 1 |
| | s_zero_rate | Short-time average over-zero rate | 0 | 1 |
| | s_zero_rate_max | Short-time maximum over-zero rate | 0 | 1 |
| Frequency domain features | power_rate_atob | Power from a~b Hz in the frequency spectrum | 11 | 11 |
| | frequency_center | Center of gravity frequency | 1 | 1 |
| | mean_square_frequency | Mean square frequency | 1 | 1 |
| | variance_frequency | Frequency variance | 1 | 1 |
| | frequency_entropy | Entropy of the spectrum | 1 | 1 |
| Wavelet transforms | levelx_absmean | Mean value after the reconstruction of layer x | 4 | 4 |
| | levelx_energy | Energy after the reconstruction of layer x | 4 | 4 |
| | levelx_energy_svar | Variance of the energy value after the reconstruction of layer x | 4 | 4 |
| | levelx_energy_smax | Maximum value of energy after the reconstruction of layer x | 4 | 4 |
| Total | | | 51 | 44 |

Appendix B

Six indicators (AUC, PA, PP, RP, PN, and RN) are used to evaluate the accuracy of the model as well as its missed and false predictions. The evaluation indicators are described in detail in Section 3.
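Assuming the usual expansions of these abbreviations (PA = overall prediction accuracy, PP/RP = precision and recall on the positive class, PN/RN = precision and recall on the negative class; an interpretation consistent with RP being the TPR), they can be computed from a confusion matrix:

```python
def classification_indicators(tp, fp, tn, fn):
    """Compute the five rate indicators from confusion-matrix counts.
    The abbreviation expansions are assumptions, not the paper's definitions."""
    return {
        "PA": (tp + tn) / (tp + fp + tn + fn),  # overall accuracy
        "PP": tp / (tp + fp),                   # positive-class precision
        "RP": tp / (tp + fn),                   # positive-class recall (TPR)
        "PN": tn / (tn + fn),                   # negative-class precision
        "RN": tn / (tn + fp),                   # negative-class recall (TNR)
    }
```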
Table A2. Prediction results of LSTM on area1.
| No. | Station | AUC | PA | PP | RP | PN | RN |
| 1 | DJY | 0.68 | 0.68 | 0.64 | 0.81 | 0.74 | 0.55 |
| 2 | SMSD | 0.68 | 0.68 | 0.60 | 1.00 | 1.00 | 0.37 |
| 3 | QC | 0.69 | 0.69 | 0.69 | 0.71 | 0.69 | 0.67 |
| 4 | WC | 0.67 | 0.67 | 0.73 | 0.55 | 0.62 | 0.79 |
| 5 | BX | 0.78 | 0.78 | 0.83 | 0.71 | 0.74 | 0.85 |
| 6 | GZYJ | 0.80 | 0.80 | 0.87 | 0.71 | 0.75 | 0.89 |
| 7 | EB | 0.66 | 0.66 | 0.60 | 1.00 | 1.00 | 0.33 |
| 8 | GYCT | 0.68 | 0.69 | 0.66 | 0.84 | 0.75 | 0.53 |
| 9 | JC | 0.82 | 0.82 | 0.76 | 0.96 | 0.94 | 0.69 |
| 10 | DF | 0.76 | 0.76 | 0.69 | 1.00 | 1.00 | 0.52 |
| 11 | QCYD | 0.76 | 0.77 | 0.96 | 0.54 | 0.70 | 0.98 |
| 12 | QCPS | 0.78 | 0.78 | 0.87 | 0.67 | 0.72 | 0.90 |
| 13 | CZ | 0.81 | 0.81 | 0.73 | 1.00 | 1.00 | 0.62 |
| 14 | PWHY | 0.78 | 0.77 | 0.94 | 0.60 | 0.68 | 0.95 |
| 15 | SPMJ | 0.68 | 0.68 | 0.72 | 0.59 | 0.66 | 0.77 |
| 16 | PWBM | 0.74 | 0.75 | 0.68 | 1.00 | 1.00 | 0.49 |
| 17 | JCAN | 0.66 | 0.66 | 0.96 | 0.33 | 0.60 | 0.99 |
| 18 | YAYJ | 0.69 | 0.69 | 0.70 | 0.67 | 0.68 | 0.71 |
| 19 | HS | 0.77 | 0.77 | 0.69 | 1.00 | 1.00 | 0.55 |
| 20 | MXDX | 0.69 | 0.70 | 0.89 | 0.44 | 0.63 | 0.95 |
| 21 | JZG4 | 0.71 | 0.71 | 0.72 | 0.67 | 0.70 | 0.75 |
| 22 | JZG5 | 0.73 | 0.73 | 0.71 | 0.80 | 0.75 | 0.65 |
| 23 | JZG2 | 0.70 | 0.70 | 0.65 | 0.82 | 0.77 | 0.57 |
| 24 | PWNB | 0.69 | 0.68 | 0.65 | 0.75 | 0.72 | 0.62 |
| 25 | JZG1 | 0.68 | 0.68 | 0.68 | 0.71 | 0.68 | 0.66 |
| 26 | WXZZ | 0.80 | 0.80 | 0.72 | 1.00 | 1.00 | 0.60 |
| 27 | HYA | 0.67 | 0.65 | 0.58 | 1.00 | 1.00 | 0.34 |
| 28 | DL | 0.80 | 0.80 | 0.79 | 0.76 | 0.81 | 0.83 |
| 29 | BK | 0.68 | 0.68 | 0.70 | 0.64 | 0.66 | 0.72 |
| 30 | HBY | 0.72 | 0.72 | 0.75 | 0.65 | 0.70 | 0.79 |
| 31 | REG | 0.88 | 0.88 | 0.83 | 0.96 | 0.95 | 0.79 |
| 32 | EMHW | 0.95 | 0.95 | 0.91 | 1.00 | 1.00 | 0.90 |
| 33 | JYZJJ | 0.93 | 0.93 | 0.87 | 1.00 | 1.00 | 0.86 |
| 34 | LSSW | 0.69 | 0.70 | 0.82 | 0.47 | 0.65 | 0.91 |
| 35 | RXCS | 0.96 | 0.96 | 0.93 | 1.00 | 1.00 | 0.93 |
| 36 | ZGDA | 0.77 | 0.77 | 0.68 | 1.00 | 1.00 | 0.55 |
| 37 | MYBC | 0.86 | 0.86 | 0.77 | 1.00 | 1.00 | 0.72 |
Table A3. Prediction results of LSTM on area2.
| No. | Station | AUC | PA | PP | RP | PN | RN |
| 1 | MB | 0.65 | 0.65 | 0.59 | 0.73 | 0.72 | 0.58 |
| 2 | LB | 0.76 | 0.76 | 0.77 | 0.79 | 0.75 | 0.72 |
| 3 | ML | 0.73 | 0.75 | 0.81 | 0.57 | 0.72 | 0.89 |
| 4 | EMS | 0.70 | 0.68 | 0.59 | 0.89 | 0.85 | 0.50 |
| 5 | XJX | 0.66 | 0.66 | 0.60 | 0.68 | 0.72 | 0.64 |
| 6 | DF | 0.65 | 0.65 | 0.68 | 0.66 | 0.62 | 0.65 |
| 7 | XCXM | 0.65 | 0.65 | 0.70 | 0.59 | 0.61 | 0.71 |
| 8 | LDDZ | 0.65 | 0.66 | 0.69 | 0.54 | 0.64 | 0.77 |
| 9 | YAYJ | 0.66 | 0.68 | 0.62 | 1.00 | 1.00 | 0.32 |
| 10 | LSBS | 0.69 | 0.69 | 0.78 | 0.52 | 0.64 | 0.85 |
| 11 | HYA | 0.85 | 0.84 | 0.75 | 1.00 | 1.00 | 0.70 |
| 12 | MSQS | 0.67 | 0.70 | 0.68 | 0.52 | 0.71 | 0.83 |
| 13 | EMGQ | 0.68 | 0.73 | 0.95 | 0.37 | 0.69 | 0.99 |
| 14 | MBMZ | 0.68 | 0.63 | 0.92 | 0.41 | 0.53 | 0.95 |
| 15 | MBRD | 0.75 | 0.76 | 0.77 | 0.80 | 0.75 | 0.70 |
| 16 | MBYJ | 0.69 | 0.69 | 0.65 | 0.63 | 0.73 | 0.74 |
| 17 | WTQ | 0.77 | 0.77 | 0.69 | 0.79 | 0.83 | 0.74 |
| 18 | NJWYYLZ | 0.66 | 0.66 | 0.61 | 0.71 | 0.71 | 0.60 |
| 19 | YBYX | 0.98 | 0.98 | 0.95 | 1.00 | 1.00 | 0.95 |
| 20 | LSFZJZ | 0.75 | 0.76 | 0.76 | 0.65 | 0.75 | 0.84 |
| 21 | ZGDA | 0.75 | 0.75 | 0.69 | 0.76 | 0.80 | 0.74 |
Table A4. Prediction results of LSTM on area3.
| No. | Station | AUC | PA | PP | RP | PN | RN |
| 1 | TH | 0.69 | 0.69 | 0.62 | 1.00 | 1.00 | 0.38 |
| 2 | CX | 0.80 | 0.80 | 0.71 | 1.00 | 1.00 | 0.61 |
| 3 | QJ | 0.68 | 0.69 | 0.68 | 0.73 | 0.69 | 0.64 |
| 4 | LJSD | 0.78 | 0.78 | 0.78 | 0.79 | 0.78 | 0.77 |
| 5 | SPI | 0.85 | 0.86 | 0.78 | 1.00 | 1.00 | 0.71 |
| 6 | DHZ | 0.68 | 0.67 | 0.62 | 0.87 | 0.79 | 0.49 |
| 7 | DR | 0.73 | 0.73 | 0.87 | 0.53 | 0.66 | 0.92 |
| 8 | DLSL | 0.77 | 0.74 | 0.63 | 0.96 | 0.95 | 0.58 |
| 9 | JN | 0.85 | 0.85 | 0.86 | 0.85 | 0.85 | 0.86 |
| 10 | YX | 0.68 | 0.68 | 0.90 | 0.40 | 0.62 | 0.95 |
| 11 | KM | 0.70 | 0.71 | 0.83 | 0.50 | 0.66 | 0.90 |
| 12 | LJYS | 0.67 | 0.67 | 0.61 | 1.00 | 1.00 | 0.33 |
| 13 | LJDZ | 0.76 | 0.76 | 0.68 | 1.00 | 1.00 | 0.52 |
| 14 | JZS | 0.91 | 0.91 | 0.97 | 0.86 | 0.87 | 0.97 |
| 15 | LJLD | 0.69 | 0.69 | 0.86 | 0.47 | 0.62 | 0.92 |
| 16 | TC | 0.79 | 0.79 | 0.73 | 0.93 | 0.90 | 0.65 |
| 17 | DQZ | 0.68 | 0.68 | 0.67 | 0.73 | 0.70 | 0.63 |
| 18 | JP | 0.87 | 0.87 | 0.79 | 1.00 | 1.00 | 0.73 |
| 19 | HH | 0.83 | 0.83 | 0.75 | 1.00 | 1.00 | 0.66 |
| 20 | TCMZ | 0.80 | 0.80 | 0.90 | 0.67 | 0.75 | 0.93 |
| 21 | LJNL | 0.68 | 0.68 | 0.77 | 0.53 | 0.63 | 0.83 |
| 22 | YYLG | 0.65 | 0.64 | 0.57 | 1.00 | 1.00 | 0.31 |
| 23 | XCH | 0.87 | 0.87 | 0.79 | 1.00 | 1.00 | 0.74 |
| 24 | DLHZ | 0.92 | 0.91 | 0.84 | 1.00 | 1.00 | 0.83 |
| 25 | XGLL | 0.90 | 0.90 | 0.83 | 1.00 | 1.00 | 0.79 |

Appendix C

Two indicators, mag_mae and distance_average, evaluate the absolute magnitude deviation and the epicenter deviation of the model, respectively. The evaluation indicators are described in detail in Section 3.
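A plausible way to score epicenter error in kilometers is the mean great-circle (haversine) distance between actual and predicted epicenters; the exact distance formula used by the paper is not specified, so this is an assumption:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def distance_average(true_points, pred_points):
    """Mean epicenter deviation over paired (lat, lon) predictions."""
    ds = [haversine_km(a, b, c, d) for (a, b), (c, d) in zip(true_points, pred_points)]
    return sum(ds) / len(ds)
```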
Table A5. Prediction results of LSTM model for magnitude and epicenter on area1.
| No. | Station | Mag_Mae | Distance_Average (km) |
| 1 | DJY | 0.26 | 98.83 |
| 2 | SMWJ | 0.16 | 57.97 |
| 3 | LXSM | 0.44 | 119.46 |
| 4 | WC | 0.12 | 96.04 |
| 5 | GYCT | 0.08 | 102.62 |
| 6 | SP | 0.32 | 50.16 |
| 7 | QCYD | 0.27 | 100.69 |
| 8 | QCPS | 0.08 | 50.48 |
| 9 | YAYJ | 0.14 | 50.47 |
| 10 | QCCB | 0.18 | 44.30 |
| 11 | HS | 0.25 | 47.65 |
| 12 | JZG2 | 0.15 | 19.53 |
| 13 | LSBS | 0.15 | 50.04 |
| 14 | MSQS | 0.40 | 48.30 |
| 15 | HBY | 0.17 | 82.37 |
| 16 | REG | 0.38 | 72.72 |
| 17 | WTQ | 0.26 | 26.32 |
| 18 | NJWYYLZ | 0.13 | 26.88 |
| 19 | JYZJJ | 0.09 | 14.30 |
| 20 | LSSW | 0.13 | 18.91 |
| 21 | RXCS | 0.27 | 63.67 |
| 22 | LSFZJZ | 0.10 | 16.96 |
| 23 | LSJJRMZF | 0.08 | 13.22 |
| 24 | ZGDA | 0.08 | 21.58 |
| 25 | MYBC | 0.10 | 95.15 |
| 26 | SMAS | 0.13 | 95.60 |
Table A6. Prediction results of LSTM model for magnitude and epicenter on area2.
| No. | Station | Mag_Mae | Distance_Average (km) |
| 1 | CX | 0.25 | 77.79 |
| 2 | SMWJ | 0.25 | 80.03 |
| 3 | QW | 0.40 | 82.78 |
| 4 | GAX | 0.20 | 93.71 |
| 5 | YYYT | 0.13 | 97.71 |
| 6 | EB | 0.57 | 79.23 |
| 7 | MS | 0.22 | 85.35 |
| 8 | DF | 0.33 | 75.27 |
| 9 | YM | 0.13 | 96.95 |
| 10 | LDDZ | 0.50 | 79.81 |
| 11 | KM | 0.21 | 60.83 |
| 12 | CZ | 0.40 | 60.74 |
| 13 | MNLZ | 0.19 | 94.53 |
| 14 | HYA | 0.46 | 88.26 |
| 15 | YSHX | 0.13 | 96.26 |
| 16 | MBQB | 0.17 | 97.19 |
| 17 | MBMZ | 0.39 | 57.86 |
| 18 | MBSK | 0.41 | 37.05 |
| 19 | GYCT | 0.18 | 80.16 |
| 20 | SMAS | 0.22 | 91.43 |
| 21 | MBYJ | 0.15 | 85.54 |
| 22 | JYZJJ | 0.24 | 73.20 |
| 23 | RXCS | 0.30 | 58.51 |
| 24 | LSFZJZ | 0.22 | 83.11 |
| 25 | LSJJRMZF | 0.22 | 78.03 |
| 26 | ZGDA | 0.20 | 78.62 |
| 27 | YBCNQXJ | 0.64 | 59.24 |
| 28 | YBXWSHC | 0.27 | 96.84 |
Table A7. Prediction results of LSTM model for magnitude and epicenter on area3.
| No. | Station | Mag_Mae | Distance_Average (km) |
| 1 | TH | 0.06 | 42.34 |
| 2 | XC | 0.26 | 101.01 |
| 3 | DC | 0.15 | 99.96 |
| 4 | DHZ | 0.23 | 42.08 |
| 5 | XCXM | 0.68 | 117.21 |
| 6 | DLSL | 0.27 | 113.03 |
| 7 | YL | 0.30 | 55.52 |
| 8 | YM | 0.31 | 44.14 |
| 9 | HA | 0.06 | 109.35 |
| 10 | YX | 0.16 | 79.27 |
| 11 | LJYS | 0.05 | 78.47 |
| 12 | LJGC | 0.04 | 54.87 |
| 13 | DQZ | 0.03 | 59.05 |
| 14 | HH | 0.03 | 14.53 |
| 15 | LJDD | 0.06 | 42.34 |
| 16 | TCMZ | 0.26 | 101.01 |
| 17 | LJHP | 0.15 | 99.96 |
| 18 | TCRH | 0.23 | 42.08 |
| 19 | LJNL | 0.68 | 117.21 |
| 20 | XCH | 0.27 | 113.03 |

References

1. Wang, X.; Yong, S.; Xu, B.; Liang, Y.; Bai, Z.; An, H.; Zhang, X.; Huang, J.; Xie, Z.; Lin, K.; et al. Research and Implementation of Multi-component Seismic Monitoring System AETA. Acta Sci. Nat. Univ. Pekin. 2018, 54, 487–494.
2. Varotsos, P.; Alexopoulos, K. Physical properties of the variations of the electric field of the earth preceding earthquakes, I. Tectonophysics 1984, 110, 73–98.
3. Frasersmith, A.C.; Bernardi, A.; McGill, P.R.; Ladd, M.E.; Helliwell, R.A.; Villard, O.G. Low-frequency magnetic-field measurements near the epicenter of the ms-7.1 Loma-Prieta earthquake. Geophys. Res. Lett. 1990, 17, 1465–1468.
4. Huang, Q.; Ikeya, M. Seismic electromagnetic signals (SEMS) explained by a simulation experiment using electromagnetic waves. Phys. Earth Planet. Inter. 1998, 109, 107–114.
5. Varotsos, P.A.; Sarlis, N.V.; Skordas, E.S. Magnetic field variations associated with SES. Proc. Jpn. Acad. Ser. B Phys. Biol. Sci. 2001, 77, 87–92.
6. Varotsos, P.A.; Sarlis, N.V.; Skordas, E.S. Electric Fields that “Arrive” before the Time Derivative of the Magnetic Field prior to Major Earthquakes. Phys. Rev. Lett. 2003, 91, 148501.
7. Huang, Q. Controlled analogue experiments on propagation of seismic electromagnetic signals. Chin. Sci. Bull. 2005, 50, 1956–1961.
8. Uyeda, S.; Nagao, T.; Kamogawa, M. Short-term earthquake prediction: Current status of seismo-electromagnetics. Tectonophysics 2009, 470, 205–213.
9. Varotsos, P.A.; Sarlis, N.V.; Skordas, E.S. Identifying long-range correlated signals upon significant periodic data loss. Tectonophysics 2011, 503, 189–194.
10. Potirakis, S.M.; Karadimitrakis, A.; Eftaxias, K. Natural time analysis of critical phenomena: The case of pre-fracture electromagnetic emissions. Chaos 2013, 23, 23117.
11. Han, P.; Hattori, K.; Hirokawa, M.; Zhuang, J.; Chen, C.-H.; Febriani, F.; Yamaguchi, H.; Yoshino, C.; Liu, J.-Y.; Yoshida, S. Statistical analysis of ULF seismomagnetic phenomena at Kakioka, Japan, during 2001–2010. J. Geophys. Res. Space Phys. 2014, 119, 4998–5011.
12. Hayakawa, M.; Schekotov, A.; Potirakis, S.; Eftaxias, K. Criticality features in ULF magnetic fields prior to the 2011 Tohoku earthquake. Jpn. Acad. Ser. B Phys. Biol. Sci. 2015, 91, 25–30.
13. Han, P.; Hattori, K.; Huang, Q.; Hirooka, S.; Yoshino, C. Spatiotemporal characteristics of the geomagnetic diurnal variation anomalies prior to the 2011 Tohoku earthquake (Mw 9.0) and the possible coupling of multiple pre-earthquake phenomena. J. Asian Earth Sci. 2016, 129, 13–21.
14. Sarlis, N.V. Statistical Significance of Earth’s Electric and Magnetic Field Variations Preceding Earthquakes in Greece and Japan Revisited. Entropy 2018, 20, 561.
15. Sarlis, N.V.; Varotos, P.A.; Skordas, E.S.; Uyeda, S.; Zlotnicki, J.; Nagao, T.; Rybin, A.; Lazaridou-Varotsos, M.S.; Papadopoulou, K.A. Seismic electric signals in seismic prone areas. Earthq. Sci. 2018, 31, 44–51.
16. Varotsos, P.A.; Sarlis, N.V.; Skordas, E.S. Order Parameter and Entropy of Seismicity in Natural Time before Major Earthquakes: Recent Results. Geosciences 2022, 12, 225.
17. Zhang, Y.; Guo, H.; Yin, W.; Zhao, Z.; Ran, Q. Detection Method of Earthquake Disaster Image Anomaly Based on SIFT Feature and SVM Classification. J. Seismol. Res. 2019, 42, 265–272.
18. Jozinovic, D.; Lomax, A.; Stajduhar, I.; Michelini, A. Rapid prediction of earthquake ground shaking intensity using raw waveform data and a convolutional neural network. Geophys. J. Int. 2020, 222, 1379–1389.
19. Xiong, P.; Long, C.; Zhou, H.Y.; Battiston, R.; Zhang, X.M.; Shen, X.H. Identification of Electromagnetic Pre-Earthquake Perturbations from the DEMETER Data by Machine Learning. Remote Sens. 2020, 12, 3643.
20. Wang, L.; Wu, J.; Zhang, W.; Wang, L.; Cui, W. Efficient Seismic Stability Analysis of Embankment Slopes Subjected to Water Level Changes Using Gradient Boosting Algorithms. Front. Earth Sci. 2021, 9, 807317.
21. Saad, O.M.; Chen, Y.F.; Trugman, D.; Soliman, M.S.; Samy, L.; Savvaidis, A.; Khamis, M.A.; Hafez, A.G.; Fomel, S.; Chen, Y.K. Machine Learning for Fast and Reliable Source-Location Estimation in Earthquake Early Warning. IEEE Geosci. Remote Sens. Lett. 2022, 19, 8025705.
22. Kanarachos, S.; Christopoulos, S.R.G.; Chroneos, A.; Fitzpatrick, M.E. Detecting anomalies in time series data via a deep learning algorithm combining wavelets, neural networks and Hilbert transform. Expert Syst. Appl. 2017, 85, 292–304.
23. Zhou, Y.; Yue, H.; Kong, Q.; Zhou, S. Hybrid Event Detection and Phase-Picking Algorithm Using Convolutional and Recurrent Neural Networks. Seismol. Res. Lett. 2019, 90, 1079–1087.
24. Titos, M.; Bueno, A.; Garcia, L.; Benitez, M.C.; Ibanez, J. Detection and Classification of Continuous Volcano-Seismic Signals with Recurrent Neural Networks. IEEE Trans. Geosci. Remote Sens. 2019, 57, 1936–1948.
25. Jena, R.; Pradhan, B.; Alamri, A.M. Susceptibility to Seismic Amplification and Earthquake Probability Estimation Using Recurrent Neural Network (RNN) Model in Odisha, India. Appl. Sci. 2020, 10, 5355.
26. Xu, Y.; Lu, X.; Cetiner, B.; Taciroglu, E. Real-time regional seismic damage assessment framework based on long short-term memory neural network. Comput. Aided Civil Infrastruct. Eng. 2021, 36, 504–521.
27. Yan, X.; Shi, Z.M.; Wang, G.; Zhang, H.; Bi, E. Detection of possible hydrological precursor anomalies using long short-term memory: A case study of the 1996 Lijiang earthquake. J. Hydrol. 2021, 599, 126369.
28. Huang, Y.; Han, X.; Zhao, L. Recurrent neural networks for complicated seismic dynamic response prediction of a slope system. Eng. Geol. 2021, 289, 106198.
29. Xue, J.; Huang, Q.; Wu, S.; Nagao, T. LSTM-Autoencoder Network for the Detection of Seismic Electric Signals. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5917012.
30. Yong, S.; Wang, X.; Zhang, X.; Guo, Q.; Wang, J.; Yang, C.; Jiang, B.H. Periodic electromagnetic signals as potential precursor for seismic activity. J. Cent. South Univ. 2021, 28, 2463–2471.
31. Bao, Z.; Zhao, J.; Huang, P.; Yong, S.; Wang, X. Deep Learning-Based Electromagnetic Signal for Earthquake Magnitude Prediction. Sensors 2021, 21, 4434.
32. Yong, S.; Wang, X.; Pang, R.; Jin, X.; Zeng, J.; Han, C.; Xu, B.X. Development of Inductive Magnetic Sensor for Multi-component Seismic Monitoring System AETA. Acta Sci. Nat. Univ. Pekin. 2018, 54, 495–501.
33. Carmona-Cabezas, R.; Gomez-Gomez, J.; de Rave, E.G.; Jimenez-Hornero, F.J. A sliding window-based algorithm for faster transformation of time series into complex networks. Chaos 2019, 29, 103121.
34. Bao, Z.; Yong, S.; Wang, X.; Yang, C.; Xie, J.; He, C. Seismic Reflection Analysis of AETA Electromagnetic Signals. Appl. Sci. 2021, 11, 5869.
35. Hussein, A.S.; Li, T.R.; Yohannese, C.W.; Bashir, K. A-SMOTE: A New Preprocessing Approach for Highly Imbalanced Datasets by Improving SMOTE. Int. J. Comput. Intell. Syst. 2019, 12, 1412–1422.
36. Liang, W.; Luo, S.; Zhao, G.; Wu, H. Predicting Hard Rock Pillar Stability Using GBDT, XGBoost, and LightGBM Algorithms. Mathematics 2020, 8, 765.
37. Zhang, D.; Gong, Y. The Comparison of LightGBM and XGBoost Coupling Factor Analysis and Prediagnosis of Acute Liver Failure. IEEE Access 2020, 8, 220990–221003.
38. Abdi, H. A neural network primer. J. Biol. Syst. 1994, 2, 247–281.
39. Tsang, I.W.; Kwok, J.T.; Cheung, P.M. Core vector machines: Fast SVM training on very large data sets. J. Mach. Learn. Res. 2005, 6, 363–392.
40. Speiser, J.L.; Miller, M.E.; Tooze, J.; Ip, E. A comparison of random forest variable selection methods for classification prediction modeling. Expert Syst. Appl. 2019, 134, 93–101.
41. Zhang, W.; Li, H.; Tang, L.; Gu, X.; Wang, L.; Wang, L. Displacement prediction of Jiuxianping landslide using gated recurrent unit (GRU) networks. Acta Geotech. 2022, 17, 1367–1382.
42. Liu, Y.; Yong, S.; He, C.; Wang, X.; Bao, Z.; Xie, J.; Zhang, X. An Earthquake Forecast Model Based on Multi-Station PCA Algorithm. Appl. Sci. 2022, 12, 3311.
43. Christ, M.; Braun, N.; Neuffer, J.; Kempa-Liehr, A.W. Time Series Feature Extraction on basis of Scalable Hypothesis tests (tsfresh-A Python package). Neurocomputing 2018, 307, 72–77.
44. Santos, M.S.; Soares, J.P.; Abreu, P.H.; Araujo, H.; Santos, J. Cross-Validation for Imbalanced Datasets: Avoiding Overoptimistic and Overfitting Approaches. IEEE Comput. Intell. Mag. 2018, 13, 59–76.
45. Hosmer, D.W.; Lemeshow, S. Applied Logistic Regression; John Wiley & Sons, Ltd.: New York, NY, USA, 2000.
46. Fawcett, T. An introduction to ROC analysis. Pattern Recogn. Lett. 2006, 27, 861–874.
47. Sarlis, N.V.; Christopoulos, S.R.G. Visualization of the significance of Receiver Operating Characteristics based on confidence ellipses. Comput. Phys. Commun. 2014, 185, 1172–1176.
Figure 1. Distribution of earthquakes and stations in Sichuan–Yunnan region.
Figure 2. The process of sample construction.
Figure 3. Two-dimensional matrix SMOTE algorithm.
Figure 4. LSTM architecture schematic.
Figure 5. LSTM model structure.
Figure 6. GRU architecture schematic.
Figure 7. CNN+GRU stacking network architecture.
Figure 8. Schematic diagram of five-fold cross-validation.
Figure 9. (a) Effect of window size for electromagnetic signals on prediction model. (b) Effect of time window size for geoacoustic signals on prediction model.
Figure 10. Overall results of the eight prediction models.
Figure 11. Decision process of earthquake-prediction model.
Table 1. The results of non-time series models (number of stations satisfying each threshold).
| Model | AUC ≥ 0.65 | RP ≥ 0.70 | Distance_Average ≤ 100 km | Mag_Mae ≤ 0.25 |
| LightGBM | 68 | 43 | 52 | 36 |
| NN | 57 | 49 | 47 | 39 |
| SVM | 51 | 39 | 43 | 41 |
| GBDT | 36 | 42 | 39 | 43 |
| RF | 45 | 37 | 41 | 32 |
Table 2. The results of time series models (number of stations satisfying each threshold).
| Model | AUC ≥ 0.65 | RP ≥ 0.70 | Distance_Average ≤ 100 km | Mag_Mae ≤ 0.25 |
| LSTM | 84 | 55 | 64 | 47 |
| GRU | 82 | 58 | 61 | 48 |
| CNN+GRU | 79 | 49 | 56 | 40 |
Table 3. The results of real-earthquake prediction.
| Week | Actual Magnitude | Predicted Magnitude | Actual Epicenter | Predicted Epicenter |
| 1st week (5 April 2021–11 April 2021) | N | N | N | N |
| 2nd week (12 April 2021–18 April 2021) | N | Ms4.0 | N | (28.38° N, 104.76° E) |
| 3rd week (19 April 2021–25 April 2021) | N | N | N | N |
| 4th week (26 April 2021–2 May 2021) | N | N | N | N |
| 5th week (3 May 2021–9 May 2021) | Ms3.6 | N | (32.4° N, 104.02° E) | N |
| 6th week (10 May 2021–16 May 2021) | Ms4.7 | N | (24.43° N, 99.24° E) | N |
| 7th week (17 May 2021–23 May 2021) | Ms6.4 | Ms3.9 | (25.67° N, 99.87° E) | (28.41° N, 104.65° E) |
| 8th week (24 May 2021–30 May 2021) | Ms4.1 | Ms4.5 | (25.74° N, 99.95° E) | (25.59° N, 99.95° E) |
| 9th week (31 May 2021–6 June 2021) | N | Ms4.1 | N | (25.64° N, 99.98° E) |
| 10th week (7 June 2021–13 June 2021) | Ms5.1 | N | (24.34° N, 101.91° E) | N |
| 11th week (14 June 2021–20 June 2021) | Ms4.2 | Ms4.2 | (24.33° N, 101.91° E) | (24.53° N, 99.41° E) |
| 12th week (21 June 2021–27 June 2021) | Ms3.8 | Ms4.0 | (32.2° N, 104.94° E) | (24.31° N, 101.87° E) |
| 13th week (28 June 2021–4 July 2021) | Ms4.6 | Ms3.9 | (24.31° N, 101.89° E) | (32.08° N, 104.57° E) |
| 14th week (5 July 2021–11 July 2021) | Ms4.7 | N | (24.43° N, 99.24° E) | N |
| 15th week (12 July 2021–18 July 2021) | Ms4.8 | Ms3.9 | (32.97° N, 103.84° E) | (28.12° N, 104.64° E) |
| 16th week (19 July 2021–25 July 2021) | Ms4.1 | Ms4.0 | (29.28° N, 105.44° E) | (28.14° N, 104.69° E) |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Wang, C.; Li, C.; Yong, S.; Wang, X.; Yang, C. Time Series and Non-Time Series Models of Earthquake Prediction Based on AETA Data: 16-Week Real Case Study. Appl. Sci. 2022, 12, 8536. https://doi.org/10.3390/app12178536
