Article

Research on Microseismic Magnitude Prediction Method Based on Improved Residual Network and Transfer Learning

School of Intelligence Science and Technology, Beijing University of Civil Engineering and Architecture, Beijing 102600, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(15), 8246; https://doi.org/10.3390/app15158246
Submission received: 29 June 2025 / Revised: 19 July 2025 / Accepted: 20 July 2025 / Published: 24 July 2025
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract

To achieve more precise and effective microseismic magnitude estimation, a classification model based on transfer learning with an improved deep residual network is proposed for predicting microseismic magnitudes. Initially, microseismic waveform images are preprocessed through cropping and blurring before being used as inputs to the model. Subsequently, the microseismic waveform image dataset is divided into training, testing, and validation sets. By leveraging the pretrained ResNet18 model weights from ImageNet, a transfer learning strategy is implemented in which all layers of the network are fine-tuned on the target data. Following this, the Convolutional Block Attention Module (CBAM) is introduced for model optimization, resulting in a new network model. Finally, this model is applied to seismic magnitude classification to enable microseismic magnitude prediction. The model is validated and compared with other commonly used neural network models. The experiment uses microseismic waveform data and images of magnitudes 0–3 from the Stanford Earthquake Dataset (STEAD) as training samples. The results indicate that the model achieves an accuracy of 87% within an error range of ±0.2 and 94.7% within an error range of ±0.3. The model demonstrates enhanced stability and reliability and effectively addresses the issue of missing data labels. These results confirm that ResNet transfer learning combined with an attention mechanism yields higher accuracy in microseismic magnitude prediction, as well as the effectiveness of the CBAM.

1. Introduction

Predicting earthquake magnitude is a highly challenging yet crucial issue in seismology [1]. The diverse properties of seismic waves—such as low-frequency and high-frequency energy, surface waves, and body waves—are closely related to magnitude measurement, reflecting distinct source characteristics, earthquake sizes, and ranges of epicentral distances. The challenges in earthquake magnitude prediction include the limited number of monitoring stations [2], short epicentral distances, and scarcity of training samples. In seismological research, the reliability of labeled data varies significantly, and there is a lack of high-quality labeled datasets to serve as ground truth as well as a shortage of standardized benchmarks. These issues may lead to incomplete seismic data and low accuracy in magnitude prediction.
The continuous development and application of machine learning in seismology have enabled the processing of large volumes of data, covering seismic phase picking [3], earthquake recognition [4], the determination of earthquake source mechanisms [5], and earthquake magnitude prediction. As a result, machine learning has been widely adopted in earthquake data processing, model selection and development, and result analysis. With the advancement of seismology and machine learning, researchers have made the following progress:
In China, Liu Tao [6] used a convolutional neural network (CNN) with seismic acceleration information as the model input for training and testing. This method achieved an accuracy of 92.3%; however, overfitting occurred during training. Additionally, earthquake acceleration time records contain only limited magnitude information, and there is a problem of missing original label data.
To address overfitting caused by large datasets, Lin Binhua [7] constructed a CNN magnitude prediction model with 3 s waveform input. By transforming the prediction problem into a classification problem and testing the model with new samples in 2019, it was concluded that the accuracy within a magnitude error range of ±0.3 was satisfactory. Nevertheless, this method did not analyze the effects of noise or waveform data beyond the 3 s window.
Zhu Jingbao [8] also utilized waveforms for magnitude prediction, selecting wave characteristic parameters such as amplitude and period as inputs to construct a Support Vector Machine (SVM)-based magnitude prediction model. Compared with traditional methods, this approach yielded a smaller prediction error (approximately 0.295), improved the prediction of microearthquakes, reduced the influence of epicentral distance, and demonstrated reliability across different events.
Chen-hui Wang [9] employed a Generalized Regression Neural Network (GRNN) for earthquake magnitude prediction, using seven parameters including the cumulative earthquake frequency and released energy as inputs. Through principal component analysis and a particle swarm algorithm, an optimized model was developed. This model effectively reduced the dimensionality, enhancing the prediction accuracy and computational efficiency, with an average error of approximately 5.17% between the predicted and actual values—outperforming both backpropagation (BP) neural networks and standard GRNNs.
Mousavi [10] designed a regressor combining a convolutional neural network (CNN) and a Recurrent Neural Network (RNN) to predict earthquake magnitudes using waveform front-end and amplitude information. This regressor is insensitive to data normalization, enabling it to leverage waveform amplitude information during training. The model can predict local magnitudes with an average error close to zero and a standard deviation of approximately 0.2. Lomax [11] used a CNN model operating on 50 s of three-component 20 Hz broadband waveforms, incorporating the event distance, azimuth, depth, and amplitude. The final convolutional layer of this model has fewer nodes than the output classifications, effectively compressing and transmitting relevant input information, which helps reduce output noise. However, due to the use of long waveforms and limited data, real-time prediction is challenging, and the model is prone to overfitting during CNN training. Chen Wanghao et al. [12] combined a CNN with a Long Short-Term Memory network (LSTM) to frame earthquake magnitude prediction as a classification problem. By introducing new magnitude prediction information as supplementary content and leveraging the strengths of both convolutional and recurrent architectures, their CNN-LSTM model fully exploits the LSTM’s ability to learn and extract temporal features, thereby improving the magnitude prediction accuracy.
The attention mechanism selectively filters crucial information from large datasets, focusing computational resources on salient features while ignoring irrelevant data [13]. The Convolutional Block Attention Module (CBAM) [14,15] is a lightweight, general-purpose attention module that can be seamlessly integrated into any CNN architecture and trained end to end with the underlying network. To address issues such as missing training samples, overfitting in traditional neural networks, and limitations in feature extraction by single-model approaches, this study proposes a transfer learning framework that transfers knowledge from related domains to target classification tasks. Specifically, we adopt a residual shrinkage network transfer learning method, pretraining a ResNet18 model [16] on the ImageNet dataset and augmenting it with an attention mechanism and soft thresholding modules to enhance the capture of effective feature information. The extracted feature vectors are then fed into a classifier for training. By transforming the prediction problem into a classification task and using seismic waveforms as input, this approach achieves improved prediction accuracy.

2. Data Preprocessing

2.1. Sample Selection

This study draws on data from the Stanford Earthquake Dataset (STEAD), a globally recognized labeled seismic dataset. From this dataset, local seismic waveforms were selected, each containing 35 attributes (labels). These primarily include the receiver network code, receiver code, receiver type and location, source time function, epicenter location, magnitude type, and arrival times of P-waves and S-waves. Objective attributes such as the seismic station location were excluded, and an analysis of magnitude correlation was conducted. Through feature selection, the data dimensionality was reduced to save costs and enhance the performance of the classification model. In this experiment, the results of the Kendall rank correlation coefficient were used to determine the final experimental parameters as shown in Table 1, which consisted of four attributes: the magnitude, P-wave arrival time, S-wave arrival time, and end time. The absolute values of the correlation coefficients between the latter three attributes and magnitude were 0.67, 0.75, and 0.86, respectively.
Earthquakes with a magnitude of 3.0 or lower are generally categorized as microearthquakes or weak earthquakes. Given that the accuracy of magnitude classification decreases significantly when the magnitude difference exceeds one unit, this study focuses on predicting microearthquake magnitudes to better capture seismic events with subtle waveform differences. A total of 9000 microseismic signal waveforms with a magnitude of 3.0 or lower were selected. These samples were derived from associated waveforms in continuous time series archived by the Data Management Center of the Seismological Cooperative Research Society [16]. Each waveform starts 5 to 10 s before the P-wave arrival and ends at least 5 s after the S-wave arrival.
In the waveform images, the arrival times of P-waves and S-waves, as well as the end time of the waveform, were marked to facilitate subsequent magnitude classification based on the images. The ratio of the training set to the test set was 8:2.

2.2. Magnitude Label

Earthquake magnitude predictions generally allow for an error margin of ±0.3 [17]. Therefore, earthquake magnitude prediction can be treated as a classification task to categorize earthquakes by magnitude. In this study, magnitudes are classified at intervals of 0.1, resulting in 30 magnitude categories, with labeled data available for each category. This classification strategy was specifically designed to align with the granularity of the dataset: the original microseismic data are annotated with a precision of 0.1 magnitude, so the 0.1-interval classification directly corresponds to the annotation intervals. The specific classification is presented in Table 2.
In the actual prediction process, if the model identifies a sample as label 10, it corresponds to the seismic magnitude range (0.9, 1.0] as shown in Table 2. In this study, the final predicted value is specified as the median of this range, which is 0.95. The potential magnitude error is ±0.05, falling within the acceptable range.
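For illustration, the following is a minimal sketch of this label-to-magnitude mapping (the function name and code are ours; the bin layout follows Table 2):

```python
def label_to_magnitude(label: int) -> float:
    """Map a class label from Table 2 to the midpoint of its 0.1-wide magnitude bin.

    Label k corresponds to the interval ((k-1)/10, k/10], so the midpoint is
    (k - 0.5) / 10. For example, label 10 -> (0.9, 1.0] -> 0.95.
    """
    assert 1 <= label <= 30, "labels cover magnitudes 0.0-3.0 in 0.1 steps"
    return (label - 0.5) / 10.0

print(label_to_magnitude(10))  # 0.95, with a worst-case within-bin error of +/-0.05
```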

2.3. Data Preprocessing

To address overfitting in the training process, this study employs data augmentation techniques tailored to waveform line plots generated from microseismic time-series data. The specific procedures are as follows:
First, images like Figure 1 are subjected to random scaling and cropping. Each 224 × 224 waveform line plot—derived from time-series data where the x-axis represents a 0–2000 ms duration (sampling rate: 1000 Hz) and the y-axis amplitude is normalized to the range [−1, 1] via min–max scaling—is resized to 256 × 256, followed by cropping a 224 × 224 region from the center and converting it into a tensor. Scaling is restricted to ±10% of the original dimensions to preserve fine details such as the abrupt amplitude changes of P-waves and S-waves.
Next, waveform-specific image enhancements are applied, including horizontal flipping, which simulates symmetric variations in a time-series distribution while maintaining the proportional mapping of x-axis time intervals; rotation within a ±5° range, to avoid distorting the temporal sequence integrity of waveform phases; and Gaussian noise injection (with a signal-to-noise ratio ≥ 30 dB), to mimic residual high-frequency noise that remains uneliminated by the preprocessing soft thresholding algorithm.
Batch normalization is adopted to stabilize feature learning, with attention modules appended after the nonlinear activation of each convolutional layer. Composed of channel and spatial attention submodules, these modules recalibrate feature weights across channel and spatial dimensions—specifically emphasizing critical waveform segments (e.g., phase arrival times corresponding to x-axis time points) and discriminative amplitude features (within the y-axis range [−1, 1]) while suppressing irrelevant noise. During testing, images undergo identical preprocessing but no augmentation. Training with augmented data ensures the model’s robustness against rotational or flipping variations in unseen data, thereby effectively mitigating overfitting.
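The following is a minimal sketch of such an augmentation pipeline using torchvision; the concrete transform parameters (e.g., the noise standard deviation and the crop-area range used to approximate the ±10% scaling limit) are assumptions rather than the exact values used in the paper:

```python
import torch
from torchvision import transforms

class AddGaussianNoise:
    """Inject mild Gaussian noise (roughly SNR >= 30 dB for normalized images)."""
    def __init__(self, std: float = 0.03):
        self.std = std

    def __call__(self, x: torch.Tensor) -> torch.Tensor:
        return (x + torch.randn_like(x) * self.std).clamp(0.0, 1.0)

# Training-time pipeline: resize, mildly randomized crop, flip, small rotation, noise.
train_tf = transforms.Compose([
    transforms.Resize(256),
    transforms.RandomResizedCrop(224, scale=(0.9, 1.0)),  # crop covers 90-100% of the image area
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(degrees=5),                 # keeps the phase order intact
    transforms.ToTensor(),
    AddGaussianNoise(std=0.03),
])

# Test-time pipeline: identical deterministic preprocessing, no augmentation.
test_tf = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])
```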

3. Network Model Construction

3.1. Residual Network

Neural networks can continuously and automatically extract image features from local to global scales, enabling functions such as image recognition [18]. Deep learning-based networks can capture richer and more complex features and often adapt well to new tasks. However, deeper networks are more challenging to train due to the vanishing gradient problem. Therefore, this study employs the residual network (ResNet), which addresses this issue by introducing shortcut connections. This innovation allows the network to significantly increase its depth, thereby enhancing accuracy. A schematic of the residual network structure is shown in Figure 2. In this research, the ResNet18 architecture was utilized to extract feature vector sequences.
To prevent accuracy degradation in deeper layers, the input x is directly incorporated into the output via the residual connection H(x) = F(x) + x; when the residual F(x) = 0, H(x) reduces to the identity mapping. This architecture enables efficient feature propagation without increasing the number of network parameters or computational complexity. Leveraging this advantage, the ResNet18 model was selected for its simplicity and modularity, which align well with the characteristics of the seismic waveform dataset. Residual networks are widely used in deep learning and have demonstrated strong performance in image processing tasks.
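A minimal PyTorch sketch of the identity-shortcut residual block underlying this relation (stride-1, equal input/output channels; the projection shortcuts used when dimensions change are omitted):

```python
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    """ResNet18-style basic residual block: H(x) = F(x) + x."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = x                              # shortcut carries x unchanged
        out = self.relu(self.bn1(self.conv1(x)))  # F(x): two conv-BN stages
        out = self.bn2(self.conv2(out))
        return self.relu(out + residual)          # H(x) = F(x) + x
```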
The pretrained ResNet18 model, initialized with weights from ImageNet (a dataset containing approximately 1.2 million labeled images across 1000 categories), was fine-tuned for this study. In the baseline transfer learning configuration, the weights and biases of all layers except the final classification layer are frozen, and only the neurons in the last layer are re-initialized and trained on the microseismic dataset to adapt to the new classification task; the alternative fine-tuning strategies compared in Section 3.4 relax this restriction. This approach allows the model to retain general visual features while quickly adapting to domain-specific patterns.

3.2. Attention Mechanism

In neural networks, attention mechanisms enhance critical components of input data while suppressing irrelevant details, enabling focus on subtle yet important features.
The CBAM, a hybrid attention mechanism integrating both channel and spatial attention, has its structure illustrated in Figure 3. The channel attention module first processes input feature maps, followed by the spatial attention module, thereby achieving seamless fusion of the two mechanisms.
For the channel attention module, input features are subjected to parallel global maximum pooling and global average pooling to compress the spatial dimensions. The resulting outputs are processed through a shared multi-layer perceptron (MLP)—with the number of neurons matching the number of input channels—to generate channel-wise attention weights via sigmoid activation. These weights refine features by emphasizing critical amplitude-related channels. The calculation of channel attention is as follows:
$M_C(F) = \sigma\big(\mathrm{MLP}(F^{C}_{\mathrm{Avg}}) + \mathrm{MLP}(F^{C}_{\mathrm{Max}})\big)$
For the spatial attention module, channel-wise global average pooling and maximum pooling generate two 1 × H × W feature maps, which are concatenated and processed via 1 × 1 convolution and sigmoid activation to produce spatial attention weights. These weights highlight key temporal segments (e.g., P-wave/S-wave arrival times) within the waveform. The two-dimensional feature maps generated via channel-wise pooling are as follows:
$F^{S}_{\mathrm{Avg}} \in \mathbb{R}^{1 \times H \times W}, \quad F^{S}_{\mathrm{Max}} \in \mathbb{R}^{1 \times H \times W}$
The formula for calculating spatial attention is
$M_S(F) = \sigma\big(f^{1 \times 1}\big([F^{S}_{\mathrm{Avg}};\, F^{S}_{\mathrm{Max}}]\big)\big)$
By sequentially applying channel and spatial attention, the CBAM enables the network to precisely capture task-relevant features in microseismic waveforms. Unlike conventional mechanisms (e.g., SENet) that focus solely on channels, the CBAM retains both inter-channel interactions and spatial dependencies, addressing limitations in waveform feature extraction by optimizing parameters from multi-dimensional perspectives.
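A minimal PyTorch sketch of a CBAM consistent with the two formulas above (the reduction ratio of 16 is an assumption; the 1 × 1 spatial convolution follows the text, whereas the original CBAM paper uses a 7 × 7 kernel):

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Channel attention followed by spatial attention, as in the formulas above."""
    def __init__(self, channels: int, reduction: int = 16, spatial_kernel: int = 1):
        super().__init__()
        # Shared MLP (implemented with 1x1 convolutions) for the channel attention branch.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1, bias=False),
        )
        # Convolution over the two channel-pooled maps for the spatial attention branch.
        self.spatial = nn.Conv2d(2, 1, kernel_size=spatial_kernel,
                                 padding=spatial_kernel // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # M_C(F) = sigma(MLP(F_avg^C) + MLP(F_max^C)); reweight channels.
        avg = self.mlp(x.mean(dim=(2, 3), keepdim=True))
        mx = self.mlp(x.amax(dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)
        # M_S(F) = sigma(f([F_avg^S; F_max^S])); reweight spatial positions.
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(pooled))
```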

3.3. Residual Shrinkage Network Construction

Soft thresholding is a fundamental operation in many signal denoising methods. It sets features with absolute values below a predefined threshold to zero while shrinking larger values toward zero. The threshold, a hyperparameter that must be carefully tuned, directly influences the denoising efficacy. The input–output relationship of soft thresholding is defined as
$z = \begin{cases} x - \tau, & x > \tau \\ 0, & -\tau \le x \le \tau \\ x + \tau, & x < -\tau \end{cases}$
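A minimal sketch of this operation with a fixed scalar threshold (in residual shrinkage networks the threshold is typically learned per channel rather than hand-tuned):

```python
import torch

def soft_threshold(x: torch.Tensor, tau: float) -> torch.Tensor:
    """Soft thresholding: zero out values with |x| <= tau and shrink the rest by tau."""
    return torch.sign(x) * torch.clamp(torch.abs(x) - tau, min=0.0)
```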

3.4. Microseismic Magnitude Classification Model

Convolutional neural networks (CNNs) learn hierarchical image features through successive convolution and pooling layers. However, increasing the network depth often leads to the vanishing gradient problem and performance degradation, which slows down convergence and reduces the classification accuracy. To address this issue, this study adopts a residual neural network (ResNet) architecture, which introduces shortcut connections between convolutional layers to form residual blocks. These connections enable the network to learn identity mappings, thereby preventing the loss function from increasing even as the depth grows.
Transfer learning, particularly through pretraining and fine-tuning, has proven highly effective in image classification tasks. By leveraging pretrained weights from a large source domain (e.g., ImageNet), the model shown in Figure 4 can rapidly adapt to the target domain with a limited amount of labeled data. Specifically, we pretrain the ResNet on a source dataset, transfer the learned feature extractors, and retrain only the final fully connected layers on the target microseismic dataset. This approach exploits the similarities in the feature space between domains, thus enabling efficient knowledge transfer.
To further enhance performance, an attention mechanism is integrated to dynamically weight feature channels, which helps focus on discriminative information and mitigate overfitting. The resulting transfer learning framework effectively leverages ResNet’s capability for hierarchical feature extraction and attention-guided channel selection to improve the accuracy of microseismic magnitude classification.
Transfer Learning Strategy Selection: Three common transfer learning approaches were considered. (1) Last-layer fine-tuning (TL-M1): only the parameters of the final fully connected layer (classification head) are updated while all other layers remain frozen. (2) Scratch training (TL-M2): all weights are randomly initialized and the entire network is trained from scratch. (3) Full-model fine-tuning (TL-M3): the pretrained ResNet18 weights are used as initialization, the classification layer is modified, and all layers are fine-tuned on the target dataset [19].
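A minimal sketch contrasting the three strategies in PyTorch/torchvision (the strategy names and the helper function are ours; the weights API assumes a recent torchvision release):

```python
import torch.nn as nn
from torchvision import models

def build_resnet18(strategy: str, num_classes: int = 30) -> nn.Module:
    """Configure ResNet18 for one of the three transfer learning strategies."""
    pretrained = strategy in ("last_layer", "full_finetune")
    weights = models.ResNet18_Weights.IMAGENET1K_V1 if pretrained else None
    model = models.resnet18(weights=weights)

    if strategy == "last_layer":          # TL-M1: freeze the pretrained backbone
        for p in model.parameters():
            p.requires_grad = False
    # "scratch" (TL-M2): random initialization, all layers trainable (nothing to freeze).
    # "full_finetune" (TL-M3): pretrained initialization, all layers remain trainable.

    # In every strategy the head is replaced by a fresh 30-way classification layer.
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model
```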
Given the substantial domain shift between ImageNet (natural images) and microseismic waveform images, we adopted ResNet18 pretrained on ImageNet with full fine-tuning as the base model for this study. This approach was chosen for the following reasons:
Rationale for Pretrained Model Selection: Despite the inherent differences between natural images and waveform line plots, the low-level convolutional kernels of the pretrained ResNet18 (e.g., edge and texture detectors) can function as effective initial extractors for local waveform features. For instance, these kernels are capable of capturing critical amplitude jumps in waveforms, which are indicative of P-wave and S-wave arrivals.
Fine-Tuning Strategy: By updating the weights of all layers during training, the higher layers of the network can adapt to waveform-specific patterns, such as temporal correlations between different seismic phases. This strategy effectively alleviates the domain mismatch between natural images and seismic data.
Advantages Over Training from Scratch: Preliminary tests indicated that utilizing pretrained weights yields significantly better performance compared to training from scratch. The pretrained initialization provides a more robust starting point, enabling the model to converge more rapidly and achieve higher accuracy. This approach leverages the strengths of transfer learning while allowing the model to fully adapt to the unique characteristics of microseismic waveform data.
Training with Cross-Entropy Loss: The model was trained using the cross-entropy loss function to minimize the divergence between the predicted and ground-truth probability distributions. This loss function is defined as
$\text{Cross-entropy loss} = -\frac{1}{N} \sum_{i=1}^{N} p_i \log q_i$
where $p_i$ and $q_i$ denote the ground-truth and predicted probabilities, respectively.
This choice facilitates efficient training and mitigates issues related to neural network saturation, ensuring stable convergence even when adapting to a novel domain.
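A minimal sketch of one training step with this loss (PyTorch's nn.CrossEntropyLoss consumes raw logits and integer labels; the helper name is ours):

```python
import torch.nn as nn

criterion = nn.CrossEntropyLoss()    # expects raw logits and integer class labels (0-29)

def train_step(model, images, labels, optimizer):
    """One optimization step minimizing the cross-entropy between predictions and labels."""
    model.train()
    optimizer.zero_grad()
    logits = model(images)            # shape: (batch_size, 30)
    loss = criterion(logits, labels)  # equivalent to -(1/N) * sum_i log q_i(y_i)
    loss.backward()
    optimizer.step()
    return loss.item()
```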

3.5. Improvement Based on the Attention Mechanism Model

Incorporating residual modules into the network enhances training efficiency, as the skip connections between low-level and high-level layers within each residual module facilitate backpropagation without compromising performance. Furthermore, by integrating structures such as the CBAM (Convolutional Block Attention Module) with residual modules (as illustrated in Figure 5), the model can be designed with fewer parameters while maintaining comparable task-specific performance.
The improved residual model addresses the issues of vanishing gradients and performance degradation—problems in which repeated multiplication during backpropagation can lead to extremely small gradients. Following each residual block, an average pooling operation is applied, and the resulting feature embeddings are fed into a classifier independently, enabling multi-task training.
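A minimal sketch of one possible integration, appending the CBAM to the residual branch of a basic block before the shortcut addition (this reuses the CBAM sketch from Section 3.2; the exact insertion point shown in Figure 5 may differ):

```python
import torch.nn as nn
# Assumes the CBAM class from the Section 3.2 sketch is available in scope.

class CBAMBasicBlock(nn.Module):
    """Basic residual block with a CBAM applied to F(x) before the shortcut addition."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)
        self.cbam = CBAM(channels)            # attention recalibrates F(x)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out = self.cbam(out)                  # channel + spatial reweighting
        return self.relu(out + x)             # shortcut keeps gradients flowing
```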

4. Classification of Experimental Results and Analysis

4.1. An Analysis of the Comparative Experimental Results of the Transfer Learning Model

First, experiments were conducted using different neural network models without transfer learning. Subsequently, to evaluate the training performance of the ResNet18 transfer learning model, the test set accuracy, training accuracy, and F1-score were adopted as evaluation metrics. Their respective definitions are as follows:
$\mathrm{Accuracy} = \frac{TP + TN}{P + N}$
$\mathrm{Precision} = \frac{TP}{TP + FP}$
$\mathrm{Recall} = \frac{TP}{TP + FN} = \frac{TP}{P}$
$F1 = \frac{2 \times \mathrm{Recall} \times \mathrm{Precision}}{\mathrm{Recall} + \mathrm{Precision}}$
Here, TP (True Positives) refers to the number of correctly predicted positive cases, FP (False Positives) denotes the number of incorrectly predicted positive cases, and FN (False Negatives) represents the number of incorrectly predicted negative cases. Additionally, P stands for the total number of actual positive cases, and N for the total number of actual negative cases. The F1-score value range is from 0 to 1. Unlike accuracy, it comprehensively incorporates the results of both precision and recall in its calculation. A high F1-score can only be achieved when both precision and recall are high, making it a more balanced metric for evaluating classification performance.
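A minimal sketch of computing these metrics with scikit-learn (macro averaging over the 30 classes is our assumption; the paper does not state the averaging mode):

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

def evaluate(y_true, y_pred):
    """Aggregate classification metrics over the 30 magnitude classes."""
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred, average="macro", zero_division=0),
        "recall": recall_score(y_true, y_pred, average="macro", zero_division=0),
        "f1": f1_score(y_true, y_pred, average="macro", zero_division=0),
    }
```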
Figure 6 presents the evaluation metrics for model training using three transfer learning strategies. This experiment focused on residual networks, comparing the training process curves of different classification models to verify the effectiveness of the employed transfer learning strategies. In the figure, the training curves show dotted lines representing the test set accuracy, while solid lines correspond to F1-scores. As the number of training epochs gradually increases, these metrics stabilize, indicating that the models possess good generalization ability.
Specific values are provided in Table 3. Among the strategies evaluated, the model using the first strategy (TL-M1, fine-tuning only the final classification layer) showed the lowest performance, with a precision of 84.5%, accuracy of 82.9%, and recall of 81.1%.
The model trained with random initialization (second strategy, TL-M2) yielded a precision of 90.1%, accuracy of 90.0%, and recall of 90.8%.
The model utilizing full-layer weight fine-tuning (third strategy, TL-M3) achieved the highest performance: 92.7% precision, 91.2% accuracy, and 90.8% recall.
Compared to the poorest-performing model, the third strategy demonstrated improvements of 8.2 percentage points in precision, 8.3 in accuracy, and 9.7 in recall. Consequently, the full-layer weight fine-tuning strategy was selected for subsequent studies.

4.2. The Magnitude Classification Model of Transfer Learning

In this study, transfer learning was applied to and compared across different neural network models. This experiment transformed the magnitude prediction task into a classification problem. Figure 7 presents the evaluation metrics of the different network classification models. This study compared the training process curves of various models to verify the effectiveness of the proposed method.
First, various neural network models were trained, and their classification performance was compared using evaluation metrics specific to classification tasks. In this study, comparisons were conducted with traditional magnitude classification models, CNN models, and the ResNet18 model proposed herein, with additional comparative training results from the AlexNet and VGG16 models included. The respective classification evaluation metrics on the test set for these five models are presented in Figure 7.
As indicated in the figure, the magnitude prediction network utilizing residual networks for transfer learning reached a stable state more rapidly and achieved the highest accuracy. This demonstrates that under the configured settings, training with residual networks yields superior performance. Both the training set accuracy and test set accuracy improve as the number of training iterations increases, while the time required to converge to a high level is shorter. This addresses issues such as vanishing gradients and exploding gradients caused by network deepening. The residual mapping ensures that network performance does not degrade, enabling the model to retain shallow features while learning deep features.
Notably, the mapping mechanism of ResNet18 does not introduce additional parameters or computational overhead but significantly enhances the effectiveness of network training. Consequently, ResNet18 was selected as the basic framework for network design in transfer learning, with the third fine-tuning strategy adopted. The detailed model results are presented in Table 4.

4.3. Introducing the Training Results of the Attention Mechanism Model

In magnitude classification, an attention mechanism was introduced for model optimization. Comparative experiments were conducted using the AlexNet, VGGNet, and ResNet networks with different attention mechanisms integrated. The classification evaluation metrics of the experimental results are presented in Figure 8, which shows the respective classification performance of residual networks incorporating the SE attention mechanism and ECA attention mechanism.
As observed from the figure, the model’s accuracy, recall, and F1-score all improved after the introduction of the attention mechanism.
Figure 9 illustrates the performance of four models trained with the CBAM under the following training configuration: a learning rate initialized at 0.001 with step-wise decay (reduced by a factor of 0.1 every 10 epochs), a batch size of 32 (optimized for NVIDIA RTX 3090 memory constraints), 50 total epochs with early stopping triggered after 5 consecutive epochs without a validation accuracy improvement (actual convergence occurred at ~30 epochs), and the Ranger optimizer with default parameters (β1 = 0.9, β2 = 0.999, weight decay = 1 × 10⁻⁵). As shown, the CBAM-enhanced models demonstrate improved performance by integrating both spatial and channel attention modules. The training curves exhibit a similar shape to those of the transfer-learned ResNet18, but the CBAM approach achieves superior generalization and higher test set accuracy. Specifically, the training accuracy stabilizes around 30 epochs, aligning with the early stopping criterion and indicating efficient convergence under the specified hyperparameters.
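A minimal sketch of this training configuration (the Ranger optimizer comes from third-party packages, so AdamW is shown here as a stand-in with the stated hyperparameters; train_one_epoch and validate are assumed helpers):

```python
import torch
from torch.optim.lr_scheduler import StepLR

# Ranger (RAdam + Lookahead) is provided by third-party packages; AdamW is a stand-in here.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3,
                              betas=(0.9, 0.999), weight_decay=1e-5)
scheduler = StepLR(optimizer, step_size=10, gamma=0.1)    # decay lr by 0.1 every 10 epochs

best_acc, patience, stale = 0.0, 5, 0
for epoch in range(50):                                   # at most 50 epochs
    train_one_epoch(model, train_loader, optimizer)       # assumed helper
    val_acc = validate(model, val_loader)                 # assumed helper
    scheduler.step()
    if val_acc > best_acc:
        best_acc, stale = val_acc, 0                      # improvement: reset the counter
    else:
        stale += 1
        if stale >= patience:                             # early stopping after 5 stale epochs
            break
```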

5. Experimental Results and Analysis of the Prediction Model

5.1. Prediction Model Results

In this experiment, seismic magnitudes were categorized into 30 classes within the range of 0–3.0, in intervals of 0.1. A total of 300 waveform images were selected for prediction, with 10 waveform samples randomly selected from each class. All prediction images underwent the same preprocessing procedures as the training and test sets. Each waveform image was annotated with the arrival times of P-waves and S-waves, as well as the end time of the waveform, to support subsequent magnitude classification.
Accuracy is defined as the proportion of correctly predicted labels in the entire test dataset. The ResNet model achieved high accuracy on the seismic magnitude waveform test dataset, indicating its strong performance in magnitude classification. This confirms that the shortcut connection mechanism in ResNet effectively mitigates performance saturation in deep networks.
The bar chart as shown in Figure 10 provides a clear visual comparison of accuracy across different magnitude classes. Notably, the classification performance for magnitudes greater than 2.0 was excellent, with accuracy exceeding 93% for all such classes. The overall classification accuracy reached 96.3%, outperforming other models in both accuracy and effectiveness.

5.2. Magnitude Prediction Results and Analysis

In this study, to more intuitively illustrate the error distribution of magnitude estimates across different models, we generated magnitude prediction result plots, with the result deviations recorded as shown in Figure 11. The red dots in the figure represent the predicted magnitudes, based on a total of 300 selected data points for prediction. For clear comparison, the auxiliary lines y = x + 0.3 and y = x − 0.3 were added to the figure.
To further validate the efficacy of the proposed model, we compared its prediction results with those of traditional microseismic magnitude prediction models, as well as CNN, AlexNet, VGGNet, and ResNet18 architectures. Traditional CNN models are commonly employed for earthquake classification with magnitude intervals of 0.5 or 1. However, our findings indicate that in more granular magnitude classification tasks (with intervals of 0.1), the traditional CNN model exhibited the lowest accuracy, with the majority of predictions deviating beyond ±0.3 and displaying the highest degree of dispersion.
While transfer learning implementations using AlexNet and VGGNet yielded marginally better results than the CNN approach for refined earthquake magnitude prediction, their predictions still contained substantial errors, characterized by consistent bias. In contrast, predictions from the improved residual network demonstrated the smallest deviations (within ±0.3), minimal dispersion, and the highest concentration of data points clustered around the diagonal line, indicating superior accuracy.
A comparative experiment with ResNet50 revealed that while its accuracy, prediction performance, and classification evaluation metrics were comparable to those of ResNet18, it incurred longer training times. Consequently, ResNet18 was selected as the optimal model for this study.

5.3. Comparison of ResNets with Different Numbers of Layers

In this study, preference was given to networks with lower complexity that maintained satisfactory performance, as opposed to more complex ones. To identify the optimal network architecture for the classification task, we compared results across different ResNet variants. Table 5 presents the classification performance of ResNet18, ResNet34, and ResNet50.
As shown in the table, all three residual networks achieved high accuracy (around 90%). Specifically, ResNet18 yielded the highest accuracy at 93.72%, followed by ResNet34 at 91.76%, with ResNet50 achieving the lowest at 89.67%. This indicates that increasing the number of network layers does not necessarily improve model performance.
Figure 12 presents the classification and prediction results of transfer learning using ResNet34 and ResNet50. As shown, both residual networks exhibited small prediction errors and low data dispersion. However, individual predictions showed significant deviations—for instance, ResNet50 had a case where the predicted value differed from the actual value by two magnitude units. This suggests that while ResNet34 and ResNet50 (with 34 and 50 layers, respectively) can learn more complex features, their larger parameter counts make them prone to overfitting. Additionally, deeper networks have higher computational costs and require more device memory.
Thus, the selection of the network depth should be based on the actual microseismic magnitude detection equipment and error tolerance requirements.

5.4. An Analysis of the Experimental Results

In the final experiment, the third transfer learning strategy, in which all layers of the ImageNet-pretrained ResNet18 are fine-tuned, was adopted, together with the CBAM and the Ranger optimizer, to perform the final microseismic magnitude classification and prediction. In this study, the allowable error thresholds were set at ±0.2 and ±0.3, and a total of 300 microseismic waveforms were used for prediction in the experiment. Detailed data from the six experiments are presented in Table 6.
Among the schemes, the one designed in this study yielded 284 samples with errors within ±0.3, corresponding to an accuracy rate of 94.7%; 261 samples with errors within ±0.2, with an accuracy rate of 87%; and an experimental variance of 0.1362. Compared with the traditional magnitude prediction method using the CNN model, this scheme showed 70 more samples with errors within ±0.3, 95 more samples with errors within ±0.2, and a variance reduction of 0.8191, thus emerging as the top-performing approach among the six schemes.

6. Conclusions and Outlook

In this study, ResNet18 was employed for spatial feature extraction, with transfer learning accelerating convergence and enhancing model performance. The integration of the CBAM further enabled the model to focus on critical spatial and channel features, improving the identification accuracy by capturing spatial-channel interdependencies and thereby effectively addressing challenges in waveform feature extraction. Testing on 300 randomly selected waveforms yielded an average accuracy of 97.6% for the proposed model, providing valuable insights for microseismic magnitude prediction and contributing to seismological research. Notably, the model achieved a favorable balance between performance and complexity: compared to ResNet50, its parameter size was reduced by 60%, and the training time per epoch was shortened by 40% (12 min vs. 20 min), making it more adaptable to microseismic monitoring devices with limited resources. It should be acknowledged that this study focused on verifying core performance; while the model selection logic already reflects efficiency considerations, metrics such as FLOPs and inference latency have not been quantified. These will be supplemented in future work to provide a more comprehensive evaluation of the model’s practical applicability.
This study primarily focused on waveform feature extraction. Future work will proceed in two directions: first, leveraging the model’s speed and accuracy to enable applications in real-time prediction scenarios; second, enhancing model efficiency by reducing model parameters and computational overhead through adaptive weight adjustments of influencing factors, while simultaneously enhancing generalization ability.
Practical prediction is influenced by multiple factors, including the source depth, regional attenuation, and station location. Additionally, magnitude measurement methods vary with the source depth. Given that the dataset in this study consisted of waveform images, future research should also consider the impact of abnormal waveforms and noise. Potential improvements include incorporating diverse waveform patterns, adding waveform preprocessing modules for optimization, using modified residual networks with transfer learning for magnitude classification, and ultimately developing new technical solutions for magnitude prediction.

Author Contributions

Methodology, H.W. (Huaixiu Wang); Formal analysis, H.W. (Haomiao Wang); Investigation, H.W. (Haomiao Wang); Writing—original draft, H.W. (Haomiao Wang); Writing—review & editing, H.W. (Huaixiu Wang) and H.W. (Haomiao Wang). All authors have read and agreed to the published version of the manuscript.

Funding

National Key Research and Development Program of China 2023YFC3008904-02.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Zhu, J.; Song, J.; Li, S. Scale estimation of 21–22 May 2021 based on deep convolutional neural network. J. Geophys. 2022, 65, 10.
2. Ma, M.; Wang, Y.; Wang, Z.; Zhao, Q.; Pan, D. A magnitude estimation method based on the cumulative absolute displacement value. J. Nat. Disasters 2022, 31, 93–101.
3. Kuang, W.; Yuan, C.; Zhang, J. Real-time determination of earthquake focal mechanism via deep learning. Nat. Commun. 2020, 15, 121–130.
4. Devries, P.M.R.; Viégas, F.; Wattenberg, M.; Meade, B.J. Deep learning of aftershock patterns following large earthquakes. Nature 2018, 8, 71–80.
5. Apriani, M.; Wijaya, S.K.; Daryono. Earthquake Magnitude Estimation Based on Machine Learning: Application to Earthquake Early Warning System. J. Phys. Conf. Ser. 2021, 1951, 012057.
6. Liu, T.; Dai, Z.; Chen, S.; Fu, L. Deep learning-based seismic magnitude classification. J. Seismol. 2022, 44, 656–664.
7. Lin, B.; Jin, X.; Kang, L.C.; Wei, Y.; Li, J.; Zhang, Y.; Chen, H.; Zhou, S. Study on seismic magnitude determination based on convolutional neural network. J. Geophys. 2021, 64, 3600–3611.
8. Zhu, J.; Song, J.; Li, S. Study on rapid estimation of earthquake warning magnitude based on support vector machine. Vib. Shock. 2021, 40, 126–134.
9. Wang, C.; Yuan, Y.; Liu, L.; Chen, K.; Wu, H. Optimizing seismic magnitude prediction of generalized regression neural network based on principal component analysis. Sci. Technol. Eng. 2022, 22, 12733–12738.
10. Mousavi, S.M.; Beroza, G.C. A Machine Learning Approach for Earthquake Magnitude Estimation. Geophys. Res. Lett. 2019, 47, e2019GL085976.
11. Lomax, A.; Michelini, A.; Jozinovi, D. An Investigation of Rapid Earthquake Characterization Using Single-Station Waveforms and a Convolutional Neural Network. Seismol. Res. Lett. 2019, 90, 517–529.
12. Chen, W.; Yin, L.; Li, F.; Yang, Z.; Ma, B. Study on seismic magnitude prediction based on CNN-LSTM. Sci. Technol. Innov. 2020, 11, 53–54.
13. Feng, X.; Gao, X.W.; Luo, L. A ResNet 50-Based Method for Classifying Surface Defects in Hot-Rolled Strip Steel. Mathematics 2021, 9, 2359.
14. Jiang, M.; Song, L.; Wang, Y.; Li, Z.; Song, H. Fusion of the YOLOv4 network model and visual attention mechanism to detect low-quality young apples in a complex environment. Precis. Agric. 2021, 23, 559–577.
15. Hashash, E.F.E.; Shiekh, A.H.R. A Comparison of the Pearson, Spearman Rank and Kendall Tau Correlation Coefficients Using Quantitative Variables. Asian J. Probab. Stat. 2022, 20, 36–48.
16. Nan, F.; Zhang, F.; Gong, G.; Lai, A.; Huang, S.; Fen, Y. Preliminary analysis of magnitude degree in automatic rapid report of earthquakes in Xinjiang region. Seismol. Geomagn. Obs. Res. 2019, 40, 45–52.
17. Wu, D.; Wang, Y.; Han, M.; Song, L.; Shang, Y.; Zhang, X.; Song, H. Using a CNN-LSTM for basic behaviors detection of a single dairy cow in a complex environment. Comput. Electron. Agric. 2024, 18, 215.
18. Yu, Q.; Zhang, J.; Wei, X.; Zhang, Q. Liver tumor segmentation based on cascade separable void residue U-Net. J. Appl. Sci. 2021, 39, 378–386.
19. Han, K.; Xiao, A.; Wu, E.; Guo, J.; Xu, C.; Wang, Y. Transformer in transformer. Int. J. Appl. Earth Obs. Geoinf. 2021, 34, 15908–15919.
Figure 1. Example of microseismic waveform sample.
Figure 2. Residual network structure diagram.
Figure 3. Structural diagram of Convolutional Block Attention Module.
Figure 4. Transfer learning structure diagram.
Figure 5. A structural diagram of the model after the introduction of the CBAM.
Figure 6. Evaluation metrics of network model classification models for different transfer learning strategies. (A) Evaluation indices of TL-M1 network classification model. (B) TL-M2 network classification model evaluation indicators. (C) Evaluation indices of TL-M3 network classification model.
Figure 7. Evaluation index plots of different network classification models.
Figure 8. Evaluation index plot of SE attention mechanism and ECA attention mechanism in ResNet model classification.
Figure 9. An index map for the classification and evaluation of the CBAM-ResNet model.
Figure 10. Bar graph of training accuracy evaluation indicators.
Figure 11. Schematic diagram of magnitude prediction results of different network models.
Figure 12. The magnitude prediction results for residual networks with different numbers of layers.
Table 1. Results table of seismic characteristic correlation coefficients (Kendall rank correlation with magnitude).

Attribute | Kendall rank correlation coefficient
Receiver latitude | 0.11
Receiver longitude | 0.04
Wave uncertainty weights | 0.25
P-wave propagation time | 0.28
P-wave arrival time | −0.67
S-wave uncertainty weights | −0.45
S-wave arrival time | 0.75
Source level uncertainty time | 0.15
Focal latitude | 0.36
Source longitude | −0.14
Time residual | 0.31
Maximum azimuth clearance | 0.25
Seismic level uncertainty kilometers | 0.46
Focal depth | −0.36
Source vertical uncertainty kilometers | −0.25
Source stop time | −0.86
Focal angle | −0.33
Hypocentral distance | 0.22
Back bearing | 0.14
Noise–signal ratio | 0.27
Table 2. Magnitude classification and labels.

Label | Magnitude | Label | Magnitude | Label | Magnitude
1 | 0.0~0.1 | 11 | 1.0~1.1 | 21 | 2.0~2.1
2 | 0.1~0.2 | 12 | 1.1~1.2 | 22 | 2.1~2.2
3 | 0.2~0.3 | 13 | 1.2~1.3 | 23 | 2.2~2.3
4 | 0.3~0.4 | 14 | 1.3~1.4 | 24 | 2.3~2.4
5 | 0.4~0.5 | 15 | 1.4~1.5 | 25 | 2.4~2.5
6 | 0.5~0.6 | 16 | 1.5~1.6 | 26 | 2.5~2.6
7 | 0.6~0.7 | 17 | 1.6~1.7 | 27 | 2.6~2.7
8 | 0.7~0.8 | 18 | 1.7~1.8 | 28 | 2.7~2.8
9 | 0.8~0.9 | 19 | 1.8~1.9 | 29 | 2.8~2.9
10 | 0.9~1.0 | 20 | 1.9~2.0 | 30 | 2.9~3.0
Table 3. Evaluation indicators of different strategy models for transfer learning.

Method | Precision (%) | Accuracy (%) | Recall (%)
TL-M1 | 84.5 | 82.9 | 81.1
TL-M2 | 90.1 | 90.0 | 90.8
TL-M3 | 92.7 | 91.2 | 90.8
Table 4. Results table of magnitude classification indicators.

Method | Precision (%) | Accuracy (%) | Recall (%)
CNN | 67.8 | 63.5 | 62.1
AlexNet + TL-M3 | 87.5 | 86.6 | 83.9
VGGNet + TL-M3 | 88.3 | 87.3 | 87.1
ResNet18 + TL-M1 | 84.5 | 82.9 | 81.1
ResNet18 + TL-M2 | 90.1 | 90.0 | 90.8
ResNet18 + TL-M3 | 92.7 | 91.2 | 90.8
Table 5. Comparison of classification results for ResNet18, ResNet34, and ResNet50.

Model | Accuracy
ResNet18 | 0.9372
ResNet34 | 0.9176
ResNet50 | 0.8967
Table 6. Table of magnitude prediction results.

Model | Calculated sample number | Samples with error < 0.3 | Samples with error < 0.2 | Variance
CNN | 300 | 214 | 166 | 0.9553
AlexNet | 300 | 269 | 210 | 0.5895
VGGNet | 300 | 273 | 241 | 0.271
ResNet18 + CBAM | 300 | 284 | 261 | 0.1362
ResNet34 + CBAM | 300 | 279 | 249 | 0.2219
ResNet50 + CBAM | 300 | 285 | 255 | 0.1893
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
