Article

Hybrid Machine Learning Model for Blast-Induced Peak Particle Velocity Estimation in Surface Mining: Application of Sparrow Search Algorithm in ANN Optimization

1 Department of Earth Resources Engineering, Kyushu University, Fukuoka 819-0395, Japan
2 Department of Mining Engineering, Botswana International University of Science and Technology, Private Bag 16, Palapye 10071, Botswana
* Author to whom correspondence should be addressed.
Algorithms 2025, 18(9), 543; https://doi.org/10.3390/a18090543
Submission received: 31 July 2025 / Revised: 20 August 2025 / Accepted: 20 August 2025 / Published: 27 August 2025

Abstract

Blast-induced ground vibrations present substantial safety and environmental hazards in surface mining operations. This study proposes and evaluates the Sparrow Search Algorithm-optimized ANN (SSA-ANN) against artificial neural network (ANN), Genetic Algorithm-optimized ANN (GA-ANN), and empirical formula (USBM) to estimate peak particle velocity (PPV). In addition, the input parameters include key blasting design parameters and rock mass features (GSI and UCS). The SSA-ANN demonstrated superior prediction accuracy, attaining an average R2 of 0.51 using bootstrap validation, surpassing GA-ANN (0.41) and standard ANN (0.26). Furthermore, the incorporation of GSI enhanced the model’s geotechnical sensitivity. These results illustrate that the application of SSA-ANN alongside comprehensive rock mass characteristics can substantially decrease uncertainty in PPV prediction, therefore enhancing safety within the blast area and improving vibration control methods in blasting operations.

1. Introduction

Rock fragmentation with explosives is a widely used practice in mining and civil engineering. Blasting, the process of breaking the rock mass with explosives, is the cheapest, most readily available, and fastest method of fragmenting rock and advancing operations in mining and civil works. Nonetheless, blasting produces adverse environmental effects such as airblast, ground vibration, backbreak, overbreak, and flyrock [1,2,3,4]. Only about 20–30% of the energy released during blasting actually fragments the rock mass, while the remainder propagates through the surrounding rock mass and affects nearby infrastructure. The most concerning adverse effect of blasting is ground vibration, which can damage structures in nearby communities, such as bridges, roads, tunnels, and dams, and may disrupt the blasting operations themselves if not controlled. Because ground vibrations are an unavoidable part of blasting, it is important to predict them accurately before blasting activities are carried out. According to the literature, peak particle velocity (PPV) is the parameter most commonly used to assess the damage potential of ground vibrations to structures and is typically measured in millimeters per second (mm/s) [5,6].
In the past, many researchers adopted empirical formulas to model and predict PPV. The commonly used parameters were the distance between the blasting face and the monitoring station (D), the maximum charge per delay (W), and site-specific constants (k and b) that describe the characteristics of the propagating media, geology, and blast design. Even though empirical formulas are convenient and simple to use, they have many shortcomings, as they tend to underestimate or overestimate PPV. Moreover, they are only applicable to specific site conditions; when applied to locations with different rock types, blasting techniques, and so on, their prediction accuracy diminishes and becomes less reliable. Furthermore, they do not incorporate many of the parameters that affect the blasting process. Accurate PPV prediction is therefore essential for safe blasting operations in mining and civil works. Zhang et al. [7] predicted PPV using a novel integration approach based on the extreme learning machine (ELM) and multi-verse optimization (MVO). A total of 137 blasting datasets obtained from a copper mine in China were used to test the proposed model. The traditional ELM, the multilayer perceptron (MLP) neural network, and the USBM (United States Bureau of Mines) model were used as benchmarks for the optimized MVO-ELM model, which achieved better prediction accuracy than the MLP and USBM models.
Furthermore, the results indicated that D, W, and the rock mass integrity coefficient have a great impact on predicting PPV. Saadat et al. [8] illustrated that empirical models, as well as multiple linear regression (MLR) analysis, predict PPV inaccurately. They used 69 blasting datasets from the Gol-E-Gohar (GEG) iron mine in Iran, and four blasting parameters were used to develop all the predictive models, including the artificial neural network (ANN) model. The ANN model was predictively superior to the empirical models and MLR. Monjezi et al. [9] also demonstrated that multivariate regression analysis and empirical models perform poorly in predicting PPV. An ANN model was developed, with MLR and empirical models used as benchmarks to validate it; W, D, hole depth, and stemming were the input parameters. The proposed ANN model performed better in predicting PPV than MLR and the empirical models. The sensitivity results also indicated that D is the most impactful parameter on PPV, while the stemming length is the least impactful.
Moreover, researchers who attempted to predict PPV using different influencing parameters grouped them into two categories: controllable and non-controllable factors. Controllable factors include blast design parameters (hole depth, hole diameter, spacing, burden, number of holes, stemming, and subdrilling) and explosive parameters (total charge, maximum charge per delay, delay time, and explosive type). Non-controllable factors are those that cannot be changed, and these include geological (rock type, joint orientation, degree of weathering, spacing, bedding planes, faults, and mineral composition) and geographic conditions (topography and surrounding landscape). Studies have recently considered using geological conditions along with blast design parameters to predict PPV [1].
Advancements in computer technology have led to the growing application of artificial intelligence models in engineering, as they have a robust capability to handle the nonlinear relationships that traditional empirical models struggle with, and they have demonstrated encouraging results. As awareness of the constraints of empirical models has grown, an increasing number of researchers have adopted artificial intelligence models to predict PPV by effectively including additional influencing parameters. PPV prediction can be notably improved by artificial intelligence models such as hybrid learning machines, artificial neural networks, support vector machines (SVM), fuzzy logic, genetic programming, and decision trees. Gu et al. [10] predicted PPV using an intelligent hybrid model of extreme gradient boosting with distinct optimization techniques: Runge–Kutta Optimizer (RUN), Equilibrium Optimizer (EO), Gradient-Based Optimizer (GBO), and Reptile Search Algorithm (RSA). They utilized the following input parameters: charge quantity per hole (CQH/kg), total charge quantity (TCQ/kg), distance from the bursting point to the measuring point (DBM/m), drilling depth (DP/m), borehole diameter (BD/mm), spacing (S/m), minimum bore (MB/m), row spacing (RS/m), and depth displacement (DD/m). The results show that the GBO-optimized XGBoost model had better prediction accuracy than the other machine learning models and empirical formulas.
Moreover, Fan et al. [11] demonstrated the prediction superiority of an artificial neural network to solve complex nonlinear functions. The artificial neural network model was optimized using different algorithms, and the grasshopper optimization algorithm (GOA) proved to be suitable for optimizing the artificial neural network. The input parameters were as follows: distance from the blast face, the maximum charge per delay, height difference, and acoustic wave velocity (AWV). The GOA-ANN model yielded the root mean square error (RMSE), mean absolute error (MAE), and determination coefficient (R2) of 0.240, 0.198, and 0.978, respectively. Also, Xie et al. [12] predicted PPV using a support vector regression (SVR) machine and a random forest (RF) optimized by a particle swarm optimization algorithm (PSO). The results indicated that the PSO-SVR and PSO-RF models can improve the prediction accuracy of PPV, as both models had superior performance, with R2 of 0.9645 and 0.8048, respectively. The USBM model was used as a benchmark for the proposed models and performed unsatisfactorily with an R2 value of 0.7877. Table 1 illustrates a summary of some past studies on PPV.
The accuracy of PPV predictions in blasting operations has been enhanced by the implementation of different machine learning methods. However, most of these methods have inherent limitations. One of the primary challenges lies in their convergence behavior during training, where models commonly converge to local optima, resulting in suboptimal generalization performance [7,8,9]. As a result, the reliability of their predictions can vary under different blasting conditions. To mitigate these constraints, numerous studies have utilized metaheuristic optimization algorithms, such as the Genetic Algorithm (GA), PSO, and the Imperialist Competitive Algorithm (ICA), to optimize the weights and biases of ANNs [10,11,12].
This study introduces two novel elements: the Sparrow Search Algorithm (SSA) is used to optimize an artificial neural network for predicting peak particle velocity, and the Geological Strength Index (GSI) is incorporated as a direct input parameter along with the blast design parameters. The Sparrow Search Algorithm is a novel swarm intelligence technique derived from the foraging and anti-predation behaviors of sparrows [17,18]; it provides a dynamic and adaptive balance between exploration and exploitation, thereby effectively mitigating the early convergence problems common in other metaheuristic approaches. In addition, compared to previous approaches that rely solely on uniaxial compressive strength (UCS), Rock Quality Designation (RQD), or rock type [9,19,20], GSI offers a more comprehensive description of rock mass quality by considering both block structure and the conditions of discontinuity surfaces [21,22,23,24]. The integration of SSA-ANN optimization and GSI-based geological input therefore provides a geologically informed, globally optimized approach to predicting PPV with improved accuracy. GA-ANN, ANN, and the empirical formula (USBM) are used as benchmarks against the SSA-ANN model. To the best of our knowledge, this is the first study to apply the SSA-ANN model to the prediction of blast-induced PPV while also incorporating GSI as a geomechanical input parameter alongside the blast design parameters. This dual approach establishes a new geotechnical and computational framework for accurately modeling PPV in blasting operations.
The paper is structured as follows. Section 2 describes the materials and methods, including the data acquisition, correlation matrix, and description of each methodology employed. Section 3 presents the proposed SSA-ANN model, the GA-ANN model, the ANN model, and the USBM predictor methodology. Section 4 reports the results and discusses their implications. Finally, Section 5 offers the conclusions.

2. Materials and Methods

2.1. Data Acquisition

Blasting datasets were collected from the blasting operation at Bela Bela Granite Quarry in Gaborone, Botswana, at approximately 24°32′18″ S latitude and 26°02′13.8″ E longitude. The collected parameters that most strongly influence PPV are as follows:
  • Maximum Charge Per Delay (X1, kg);
  • Powder Factor (X2, kg/m3);
  • Distance (X3, m);
  • Hole depth (X4, m);
  • GSI (X5);
  • Number of holes (X6);
  • UCS (X7, MPa);
  • Rock Density (X8, kg/m3).
PPV is regarded as the output parameter. These datasets were used for developing and validating the proposed models. Table 2 shows the statistics of the data: the minimum, maximum, and mean values. The statistics also show the variation in the blasting parameter datasets. The maximum charge per delay (X1) ranges from 111.63 to 329.20 kg, while other parameters, such as the distance from the blast face to the monitoring station (X3) and the number of blastholes (X6), show comparable degrees of variation. These variations demonstrate the diversity of the field conditions and of the many parameters that influence PPV. The development of the PPV prediction model depends on the variability in the datasets, as detailed by these descriptive statistics. In addition, this early analysis of the datasets ensures that important characteristics are sufficiently represented and that the model can be applied effectively in various scenarios by defining the range and distribution of the influencing parameters, providing vital insights for the subsequent modeling.
Moreover, a correlation analysis was conducted to determine the strength and direction of the linear relationships between the input parameters and PPV, as illustrated in Figure 1. The maximum charge per delay (X1) and rock density (X8) showed moderate positive relationships with PPV, whereas the distance from the blasting face to the monitoring station (X3) showed a strong negative association. Hole depth (X4), powder factor (X2), and other variables such as UCS (X7) showed weak to moderate relationships with PPV.
These findings suggest that the relationships between the input parameters and PPV are likely nonlinear; hence, linear regression cannot be employed effectively. It is therefore important to use hybrid predictive modeling techniques such as SSA-ANN. The SSA improves the neural network's exploration-exploitation balance, enabling efficient convergence and strong predictive performance, which makes the hybrid SSA-ANN model highly effective for solving nonlinear problems. Table 3 lists some past studies in which the SSA was used to solve nonlinear problems. These studies cover numerous application fields of the SSA and demonstrate its superior prediction performance, convergence efficiency, and generalization ability compared with other techniques.
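For reference, a correlation matrix such as the one in Figure 1 can be reproduced with a few lines of Python; the file name and column labels below are placeholders that assume the data are stored with the X1–X8/PPV naming of Table 2.

```python
import pandas as pd

# Placeholder file name; columns are assumed to follow the X1..X8 / PPV naming of Table 2.
df = pd.read_csv("blast_data.csv")

# Pearson correlation matrix of all blasting parameters and PPV
corr = df[["X1", "X2", "X3", "X4", "X5", "X6", "X7", "X8", "PPV"]].corr(method="pearson")

# Correlation of each input with PPV, sorted by absolute strength
print(corr["PPV"].drop("PPV").sort_values(key=abs, ascending=False))
```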

2.2. Methodology

As mentioned above, the main aim of this work is to estimate PPV using SSA-ANN and to compare its results against GA-ANN, basic ANN, and the empirical formula (USBM). The description of each method is as follows.

2.2.1. Artificial Neural Network Design

ANN is a data-driven approach that mathematically simulates neural networks and draws inspiration from how the human brain processes information. Neurons are interconnected and organized in layers. The most widely used ANN type is the multilayer perceptron (MLP), which consists of three kinds of layers, namely the input layer, the hidden layers, and the output layer [29]. These layers are interlinked by nodes that allow information signals to be transmitted from one layer to the next. The learning process of the MLP contains two steps: forward propagation and back-propagation. In the first step, signals received from the outside world enter the input layer, which processes them and feeds them forward to the hidden layer, where the data are processed by applying the bias, summation, and activation functions. There is no general rule for choosing the number of hidden layers or neurons; the trial-and-error method is widely used until the desired output is achieved, and it is adopted in this study. The second step, back-propagation, occurs when the output layer does not achieve the desired output: the estimated error values within each hidden layer are calculated backward, resulting in changes to the previous layer's weights. These changes lead to a progressive decrease in the total error, and the process continues until the error converges to a desirable level.
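As a minimal illustration of the forward and back-propagation steps described above, the sketch below implements a single-hidden-layer network in NumPy; the layer sizes, learning rate, and ReLU activation are illustrative choices rather than the exact MATLAB configuration used in this study.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(0.0, z)

def relu_grad(z):
    return (z > 0.0).astype(float)

# Illustrative sizes: 8 inputs (X1..X8), one hidden layer, 1 output (PPV)
n_in, n_hidden, n_out = 8, 33, 1
W1 = rng.normal(0.0, 0.1, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.1, (n_hidden, n_out)); b2 = np.zeros(n_out)

def forward(X):
    z1 = X @ W1 + b1            # summation and bias in the hidden layer
    a1 = relu(z1)               # activation
    y_hat = a1 @ W2 + b2        # linear output neuron
    return z1, a1, y_hat

def train_step(X, y, lr=0.01):
    """One forward pass followed by back-propagation of the squared error.
    X has shape (n, 8) and y has shape (n, 1)."""
    global W1, b1, W2, b2
    z1, a1, y_hat = forward(X)
    err = y_hat - y                                   # output-layer error
    dW2 = a1.T @ err / len(X); db2 = err.mean(axis=0)
    dz1 = (err @ W2.T) * relu_grad(z1)                # error propagated backwards
    dW1 = X.T @ dz1 / len(X);  db1 = dz1.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1                    # weight and bias updates
    W2 -= lr * dW2; b2 -= lr * db2
    return float((err ** 2).mean())                   # mean squared error for this step
```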

2.2.2. Sparrow Search Algorithm

The Sparrow Search Algorithm is a novel optimization technique that is based on swarm intelligence and was introduced by Xue and Shen [17]. SSA simulates the foraging behavior and anti-predation techniques of sparrows in their natural environment. In addition, the algorithm draws its inspiration from the collective intelligence shown by birds in foraging and evading dangers by merging different roles within the population to improve global search and convergence behaviors. The population in SSA is categorized into three distinct behavioral roles:
  • Producers: These are individuals responsible for identifying food sources. They thoroughly explore the solution space, directing individuals towards favorable locations. Their range of motion is affected by both global optimal fitness and random changes.
  • Scroungers: These are individuals who follow producers to exploit the known food sources. Their migratory patterns demonstrate adaptability, and they typically gather around more optimal solutions.
  • Alert responders: These are a relatively small portion of the population tasked with identifying threats such as insufficient convergence or stagnation, thus resulting in the swarm evading local optima by sudden positional alterations.
Furthermore, the position updates for each group are guided by their roles and influenced by stochastic factors such as alert thresholds and random coefficients. This ensures that the algorithm offers both exploratory diversity and convergence efficiency, which makes SSA suitable for nonlinear optimization challenges. Compared to conventional metaheuristics such as GA, PSO, and GWO, SSA demonstrates improved convergence speed, reduced sensitivity to initial conditions, and reduced vulnerability to local optima across the engineering applications summarized in Table 3.

2.2.3. SSA-ANN Model

The SSA-ANN hybrid model integrates the global optimization capability of the SSA with the predictive versatility of an ANN to improve performance in modeling complex nonlinear relationships. The combined method uses SSA to optimize the initial weights and biases of the ANN before training, reducing the reliance on gradient-based learning techniques such as backpropagation, which are sensitive to initial conditions and vulnerable to premature convergence. The optimization methodology for SSA-ANN is as follows.
  • Data Normalization: All input and output parameters are normalized to the range [0, 1] by min-max normalization to ensure computation efficiency as well as consistent scaling of parameters. The dataset is transformed as follows:
$S_s = \frac{S - S_{min}}{S_{max} - S_{min}}$ (1)
where S is the initial value of the input or output variable, Smin corresponds to the minimum value of S in the dataset, Smax indicates the maximum value of S in the dataset, and Ss represents the normalized value of S.
  • Encoding and Initialization: Each member in the SSA population represents a possible solution that contains a comprehensive array of ANN weights and biases. The population is randomly initialized within specified parameter limitations.
  • Objective Function: The predictive capability of each individual is determined by a fitness function that calculates the sum of absolute errors between the expected and actual output values.
$f = \frac{1}{\sum_{i=1}^{n} \left| \hat{y}_i - y_i \right|}$ (2)
where f corresponds to the fitness value of the individual, n represents the total number of samples, $\hat{y}_i$ represents the projected output value for the i-th sample, $y_i$ stands for the actual measured output value for the i-th sample, and $|\hat{y}_i - y_i|$ denotes the absolute prediction error.
  • Assignment of Roles and Update on Positions: The population is categorized into three behaviorally based subgroups, namely producers, scroungers, and alert responders, as described above. Moreover, each group changes its position according to the fundamental concepts of SSA, mimicking foraging and anti-predation behaviors.
  • Repetition and Convergence: The population evolves over iterations through role-based updates. Following each fitness evaluation, the algorithm checks the termination criterion; if it is not fulfilled (i.e., the maximum iteration count has not been reached and the optimal fitness is still improving), the sparrow positions are updated and the procedure repeats. Once the condition is satisfied, the individual with the optimal fitness value is kept, and its encoded ANN weights and biases are employed for the final model.
  • The model effectiveness is evaluated utilizing root mean square error (RMSE) and coefficient of determination (R2), as presented in Table 4. Figure 2 illustrates the flowchart for the SSA-ANN methodology. The pseudocode of the SSA-ANN model and description of variables used are shown in Algorithm 1 and Figure 3, respectively.
Algorithm 1 Pseudo-code of SSA-ANN
Inputs: X (inputs), y (PPV targets), MaxIterations, PopSize, SSA parameters: producer ratio (PD), scrounger ratio (SD), safety threshold (ST)

Outputs: w* (optimal ANN weights/biases), RMSE, R2, CR

1: Xs ← MinMaxScale(X); ys ← MinMaxScale(y)
2: NPopSize
3: Encode ANN weights+biases as vector wi; sample wi[d] ∼ U[L,U]
4: function Fitness(w)
5:   ŷsf(Xs; w)
6:   ŷ ← InverseMinMax(ŷs)
7:   return 1/Σ|ŷ − y|
8: end function
9: Evaluate all wi; set elite w*
10: for t = 1 to MaxIterations do
11:   Rank population by fitness
12:   P ← top ⌈PD·N⌉
13:   V ← random ⌈SD·N⌉
14:   S ← remaining
15:   Draw r2U(0,1)
16:   for each wiP do
17:     if r2 < ST then
18:      wi(t + 1) ← wi(t) · exp(−r/(α·T))
19:     else
20:      wi(t + 1) ← wi(t) + N(0,σ2I)
21:     end if
22:   end for
23:   for each wjS do
24:     wj(t + 1) ← w* + |wj(t) − w*| · N(0, I)
25:   end for
26:   for each wkV do
27:     if Fitness(wk) < Fitness(w*) then
28:      wk(t + 1) ← w* + ρ·|wk(t) − w*|, ρ ∼ U(0,1)
29:     else
30:      wk(t + 1) ← w* + 0.5·N(0,I)
31:     end if
32:   end for
33:   Clamp all wi(t + 1) to [L,U]
34:   Evaluate fitness; update w*; log best RMSE
35: end for
36:   ŷs,final ← f(Xs; w*)
37: ŷfinal ← InverseMinMax(ŷs,final)
38: RMSE ← sqrt((1/n)·Σ(ŷfinal − y)²)
39: R² ← 1 − Σ(ŷfinal − y)² / Σ(y − ȳ)², where n is the number of samples and ȳ is the mean of observed values.
40: return w*, RMSE, R2, CR
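A compact Python sketch of the workflow in Algorithm 1 is given below for illustration. It assumes the inputs and target have already been min-max scaled, uses a simplified version of the producer, scrounger, and alert-responder update rules, and minimizes the sum of absolute errors directly (the reciprocal used as fitness in Equation (2) ranks solutions identically). The architecture and bounds follow Section 3, but the exact update equations are paraphrased, not reproduced verbatim.

```python
import numpy as np

rng = np.random.default_rng(42)

# --- ANN with a fixed architecture whose weights are supplied as one flat vector ---
N_IN, HID = 8, 48                       # 8 inputs, 48 hidden neurons (best configuration)
DIM = N_IN * HID + HID + HID + 1        # W1, b1, W2, b2 flattened

def ann_predict(w, X):
    """Decode the flat vector w into ANN weights/biases and run a ReLU forward pass."""
    i = 0
    W1 = w[i:i + N_IN * HID].reshape(N_IN, HID); i += N_IN * HID
    b1 = w[i:i + HID]; i += HID
    W2 = w[i:i + HID].reshape(HID, 1); i += HID
    b2 = w[i]
    h = np.maximum(0.0, X @ W1 + b1)
    return (h @ W2).ravel() + b2

def cost(w, X, y):
    """Sum of absolute errors; minimizing it is equivalent to maximizing the paper's fitness."""
    return np.abs(ann_predict(w, X) - y).sum()

def ssa_ann(X, y, pop=200, iters=1000, lb=-3.0, ub=3.0,
            pd_ratio=0.2, sd_ratio=0.1, st=0.8):
    """Simplified SSA: producers explore, scroungers follow the best, alert sparrows jump."""
    P = rng.uniform(lb, ub, (pop, DIM))
    fit = np.array([cost(w, X, y) for w in P])
    best = P[fit.argmin()].copy(); best_fit = fit.min()
    n_prod, n_alert = int(pd_ratio * pop), int(sd_ratio * pop)
    for t in range(1, iters + 1):
        order = np.argsort(fit)                 # ascending cost: best solutions first
        r2 = rng.random()                       # alarm value shared by this iteration
        for rank, i in enumerate(order):
            if rank < n_prod:                   # producers
                if r2 < st:
                    P[i] *= np.exp(-(rank + 1) / (rng.random() * iters + 1e-12))
                else:
                    P[i] += rng.normal(0, 1, DIM)
            else:                               # scroungers converge on the current best
                P[i] = best + np.abs(P[i] - best) * rng.normal(0, 1, DIM)
        for i in rng.choice(pop, n_alert, replace=False):   # alert responders
            P[i] = best + rng.random() * np.abs(P[i] - best)
        np.clip(P, lb, ub, out=P)
        fit = np.array([cost(w, X, y) for w in P])
        if fit.min() < best_fit:                # keep the elite solution
            best_fit, best = fit.min(), P[fit.argmin()].copy()
    return best, best_fit
```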

2.2.4. Genetic Algorithm

Genetic algorithms are stochastic methods for global search and optimization, derived from Darwin's theory of natural selection and the mechanisms of biological evolution [30]. Chromosomes are used to represent candidate solutions, providing a mapping from the solution space to an encoding space. Selection, crossover, and mutation are then applied to create the population representing the new set of solutions. A roulette-wheel mechanism is used in the selection process to pick new populations with a higher capacity for adaptation. Crossover is a chromosomal swap between two individuals within the population; the exchange takes place at a single crossover point and reflects the information transfer that occurs during biological inheritance. Mutation occurs when individuals selected within the population, with a given likelihood, change a gene in their code to its allele, thereby producing a new individual. The three operators are sketched in code after this paragraph.
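The sketch below illustrates the three operators for a real-coded population; the crossover and mutation rates and the weight bounds are placeholders within the ranges of Table 5, and the fitness values passed to the roulette-wheel selector are assumed to be positive (as with the reciprocal-of-error fitness used later).

```python
import numpy as np

rng = np.random.default_rng(1)

def roulette_select(pop, fitness):
    """Roulette-wheel selection: sampling probability proportional to (positive) fitness."""
    p = fitness / fitness.sum()
    idx = rng.choice(len(pop), size=len(pop), p=p)
    return pop[idx]

def single_point_crossover(parents, cr=0.8):
    """Swap chromosome tails between consecutive pairs at one random cut point."""
    children = parents.copy()
    for i in range(0, len(parents) - 1, 2):
        if rng.random() < cr:
            cut = rng.integers(1, parents.shape[1])
            children[i, cut:] = parents[i + 1, cut:]
            children[i + 1, cut:] = parents[i, cut:]
    return children

def mutate(pop, mr=0.05, lb=-3.0, ub=3.0):
    """Replace each gene with a random value in [lb, ub] with probability mr."""
    pop = pop.copy()
    mask = rng.random(pop.shape) < mr
    pop[mask] = rng.uniform(lb, ub, mask.sum())
    return pop
```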

2.2.5. GA-ANN Model

GA-ANNs are used by many researchers to help solve complex nonlinear functions, which present a significant challenge to ANNs. GA-ANNs aid optimization because the GA conducts a global search, thus reducing the likelihood of convergence to local minima. The GA-ANN hybrid model uses the GA search method to determine suitable weights and biases for the ANN, which ultimately improves the ANN's predictive ability. Figure 4 shows the general structure of the GA-ANN model. The ANN is optimized using the GA as follows:
1. Normalization of the input and output parameters using Equation (3):
$G_s = \frac{G - G_{min}}{G_{max} - G_{min}}$ (3)
where G is the original value of the variable, and Gmin and Gmax are its minimum and maximum values in the dataset.
2. The GA-ANN parameters are initialized.
3. A preliminary population is created using real-number encoding.
4. The fitness function is determined. The fitness function applied in this study is based on the absolute error between the value predicted by the neural network and the true value; Equation (4) is used to calculate it.
$f = \frac{1}{\sum_{i=1}^{n} \left| \hat{w}_i - w_i \right|}$ (4)
where $\hat{w}_i$ and $w_i$ represent the predicted and target values, respectively.
5. Individuals are chosen for crossover and mutation using roulette-wheel, tournament, and normalized geometric selection.
6. The optimal individual is obtained via iterative selection, crossover, and mutation; the optimal weights and thresholds are then acquired.
7. The neural network receives the optimized weights and thresholds for the training simulation.
8. The final stage is the creation of several hybrid GA-ANN models.
The developed models are normally evaluated using different performance metrics; in this study, RMSE and R2 were used, as presented in Table 4, where n is the total number of data points, αi and ωi are the measured and predicted i-th values, respectively, and ᾱ is the mean of the measured values.
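For completeness, the two metrics of Table 4 can be computed as follows; the function names are ours, and the arrays are assumed to hold the measured and predicted PPV values.

```python
import numpy as np

def rmse(measured, predicted):
    measured, predicted = np.asarray(measured, float), np.asarray(predicted, float)
    return float(np.sqrt(np.mean((measured - predicted) ** 2)))   # ideal value: 0

def r2(measured, predicted):
    measured, predicted = np.asarray(measured, float), np.asarray(predicted, float)
    ss_res = np.sum((measured - predicted) ** 2)
    ss_tot = np.sum((measured - measured.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)                           # ideal value: 1
```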

2.2.6. Traditional Prediction Model

Several researchers have used empirical formulas as benchmarks for evaluating the effectiveness of their suggested predictive models. This study uses the United States Bureau of Mines (USBM) predictor equation proposed by Duvall and Fogelson [31] as a benchmark for comparison, because of its extensive practicality and reliability in estimating blast-induced PPV. In addition, the estimation of PPV by the USBM method depends on the principle of Scaled Distance (SD), which combines the maximum charge per delay and the distance from the blast face to the monitoring point. The SD is expressed as follows:
$SD = \frac{D}{\sqrt{W}}$ (5)
where D is the distance from the blast face (m) and W is the maximum charge per delay (kg). Then, PPV can be calculated with the following formula:
$PPV = K \cdot (SD)^{-B}$ (6)
where K and B are site-specific constants. Fitting the measured data against the estimated SD values yields the following site-specific relationship for this work:
$PPV = 1989.19 \times (SD)^{-1.03}$ (7)
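As a quick illustration of the scaled-distance calculation, the sketch below evaluates the fitted site relationship; the distance and charge values are illustrative numbers within the ranges of Table 2, not records from the dataset.

```python
import math

def ppv_usbm(distance_m, charge_kg, k=1989.19, b=1.03):
    """USBM predictor with the site constants fitted in this study."""
    sd = distance_m / math.sqrt(charge_kg)   # scaled distance, Equation (5)
    return k * sd ** (-b)                    # Equation (7)

# Illustrative values only (within the ranges reported in Table 2)
print(ppv_usbm(distance_m=340.0, charge_kg=235.0))   # predicted PPV in mm/s
```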

3. Implementation of ANN, GA-ANN, and SSA-ANN Models

The predictive models for PPV were developed in MATLAB 2024a. A total of 55 datasets were obtained from the granite quarry; 45 were randomly split into training (80%) and testing (20%) subsets, while the remaining 10 datasets were reserved for independent validation of the models' predictive ability. The neural network input layer was composed of eight nodes representing key blast design and rock mass parameters, namely the maximum charge per delay (X1), powder factor (X2), distance from the blast face (X3), hole depth (X4), GSI (X5), number of blast holes (X6), UCS (X7), and rock density (X8). The output layer had a single node corresponding to the predicted PPV.
Furthermore, to facilitate effective training and convergence, all input and output data were normalized using min-max scaling to the range 0 to 1, thereby mitigating scale mismatch among parameters and improving learning speed. Min-max normalization was selected because it preserves the relative relationships across the parameters and scales them to a bounded range, which is useful for ANN training, where activation functions are sensitive to input magnitude. Although z-score standardization may be helpful for skewed distributions, initial testing on our dataset demonstrated no substantial improvement in prediction accuracy, whereas min-max scaling provided a more straightforward, consistent scale across parameters. Following suggestions from previous studies [9,15,32,33,34,35], the ANN was set with a learning rate of 0.01, a maximum of 1000 training epochs, and a minimum mean squared error threshold of 1 × 10−5. Two frequently employed activation functions, the Rectified Linear Unit (ReLU) and the Hyperbolic Tangent (Tanh), were deployed across diverse configurations to determine the ideal hidden layer configuration, and the number of hidden neurons was varied between 20 and 50 to find the architecture giving the most accurate predictions. Configurations with 21, 27, 33, 44, and 48 hidden neurons were found to perform best and were used as the base for the GA-ANN and SSA-ANN models. Two hybrid metaheuristic optimization methods, GA-ANN and SSA-ANN, were employed to improve the predictive effectiveness of the basic ANN model; they were applied to adjust the weights and biases of the artificial neural network to improve both convergence and accuracy. The population size for both the GA and SSA models was set to 200, the maximum number of iterations was set to 1000, and the search boundaries for initializing weights and biases were set to the range [−3, 3]. The hyperparameter configurations were established by trial and error to find suitable parameters. To ensure an unbiased comparison, the optimal ANN model configuration, characterized by the number of hidden neurons and the chosen transfer function, was maintained in both the GA-ANN and SSA-ANN models. All models were evaluated based on RMSE and R2. The selected algorithm parameters are shown in Table 5, and Figure 5 demonstrates the overall workflow used in this study.
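The training setup described above can be summarized in a single configuration object; the structure below is simply a restatement of the settings listed in this section and in Table 5, expressed in Python for reproducibility.

```python
CONFIG = {
    "dataset": {"total_records": 55, "train_test": 45, "train_ratio": 0.8, "validation": 10},
    "ann": {
        "inputs": ["X1", "X2", "X3", "X4", "X5", "X6", "X7", "X8"],
        "output": "PPV",
        "hidden_neurons_tested": [21, 27, 33, 44, 48],
        "activations": ["ReLU", "Tanh"],
        "learning_rate": 0.01,
        "max_epochs": 1000,
        "mse_goal": 1e-5,
        "scaling": "min-max to [0, 1]",
    },
    "optimizers": {
        "population_size": 200,
        "max_iterations": 1000,
        "weight_bias_bounds": [-3, 3],
        "GA": {"crossover_prob": (0.6, 0.9), "mutation_prob": (0.01, 0.1)},
        "SSA": {"safety_threshold": (0.5, 0.9), "producer_ratio": (0.1, 0.3),
                "danger_aware_ratio": (0.6, 0.9)},
    },
}
```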

4. Results and Discussion

The objective of this study was to predict blast-induced PPV with the utmost accuracy and to determine the most influential parameters on PPV. Three predictive approaches, namely the ANN, GA-ANN, and SSA-ANN models, were used to predict PPV and were thoroughly evaluated using various hidden-neuron configurations and transfer functions, as detailed in Table 6, Table 7 and Table 8. To ensure a fair comparison, identical hidden-neuron counts and transfer functions were employed across all models. Each model was evaluated using the RMSE and R2 on both the training and testing datasets.
The findings in Table 6 demonstrate that the performance of the basic ANN models was considerably inconsistent. The ANN model with 33 hidden neurons and a ReLU activation function obtained the highest testing accuracy among the ANN configurations, with a test RMSE of 2.832 and R2 of 0.594, whereas the model with 27 hidden neurons and a Tanh activation function was the least effective, generating a higher test RMSE of 3.335 and a low R2 of 0.437. In addition, the model with 48 hidden neurons and the ReLU transfer function demonstrated poor performance on the test dataset (RMSE = 3.444, R2 = 0.400), signaling potential overfitting despite reasonable training accuracy. These findings suggest that the ANN has the potential to predict PPV, but it is architecture-sensitive and lacks robustness across different configurations.
Furthermore, Table 7 illustrates the performance of the GA-ANN models. The integration of the GA considerably enhanced the training and testing accuracy of the neural networks. The GA-ANN with 48 hidden neurons and ReLU activation achieved the best prediction accuracy, resulting in a test RMSE of 2.385 and an R2 of 0.712. Other configurations, for example those with 44 and 33 hidden neurons, also exhibited better performance than their ANN equivalents. The improved performance is likely related to the GA's ability to evade local minima and efficiently optimize the ANN weights and biases during training. However, performance differences among configurations remained, signaling that further optimization attempts might improve performance.
Table 8 demonstrates that the SSA-ANN models consistently outperformed both the ANN and GA-ANN models in all configurations. The SSA-ANN model with 48 hidden neurons and the ReLU activation function showed the best performance, obtaining the lowest test RMSE of 2.252 and an R2 of 0.743. Despite using smaller architectures, the SSA-ANN models with 21 and 27 hidden neurons exhibited substantial generalization ability, achieving test R2 values of 0.689 and 0.637, respectively. These findings illustrate the effectiveness of the Sparrow Search Algorithm in improving neural network training.
Figure 6 illustrates the convergence behavior of both SSA-ANN and GA-ANN. The SSA-ANN lowered the RMSE efficiently and achieved stability much more quickly than GA-ANN. The SSA's flexibility in the exploration-exploitation process enabled effective convergence to optimal weights, resulting in enhanced predictive performance. The basic ANN models demonstrated testing R2 values ranging from 0.400 to 0.594, while the GA-ANN models reported a range of 0.614–0.712.
In addition, the SSA-ANN models obtained the highest accuracy, with testing R2 values between 0.637 and 0.743. Although the increase in the number of hidden neurons additionally enhanced model performance, the selection of the method for optimization produced a more substantial impact on prediction accuracy. These findings validate that hybrid models, particularly SSA-ANN, provide a more reliable and efficient model for predicting PPV in blasting operations.
Further validation was carried out to test the generalization ability of the developed models by applying ten new blasting datasets that were entirely independent of both the training and testing processes. These datasets were used to evaluate the optimal configurations of the ANN, GA-ANN, and SSA-ANN models, together with the empirical USBM equation, to assess the accuracy of their predictions on new data. Figure 7 and Figure 8 demonstrate that the SSA-ANN model closely approached the measured PPV values in almost all instances, showing minimal deviations across the validation datasets. The ANN and GA-ANN models demonstrated fair-to-satisfactory performance; however, both sometimes under- or overestimated the PPV values, especially in datasets 5 and 10. In addition, the GA-ANN showed significant improvement compared to the standard ANN, consistent with the findings from the testing phase.
The USBM model results were less satisfactory, as it overestimated the PPV most of the time. Despite that, it provides an easy prediction because it is based only on scaled distance and explosive weight. Its failure to incorporate other parameters, such as rock mass properties, limits its accuracy and adaptability across diverse geological conditions. In dataset 5, USBM projected a considerably lower PPV than measured, whereas in datasets 1 and 9, it overestimated PPV. These validation results support the superior performance of the SSA-ANN model, which fitted both the training and testing data well while also sustaining high accuracy on new, unseen datasets. This result illustrates the model's ability to generalize and adapt, showing its viability for practical use in blasting operations.
In addition, bootstrap resampling was conducted to further validate the prediction performance of the SSA-ANN, GA-ANN, and standard ANN models. SSA-ANN obtained the highest mean R2 score of 0.51, demonstrating improved consistency and reliability in predicting PPV, as shown in Figure 9. GA-ANN achieved a mean R2 of 0.41, while the standard ANN model generated a lower mean R2 of 0.26.
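A sketch of the bootstrap procedure is shown below; it resamples measured/predicted pairs with replacement and reports the mean R2 with a 95% percentile interval. The arrays passed in are placeholders for a model's validation predictions, and the number of resamples is an assumption.

```python
import numpy as np

def bootstrap_r2(y_true, y_pred, n_boot=1000, seed=0):
    """Resample (measured, predicted) pairs with replacement and collect R2 scores."""
    rng = np.random.default_rng(seed)
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    scores = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))
        yt, yp = y_true[idx], y_pred[idx]
        ss_tot = np.sum((yt - yt.mean()) ** 2)
        if ss_tot > 0:                                  # skip degenerate resamples
            scores.append(1.0 - np.sum((yt - yp) ** 2) / ss_tot)
    scores = np.asarray(scores)
    return scores.mean(), np.percentile(scores, [2.5, 97.5])
```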
To further demonstrate the efficacy of the proposed models, a non-parametric statistical test, the Wilcoxon Signed-Rank Test (WSRT), was conducted using the test RMSE values from 12 independent runs. The WSRT evaluates the null hypothesis that there is no significant difference in outcomes between the paired approaches. The p-values were below 0.05, confirming statistical significance. The SSA-ANN models consistently achieved a lower test RMSE, averaging 2.50, compared to 2.63 for GA-ANN. This finding offers substantial evidence that the performance difference is not coincidental but reflects a considerable enhancement in prediction precision.
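The paired test can be reproduced with SciPy as sketched below; the twelve RMSE values per model are placeholders chosen only to average near the reported 2.50 and 2.63, not the study's actual run results.

```python
from scipy.stats import wilcoxon

# Placeholder test-RMSE values from 12 paired runs (illustrative only)
rmse_ssa_ann = [2.41, 2.55, 2.48, 2.52, 2.46, 2.58, 2.44, 2.53, 2.49, 2.51, 2.47, 2.56]
rmse_ga_ann  = [2.60, 2.68, 2.59, 2.66, 2.61, 2.71, 2.57, 2.65, 2.62, 2.64, 2.60, 2.69]

stat, p_value = wilcoxon(rmse_ssa_ann, rmse_ga_ann)
print(f"W = {stat:.1f}, p = {p_value:.4f}")   # p < 0.05 rejects the null of equal performance
```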
Moreover, a sensitivity analysis was carried out for each of the eight input parameters using the cosine amplitude method (CAM). This methodology has been applied in past studies [15,36] to assess the significance of each parameter on PPV. A pairwise comparison between two elements, Xi and Xj, can be performed using Equation (8), which yields the strength value Rij [37]. Rij represents the relative significance of the input parameter, where Xi denotes an input parameter and Xj the output parameter. The Rij value of each input parameter varies between 0 and 1, and the highest Rij value indicates the parameter with the greatest influence on PPV.
$R_{ij} = \frac{\sum_{k=1}^{m}(x_{ik} \times x_{jk})}{\sqrt{\sum_{k=1}^{m} x_{ik}^{2} \times \sum_{k=1}^{m} x_{jk}^{2}}}$ (8)
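Equation (8) amounts to a cosine similarity between each input column and the PPV column, which can be computed as in the sketch below; the array names are placeholders.

```python
import numpy as np

def cam_strength(x_i, x_j):
    """Cosine amplitude measure R_ij between one input parameter and the output (Equation (8))."""
    x_i, x_j = np.asarray(x_i, float), np.asarray(x_j, float)
    return np.sum(x_i * x_j) / np.sqrt(np.sum(x_i ** 2) * np.sum(x_j ** 2))

# X: array of shape (m, 8) holding X1..X8; ppv: array of shape (m,) -- placeholders
# strengths = {f"X{k + 1}": cam_strength(X[:, k], ppv) for k in range(X.shape[1])}
```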
The sensitivity analysis results, shown in Figure 10, illustrate that the monitoring distance from the blast face is the most prominent parameter affecting PPV, followed by the maximum charge per delay, rock density, and GSI. Parameters such as the number of blast holes and UCS also have a notable impact on PPV, whereas hole depth and powder factor demonstrated a relatively smaller impact. These findings illustrate the vital influence of rock mass conditions and blast design on PPV propagation.
Furthermore, although the GSI is considered a better descriptor of rock mass quality than UCS or RQD [22,23,24,25], the sensitivity analysis performed in this study showed that GSI was less influential than distance, maximum charge per delay, and rock density in terms of its impact on PPV. At first glance, this may seem inconsistent with the justification for its inclusion. To further investigate GSI's usefulness, an ablation test was performed in which the SSA-ANN was retrained without GSI as an input. The omission of GSI decreased the bootstrap mean R2 from 0.512 (95% CI: 0.446–0.574) to 0.468 (95% CI: 0.402–0.532), an 8.6% reduction in prediction accuracy. This decline in performance confirms that GSI enhances generalization despite its lower direct sensitivity ranking. Disregarding it and other geomechanical properties may therefore lead to unsatisfactory prediction and mitigation of blast-induced PPV.
Moreover, cross-disciplinary studies on vibration control have shown novel methodologies beyond the mining field. Liu et al. [38] demonstrated that broadband vibration suppression may be linked with energy harvesting via 1:2 internal resonance, providing conceptual inspiration for hybrid mitigation methodologies that may enhance predictive models in blasting operations.

5. Conclusions

This study proposed and analyzed three machine learning models, namely ANN, GA-ANN, and SSA-ANN, for the prediction of PPV from blasting operations, employing an extensive set of input parameters that included blast design parameters and geomechanical properties. The SSA-ANN model demonstrated superior performance across all settings, obtaining the highest R2 and the lowest RMSE on both the test and validation datasets. SSA's outstanding efficiency stems from its adaptive balance between exploration and exploitation during the optimization process, which facilitates efficient exploration of the solution space and avoids early convergence. The USBM model results were less satisfactory than those of the other models.
Moreover, the sensitivity analysis found that the distance from the blast face, maximum charge per delay, rock density, and GSI are the parameters with the greatest influence on PPV, whereas hole depth and powder factor exhibited the least significance. The SSA-ANN model also improved prediction accuracy and remained stable on the new data used for validation. In addition, bootstrap resampling further validated the model's reliability, with SSA-ANN obtaining the highest mean R2 of 0.51 (95% CI: 0.45–0.57), outperforming GA-ANN with 0.41 (95% CI: 0.36–0.47) and the standard ANN at 0.26 (95% CI: 0.20–0.32).
However, the study is limited by the relatively small dataset (55 records in total, with 45 used for training and testing and 10 used for validation) and the substantial number of input parameters. This combination increases the risk of overfitting and limits the generalizability of the results to different geological and operational settings. Future work with larger and more heterogeneous datasets is therefore necessary to enhance the robustness and versatility of the model. In addition, expanding the dataset to encompass additional lithologies, different geological conditions, and more blast design parameters could improve the prediction accuracy and practical use of the developed models. To the best of our knowledge, this is the first study applying SSA-ANN with GSI as one of the input parameters to predict PPV. Precise PPV prediction depends on high-quality data sourced from the mining industry, which currently presents significant challenges; it also requires efficient data collection methods and standard preprocessing procedures. These initiatives are essential for ensuring that machine learning models remain reliable and generalizable across diverse operational cases while effectively handling large and diverse datasets. The results show that integrating GSI as a parameter and employing the SSA-ANN model markedly improves the precision of PPV predictions, which may significantly minimize risks within the blast safety area, thus enhancing safety in blasting operations.

Author Contributions

Conceptualization, K.G. and T.S.; methodology, K.G. and H.S.; formal analysis, H.S. and A.H.; investigation, K.G. and A.H.; data curation, K.G.; writing—original draft preparation, K.G.; writing—review and editing, T.S. and A.H.; visualization, K.G. and T.S.; supervision, T.S. and H.S. All authors have read and agreed to the published version of the manuscript.

Funding

The research work is part of a doctoral program supported by JICA.

Data Availability Statement

The data presented in this study are available on request from the corresponding author due to federal collaboration requirements.

Acknowledgments

This research work was supported by the Japan International Cooperation Agency (JICA) and Bela Bela Quarries.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
Acronym | Full Form
SSA | Sparrow Search Algorithm
GA | Genetic Algorithm
ANN | Artificial Neural Network
USBM | United States Bureau of Mines
PPV | Peak Particle Velocity
GSI | Geological Strength Index
UCS | Uniaxial Compressive Strength
SVM | Support Vector Machines
MCPD | Maximum Charge Per Delay
D | Distance
HD | Hole Depth
BH | Bench Height
B | Burden
S | Spacing
RQD | Rock Quality Designation
ST | Stemming
MVRA | Multivariate Regression Analysis
AWV | Acoustic Wave Velocity
TCQ | Total Charge Quantity
BD | Borehole Diameter
RS | Row Spacing
CI | Confidence Interval

References

  1. Yan, Y.; Guo, J.; Bao, S.; Fei, H. Prediction of Peak Particle Velocity Using Hybrid Random Forest Approach. Sci. Rep. 2024, 14, 30793. [Google Scholar] [CrossRef] [PubMed]
  2. Navarro Torres, V.F.; Silveira, L.G.C.; Lopes, P.F.T.; de Lima, H.M. Assessing and Controlling of Bench Blasting-Induced Vibrations to Minimize Impacts to a Neighboring Community. J. Clean. Prod. 2018, 187, 514–524. [Google Scholar] [CrossRef]
  3. Murmu, S.; Maheshwari, P.; Verma, H.K. Empirical and Probabilistic Analysis of Blast-Induced Ground Vibrations. Int. J. Rock Mech. Min. Sci. 2018, 103, 267–274. [Google Scholar] [CrossRef]
  4. Yilmaz, O. The Comparison of Most Widely Used Ground Vibration Predictor Equations and Suggestions for the New Attenuation Formulas. Environ. Earth Sci. 2016, 75, 269. [Google Scholar] [CrossRef]
  5. Yin, Z.; Hu, Z.; Wei, Z.; Zhao, G.; Hai-feng, M.; Zhang, Z.; Feng, R. Assessment of Blasting-Induced Ground Vibration in an Open-Pit Mine under Different Rock Properties. Adv. Civ. Eng. 2018, 2018, 4603687. [Google Scholar] [CrossRef]
  6. Aladejare, A.E.; Lawal, A.I.; Onifade, M. Predicting the Peak Particle Velocity from Rock Blasting Operations Using Bayesian Approach. Acta Geophys. 2022, 70, 581–591. [Google Scholar] [CrossRef]
  7. Zhang, X.; Nguyen, H.; Choi, Y.; Bui, X.-N.; Zhou, J. Novel Extreme Learning Machine-Multi-Verse Optimization Model for Predicting Peak Particle Velocity Induced by Mine Blasting. Nat. Resour. Res. 2021, 30, 4735–4751. [Google Scholar] [CrossRef]
  8. Saadat, M.; Khandelwal, M.; Monjezi, M. An ANN-Based Approach to Predict Blast-Induced Ground Vibration of Gol-E-Gohar Iron Ore Mine, Iran. J. Rock Mech. Geotech. Eng. 2014, 6, 67–76. [Google Scholar] [CrossRef]
  9. Monjezi, M.; Ghafurikalajahi, M.; Bahrami, A. Prediction of Blast-Induced Ground Vibration Using Artificial Neural Networks. Tunn. Undergr. Space Technol. 2011, 26, 46–50. [Google Scholar] [CrossRef]
  10. Gu, Z.; Xiong, X.; Yang, C.; Cao, M.; Xu, C. Research on Prediction of PPV in Open Pit Mine Used on Intelligent Hybrid Model of Extreme Gradient Boosting. J. Environ. Manag. 2024, 371, 123248. [Google Scholar] [CrossRef]
  11. Fan, Y.; Yang, G.; Pei, Y.; Cui, X.; Tian, B. A Model Adapted to Predict Blast Vibration Velocity at Complex Sites: An Artificial Neural Network Improved by the Grasshopper Optimization Algorithm. J. Intell. Constr. 2025, 3, 1–19. [Google Scholar] [CrossRef]
  12. Xie, L.; Yu, Q.; Liu, J.; Wu, C.; Zhang, G. Prediction of Ground Vibration Velocity Induced by Long Hole Blasting Using a Particle Swarm Optimization Algorithm. Appl. Sci. 2024, 14, 3839. [Google Scholar] [CrossRef]
  13. Dzimunya, N.; Besa, B.; Nyirenda, R. Prediction of Ground Vibrations Induced by Bench Blasting Using the Random Forest Algorithm. J. S. Afr. Inst. Min. Met. 2023, 123, 123–132. [Google Scholar] [CrossRef] [PubMed]
  14. Kazemi, M.M.K.; Nabavi, Z.; Khandelwal, M. Prediction of Blast-Induced Air Overpressure Using a Hybrid Machine Learning Model and Gene Expression Programming (GEP): A Case Study from an Iron Ore Mine. AIMS Geosci. 2023, 9, 357–381. [Google Scholar] [CrossRef]
  15. Rana, A.; Bhagat, N.K.; Jadaun, G.P.; Rukhaiyar, S.; Pain, A.; Singh, P.K. Predicting Blast-Induced Ground Vibrations in Some Indian Tunnels: A Comparison of Decision Tree, Artificial Neural Network and Multivariate Regression Methods. Min. Met. Explor. 2020, 37, 1039–1053. [Google Scholar] [CrossRef]
  16. Jahed Armaghani, D.; Hasanipanah, M.; Tonnizam Mohamad, E. A Combination of the ICA-ANN Model to Predict Air-Overpressure Resulting from Blasting. Eng. Comput. 2016, 32, 155–171. [Google Scholar] [CrossRef]
  17. Xue, J.; Shen, B. A Novel Swarm Intelligence Optimization Approach: Sparrow Search Algorithm. Syst. Sci. Control Eng. 2020, 8, 22–34. [Google Scholar] [CrossRef]
  18. Yue, Y.; Cao, L.; Lu, D.; Hu, Z.; Xu, M.; Wang, S.; Li, B.; Ding, H. Review and Empirical Analysis of Sparrow Search Algorithm. Artif. Intell. Rev. 2023, 56, 10867–10919. [Google Scholar] [CrossRef]
  19. Khandelwal, M.; Singh, T.N. Prediction of Blast-Induced Ground Vibration Using Artificial Neural Network. Int. J. Rock Mech. Min. Sci. 2009, 46, 1214–1222. [Google Scholar] [CrossRef]
  20. Hajihassani, M.; Jahed Armaghani, D.; Marto, A.; Tonnizam Mohamad, E. Ground Vibration Prediction in Quarry Blasting through an Artificial Neural Network Optimized by Imperialist Competitive Algorithm. Bull. Eng. Geol. Environ. 2015, 74, 873–886. [Google Scholar] [CrossRef]
  21. Hoek, E.; Brown, E.T. Practical Estimates of Rock Mass Strength. Int. J. Rock Mech. Min. Sci. 1997, 34, 1165–1186. [Google Scholar] [CrossRef]
  22. Hoek, E.; Carlos, C.-T.; Brent, C. Hoek-Brown Failure Criterion-2002 Edition. In Proceedings of the NARMS-Tac, Toronto, ON, Canada, 7–10 July 2002; pp. 267–273. [Google Scholar]
  23. Marinos, P.; Hoek, E. Estimating the Geotechnical Properties of Heterogeneous Rock Masses Such as Flysch. Bull. Eng. Geol. Environ. 2001, 60, 85–92. [Google Scholar] [CrossRef]
  24. Sonmez, H.; Ulusay, R. Modifications to the Geological Strength Index (GSI) and Their Applicability to Stability of Slopes. Int. J. Rock Mech. Min. Sci. 1999, 36, 743–760. [Google Scholar] [CrossRef]
  25. Zhou, J.; Dai, Y.; Huang, S.; Armaghani, D.J.; Qiu, Y. Proposing Several Hybrid SSA—Machine Learning Techniques for Estimating Rock Cuttability by Conical Pick with Relieved Cutting Modes. Acta Geotech. 2023, 18, 1431–1446. [Google Scholar] [CrossRef]
  26. Yang, L.; Li, Z.; Wang, D.; Miao, H.; Wang, Z. Software Defects Prediction Based on Hybrid Particle Swarm Optimization and Sparrow Search Algorithm. IEEE Access 2021, 9, 60865–60879. [Google Scholar] [CrossRef]
  27. Tabatabaei, S.M.; Asadian-Pakfar, M.; Sedaee, B. Well Placement Optimization with a Novel Swarm Intelligence Optimization Algorithm: Sparrow Search Algorithm. Geoenergy Sci. Eng. 2023, 231, 212291. [Google Scholar] [CrossRef]
  28. Dui, S.; Zou, J.; Zheng, X.; Zhong, P. Solar Radiation Prediction Based on the Sparrow Search Algorithm, Convolutional Neural Networks, and Long Short-Term Memory Networks. Processes 2025, 13, 1308. [Google Scholar] [CrossRef]
  29. Monjezi, M.; Mohamadi, H.A.; Barati, B.; Khandelwal, M. Application of Soft Computing in Predicting Rock Fragmentation to Reduce Environmental Blasting Side Effects. Arab. J. Geosci. 2014, 7, 505–511. [Google Scholar] [CrossRef]
  30. Holland, J.H. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence; MIT Press: Cambridge, UK, 1998; ISBN 9780262581110. [Google Scholar]
  31. Duvall, W.I.; Fogelson, D.E. Review of Criteria for Estimating Damage to Residences from Blasting Vibrations; United States Bureau of Mines: Washington, DC, USA, 1962; Volume 5968. [Google Scholar]
  32. Bashir, Z.A.; El-Hawary, M.E. Applying Wavelets to Short-Term Load Forecasting Using PSO-Based Neural Networks. IEEE Trans. Power Syst. 2009, 24, 20–27. [Google Scholar] [CrossRef]
  33. Yang, Z.L.; Bonsall, S.; Wang, J. Approximate TOPSIS for Vessel Selection under Uncertain Environment. Expert Syst. Appl. 2011, 38, 14523–14534. [Google Scholar] [CrossRef]
  34. Armaghani, D.J.; Mohamad, E.T.; Narayanasamy, M.S.; Narita, N.; Yagiz, S. Development of Hybrid Intelligent Models for Predicting TBM Penetration Rate in Hard Rock Condition. Tunn. Undergr. Space Technol. 2017, 63, 29–43. [Google Scholar] [CrossRef]
  35. Ding, S.; Su, C.; Yu, J. An Optimizing BP Neural Network Algorithm Based on Genetic Algorithm. Artif. Intell. Rev. 2011, 36, 153–162. [Google Scholar] [CrossRef]
  36. Zhou, Y.; Cao, R. The Artificial Neural Network Prediction Algorithm Research of Rail-Gun Current and Armature Speed Based on B-Dot Probes Array. Measurement 2019, 133, 47–55. [Google Scholar] [CrossRef]
  37. Montgomery, D.C.; Runger, G.C. Applied Statistics and Probability for Engineers, 5th ed.; John Wiley & Sons: Hoboken, NJ, USA, 2011; ISBN 0470053046. [Google Scholar]
  38. Liu, C.; Wang, J.; Zhang, W.; Yang, X.-D.; Guo, X.; Liu, T.; Su, X. Synchronization of Broadband Energy Harvesting and Vibration Mitigation via 1:2 Internal Resonance. Int. J. Mech. Sci. 2025, 301, 110503. [Google Scholar] [CrossRef]
Figure 1. Correlation matrix of the blast datasets.
Figure 2. General structure of the SSA-ANN model.
Figure 3. Pseudo-code of the SSA-ANN model.
Figure 4. General structure of the GA-ANN model.
Figure 5. Framework used in this study. Arrows illustrate the order of tasks.
Figure 6. Convergence curve for SSA-ANN and GA-ANN.
Figure 7. Comparison between the measured vs. predicted PPV by different models.
Figure 8. Measured PPV vs. predicted by different models.
Figure 9. Mean R2 values from bootstrap resampling for each model.
Figure 10. Sensitivity analysis of the influential parameters.
Table 1. Examples of machine learning-based research on blast-induced PPV.
Reference | Algorithm | Input Parameters | Number of Blasting Datasets
Fan et al., 2025 [11] | GOA-ANN | D, MCPD, HD, AWV | 110
Gu et al., 2024 [10] | XGBoost optimized by RUN, EO, GBO, RSA | MCPD, TCQ, D, HD, DBM, BD, S, MB, RS, DD | 197
Xie et al., 2024 [12] | PSO-SVR and PSO-RF | D, MCPD, ND, CW | 138
Dzimunya et al., 2023 [13] | RF | BH, MCPD, D, ST, B | 48
Kazemi et al., 2023 [14] | XGBoost optimized via Grey Wolf Optimization (XGB-GWO) | MCPD, D, RQD, ST, B, S | 66
Zhang et al., 2021 [7] | ELM-MVO | MCPD, D, RMIC | 137
Rana et al., 2020 [15] | Decision Tree, ANN, MVRA | MCPD, D, ST, HD, S | 80
Hajihassani et al., 2015 [16] | Imperialist Competitive Algorithm (ICA)-ANN | MCPD, D, RQD, ST, B, S | 77
Saadat et al., 2014 [8] | ANN | MCPD, D, HD, ST | 69
Table 2. Statistics of data.
Parameter | Minimum | Maximum | Mean
X1 (kg) | 111.63 | 329.20 | 234.97
X2 (kg/m3) | 0.52 | 0.74 | 0.62
X3 (m) | 200.00 | 450.00 | 339.58
X4 (m) | 7.53 | 12.84 | 10.47
X5 | 50.00 | 60.00 | 55.63
X6 | 73.00 | 354.00 | 226.48
X7 (MPa) | 16.76 | 58.90 | 28.11
X8 (kg/m3) | 2427.64 | 2726.77 | 2533.27
PPV (mm/s) | 1.01 | 17.10 | 3.65
Table 3. Past studies that used the SSA.
Reference | Description | Results
Zhou et al. (2023) [25] | Prediction of specific energy in TBM disk cutters | The SSA-ANN model achieved the most satisfactory prediction accuracy, with a coefficient of determination (R2) of 0.976.
Yang et al. (2021) [26] | Software defects prediction using PSO and SSA | Higher convergence speed and more stable, accurate results.
Tabatabaei et al. (2023) [27] | Well placement optimization using the SSA | SSA outperforms PSO, highlighting its potential in optimizing well placements.
Dui et al. (2025) [28] | Solar radiation prediction based on the SSA, convolutional neural networks (CNN), and long short-term memory networks (LSTM) | The SSA-CNN-LSTM model outperforms traditional LSTM and CNN-LSTM models in prediction accuracy, confirming the effectiveness of SSA in parameter optimization.
Table 4. Performance metrics.
Statistical Performance Metric | Equation | Ideal Value
RMSE | $RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(\alpha_i - \omega_i)^2}$ | 0
R2 | $R^2 = 1 - \frac{\sum_{i=1}^{n}(\alpha_i - \omega_i)^2}{\sum_{i=1}^{n}(\alpha_i - \bar{\alpha})^2}$ | 1
Table 5. Algorithm parameters.
Parameter | Algorithm | Range Used
Cr (Crossover probability) | GA | 0.6–0.9
Mr (Mutation probability) | GA | 0.01–0.1
ST (Security threshold) | SSA | 0.5–0.9
PD (Explorer ratio) | SSA | 0.1–0.3
SDaw (Danger-aware proportion by sparrows) | SSA | 0.6–0.9
Table 6. ANN models and their performance evaluations.
Model | Hidden Neurons | Transfer Function | Train (RMSE) | Train (R2) | Test (RMSE) | Test (R2)
1 | 21 | ReLU | 2.876 | 0.331 | 2.906 | 0.573
2 | 27 | Tanh | 3.421 | 0.054 | 3.335 | 0.437
3 | 33 | ReLU | 2.585 | 0.460 | 2.832 | 0.594
4 | 44 | Tanh | 2.613 | 0.448 | 3.082 | 0.519
5 | 48 | ReLU | 2.654 | 0.430 | 3.444 | 0.400
Table 7. GA-ANN models and their performance evaluations.
Model | Hidden Neurons | Transfer Function | Train (RMSE) | Train (R2) | Test (RMSE) | Test (R2)
1 | 21 | ReLU | 3.115 | 0.216 | 2.741 | 0.620
2 | 27 | Tanh | 2.846 | 0.345 | 2.761 | 0.614
3 | 33 | ReLU | 2.503 | 0.494 | 2.750 | 0.617
4 | 44 | Tanh | 2.634 | 0.439 | 2.604 | 0.657
5 | 48 | ReLU | 2.183 | 0.615 | 2.385 | 0.712
Table 8. SSA-ANN models and their performance evaluations.
Model | Hidden Neurons | Transfer Function | Train (RMSE) | Train (R2) | Test (RMSE) | Test (R2)
1 | 21 | ReLU | 2.794 | 0.369 | 2.478 | 0.689
2 | 27 | Tanh | 2.744 | 0.391 | 2.676 | 0.637
3 | 33 | ReLU | 2.323 | 0.564 | 2.658 | 0.642
4 | 44 | Tanh | 2.615 | 0.447 | 2.458 | 0.694
5 | 48 | ReLU | 2.485 | 0.501 | 2.252 | 0.743
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
