Article

A Novel Black Widow Optimization Algorithm Based on Lagrange Interpolation Operator for ResNet18

by Peiyang Wei 1,2,3,4,5,*, Can Hu 2, Jingyi Hu 2, Zhibin Li 2, Wen Qin 6, Jianhong Gan 2,4, Tinghui Chen 1, Hongping Shu 2,4 and Mingsheng Shang 3
1 School of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
2 School of Software Engineering, Chengdu University of Information Technology, Chengdu 610225, China
3 Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, Chongqing 400714, China
4 Automatic Software Generation & Intelligence Service Key Laboratory of Sichuan Province, Chengdu 610225, China
5 Key Laboratory of Remote Sensing Application and Innovation, Chongqing 401147, China
6 School of Computer Science, Sichuan Normal University, Chengdu 610101, China
* Author to whom correspondence should be addressed.
Biomimetics 2025, 10(6), 361; https://doi.org/10.3390/biomimetics10060361
Submission received: 26 April 2025 / Revised: 22 May 2025 / Accepted: 31 May 2025 / Published: 3 June 2025
(This article belongs to the Section Biological Optimisation and Management)

Abstract
Hyper-parameters play a critical role in neural networks; they significantly impact both training effectiveness and overall model performance. Proper hyper-parameter settings can accelerate model convergence and improve generalization. Among the various hyper-parameters, the learning rate is particularly important. However, optimizing the learning rate typically requires extensive experimentation and tuning, as its setting is often dependent on the specific task and dataset and therefore lacks universal rules or standards. Consequently, adjustments are generally made through trial and error, making the selection of the learning rate complex and time-consuming. To address this challenge, evolutionary computation algorithms can adjust the learning rate automatically, improving training efficiency and model performance. Accordingly, we propose a black widow optimization algorithm based on Lagrange interpolation (LIBWONN) to optimize the learning rate of ResNet18. Moreover, we evaluate LIBWONN’s effectiveness using 24 benchmark functions from CEC2017 and CEC2022 and compare it with nine advanced metaheuristic algorithms. The experimental results indicate that LIBWONN outperforms the other algorithms in convergence and stability. Additionally, experiments on publicly available datasets from six different fields demonstrate that LIBWONN improves the accuracy on both the training and testing sets compared to the standard BWO, with gains of 6.99% and 4.48%, respectively.

1. Introduction

Owing to its ability to extract features efficiently, scale well, and deliver strong performance, image recognition technology is widely used in fields such as image classification and medical image analysis [1,2,3,4,5]. ResNet, proposed by He et al. [2], addresses the degradation problem in deep neural networks by adding shortcut connections. This design enables the network to be deeper without easily overfitting. The model performs excellently in tasks such as image classification [3] and has become a significant benchmark in deep learning [6,7,8,9,10,11].
The training process of ResNet is significantly affected by its learning rate [4], which determines the step size for updating model parameters and directly influences the convergence speed [12,13,14,15,16], ultimately impacting the model’s performance. Compared to a fixed learning rate, an adaptive learning rate can dynamically adjust based on the model’s performance. This approach effectively addresses complex optimization problems, thereby improving the stability and efficiency of model training [5].
The learning rate determines the step size for updating model parameters, affecting both convergence speed and final performance. An improperly set learning rate may lead to slow convergence or trap the model in a local optimum, making it difficult to find the global optimum. Numerous researchers have therefore conducted extensive studies on optimizing the learning rate to improve training effectiveness [17,18,19,20,21].
Ma et al. propose an efficient optimization method to address function approximation problems, enabling the solution of partial differential equations with deep learning. They employ particle methods (PMs) and smoothed particle methods (SPMs) for spatial discretization, allowing for a smaller learning rate to ensure the convergence of the optimization algorithm [8]. Franchini et al. study a stochastic gradient algorithm that gradually increases the mini-batch size in a predefined manner, automatically adjusting the learning rate in a line search process that is either monotonic or non-monotonic [9]. Wang et al. adopt a learning rate scheduler with an incremental proportional–integral–derivative controller to optimize the parameters of stochastic gradient descent (SGD) [10]. Qin et al. develop the Adaptive Parallel Stochastic Gradient Descent (AP-SGD) algorithm to minimize scheduling costs; the method achieves significant parallelism by integrating an adaptive momentum technique into the learning process, thereby speeding up convergence with adaptive learning rates and acceleration coefficients [11]. However, these methods still suffer from inappropriate hyper-parameters, which results in low computational efficiency [22,23,24,25,26,27].
To address the aforementioned issues, this study proposes a black widow optimization algorithm based on Lagrange interpolation (LIBWONN) to further enhance the performance of the standard BWO. By dynamically constructing an interpolation function from known data points during iteration, the parameters are adjusted to obtain an optimal learning rate.
To substantiate the efficacy of the proposed algorithm, nine optimization algorithms are deliberately selected as baselines, covering four categories: “Optimization based on principles from physics and mathematics (PSEQADE [28])”, “Optimization based on evolutionary cycles (COVIDOA [29], LSHADE-cnEpSin [30])”, “Optimization based on behaviors observed in animals and plants (ALA [31], SDO [32], SASS [33], BOA [34], WOA [35])”, and “Optimization inspired by human activities (DOA [36], CBSO [37], AGSK [38])”. Each of these algorithms represents a different approach to solving optimization problems, ensuring a comprehensive comparison. We conduct comparative experiments to analyze their performance in terms of convergence speed, solution accuracy, and robustness. By comparing these algorithms, we aim to highlight the strengths and weaknesses of LIBWONN.
The key contributions of this study are listed below:
(a) This paper proposes a Lagrange interpolation-based black widow optimization algorithm (LIBWONN) to improve training efficiency and performance by optimizing the learning rate of ResNet18; it overcomes the limitation of the original BWO algorithm in escaping local optima and achieves more robust learning rate adjustment.
(b) Experiments are conducted on six publicly available datasets, with nine novel metaheuristic optimization algorithms selected as baselines. The experimental results demonstrate that LIBWONN outperforms the other algorithms and maintains good generalization and stability across multiple datasets.

2. Related Theoretical Description

2.1. ResNet18 Model

ResNet is a deep residual network model proposed by a research team at Microsoft Research. The number of convolutional layers in ResNet can be adjusted for different tasks; adding layers increases structural depth and complexity and can improve accuracy to meet varying task requirements [38,39,40,41,42,43,44,45,46]. This paper adopts ResNet18, a commonly used ResNet variant. The primary innovation of this model is the introduction of residual learning. Traditional deep neural networks frequently face vanishing or exploding gradients, especially as the network depth increases, which makes training challenging. ResNet18 addresses these issues by using residual blocks, which allow shortcut connections between network layers, thus enabling the model to learn features more deeply and effectively. This can be represented by the formula below:
$H(x) = F(x) + x,$
where H(x) is the expected output of the network, F(x) denotes the residual function, and x is the input signal or feature map.
As shown in Figure 1, each residual block includes two main paths: one is a direct identity mapping, where x is added to the output of the target function via a shortcut connection, as shown in Figure 2; the other is the residual-learning path, which produces an output through two convolutional layers and the ReLU activation function.
The structure of ResNet18 consists of several major components, and Figure 3 represents the architecture of this model. The input layer accepts an image of size 224 × 224 × 3. This is followed by a 7 × 7 convolutional layer with a stride of 2, outputting 64 channels, and then a 3 × 3 max pooling layer with a stride of 2. In the residual block, the network is divided into four stages. Stage 1 contains two residual blocks with 64 channels; Stage 2 has two residual blocks with 128 channels and a stride of 2; Stages 3 and 4 contain two residual blocks with 256 and 512 channels, respectively, each with a stride of 2. After a global average pooling layer, the feature map is converted into a fixed-length feature vector, which is passed through a fully connected layer to produce a classification result [35,36,37,38,39,40,41].
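As an illustration of the residual mapping H(x) = F(x) + x described above, the following is a minimal PyTorch-style sketch of a basic residual block; the class name, channel handling, and layer choices are illustrative rather than the paper's exact ResNet18 implementation.

```python
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    """Minimal residual block: output = ReLU(F(x) + shortcut(x))."""
    def __init__(self, in_channels, out_channels, stride=1):
        super().__init__()
        # Residual branch F(x): two 3x3 convolutions with batch normalization.
        self.conv1 = nn.Conv2d(in_channels, out_channels, 3, stride, 1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.conv2 = nn.Conv2d(out_channels, out_channels, 3, 1, 1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channels)
        # Shortcut: identity mapping, or a 1x1 convolution when the shape changes.
        self.shortcut = nn.Sequential()
        if stride != 1 or in_channels != out_channels:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_channels, out_channels, 1, stride, bias=False),
                nn.BatchNorm2d(out_channels),
            )

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + self.shortcut(x))  # H(x) = F(x) + x
```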
The learning rate is a critical hyper-parameter for training ResNet18, as it directly affects the convergence speed and performance of the model. An appropriate learning rate can accelerate convergence, allowing the model to achieve better results within fewer training iterations. Setting the learning rate too high can cause the model to overshoot the optimal solution during loss function optimization, resulting in unstable or divergent training. Conversely, if the learning rate is too low, convergence may be slow, increasing training time and potentially trapping the model into local optima. For ResNet18, the depth structure with shortcut connections benefits from an appropriate learning rate, which helps to avoid the issue of vanishing gradients.
The effect of the learning rate on each layer of the ResNet18 model is captured by the equation below:
$\theta_{t+1} = \theta_t - \eta \nabla L(\theta_t),$
where $\theta_t$ denotes the current model parameters, $\theta_{t+1}$ denotes the updated parameters, and $\eta$ is the learning rate, which controls the step size of the parameter update. $\nabla L(\theta_t)$ is the gradient of the loss function.
For example, the influence on the convolutional and fully connected layers can be represented by Equations (3) and (4):
$W_{\mathrm{conv}}^{(l),\,t+1} = W_{\mathrm{conv}}^{(l),\,t} - \eta \nabla L_{\mathrm{conv}}^{(l)}\big(W_{\mathrm{conv}}^{(l),\,t}\big),$
$W_{\mathrm{fc}}^{\,l,\,t+1} = W_{\mathrm{fc}}^{\,l,\,t} - \eta \nabla L_{\mathrm{fc}}\big(W_{\mathrm{fc}}^{\,l,\,t}\big).$
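To make these update rules concrete, here is a brief PyTorch-style sketch, assuming torchvision's resnet18 and plain SGD; the data pipeline and loss are placeholders rather than the paper's exact training setup.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

model = resnet18(num_classes=10)           # ResNet18 backbone
criterion = nn.CrossEntropyLoss()
eta = 0.01                                 # learning rate η, the hyper-parameter tuned in this paper
optimizer = torch.optim.SGD(model.parameters(), lr=eta)

def training_step(images, labels):
    """One update θ_{t+1} = θ_t − η ∇L(θ_t), applied to every layer's weights."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()                        # compute ∇L(θ_t)
    optimizer.step()                       # apply the η-scaled update
    return loss.item()
```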
To rigorously verify the suitability of the ResNet18 model for image classification, five canonical models are selected for comparison: the Recurrent Neural Network (RNN) [47], the Convolutional Neural Network (CNN) [2], the Long Short-Term Memory network (LSTM) [48], the Generative Adversarial Network (GAN) [49], and the Visual Geometry Group network (VGG) [50]. All models are trained on the FASHION-MNIST dataset with the LIBWONN optimization algorithm; this fashion-oriented image classification dataset provides a suitable testbed for evaluating the models’ performance. In Table 1, the experimental results show that ResNet18 performs the best in both training and testing, with a test accuracy of 95.33%, about 4.28% higher than the worst-performing GAN. CNN and VGG come close to ResNet18 but are slightly inferior. RNN and LSTM perform worse, with accuracies about 2% to 3% lower than ResNet18, and GAN performs the worst, with the highest training and testing losses. The residual connections in ResNet18 effectively mitigate the vanishing gradient problem in deep networks, ensuring faster convergence and higher accuracy. Models such as RNN and LSTM are better suited to sequential data and therefore perform worse on image classification than CNN and ResNet. Although VGG also achieves good results, it lacks the residual structure of ResNet18, leading to slightly worse training and testing performance. Therefore, ResNet18, with its structural advantages, outperforms the other classic neural network models in handling image classification tasks.

2.2. Black Widow Optimization Algorithm

Inspired by the biological characteristics of the black widow spider, the black widow optimization (BWO) algorithm is a novel nature-inspired metaheuristic optimization method. The algorithm simulates the strategies male black widow spiders use to locate females, combining pheromone-guided actions with movement strategies within their web to effectively search and optimize solutions in complex problem spaces. The specific principles of the two strategies employed by the BWO are explained as follows:
  • Movement Strategy
The movement of the black widow spider within its web can be abstracted into linear and spiral movements:
$x_i(t+1) = \begin{cases} x^*(t) - m\, x_{r1}(t), & \text{if } rand() \le 0.3, \\ x^*(t) - \cos(2\pi\beta)\, x_i(t), & \text{otherwise}, \end{cases}$
where $x_i(t+1)$ is the new position of the current search agent, $x^*(t)$ is the best search agent from the previous iteration, $m$ is a randomly generated floating-point number in the range [0.4, 0.9], $x_{r1}(t)$ is a randomly selected search agent, $x_i(t)$ is the i-th search agent, and $\beta$ is a random floating-point number in the range [−1.0, 1.0].
When the spider moves linearly, it follows a determined direction for precise and effective searching in the current region. In contrast, spiral movement expands its search range, enhancing global exploration and helping to avoid local optima. The algorithm implements the black widow spider’s movement using a formula with random floating-point numbers, where m controls speed, and added randomness further enhances search diversity. By strategically combining both linear and spiral movements, the algorithm balances local exploitation and global exploration, ultimately improving its ability to solve complex optimization problems efficiently.
2. Pheromone
Pheromones play a crucial role in the mating process of spiders, as male spiders tend to prioritize female spiders with higher pheromone levels. The formula for calculating pheromones is as follows:
$pheromone_i = \frac{fitness_{max} - fitness_i}{fitness_{max} - fitness_{min}},$
where $fitness_{max}$ and $fitness_{min}$ are the worst and best fitness values in the global iteration, respectively, and $fitness_i$ is the fitness of the i-th search agent.
$x_i(t) = x^*(t) + \frac{1}{2}\left[x_{r1}(t) - (-1)^{\sigma} x_{r2}(t)\right],$
When the pheromone level is less than or equal to 0.3, the search agent $x_i(t)$ is updated by Equation (7), where $r1$ and $r2$ are random integers generated in the range from 1 to the maximum number of search agents, and $\sigma \in \{0, 1\}$ is a binary random number. Additionally, the pseudocode for the BWO algorithm can be found in Algorithm 1.
Algorithm 1: BWO
Input: MaxIter, pop, dim
/* Initialization */
1. Initialize: MaxIter, pop, dim
2. Initialize: parameters m and β
/* Training Starts */
3. while iteration < MaxIter do
4.     if rand() < 0.3 then
5.         X_i^new ← X*(t) − m · X_r1(t)
6.     else
7.         X_i^new ← X*(t) − cos(2πβ) · x_i(t)
8.     end if
9.     Calculate the pheromone for each search agent using Equation (6)
10.    Revise search agents with low pheromone values using Equation (7)
11.    Calculate the fitness of the new search agents X^new
12.    if f(X^new) < f(X*) then
13.        X* ← X^new
14.    end if
15.    iteration ← iteration + 1
16. end while
/* Operation Ending */
Output: X*, the best optimal solution
The algorithm takes as input the maximum number of iterations (MaxIter), population size (pop), and dimension (dim). It first initializes the parameters m and β. During the iterative process, the algorithm selects different operations based on a set probability to maintain population diversity and search capability.
In each iteration, the pheromone of each individual is calculated, which reflects the individual’s fitness. Individuals with lower pheromone values update their positions to avoid becoming trapped in the local optima. After updating, the fitness values are recalculated, and the global best solution is updated if a better solution is found. After the iterations end, the algorithm outputs the best solution X*. This process simulates the hunting behavior of black widow spiders to efficiently search for the global optimum.
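For readers who prefer code, the following is a minimal NumPy sketch of one BWO generation following Equations (5)–(7) and Algorithm 1; the function name, bounds handling, and random-number generator are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def bwo_step(pop, fitness, best, rng):
    """One BWO generation: movement (Eq. 5), pheromone (Eq. 6), low-pheromone reset (Eq. 7)."""
    n, dim = pop.shape
    new_pop = np.empty_like(pop)
    for i in range(n):
        if rng.random() < 0.3:                        # linear movement
            m = rng.uniform(0.4, 0.9)
            r1 = rng.integers(n)
            new_pop[i] = best - m * pop[r1]
        else:                                         # spiral movement
            beta = rng.uniform(-1.0, 1.0)
            new_pop[i] = best - np.cos(2 * np.pi * beta) * pop[i]
    # Pheromone (Eq. 6): 1 for the best agent, 0 for the worst (minimization).
    f_max, f_min = fitness.max(), fitness.min()
    pheromone = (f_max - fitness) / (f_max - f_min + 1e-12)
    for i in range(n):
        if pheromone[i] <= 0.3:                       # low-pheromone reset (Eq. 7)
            r1, r2 = rng.integers(n, size=2)
            sigma = rng.integers(2)                   # σ ∈ {0, 1}
            new_pop[i] = best + 0.5 * (pop[r1] - (-1) ** sigma * pop[r2])
    return new_pop
```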

2.3. Lagrange Interpolation Method

The BWO algorithm has excellent performance in solving complex optimization problems, but it still has some limitations, as it is sensitive to critical parameters such as the population size during initialization, the maximum number of iterations, and the dimensionality of the problem. In some cases, the BWO algorithm may encounter local optima and exhibit an unstable convergence speed, which can lead to a decrease in solution quality and a slowdown in optimization speed.
To address the issues in optimization algorithms, F. Miao, Y. Wu et al. propose a quadratic interpolation whale optimization algorithm for solving high-dimensional feature selection problems [26,27], while Z. Li, S. Li et al. propose a novel cubic interpolated beetle antennae search (CIBAS)-based robot arm calibration algorithm [51].
Therefore, this study also introduces mathematical interpolation methods to optimize the BWO algorithm. We select six interpolation methods (Lagrange interpolation, Newton interpolation [42], spline interpolation [43], quadratic interpolation [44], linear interpolation [45], and Chebyshev interpolation [46]) and conduct extensive comparative experiments on the FASHION-MNIST dataset with the ResNet18 model. From the data in Table 2, it can be clearly seen that Lagrange interpolation performs the best in optimizing the black widow optimization (BWO) algorithm. It achieves the lowest training and testing losses, as well as the highest training and testing accuracies, indicating superior convergence and generalization abilities. Furthermore, this method achieves 0.95 in precision, recall, and F1 score, resulting in the best overall classification performance. In contrast, the other interpolation methods show slightly lower testing accuracy and F1 scores, with quadratic and linear interpolation performing particularly poorly. This indicates that Lagrange interpolation is significantly more effective in optimizing the BWO algorithm, leading to faster convergence, stronger generalization, and more stable classification performance across various problem domains, especially in complex and large-scale optimization tasks.
Lagrange interpolation is employed to enhance the position-updating process of each spider, thereby further optimizing the BWO algorithm. When a spider needs to update its position toward the global optimum, a new position is calculated through Lagrange interpolation, which adopts several known optimal spider positions. Then, the new position is compared with the global optimal position to decide whether to update. This approach mitigates excessive jumps in the spider’s movement, enhancing the algorithm’s convergence and robustness, which helps avoid local optima and improve global search performance.
The Lagrange interpolation polynomial is given by the following:
$L(x) = \sum_{j=0}^{k} y_j\, l_j(x).$
Each $l_j(x)$ is a Lagrange basis polynomial, expressed as follows:
$l_j(x) = \prod_{\substack{i=0 \\ i \neq j}}^{k} \frac{x - x_i}{x_j - x_i} = \frac{x - x_0}{x_j - x_0} \cdots \frac{x - x_{j-1}}{x_j - x_{j-1}} \cdot \frac{x - x_{j+1}}{x_j - x_{j+1}} \cdots \frac{x - x_k}{x_j - x_k},$
where $x_j$ is the position of the independent variable and $y_j$ is the value of the function at that position.
The Lagrange interpolation method calculates the value at an unknown point from the values of three known points. The steps are roughly as follows:
1. Determine the coordinates of the three known points:
$A = (x_0, y_0), \quad B = (x_1, y_1), \quad C = (x_2, y_2).$
2. Calculate the Lagrange basis functions:
$l_0(x) = \frac{(x - x_1)(x - x_2)}{(x_0 - x_1)(x_0 - x_2)},$
$l_1(x) = \frac{(x - x_0)(x - x_2)}{(x_1 - x_0)(x_1 - x_2)},$
$l_2(x) = \frac{(x - x_0)(x - x_1)}{(x_2 - x_0)(x_2 - x_1)}.$
3. Obtain the Lagrange interpolation polynomial by summing the three weighted basis functions:
$L(x) = y_0\, l_0(x) + y_1\, l_1(x) + y_2\, l_2(x).$
By using the Lagrange interpolation method to optimize the BWO algorithm (LIBWONN), its convergence speed and local search capability are enhanced. Using Lagrange interpolation, a new optimal fitness is derived from the current optimal fitness, global optimal fitness, and previous optimal fitness. Furthermore, the new optimal fitness is evaluated against the global optimal fitness to determine if an update to the global optimal fitness is necessary.
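A compact sketch of the three-point construction used here (the function and variable names are illustrative):

```python
def lagrange3(x, pts):
    """Evaluate the quadratic Lagrange polynomial through three points.

    pts = [(x0, y0), (x1, y1), (x2, y2)]
    """
    (x0, y0), (x1, y1), (x2, y2) = pts
    l0 = (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
    l1 = (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
    l2 = (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1))
    return y0 * l0 + y1 * l1 + y2 * l2

# Example: the parabola y = x^2 is reproduced exactly from three samples.
assert lagrange3(3.0, [(0.0, 0.0), (1.0, 1.0), (2.0, 4.0)]) == 9.0
```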

2.4. The Model Design Based on the LIBWONN Method

We employ the LIBWONN algorithm to dynamically adjust the learning rate of the ResNet18 model, which improves training efficiency. This approach allows the model to converge quickly in the early stages and gradually reduce the learning rate as it approaches the optimal solution, thereby reducing training time and the number of iterations. Additionally, this approach helps avoid local optima and enhances the model’s overall performance. The adaptive learning rate also improves model stability, reducing the risk of overfitting and enhancing generalization ability. Meanwhile, dynamic adjustment simplifies hyper-parameter tuning, reduces manual intervention, and increases training efficiency. Moreover, the model structure is as follows:
The model optimization process shown in Figure 4 is based on the black widow optimization algorithm (LIBWONN), which simulates the hunting behavior of black widow spiders in nature to search for the optimal learning rate. The training data comes from a public dataset, which is normalized before being used to train a deep neural network. After each training session, the validation loss is calculated using a validation set. This loss value acts as a pheromone in the model and serves as a metric to evaluate the quality of the current learning rate, guiding the individuals during the optimization process.
The core of the black widow optimization algorithm lies in simulating the predatory behavior of a spider population for search and optimization. Each individual (i.e., spider) represents a potential learning rate, randomly selected within a defined boundary range. After training the model, the validation loss corresponding to each individual is calculated and used as its fitness value. The higher the fitness, the stronger the pheromone released, attracting other individuals to move closer and accelerating convergence toward the optimal solution.
In each iteration, individuals update their positions not only under the guidance of the current best and global best solutions, but also by incorporating Lagrange interpolation to enhance the algorithm’s search capability and convergence performance. Unlike traditional update strategies that rely solely on current fitness values for local adjustments, the Lagrange interpolation mechanism leverages trend information derived from historical optimal solutions to predict potentially better learning rate positions. Specifically, during the optimization process, three key points are recorded: the best learning rate and its corresponding validation loss from the previous iteration, the best learning rate and its loss from the current iteration, and the global best learning rate with its loss. These three points are treated as inputs to the Lagrange interpolation function, which constructs a polynomial to estimate the functional relationship between the learning rate and loss.
The interpolated result yields a predicted learning rate position that is likely to produce a lower validation loss. This position is then evaluated through training, and if its corresponding loss outperforms the current global best, it is adopted as the new global optimum and releases stronger pheromones to guide future searches. Since the interpolation integrates multiple historically optimal points and captures the underlying optimization trend, it not only improves search accuracy and speeds up convergence but also enhances the algorithm’s ability to escape local optima and maintain population diversity. As a result, the LIBWONN algorithm demonstrates superior performance in dynamically tuning the learning rate, ultimately improving model generalization and classification accuracy.
In this way, during each iteration, the LIBWONN algorithm not only relies on traditional fitness evaluation and position updates but also uses Lagrange interpolation to predict potentially better solutions. This accelerates the search for the optimal learning rate and helps the neural network achieve better performance during training. Ultimately, the optimized model performs significantly better on the validation set, effectively improving classification accuracy and generalization ability. Moreover, by incorporating historical optimal points into the interpolation process, the algorithm enhances its ability to escape local optima and explores a wider solution space more effectively.
Moreover, the pseudocode of the LIBWONN algorithm is in Algorithm 2.
Algorithm 2: LIBWONN
Input: MaxIter, pop, dim
/* Initialization */
1. Initialize: MaxIter, pop, dim
2. Initialize: parameters m and β
/* Training Starts */
3. while iteration < MaxIter do
4.     if rand() < 0.3 then
5.         X_i^new ← X*(t) − m · X_r1(t)
6.     else
7.         X_i^new ← X*(t) − cos(2πβ) · x_i(t)
8.     end if
9.     Calculate the pheromone for each search agent using Equation (6)
10.    Revise search agents with low pheromone values using Equation (7)
11.    Calculate the fitness of the new search agents X^new
12.    if f(X^new) < f(X*) then
13.        X* ← X^new
14.    end if
15.    f_new ← Lagrange_Interpolation(f_old, f_best, f_current)
16.    if f_new < f_best then
17.        f_best ← f_new
18.    end if
19.    iteration ← iteration + 1
20. end while
/* Operation Ending */
Output: X*, the best optimal solution
The primary steps of the enhanced LIBWONN algorithm are as follows:
Step 1: Initialize the population parameters, such as the population size, maximum number of iterations, and population boundary range, and randomly initialize the optimal position and optimal fitness. Calculate the pheromone value using Equation (6) according to the pheromone strategy.
Step 2: Update the spider positions using Equation (5) according to the movement strategy. If the pheromone value is less than or equal to 0.3, the pheromone strategy updates the current individual position using Equation (7).
Step 3: Adjust the current individual’s position and fitness.
Step 4: Update the global optimal position, optimal fitness, and pheromone.
Step 5: Use the current optimal fitness, global optimal fitness, and previous optimal fitness to calculate a new fitness value via Lagrange interpolation. Evaluate it against the global optimal fitness and update the global optimal fitness if needed.
Step 6: Verify whether the maximum iterations are completed. If not, proceed back to Step 2; otherwise, halt the iteration and return the optimal position and fitness.
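A compact, hedged sketch of Steps 1–6 applied to learning-rate search follows: each agent is a candidate learning rate, its fitness is the validation loss, and after every generation the quadratic through the three most recently recorded (learning rate, loss) points is minimized at its vertex to propose one extra candidate. The `train_and_validate` routine, the bounds, and the choice of the quadratic's vertex as the "predicted" learning rate are assumptions for illustration, not the authors' exact implementation.

```python
import numpy as np

def libwonn_lr_search(train_and_validate, lr_bounds=(1e-5, 1e-1),
                      pop_size=10, max_iter=20, seed=0):
    """BWO learning-rate search with a Lagrange-interpolation prediction step (sketch)."""
    rng = np.random.default_rng(seed)
    low, high = lr_bounds
    pop = rng.uniform(low, high, pop_size)                      # candidate learning rates
    fitness = np.array([train_and_validate(lr) for lr in pop])  # validation loss = fitness
    best_lr, best_fit = float(pop[fitness.argmin()]), float(fitness.min())
    history = [(best_lr, best_fit)]                             # (lr, loss) points for interpolation

    for _ in range(max_iter):
        for i in range(pop_size):                               # BWO movement (Eq. 5)
            if rng.random() < 0.3:
                pop[i] = best_lr - rng.uniform(0.4, 0.9) * pop[rng.integers(pop_size)]
            else:
                pop[i] = best_lr - np.cos(2 * np.pi * rng.uniform(-1, 1)) * pop[i]
        pop = np.clip(pop, low, high)
        fitness = np.array([train_and_validate(lr) for lr in pop])
        if fitness.min() < best_fit:
            best_lr, best_fit = float(pop[fitness.argmin()]), float(fitness.min())
        history.append((best_lr, best_fit))

        if len(history) >= 3:                                   # Lagrange prediction (Step 5)
            (x0, y0), (x1, y1), (x2, y2) = history[-3:]
            den = 2 * (y0 * (x1 - x2) + y1 * (x2 - x0) + y2 * (x0 - x1))
            if abs(den) > 1e-12:
                num = (y0 * (x1**2 - x2**2) + y1 * (x2**2 - x0**2)
                       + y2 * (x0**2 - x1**2))
                lr_pred = float(np.clip(num / den, low, high))  # vertex of the quadratic
                f_pred = train_and_validate(lr_pred)
                if f_pred < best_fit:                           # adopt if it beats the global best
                    best_lr, best_fit = lr_pred, f_pred
    return best_lr, best_fit
```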

3. Experiments and Analyses

3.1. Dataset

A total of six public datasets are used in the experiments, which cover various scenarios such as clothing, handwritten digits, and street-view images. Their details are as follows:
  • FASHION-MNIST [52]: A clothing image dataset of 28 × 28 pixel grayscale images covering 10 clothing categories, such as T-shirts, trousers, and dresses. The training set contains 60,000 samples, while the test set comprises 10,000 samples.
  • MNIST [53]: This handwritten digit recognition dataset contains 28 × 28 pixel images of digits from 0 to 9, with 60,000 samples for training and 10,000 samples for testing.
  • Intel Image Classification [54]: This dataset includes images from six categories, such as buildings, forests, glaciers, mountains, oceans, and cities. All images are standardized to 150 × 150 pixels. The training set contains 14,034 samples, and the testing set has 3,000 samples.
  • SVHN [55]: A digit recognition dataset of 32 × 32 pixel street-view images. It includes 10 categories corresponding to the digits 0–9, with 73,254 training images and 26,032 test images.
  • RICE [56]: It covers five common rice varieties: Arborio, Basmati, Ipsala, Jasmine, and Karacadag. Image dimensions are 224 × 224 pixels, with a total of 3,800 images.
  • CIFAR10 [57]: This dataset includes images from 10 categories: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck. The images are 32 × 32 pixels, with a training set of 50,000 images and a test set of 10,000 images.
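For the four benchmark datasets that ship with torchvision, the corresponding loaders can be obtained as sketched below (Intel Image Classification and RICE are distributed separately and are not shown; the root directory "data" is a placeholder):

```python
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()
# Standard torchvision dataset wrappers; each call downloads the data on first use.
fashion = datasets.FashionMNIST("data", train=True, download=True, transform=to_tensor)
mnist   = datasets.MNIST("data", train=True, download=True, transform=to_tensor)
svhn    = datasets.SVHN("data", split="train", download=True, transform=to_tensor)
cifar10 = datasets.CIFAR10("data", train=True, download=True, transform=to_tensor)
print(len(fashion), len(mnist), len(svhn), len(cifar10))
```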

3.2. Base Models

To validate the performance of the LIBWONN algorithm, nine novel and advanced optimization algorithms are selected for comparative experiments.
  • DOA: Dream Optimization Algorithm (DOA) is a novel metaheuristic algorithm inspired by human dreams. The algorithm combines a basic memory strategy with a forgetting and replenishing strategy to balance exploration and exploitation.
  • ALA: Artificial Lemming Algorithm (ALA) is a biologically inspired metaheuristic algorithm inspired by the four basic behaviors of lemmings in nature: long migrations, digging holes, foraging for food, and avoiding predators. The algorithm simulates the survival strategies of lemmings in complex environments, providing an effective search method for solving optimization problems.
  • SDO: Sled Dog Optimizer (SDO) is mainly inspired by the various behavior patterns of sled dogs, focusing on simulating the processes of dogs pulling sleds, training, and retiring to construct a mathematical model.
  • CBSO: Connected Banking System Optimizer (CBSO) is a population-based optimization algorithm that belongs to a multi-stage search strategy. It is inspired by the interconnectedness of banking systems, where different banks are connected in various ways, facilitating transactions and submissions.
  • PSEQADE: Quantum Adaptive Population State Evaluation Differential Evolution Algorithm (PSEQADE) is an improved quantum heuristic differential evolution algorithm. It adopts a quantum adaptive mutation strategy to reduce excessive mutation and introduces a population state evaluation framework to enhance convergence accuracy and stability.
  • COVIDOA: Coronavirus Optimization Algorithm (COVIDOA) is an evolutionary algorithm that simulates the biological lifecycle. It is inspired by the behavior of organisms at different stages such as growth, reproduction, and adaptation. The algorithm simulates the evolutionary process of individuals from youth to adulthood, adapting through mutation, recombination, selection, and reproduction based on environmental changes, thereby balancing global and local search capabilities.
  • SASS: Social-Aware Salp Swarm Algorithm (SASS) is a population-based optimization algorithm inspired by the behavior of sand particles in a sandstorm. It mainly simulates the collective movement of sand particles under the influence of wind to perform global optimization. The goal of SASS is to improve collaboration among individuals in the group using a social awareness model, enhancing the balance between exploration and exploitation in the search process.
  • LSHADE-cnEpSin: Latent Search Strategy Adaptive Differential Evolution with Compound Neighborhood-based Epistemic Population for Sine Function (LSHADE-cnEpSin) is an improved differential evolution algorithm. It enhances optimization performance, especially for high-dimensional complex problems, by using an adaptive mutation strategy and a control mechanism that balances global and local search.
  • AGSK: Adaptive Gaining Sharing Knowledge (AGSK) is an algorithm that simulates the human knowledge-sharing process. It enhances global search capability and local search efficiency by introducing an adaptation strategy based on successful historical positional information, making it suitable for solving complex optimization problems.
  • BOA: The Bobcat Optimization Algorithm (BOA) is a bio-inspired metaheuristic algorithm that simulates the natural hunting behavior of bobcats. It enhances the balance between global exploration and local exploitation by modeling two phases: the bobcat’s movement towards its prey (exploration) and the chase process to catch its prey (exploitation). BOA’s dual-phase position update strategy improves convergence speed and solution quality, making it effective for solving high-dimensional, complex, and constrained optimization problems.
  • WOA: The Wombat Optimization Algorithm (WOA) is a bio-inspired metaheuristic algorithm that simulates the foraging behavior of wild wombats and their evasive maneuvers against predators. The algorithm models two phases: the wombat’s position changes during foraging (exploration) and its movements when diving into tunnels to escape predators (exploitation), effectively balancing global search and local search.

3.3. Performance Verification

To rigorously validate the efficacy of the LIBWONN algorithm in adaptively optimizing the learning rate of the ResNet18 model, a set of comparative experiments is carried out. In these experiments, LIBWONN is contrasted with nine other cutting-edge optimization algorithms. The selected algorithms are representative of current optimization research, ensuring a comprehensive evaluation of LIBWONN’s performance in optimizing the ResNet18 model. The experiments involve six public datasets to ensure the comprehensiveness and reliability of the evaluation results. The chosen datasets cover various fields and tasks, which allows the algorithms’ performance to be tested across diverse applications and validates the applicability of the LIBWONN algorithm.
In Figure 5, the LIBWONN algorithm performs excellently on multiple datasets. Specifically, LIBWONN shows the fastest convergence speed and achieves convergence by the 10th iteration on the FASHION-MNIST, MNIST, and SVHN datasets, where the final loss value reaches its minimum. This indicates that the LIBWONN algorithm is able to effectively capture the features of the data on these relatively simple datasets.
On the Intel Image Classification dataset, compared with the WOA model, LIBWONN’s initial convergence speed and final loss value are slightly inferior by 1.08%. This is mainly attributed to the higher complexity and diversity of the dataset, which requires more iterations for the model to adequately capture the data patterns. Despite its relatively weaker performance on this dataset, LIBWONN still demonstrates a good convergence capability and maintains a relatively low final loss value overall.
Overall, the LIBWONN model performs exceptionally well across multiple datasets, especially in tasks involving FASHION-MNIST, MNIST, and SVHN, where its accuracy reaches above 98%. Its efficient convergence speed and low loss values highlight its effectiveness in image classification. Moreover, when dealing with more complex datasets, LIBWONN also shows strong adaptability, indicating promising potential for further optimization and exploration in future research.
In Figure 6, we present a comparison of the training accuracy between different optimization algorithms and the proposed LIBWONN algorithm across five datasets. The results clearly demonstrate that LIBWONN performs exceptionally well, achieving high accuracy, requiring fewer training iterations, and exhibiting rapid convergence. Additionally, compared to the other optimization algorithms, LIBWONN offers enhanced stability and smoother training curves.
Notably, on the MNIST and SVHN datasets, the LIBWONN algorithm attains high accuracy within a short training time, highlighting its capability for fast convergence. However, on the Intel Image Classification dataset, its training accuracy exhibits more noticeable fluctuations. This can be attributed to the dataset’s greater diversity and complexity, as it comprises six different scene categories and intricate image structures, making generalization more challenging for the optimization algorithm. Factors such as noise, imbalanced sample distribution, and the limited size of the training dataset may lead the model to learn incorrect patterns, thereby causing accuracy fluctuations. In convolutional neural networks trained for image classification, dataset imbalance can further contribute to such variations. Nevertheless, when using models like ResNet, accuracy fluctuations typically smooth out over time rather than displaying random oscillations, which aligns with our expected trend.
Overall, the LIBWONN algorithm demonstrates outstanding performance across multiple datasets, particularly in terms of rapid convergence. However, when applied to complex datasets such as Intel Image Classification, the impact of dataset diversity and complexity on the model’s generalization ability must be carefully considered. By fine-tuning optimization parameters, employing data augmentation techniques, and leveraging other strategies, the model’s stability and accuracy can be further enhanced.
$accuracy = \frac{TP + TN}{TP + TN + FP + FN},$
$precision = \frac{TP}{TP + FP},$
$recall = \frac{TP}{TP + FN},$
$F1\text{-}score = \frac{2 \times precision \times recall}{precision + recall},$
$W = \min\left(\sum_{D_i > 0} R_i,\; \sum_{D_i < 0} R_i\right),$
$Q = \frac{12n}{k(k+1)} \sum_{j=1}^{k} \bar{R}_j^{\,2} - 3n(k+1),$
TP (true positive): the prediction matches the actual positive class. TN (true negative): the prediction matches the actual negative class. FP (false positive): the prediction is positive, but the actual class is negative. FN (false negative): the prediction is negative, but the actual class is positive.
Accuracy, the most intuitive metric, represents the proportion of correct predictions out of all the samples. However, it does not always provide a complete picture of a model’s performance, particularly in imbalanced datasets where errors in the minority class have a minimal impact on overall accuracy. To quantitatively verify the performance of the LIBWONN algorithm, we select several evaluation metrics such as accuracy, precision, recall, and F1-score. These metrics are standard in the field of machine learning and are used to comprehensively assess the model’s predictive capabilities.
Precision measures the ratio of true positive predictions to the total positive predictions made by the model. This metric shows how well the model identifies actual positive cases. A high precision score means the model is more careful in predicting positives, reducing the likelihood of false positives.
Recall evaluates the percentage of true positives among all actual positive samples, reflecting the model’s ability to detect positive instances. A high recall means the model is adept at identifying all positive samples, thereby decreasing false negative errors.
F1-Score, being the harmonic mean of the precision and recall, is used to assess the balance between these metrics. It reaches its highest value of 1 when the precision and recall are equal, otherwise it diminishes. This metric is crucial in applications where both precision and recall need to be taken into account.
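As a quick worked example of these four metrics (the confusion counts below are hypothetical, chosen only for illustration):

```python
# Hypothetical confusion counts for one class.
TP, TN, FP, FN = 90, 85, 10, 15

accuracy  = (TP + TN) / (TP + TN + FP + FN)                 # 0.875
precision = TP / (TP + FP)                                  # 0.90
recall    = TP / (TP + FN)                                  # ~0.857
f1_score  = 2 * precision * recall / (precision + recall)   # ~0.878
print(accuracy, precision, round(recall, 3), round(f1_score, 3))
```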
p-value represents the probability of observing the current data or more extreme results under the assumption that the null hypothesis is true, serving as a measure of the evidence against the null hypothesis. A lower p-value indicates a lower likelihood of the null hypothesis being true, providing a basis for its rejection. Typically, if the p-value is less than the significance level (e.g., 0.05), the result is considered statistically significant. This allows researchers to make data-driven decisions and draw conclusions about the relationships between variables. The mathematical formulas for the Wilcoxon signed-rank test and the Friedman test are shown in Equations (19) and (20), respectively. After standardization, hypothesis testing is conducted to validate the assumptions and ensure reliable results.
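In practice, both tests can be computed with SciPy, for example as sketched below; the per-dataset accuracy arrays are placeholders, not results from the paper.

```python
import numpy as np
from scipy import stats

# Placeholder accuracies of three optimizers over six datasets (one array per optimizer).
libwonn = np.array([0.953, 0.991, 0.905, 0.942, 0.990, 0.861])
bwo     = np.array([0.912, 0.975, 0.884, 0.931, 0.972, 0.843])
doa     = np.array([0.905, 0.970, 0.879, 0.925, 0.968, 0.838])

w_stat, p_wilcoxon = stats.wilcoxon(libwonn, bwo)               # paired signed-rank test
q_stat, p_friedman = stats.friedmanchisquare(libwonn, bwo, doa)  # Friedman test over k methods
print(p_wilcoxon < 0.05, p_friedman < 0.05)                      # significance at the 0.05 level
```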
By thoroughly evaluating these metrics, we can better understand the performance of the LIBWONN algorithm across various datasets, as well as its potential and limitations in real-world applications, providing valuable insights into its practical applicability and robustness in different scenarios.
Table 2, Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8 show the performance metrics of each benchmark model on six public datasets. In terms of the three comprehensive performance indicators, namely precision, recall, and F1-score, LIBWONN achieves the best results, which indicates that the LIBWONN algorithm has strong learning capabilities and significant advantages in classification accuracy, recall, and overall performance. This demonstrates that LIBWONN is able to effectively capture patterns and make accurate predictions across various datasets. Its superior performance in both precision and recall highlights its ability to balance the trade-off between minimizing false positives and false negatives, making it a highly reliable model for real-world applications.
In the signed-rank test, the p-value for most cases is less than 0.05, indicating that there is a significant difference between the other algorithms and LIBWONN. This suggests that the superior performance of LIBWONN is not due to random chance but is statistically significant. A p-value smaller than 0.05 typically means that the null hypothesis can be rejected, confirming that LIBWONN consistently outperforms the other algorithms in terms of accuracy, recall, and F1-score. This strengthens the validity of LIBWONN as a more effective and reliable algorithm for solving optimization and classification tasks.
LIBWONN performs exceptionally well on the MNIST and RICE datasets, with relatively low training and testing losses and accuracy exceeding 99%. All metrics reach their highest levels, showing a 3.44% improvement on the training set compared to BWO and a 2.16% improvement on the test set. Additionally, LIBWONN achieves excellent results on relatively complex datasets such as SVHN, Fashion-MNIST, and Intel Image Classification. Although its performance declines slightly on the texture-rich Fashion-MNIST and the diverse natural scenes of the Intel Image Classification dataset, LIBWONN still maintains low loss, high accuracy, and outstanding precision, recall, and F1 scores. These results demonstrate LIBWONN’s consistent performance across a wide range of datasets. On the SVHN training set, LIBWONN trails BWO by just 0.37%.

3.4. Testing Functions

The CEC 2017 and CEC 2022 test functions, presented at the IEEE Congress on Evolutionary Computation, serve as essential benchmarks for evaluating optimization algorithms. The CEC 2017 functions encompass various optimization problems, such as unimodal, multimodal, composite, and dynamic types. These functions are designed to test an algorithm’s performance under different complexities and characteristics, thus evaluating its capabilities in both static and dynamic environments. The CEC 2022 test functions introduce additional challenges, including intricate multimodal functions, high-dimensional functions, irregular functions, discontinuous functions, and constrained optimization problems. These new features are intended to better reflect the challenges of modern applications, testing algorithms’ performance in high-dimensional spaces and complex landscapes, as well as under constraints. CEC 2022 emphasizes improving the robustness and adaptability of algorithms, thus providing a more comprehensive testing environment.
  • Unimodal functions: Characterized by a single global optimum, they are used to test an algorithm’s local search proficiency and convergence speed. They are effective in evaluating the algorithm’s efficiency and accuracy in basic situations.
  • Multimodal functions: Characterized by several local optima, they test the performance of an algorithm to navigate out of local optima as well as its global search capability. These functions are adopted to measure the performance of the algorithm in complex and nonlinear environments.
  • Composite functions: They are composed of multiple functions with different characteristics, which simulate the complexity and diversity of real-world problems. They are adopted to verify the algorithm’s adaptability when handling mixed features and multiple levels of difficulty.
  • Dynamic change functions: Their objective function changes over time, and are used to evaluate the algorithm’s tracking and adaptability in dynamic environments. They are especially suitable for testing an algorithm’s ability to maintain its performance under continuously changing conditions.
The characteristics of these test functions lie in their diversity and complexity, thus allowing researchers to test the performance of algorithms through different dimensions and scenarios. The design of these functions takes into account various factors, including smoothness, differentiability, and the number of local optima. These factors directly affect the performance of optimization algorithms during the solution process. Moreover, Table 9 and Table 10 summarize the basic information regarding the CEC 2017 and CEC 2022 testing functions in our paper, including function numbers, function names, and global minimum values.
To thoroughly evaluate the performance of different optimization algorithms on the CEC test functions, we conducted multiple experiments using various algorithms to iteratively solve several test functions and plotted their fitness value curves. Figure 7 and Figure 8 illustrate the fitness changes over 200 iterations for different algorithms on the CEC2017 and CEC2022 test functions, providing a clear basis for subsequent analysis.
Figure 7 shows that LIBWONN and BWO exhibit rapid decreases in their fitness values within the first 50 iterations, demonstrating fast convergence speeds and strong search capabilities. Additionally, LIBWONN’s fitness curve shows smaller fluctuations, indicating better stability. This fast convergence makes these algorithms especially suitable for time-sensitive optimization problems, as they can quickly locate regions near the optimal solution. Although LIBWONN performs slightly worse than SDO and CBSO on the F3 and F7 test functions, it remains highly competitive overall and excels in handling complex and large-scale functions. Its efficient and stable characteristics give it advantages in practical applications, showing good robustness against parameter variations and environmental disturbances.
Figure 8 presents the performance of LIBWONN on the CEC2022 test functions, where it consistently demonstrates a rapid fitness decline, reflecting its excellent convergence ability and stability. The algorithm effectively balances exploration and exploitation, avoiding local optima and ensuring global optimization. Compared with the other algorithms, LIBWONN outperforms them in both convergence speed and final fitness values, showing strong adaptability, especially suited for high-dimensional and complex optimization tasks. Moreover, LIBWONN maintains stable performance across different test environments, highlighting its potential for applications in dynamic and complex scenarios.
To further quantify the algorithm’s performance, we evaluated it using three metrics: average value, best value, and standard deviation. The average fitness value reflects overall stability and reliability across multiple runs; a lower average indicates a consistent good performance. The best value measures the algorithm’s ability to find the optimal solution, while the standard deviation assesses the variability of its results, with smaller values indicating greater stability. Together, these metrics show that LIBWONN performs excellently across the various test functions, confirming its efficiency and broad applicability in complex optimization problems.
Combining the fitness curves and quantitative metrics, the LIBWONN algorithm demonstrates significant advantages in fast convergence, superior stability, and strong adaptability across multiple test functions, proving its potential and value in solving real-world complex optimization challenges.
The best value refers to the lowest fitness value achieved by the algorithm over multiple runs, i.e., the best solution the algorithm can reach in a given environment. Moreover, the best value of the LIBWONN algorithm directly reflects its efficiency in exploring the solution space: a lower best value indicates that the algorithm is capable of effectively finding solutions close to the global optimum.
The standard deviation is an indicator for assessing the variability of the algorithm’s results, thus reflecting the stability of LIBWONN on multiple runs. A smaller standard deviation shows a higher consistency of the algorithm’s results in different experiments, which indicates that LIBWONN can maintain a stable optimization performance in the face of various problems. It is particularly important in practical applications to ensure the reliability and reproducibility of the algorithm.
From an extensive analysis of these three metrics, we can thoroughly evaluate the performance of the LIBWONN optimization algorithm and affirm its effectiveness in handling complex optimization tasks. These quantitative results provide us with a more solid theoretical foundation, thus aiding us in understanding the significant advantages of the LIBWONN algorithm in practical applications.
1. Average value (Ave):
$Ave = \frac{1}{n} \sum_{i=1}^{n} x_i,$
where $x_i$ is the fitness value obtained in the i-th run and $n$ is the total number of runs.
2. Best value (Best):
$Best = \min(x_1, x_2, \ldots, x_n).$
3. Standard deviation (Std):
$Std = \sqrt{\frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{x})^2}.$
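A minimal NumPy sketch of these three statistics over the fitness values of several independent runs (the array below is a placeholder):

```python
import numpy as np

runs = np.array([1.20e3, 0.98e3, 1.10e3, 1.05e3, 1.15e3])  # placeholder best-fitness values of 5 runs

ave  = runs.mean()
best = runs.min()
std  = runs.std(ddof=1)    # sample standard deviation, matching the n−1 denominator above
print(ave, best, std)
```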
Each test function is uniformly set at 20 dimensions, except for F13, F14, and F15 in CEC2017, which are set at 30 dimensions.
The detailed results of the CEC2017 test functions are presented in Table 11. From the average values of each test function, it is evident that LIBWONN and BWO demonstrate superior performance across most functions, particularly for functions F1, F3, and F4. Their average values are consistently lower than those of the other algorithms, indicating that these two algorithms excel in optimizing these specific problems. This performance highlights LIBWONN and BWO’s strong ability to explore the solution space, suggesting that their algorithmic designs are highly effective in quickly locating regions near the optimal solution.
In terms of stability, the analysis of the standard deviation shows that LIBWONN and BWO generally exhibit low standard deviations, underscoring their high stability throughout the optimization process. For example, the standard deviations of LIBWONN and BWO remain within an acceptable range for the functions F1, F3, and F4, demonstrating consistent and reliable results.
The comparison of the best values further underscores the advantage of LIBWONN. In several of the test functions, LIBWONN achieves the best values, securing the first rank, which emphasizes its exceptional overall performance. This result confirms the efficacy of LIBWONN in optimization tasks and demonstrates its excellent adaptability and competitiveness across various complex problems.
Table 12 presents the detailed results of the CEC2022 test functions. Analyzing these results reveals that the LIBWONN algorithm continues to excel in several of the test functions, particularly F1 and F2, where its average fitness is notably lower than that of the other algorithms, further showcasing its optimization capabilities for complex problems. From the perspective of standard deviation, LIBWONN maintains relatively low standard deviation values, indicating remarkable stability. This suggests that LIBWONN can consistently deliver reliable optimization results across multiple experiments. In summary, LIBWONN’s performance in the CEC2022 test functions reinforces the advantages observed in CEC2017, confirming its continued effectiveness and reliability in solving complex optimization challenges.

3.5. Ablation Study and Sensitivity Analysis

To validate the contribution of each component within the LIBWONN algorithm to the performance of the ResNet18 model, and thereby to illustrate the efficacy of LIBWONN, two ablation experiments are carried out during the training of the ResNet18 model on the FASHION-MNIST dataset: (1) LIBWONN is substituted with a conventional optimization approach employing a fixed learning rate, and (2) the Lagrange interpolation in LIBWONN is removed and only the BWO algorithm is used, in order to isolate the contribution of the interpolation operator.
The experimental results indicate that LIBWONN exhibits significant performance advantages over the other optimization methods, as shown in Table 13. When replacing LIBWONN with a conventional optimization method using a fixed learning rate, both training and testing losses increased substantially, suggesting that a fixed learning rate fails to adapt effectively to the characteristics of the dataset, resulting in suboptimal model training. When removing the Lagrange interpolation and using only BWO for optimization, the testing accuracy improved to 92.50%, surpassing that of the fixed learning rate approach but still falling short of the 95.33% achieved by LIBWONN. This result demonstrates that while BWO effectively optimizes the learning rate, the absence of Lagrange interpolation reduces its precision and global search capability, preventing the model from achieving optimal performance.
The sensitivity analysis examines the impact of the population size and the number of iterations on the performance of LIBWONN in optimizing the ResNet18 model. The experimental results indicate that as the population size and the number of iterations increase, both the training loss and the test loss gradually decrease, while the training and test accuracies continuously improve, confirming the effectiveness of the optimization process. The results are shown in Table 14 and in the line graph in Figure 9.
Specifically, when the population size reaches 100 and the number of iterations is set to 200, the training loss decreases to 0.2543, the test loss decreases to 0.3078, the training accuracy reaches 99.15%, and the test accuracy reaches 95.53%. In contrast, smaller population sizes (e.g., 10 or 30) and fewer iterations (e.g., 50 or 100) lead to a higher loss and lower accuracy, suggesting that a smaller search space and a shorter optimization process fail to sufficiently explore the optimal solution of the model.
However, when the population size and the number of iterations increase to 150 and 300, respectively, the reduction in training and test loss tends to level off, and the improvement in test accuracy becomes less significant. For example, with 200 iterations, increasing the population size from 100 to 150 results in only a slight test accuracy improvement from 95.53% to 95.60%. Similarly, with 300 iterations, increasing the population size from 100 to 150 raises the test accuracy only marginally from 95.55% to 95.62%. This indicates that once the population size and the number of iterations reach a certain threshold, further increasing computational resources provides limited optimization benefits, exhibiting a convergence effect.

4. Conclusions

This paper proposes an innovative adaptive learning rate method: a Lagrange interpolation-based black widow optimization algorithm developed to optimize the learning rate of the ResNet18 model, thereby significantly improving the training efficiency and performance of the model. The code developed in this paper is available at https://github.com/HJYJY/LIBWO (accessed on 30 May 2025).
To verify the effectiveness of LIBWONN, nine recent metaheuristic optimization algorithms (DOA, ALA, SDO, CBSO, PSEQADE, COVIDOA, SASS, LSHADE-cnEpSin, and AGSK) are selected as baselines, and experiments are conducted on six public datasets covering scenarios such as fashion items, handwritten digits, and street-scene images. In addition, 24 benchmark functions from CEC2017 and CEC2022 are used to evaluate the algorithm's optimization performance. The experimental results demonstrate that LIBWONN performs well across the public datasets and significantly outperforms the original BWO algorithm without Lagrange interpolation.
By combining Lagrange interpolation with the BWO algorithm, we mitigate the limitations of traditional learning-rate optimization methods, particularly their slow convergence and limited generalization, and address BWO's susceptibility to local optima. LIBWONN dynamically constructs an interpolation function that adaptively adjusts the learning rate according to the training performance of the ResNet18 model. This mechanism strengthens the robustness of the optimization process and reduces the risk of oscillation or divergence caused by an ill-chosen learning rate during training.
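As a minimal illustration of the interpolation idea (not the exact operator used in LIBWONN), a Lagrange polynomial can be fitted through a few sampled (learning rate, loss) pairs and its minimizer over the sampled interval taken as the next learning-rate candidate; the numbers below are illustrative.

```python
import numpy as np

def lagrange_interp(xs, ys):
    """Return a callable Lagrange polynomial passing through the points (xs, ys)."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    def p(x):
        total = 0.0
        for j, (xj, yj) in enumerate(zip(xs, ys)):
            basis = np.prod([(x - xm) / (xj - xm) for m, xm in enumerate(xs) if m != j])
            total += yj * basis
        return total
    return p

# Three sampled learning rates and their observed losses (illustrative values).
lrs  = [0.001, 0.01, 0.05]
loss = [0.42, 0.31, 0.55]
poly = lagrange_interp(lrs, loss)

# Refine: evaluate the interpolant on a fine grid and take its minimizer
# as the next learning-rate candidate.
grid = np.linspace(min(lrs), max(lrs), 500)
next_lr = grid[np.argmin([poly(x) for x in grid])]
print(f"interpolation-suggested learning rate: {next_lr:.5f}")
```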
Compared with fixed learning-rate methods and simple adaptive learning-rate algorithms, LIBWONN adjusts the learning rate of the ResNet18 model smoothly and effectively, accelerating convergence and improving final performance. This approach offers a new perspective for training deep learning models on complex tasks, where it can noticeably improve training results and overall model performance, and it also provides a new way to alleviate the local-optima problem that metaheuristic optimization algorithms often face.
However, this method faces the issue of high computational costs, as each individual requires training and validation loss computation, which may significantly increase the training time when applied to deep neural networks and large-scale datasets. Moreover, LIBWONN may still suffer from local optima; although the pheromone mechanism guides the search, individuals may prematurely converge in complex high-dimensional search spaces, limiting the optimization performance and ultimately affecting the model’s learning capability and generalization ability.
Our current research mainly focuses on low-dimensional problems, but as optimization algorithms become more widely applied in practice, studying high-dimensional problems will become increasingly important. In the future, we plan to carry out more systematic and comprehensive experiments on high-dimensional tasks to evaluate the performance and efficiency of interpolation methods in complex environments, while also exploring potential optimization strategies to reduce computational costs.
Furthermore, considering practical deployment, future work will also explore how LIBWONN can be adapted for real-world applications such as embedded systems and real-time learning scenarios. These environments often demand lightweight models and rapid inference, which necessitates the development of simplified or accelerated versions of LIBWONN. Such efforts will help enhance the algorithm’s applicability in industrial and edge computing settings.

Author Contributions

Conceptualization, P.W.; methodology, H.S.; software, J.H.; validation, C.H. and W.Q.; formal analysis, P.W.; investigation, T.C.; resources, J.G. and C.H.; data curation, J.G.; writing—original draft preparation, P.W. and J.H.; writing—review and editing, P.W. and Z.L.; visualization, P.W. and J.H.; supervision, J.G. and H.S.; project administration, H.S. and M.S.; funding acquisition, P.W. and Z.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research is supported by the National Funded Postdoctoral Research Program GZC20241900, Natural Science Foundation Program of Xinjiang Uygur Autonomous Region 2024D01A141, Tianchi Talents Program of Xinjiang Uygur Autonomous Region and Postdoctoral Fund of Xinjiang Uygur Autonomous Region, the Key Laboratory of Remote Sensing Application and Innovation (LRSAI-2025004), Sichuan University students innovation and entrepreneurship training program (S202410621082), and Chengdu University of Information Technology key project of education reform (JYJG2024206).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The source code used in this work can be retrieved from the following GitHub link: https://github.com/HJYJY/LIBWO (accessed on 30 May 2025).

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. The residual block for ResNet18.
Figure 2. The convolutional image of a residual block.
Figure 3. The model architecture diagram of ResNet.
Figure 4. The model structure of the LIBWONN method.
Figure 5. The training loss curves on the six datasets.
Figure 6. The training accuracy curves on the six datasets.
Figure 7. CEC2017 testing functions.
Figure 8. CEC2022 testing functions.
Figure 9. Line chart of sensitivity analysis results.
Table 1. Comparison of results from neural network models.
Baseline | Training Loss | Test Loss | Training Accuracy | Test Accuracy | Precision | Recall | F1-Score
ResNet18 | 0.2610 | 0.3405 | 0.9946 | 0.9533 | 0.95 | 0.95 | 0.95
RNN | 0.3721 | 0.3905 | 0.9703 | 0.9217 | 0.91 | 0.91 | 0.91
CNN | 0.2895 | 0.3197 | 0.9857 | 0.9475 | 0.94 | 0.94 | 0.94
LSTM | 0.3558 | 0.3729 | 0.9769 | 0.9328 | 0.93 | 0.93 | 0.93
GAN | 0.4012 | 0.4156 | 0.9653 | 0.9105 | 0.90 | 0.90 | 0.90
VGG | 0.2716 | 0.3652 | 0.9521 | 0.9549 | 0.92 | 0.92 | 0.92
Table 2. Comparison of results from interpolation methods.
Baseline | Training Loss | Test Loss | Training Accuracy | Test Accuracy | Precision | Recall | F1-Score
Newton | 0.3102 | 0.3421 | 0.9862 | 0.9384 | 0.93 | 0.94 | 0.93
Spline | 0.2905 | 0.3234 | 0.9901 | 0.9463 | 0.94 | 0.94 | 0.94
Quadratic | 0.3257 | 0.3552 | 0.9837 | 0.9243 | 0.92 | 0.92 | 0.92
Linear | 0.2998 | 0.3327 | 0.9874 | 0.9396 | 0.93 | 0.93 | 0.93
Chebyshev | 0.3179 | 0.3481 | 0.9852 | 0.9358 | 0.93 | 0.93 | 0.93
Lagrange | 0.2610 | 0.3405 | 0.9946 | 0.9533 | 0.95 | 0.95 | 0.95
Table 3. VHN dataset and comparison models.
Baseline | Training Loss | Test Loss | Training Accuracy | Test Accuracy | Precision | Recall | F1-Score | p-Value
LIBWONN | 0.2717 | 0.2964 | 0.9628 | 0.9498 | 0.94 | 0.95 | 0.95 | 1.000
BWO | 0.2867 | 0.3052 | 0.9391 | 0.9430 | 0.94 | 0.93 | 0.94 | 0.0176
DOA | 0.3211 | 0.3194 | 0.9167 | 0.9361 | 0.93 | 0.93 | 0.93 | 0.0263
ALA | 0.2713 | 0.3068 | 0.9197 | 0.9099 | 0.90 | 0.90 | 0.90 | 0.0381
SDO | 0.2910 | 0.3201 | 0.9580 | 0.9398 | 0.93 | 0.94 | 0.93 | 0.0112
CBSO | 0.3083 | 0.3230 | 0.9107 | 0.9243 | 0.92 | 0.92 | 0.92 | 0.2457
PSEQADE | 0.3148 | 0.3169 | 0.9151 | 0.9241 | 0.93 | 0.93 | 0.93 | 0.1953
COVIDOA | 0.2867 | 0.3052 | 0.9391 | 0.9430 | 0.91 | 0.93 | 0.94 | 0.0214
SASS | 0.3007 | 0.3184 | 0.9376 | 0.9412 | 0.93 | 0.94 | 0.93 | 0.0226
LSHADE | 0.3062 | 0.3215 | 0.9124 | 0.9268 | 0.92 | 0.92 | 0.92 | 0.2468
AGSK | 0.3131 | 0.3152 | 0.9165 | 0.9237 | 0.93 | 0.93 | 0.93 | 0.0311
BOA | 0.2648 | 0.3089 | 0.9423 | 0.9426 | 0.93 | 0.92 | 0.93 | 0.1429
WOA | 0.3174 | 0.3146 | 0.9249 | 0.9312 | 0.93 | 0.93 | 0.93 | 0.2048
Table 4. MNIST dataset and comparison models.
Baseline | Training Loss | Test Loss | Training Accuracy | Test Accuracy | Precision | Recall | F1-Score | p-Value
LIBWONN | 0.0994 | 0.0573 | 0.9912 | 0.9926 | 0.99 | 0.99 | 0.99 | 1.0000
BWO | 0.1414 | 0.0721 | 0.9583 | 0.9776 | 0.98 | 0.98 | 0.98 | 0.0231
DOA | 0.4732 | 0.0567 | 0.9873 | 0.9824 | 0.98 | 0.98 | 0.98 | 0.0442
ALA | 0.1345 | 0.0872 | 0.9602 | 0.9729 | 0.97 | 0.97 | 0.97 | 0.0684
SDO | 0.3221 | 0.0936 | 0.9898 | 0.9903 | 0.99 | 0.99 | 0.99 | 0.0012
CBSO | 0.1384 | 0.1801 | 0.9322 | 0.9468 | 0.95 | 0.95 | 0.95 | 0.2350
PSEQADE | 0.1142 | 0.1205 | 0.9558 | 0.9655 | 0.96 | 0.96 | 0.96 | 0.3120
COVIDOA | 0.1164 | 0.1189 | 0.9524 | 0.9625 | 0.95 | 0.95 | 0.95 | 0.0291
SASS | 0.1318 | 0.1768 | 0.9313 | 0.9458 | 0.95 | 0.95 | 0.95 | 0.3980
LSHADE | 0.1117 | 0.1183 | 0.9548 | 0.9648 | 0.96 | 0.96 | 0.96 | 0.0094
AGSK | 0.1152 | 0.1192 | 0.9515 | 0.9614 | 0.95 | 0.95 | 0.95 | 0.0761
BOA | 0.1268 | 0.1017 | 0.9610 | 0.9702 | 0.97 | 0.97 | 0.97 | 0.2658
WOA | 0.1195 | 0.0946 | 0.9635 | 0.9741 | 0.97 | 0.97 | 0.97 | 0.4682
Table 5. FASHION-MNIST dataset and comparison models.
Baseline | Training Loss | Test Loss | Training Accuracy | Test Accuracy | Precision | Recall | F1-Score | p-Value
LIBWONN | 0.2610 | 0.3405 | 0.9946 | 0.9533 | 0.95 | 0.95 | 0.95 | 1.0000
BWO | 0.3509 | 0.3741 | 0.9738 | 0.9332 | 0.93 | 0.93 | 0.93 | 0.0321
DOA | 0.3981 | 0.4098 | 0.9755 | 0.9245 | 0.93 | 0.92 | 0.92 | 0.0154
ALA | 0.3501 | 0.3714 | 0.9794 | 0.9272 | 0.93 | 0.93 | 0.93 | 0.0468
SDO | 0.3191 | 0.3584 | 0.9760 | 0.9252 | 0.92 | 0.93 | 0.93 | 0.0023
CBSO | 0.2853 | 0.3083 | 0.9612 | 0.9451 | 0.94 | 0.94 | 0.94 | 0.0214
PSEQADE | 0.3271 | 0.3523 | 0.9742 | 0.9162 | 0.91 | 0.91 | 0.91 | 0.1832
COVIDOA | 0.3496 | 0.3597 | 0.9649 | 0.9267 | 0.92 | 0.92 | 0.92 | 0.0587
SASS | 0.2858 | 0.3088 | 0.9604 | 0.9449 | 0.94 | 0.94 | 0.94 | 0.0914
LSHADE | 0.3289 | 0.3537 | 0.9724 | 0.9164 | 0.91 | 0.91 | 0.91 | 0.0009
AGSK | 0.3493 | 0.3586 | 0.9652 | 0.9270 | 0.92 | 0.92 | 0.92 | 0.0275
BOA | 0.3124 | 0.3367 | 0.9688 | 0.9341 | 0.93 | 0.93 | 0.93 | 0.2493
WOA | 0.2988 | 0.3198 | 0.9705 | 0.9386 | 0.94 | 0.93 | 0.93 | 0.1955
Table 6. Intel Image Classification dataset and comparison models.
Baseline | Training Loss | Test Loss | Training Accuracy | Test Accuracy | Precision | Recall | F1-Score | p-Value
LIBWONN | 0.2614 | 0.2862 | 0.9551 | 0.9410 | 0.95 | 0.93 | 0.93 | 1.0000
BWO | 0.2604 | 0.3168 | 0.9478 | 0.9387 | 0.94 | 0.93 | 0.93 | 0.0453
DOA | 0.3749 | 0.4537 | 0.9183 | 0.8984 | 0.90 | 0.90 | 0.90 | 0.0381
ALA | 0.3846 | 0.4216 | 0.9372 | 0.9195 | 0.93 | 0.91 | 0.91 | 0.0624
SDO | 0.3586 | 0.3870 | 0.9465 | 0.9301 | 0.94 | 0.92 | 0.92 | 0.0215
CBSO | 0.3794 | 0.3989 | 0.9411 | 0.9314 | 0.94 | 0.92 | 0.92 | 0.0784
PSEQADE | 0.3942 | 0.4367 | 0.9390 | 0.9307 | 0.93 | 0.92 | 0.92 | 0.1101
COVIDOA | 0.3571 | 0.3498 | 0.9226 | 0.9354 | 0.92 | 0.92 | 0.92 | 0.0892
SASS | 0.3796 | 0.3997 | 0.9430 | 0.9338 | 0.94 | 0.92 | 0.92 | 0.0247
LSHADE | 0.3948 | 0.4295 | 0.9403 | 0.9267 | 0.93 | 0.92 | 0.92 | 0.0034
AGSK | 0.3547 | 0.3479 | 0.9232 | 0.9347 | 0.92 | 0.92 | 0.92 | 0.0632
BOA | 0.3205 | 0.3602 | 0.9401 | 0.9307 | 0.93 | 0.91 | 0.91 | 0.0547
WOA | 0.3058 | 0.3417 | 0.9423 | 0.9332 | 0.93 | 0.92 | 0.92 | 0.0689
Table 7. RICE dataset and comparison models.
Baseline | Training Loss | Test Loss | Training Accuracy | Test Accuracy | Precision | Recall | F1-Score | p-Value
LIBWONN | 0.0100 | 0.0133 | 0.9918 | 0.9951 | 0.99 | 0.99 | 0.99 | 1.0000
BWO | 0.0309 | 0.0510 | 0.9967 | 0.9955 | 0.99 | 0.99 | 0.99 | 0.0224
DOA | 0.0252 | 0.0703 | 0.9890 | 0.9872 | 0.98 | 0.98 | 0.98 | 0.0137
ALA | 0.0378 | 0.0324 | 0.9816 | 0.9883 | 0.98 | 0.98 | 0.98 | 0.0459
SDO | 0.1341 | 0.1452 | 0.9619 | 0.9769 | 0.97 | 0.97 | 0.97 | 0.0341
CBSO | 0.1387 | 0.1208 | 0.9732 | 0.9754 | 0.97 | 0.97 | 0.97 | 0.0552
PSEQADE | 0.1583 | 0.1378 | 0.9653 | 0.9582 | 0.96 | 0.96 | 0.96 | 0.0631
COVIDOA | 0.1624 | 0.1289 | 0.9576 | 0.9597 | 0.95 | 0.95 | 0.95 | 0.0709
SASS | 0.1413 | 0.1187 | 0.9730 | 0.9794 | 0.97 | 0.97 | 0.97 | 0.0508
LSHADE | 0.1442 | 0.1339 | 0.9658 | 0.9502 | 0.96 | 0.96 | 0.96 | 0.0232
AGSK | 0.1578 | 0.1315 | 0.9536 | 0.9591 | 0.95 | 0.95 | 0.95 | 0.0479
BOA | 0.0853 | 0.0684 | 0.9784 | 0.9712 | 0.97 | 0.97 | 0.97 | 0.0385
WOA | 0.0921 | 0.0726 | 0.9765 | 0.9698 | 0.96 | 0.96 | 0.96 | 0.0457
Table 8. CIFAR10 dataset and comparison models.
Baseline | Training Loss | Test Loss | Training Accuracy | Test Accuracy | Precision | Recall | F1-Score | p-Value
LIBWONN | 0.0080 | 0.0917 | 0.9974 | 0.9307 | 0.93 | 0.93 | 0.93 | 1.0000
BWO | 0.0030 | 0.0923 | 0.9967 | 0.9423 | 0.93 | 0.93 | 0.93 | 0.0214
DOA | 0.0191 | 0.1459 | 0.9517 | 0.9094 | 0.91 | 0.91 | 0.91 | 0.0321
ALA | 0.0257 | 0.0973 | 0.9295 | 0.9269 | 0.93 | 0.93 | 0.93 | 0.0457
SDO | 0.0427 | 0.1072 | 0.9652 | 0.9184 | 0.92 | 0.92 | 0.92 | 0.0389
CBSO | 0.1344 | 0.1242 | 0.9743 | 0.9760 | 0.97 | 0.97 | 0.97 | 0.0524
PSEQADE | 0.1574 | 0.1495 | 0.9653 | 0.9503 | 0.96 | 0.96 | 0.96 | 0.0716
COVIDOA | 0.1748 | 0.1271 | 0.9537 | 0.9583 | 0.95 | 0.95 | 0.95 | 0.0913
SASS | 0.1461 | 0.1244 | 0.9682 | 0.9749 | 0.97 | 0.97 | 0.97 | 0.0782
LSHADE | 0.1658 | 0.1363 | 0.9578 | 0.9482 | 0.96 | 0.96 | 0.96 | 0.0684
AGSK | 0.1498 | 0.1303 | 0.9573 | 0.9520 | 0.95 | 0.95 | 0.95 | 0.0607
BOA | 0.0652 | 0.1104 | 0.9637 | 0.9410 | 0.94 | 0.93 | 0.93 | 0.0493
WOA | 0.0789 | 0.1187 | 0.9598 | 0.9357 | 0.93 | 0.92 | 0.92 | 0.0547
Table 9. CEC2017 testing functions.
Type | No. | Functions | Min
Unimodal Functions | 1 | Shifted and Rotated Bent Cigar Function | 100
Unimodal Functions | 2 | Shifted and Rotated Sum of Different Power Function | 200
Unimodal Functions | 3 | Shifted and Rotated Zakharov Function | 300
Simple Multimodal Functions | 4 | Shifted and Rotated Rosenbrock's Function | 400
Simple Multimodal Functions | 7 | Shifted and Rotated Lunacek Bi_Rastrigin's Function | 700
Simple Multimodal Functions | 8 | Shifted and Rotated Non-Continuous Rastrigin's Function | 800
Hybrid Functions | 13 | Hybrid Function 3 | 1300
Hybrid Functions | 14 | Hybrid Function 4 | 1400
Hybrid Functions | 15 | Hybrid Function 5 | 1500
Composition Functions | 21 | Composition Function 1 | 2100
Composition Functions | 22 | Composition Function 2 | 2200
Composition Functions | 23 | Composition Function 3 | 2300
Table 10. CEC2022 testing functions.
Type | No. | Functions | Min
Unimodal Functions | 1 | Shifted and full Rotated Zakharov Function | 300
Basic Functions | 2 | Shifted and full Rotated Rosenbrock's Function | 400
Basic Functions | 3 | Shifted and full Rotated Expanded Schaffer's Function | 600
Basic Functions | 4 | Shifted and full Rotated Non-Continuous Rastrigin's Function | 800
Basic Functions | 5 | Shifted and full Rotated Levy Function | 900
Hybrid Functions | 6 | Hybrid Function 1 | 1800
Hybrid Functions | 7 | Hybrid Function 2 | 2000
Hybrid Functions | 8 | Hybrid Function 3 | 2200
Composition Functions | 9 | Composition Function 1 | 2300
Composition Functions | 10 | Composition Function 2 | 2400
Composition Functions | 11 | Composition Function 3 | 2600
Composition Functions | 12 | Composition Function 4 | 2700
Table 11. Comparison of results from 12 benchmark testing functions on CEC2017.
F LIBWONNBWODOAALASDOCBSOPSEQADECOVIDOASASSLSHADEAGSKBOAWOA
F1Ave1.16 × 1031.09 × 1031.32 × 1031.24 × 1036.10 × 1034.30 × 1033.18 × 1031.75 × 1035.72 × 1034.10 × 1032.92 × 1037.50 × 1037.00 × 103
Std6.79 × 1036.96 × 1038.05 × 1036.28 × 1033.55 × 1033.45 × 1036.82 × 1038.62 × 1034.59 × 1039.18 × 1031.08 × 104 × 1045.00 × 102 × 1024.20 × 102 × 102
Best1.39 × 102 × 1023.72 × 102 × 1024.50 × 102 × 1023.88 × 102 × 1021.20 × 102 × 1021.39 × 102 × 1023.52 × 102 × 1022.10 × 102 × 1022.20 × 102 × 1025.68 × 102 × 1023.20 × 102 × 1028.00 × 1037.50 × 103
F2Ave3.08 × 1034.41 × 1031.33 × 1067.75 × 1064.05 × 1033.55 × 1031.72 × 1044.90 × 1035.14 × 1032.54 × 1047.56 × 1031.20 × 1041.15 × 104
Std1.03 × 1041.65 × 1043.75 × 1072.18 × 1081.38 × 1049.30 × 1031.24 × 1041.90 × 1041.38 × 1041.72 × 1042.52 × 1041.00 × 1039.00 × 102
Best2.00 × 1022.00 × 1022.35 × 1022.17 × 1022.05 × 1022.02 × 1026.93 × 1032.02 × 1023.30 × 1021.04 × 1043.32 × 1026.00 × 1025.50 × 102
F3Ave4.78 × 1024.51 × 1024.85 × 1021.49 × 1023.77 × 1024.25 × 1027.85 × 1024.58 × 1037.48 × 1021.45 × 1039.18 × 1038.00 × 1027.50 × 102
Std5.01 × 1027.78 × 1021.02 × 1039.10 × 1025.05 × 1026.80 × 1026.15 × 1024.13 × 1031.03 × 1039.68 × 1026.82 × 1034.00 × 1023.80 × 102
Best3.28 × 1023.04 × 1023.33 × 1023.05 × 1023.18 × 1023.16 × 1025.35 × 1023.72 × 1025.68 × 1029.81 × 1027.50 × 1022.20 × 1032.10 × 103
F4Ave1.98 × 1032.12 × 1031.71 × 1031.88 × 1031.15 × 1031.28 × 1034.10 × 1035.26 × 1031.67 × 1035.80 × 1037.94 × 1036.50 × 1036.00 × 103
Std4.23 × 1038.15 × 1035.90 × 1035.95 × 1034.35 × 1037.05 × 1037.05 × 1031.48 × 1041.08 × 1041.02 × 1042.03 × 1041.00 × 1039.00 × 102
Best4.15 × 1024.22 × 1025.45 × 1026.20 × 1024.18 × 1024.55 × 1029.60 × 1026.10 × 1021.30 × 1032.59 × 1031.23 × 1037.01 × 1027.00 × 102
F7Ave7.00 × 1027.01 × 1027.02 × 1027.02 × 1027.00 × 1027.00 × 1027.01 × 1027.02 × 1021.12 × 1031.13 × 1031.13 × 1031.10 1.10
Std1.14 1.45 1.11 1.22 6.80 × 10-0.99 8.25 × 10−17.22 × 10−11.32 1.21 9.88 × 10−17.00 × 1027.00 × 102
Best7.00 × 1027.00 × 1027.02 × 1027.02 × 1027.01 × 1027.01 × 1027.01 × 1027.02 × 1021.10 × 1031.10 × 1031.12 × 1038.10 × 1028.05 × 102
F8Ave8.05 × 1028.07 × 1028.04 × 1028.05 × 1028.07 × 1028.03 × 1028.18 × 1028.16 × 1021.43 × 1031.50 × 1031.47 × 1031.20 × 101.15 × 10
Std1.02 × 101.32 × 101.04 × 101.21 × 108.80 6.75 1.44 × 106.05 × 101.25 × 102.92 × 101.17 × 1028.00 × 1028.00 × 102
Best8.00 × 1028.00 × 1028.05 × 1028.06 × 1028.04 × 1028.05 × 1028.10 × 1028.14 × 1021.32 × 1031.35 × 1031.39 × 1031.32 × 1031.31 × 103
F13Ave1.30 × 1031.31 × 1031.32 × 1031.36 × 1031.33 × 1031.33 × 1031.31 × 1031.32 × 1032.56 × 1032.41 × 1037.53 × 1031.20 1.15
Std1.14 1.45 1.11 1.20 6.78 × 10−10.97 8.30 × 10−15.28 × 10−11.45 × 1031.18 × 1031.74 × 1031.30 × 1031.30 × 103
Best1.30 × 1031.30 × 1031.31 × 1031.30 × 1031.30 × 1031.30 × 1031.30 × 1031.31 × 1032.13 × 1032.45 × 1039.20 × 1038.10 × 1028.00 × 102
F14Ave8.05 × 1028.07 × 1028.05 × 1028.07 × 1028.09 × 1028.03 × 1028.16 × 1028.15 × 1022.83 × 1061.15 × 1075.10 × 1051.15 × 101.10 × 10
Std1.02 × 101.32 × 101.05 × 101.23 × 108.80 6.72 1.47 × 106.05 × 101.02 × 1089.70 × 1071.12 × 1068.00 × 1028.00 × 102
Best8.00 × 1028.00 × 1028.05 × 1028.07 × 1028.04 × 1028.07 × 1028.12 × 1028.17 × 1021.13 × 1045.52 × 1042.43 × 1033.00 × 1032.80 × 103
F15Ave2.41 × 1033.30 × 1032.50 × 1032.40 × 1032.20 × 1032.08 × 1032.12 × 1036.10 × 1035.81 × 1075.65 × 1088.23 × 1078.00 × 1027.50 × 102
Std7.50 × 1026.34 × 1021.02 × 1031.08 × 1031.01 × 1031.06 × 1037.65 × 1029.45 × 1029.35 × 1081.91 × 1091.22 × 1091.50 × 1031.40 × 103
Best2.19 × 1033.12 × 1031.77 × 1031.62 × 1031.55 × 1031.52 × 1031.64 × 1036.05 × 1031.04 × 1044.77 × 1072.68 × 1061.50 × 1061.40 × 106
F21Ave1.50 × 1057.16 × 1051.17 × 1076.95 × 1069.75 × 1051.40 × 1065.90 × 1062.52 × 1055.39 × 1074.69 × 1081.02 × 1085.00 × 1054.80 × 105
Std6.67 × 1071.80 × 1078.45 × 1083.92 × 1085.35 × 1078.40 × 1077.28 × 1078.88 × 1051.12 × 1092.35 × 1091.19 × 1091.20 × 1051.10 × 105
Best1.32 × 1032.30 × 1031.68 × 1031.42 × 1041.02 × 1036.40 × 1033.35 × 1041.15 × 1033.51 × 1039.76 × 1061.47 × 1055.00 × 1074.80 × 107
F22Ave3.79 × 1076.43 × 1071.23 × 1081.47 × 1084.35 × 1074.18 × 1074.60 × 1086.87 × 1075.72 × 1034.10 × 1032.92 × 1031.50 × 1071.40 × 107
Std8.14 × 1088.77 × 1081.06 × 1091.34 × 1098.70 × 1088.50 × 1081.64 × 1098.85 × 1084.59 × 1039.18 × 1031.08 × 1041.00 × 1069.00 × 105
Best2.70 × 1033.70 × 1035.15 × 1041.66 × 1051.59 × 1049.10 × 1032.45 × 1071.32 × 1062.20 × 1025.68 × 1023.20 × 1025.00 × 1074.90 × 107
F23Ave3.38 × 1077.18 × 1071.41 × 1088.58 × 1079.30 × 1074.45 × 1074.00 × 1087.90 × 1075.14 × 1032.54 × 1047.56 × 1031.50 × 1071.40 × 107
Std7.43 × 1081.13 × 1093.36 × 1091.62 × 1091.20 × 1098.30 × 1081.70 × 1098.40 × 1081.38 × 1041.72 × 1042.52 × 1041.00 × 1069.00 × 105
Best3.87 × 1031.25 × 1034.40 × 1044.38 × 1041.18 × 1042.30 × 1036.50 × 1068.90 × 1043.30 × 1021.04 × 1043.32 × 1027.50 × 1037.00 × 103
Rank 128543567884
Table 12. Comparison of results from 12 benchmark testing functions on CEC2022.
F LIBWONNBWODOAALASDOCBSOPSEQADECOVIDOASASSLSHADEAGSKBOAWOA
F1Ave1.13 × 1031.92 × 1037.34 × 1032.12 × 1061.56 × 1032.56 × 1031.72 × 1043.11 × 1032.78 × 1031.92 × 1043.29 × 1031.64 × 1031.63 × 103
Std1.03 × 1036.22 × 1031.15 × 1055.29 × 1075.12 × 1039.21 × 1031.15 × 1045.38 × 1038.56 × 1031.12 × 1045.11 × 1039.94 × 1021.08 × 103
Best3.00 × 1023.00 × 1024.10 × 1024.25 × 1023.85 × 1023.90 × 1029.27 × 1026.00 × 1023.53 × 1029.76 × 1026.24 × 1026.00 × 1026.00 × 102
F2Ave9.94 × 1021.08 × 1031.28 × 1032.42 × 1038.35 × 1021.02 × 1031.49 × 1038.64 × 1028.05 × 1021.50 × 1038.97 × 1028.03 × 1028.00 × 102
Std1.06 × 1031.90 × 1035.18 × 1036.85 × 1029.64 × 1021.63 × 1031.14 × 1031.14 × 1031.35 × 1039.91 × 1029.91 × 1029.01 × 1029.07 × 102
Best3.69 × 1024.50 × 1026.25 × 1026.22 × 1025.11 × 1025.33 × 1027.10 × 1024.91 × 1025.62 × 1027.32 × 1025.52 × 1021.10 × 1032.05 × 103
F3Ave6.00 × 1026.00 × 1027.20 × 1027.15 × 1027.05 × 1027.08 × 1027.22 × 1027.10 × 1027.12 × 1027.24 × 1027.12 × 1022.87 × 1032.28 × 103
Std7.93 × 10−17.02 × 10−15.10 × 10−15.56 × 10−16.08 × 10−15.35 × 10−15.99 × 10−16.42 × 10−16.00 × 10−16.83 × 10−17.42 × 10−12.21 × 10106.30 × 1011
Best6.00 × 1026.00 × 1027.12 × 1027.16 × 1027.18 × 1027.14 × 1027.19 × 1027.12 × 1027.12 × 1027.24 × 1027.12 × 1023.21 × 1033.14 × 103
F4Ave8.03 × 1028.00 × 1029.19 × 1029.20 × 1029.22 × 1029.18 × 1029.35 × 1029.17 × 1029.45 × 1029.52 × 1029.50 × 1026.35 × 1034.82 × 103
Std1.62 1.43 7.15 × 10−19.32 × 10−14.65 × 10−17.29 × 10−19.75 × 10−15.35 × 10−11.55 7.92 × 105.52 × 103.12 × 1034.33 × 103
Best8.00 × 1028.01 × 1029.15 × 1029.17 × 1029.18 × 1029.14 × 1029.30 × 1029.16 × 1029.12 × 1029.13 × 1029.10 × 1023.05 × 1033.06 × 103
F5Ave9.01 × 1029.07 × 1022.48 × 1031.87 × 1039.65 × 1031.56 × 1032.77 × 1033.72 × 1039.25 × 1029.42 × 1029.23 × 1021.13 × 1031.92 × 103
Std1.21 × 101.07 × 103.02 × 1039.52 × 1035.85 × 1031.45 × 1036.82 × 1039.58 × 1037.23 × 109.75 × 105.12 × 109.94 × 1021.08 × 103
Best9.00 × 1029.00 × 1022.70 × 1043.15 × 1042.35 × 1033.25 × 1033.10 × 1052.76 × 1039.14 × 1029.32 × 1029.19 × 1026.00 × 1026.00 × 102
F6Ave1.10 × 1032.05 × 1033.47 × 1033.75 × 1033.28 × 1033.10 × 1035.10 × 1033.04 × 1031.68 × 1032.52 × 1033.62 × 1038.03 × 1028.00 × 102
Std2.25 × 1038.93 × 1031.74 × 1037.20 × 1037.55 × 1021.75 × 1034.20 × 1036.20 × 1021.38 × 1036.92 × 1038.02 × 1039.01 × 1029.07 × 102
Best1.90 × 1032.79 × 1032.56 × 1032.43 × 1032.32 × 1032.28 × 1032.50 × 1032.45 × 1033.12 × 1033.02 × 1052.52 × 1031.10 × 1032.05 × 103
F7Ave2.87 × 1032.28 × 1034.89 × 10141.22 × 10132.78 × 10134.08 × 10121.08 × 10115.14 × 10102.92 × 1035.12 × 1032.85 × 1032.87 × 1032.28 × 103
Std1.98 × 1031.32 × 1037.35 × 10151.75 × 10144.25 × 10145.13 × 10134.82 × 10115.24 × 10121.56 × 1034.70 × 1036.12 × 1022.21 × 10106.30 × 1011
Best2.04 × 1032.05 × 1033.52 × 1034.72 × 1033.45 × 1033.56 × 1031.05 × 1062.63 × 1032.58 × 1032.74 × 1032.62 × 1033.21 × 1033.14 × 103
F8Ave2.21 × 10106.30 × 10113.80 × 1034.07 × 1033.28 × 1033.50 × 1034.78 × 1033.56 × 1034.02 × 10131.12 × 10115.22 × 10116.35 × 1034.82 × 103
Std1.62 × 10128.64 × 10122.55 × 1034.75 × 1037.80 × 1029.48 × 1021.90 × 1031.12 × 1035.34 × 10145.11 × 10125.06 × 10133.12 × 1034.33 × 103
Best2.84 × 1032.39 × 1033.18 × 1033.22 × 1033.06 × 1033.08 × 1033.61 × 1033.04 × 1033.70 × 1031.14 × 1062.72 × 1033.05 × 1033.06 × 103
F9Ave3.21 × 1033.14 × 1034.10 × 1033.98 × 1033.61 × 1034.15 × 1034.55 × 1035.03 × 1033.56 × 1034.35 × 1033.88 × 1031.13 × 1031.92 × 103
Std1.39 × 1031.01 × 1032.84 × 1032.92 × 1032.76 × 1032.60 × 1032.16 × 1034.38 × 1029.15 × 1021.77 × 1031.05 × 1039.94 × 1021.08 × 103
Best2.30 × 1032.65 × 1033.36 × 1033.28 × 1033.10 × 1032.59 × 1033.24 × 1035.08 × 1032.78 × 1033.51 × 1032.94 × 1036.00 × 1026.00 × 102
F10Ave6.35 × 1034.82 × 1034.42 × 1035.54 × 1034.55 × 1033.40 × 1034.62 × 1034.87 × 1034.12 × 1034.76 × 1035.12 × 1038.03 × 1028.00 × 102
Std6.88 × 1032.24 × 1035.15 × 1032.21 × 1048.05 × 1032.02 × 1033.75 × 1034.32 × 1032.50 × 1032.10 × 1034.12 × 1029.01 × 1029.07 × 102
Best2.44 × 1022.59 × 1032.95 × 1033.09 × 1033.01 × 1032.98 × 1033.40 × 1033.00 × 1032.67 × 1033.50 × 1035.11 × 1031.10 × 1032.05 × 103
F11Ave3.12 × 1034.33 × 1033.56 × 1033.62 × 1033.49 × 1033.29 × 1033.90 × 1034.18 × 1033.56 × 1034.53 × 1034.82 × 1032.87 × 1032.28 × 103
Std1.07 × 1034.89 × 1034.60 × 1025.82 × 1022.90 × 1022.68 × 1023.71 × 1028.24 × 101.90 × 1033.87 × 1034.01 × 1032.21 × 10106.30 × 1011
Best2.60 × 1032.61 × 1033.20 × 1033.25 × 1033.08 × 1033.15 × 1034.54 × 1034.27 × 1032.83 × 1033.23 × 1032.80 × 1033.21 × 1033.14 × 103
F12Ave3.05 × 1033.06 × 1037.34 × 1032.12 × 1061.56 × 1032.56 × 1031.72 × 1043.11 × 1033.25 × 1033.96 × 1034.12 × 1036.35 × 1034.82 × 103
Std1.72 × 1022.07 × 1021.15 × 1055.29 × 1075.12 × 1039.21 × 1031.15 × 1045.38 × 1032.85 × 1024.00 × 1028.52 × 103.12 × 1034.33 × 103
Best2.74 × 1032.94 × 1034.10 × 1024.25 × 1023.85 × 1023.90 × 1029.27 × 1026.00 × 1023.48 × 1034.56 × 1034.23 × 1033.05 × 1033.06 × 103
Rank 1244537767766
Table 13. The results of the ablation study.
Baseline | Training Loss | Test Loss | Training Accuracy | Test Accuracy | Precision | Recall | F1-Score
LIBWONN | 0.2610 | 0.3405 | 0.9946 | 0.9533 | 0.95 | 0.95 | 0.95
BWO | 0.3509 | 0.3741 | 0.9738 | 0.9332 | 0.93 | 0.93 | 0.93
Fixed LR optimization | 0.3800 | 0.4200 | 0.9700 | 0.9200 | 0.91 | 0.91 | 0.91
Table 14. The results of the sensitivity analysis experiment.
Population | Iterations | Training Loss | Test Loss | Training Accuracy | Test Accuracy | Precision | Recall | F1-Score
10 | 50 | 0.3800 | 0.4200 | 0.9700 | 0.9200 | 0.91 | 0.91 | 0.91
30 | 50 | 0.3680 | 0.4090 | 0.9730 | 0.9250 | 0.92 | 0.92 | 0.92
50 | 50 | 0.3580 | 0.3960 | 0.9750 | 0.9300 | 0.92 | 0.92 | 0.92
100 | 50 | 0.3500 | 0.3800 | 0.9765 | 0.9330 | 0.93 | 0.93 | 0.93
10 | 100 | 0.3685 | 0.4145 | 0.9745 | 0.9220 | 0.91 | 0.91 | 0.91
30 | 100 | 0.3540 | 0.3980 | 0.9780 | 0.9260 | 0.93 | 0.92 | 0.92
50 | 100 | 0.3300 | 0.3700 | 0.9820 | 0.9340 | 0.94 | 0.93 | 0.94
100 | 100 | 0.3200 | 0.3600 | 0.9840 | 0.9450 | 0.94 | 0.94 | 0.94
10 | 200 | 0.3215 | 0.3920 | 0.9800 | 0.9280 | 0.92 | 0.92 | 0.92
30 | 200 | 0.3100 | 0.3810 | 0.9830 | 0.9370 | 0.93 | 0.93 | 0.93
50 | 200 | 0.2900 | 0.3530 | 0.9855 | 0.9445 | 0.94 | 0.94 | 0.94
100 | 200 | 0.2543 | 0.3078 | 0.9915 | 0.9553 | 0.95 | 0.95 | 0.95
150 | 200 | 0.2501 | 0.3050 | 0.9920 | 0.9560 | 0.95 | 0.95 | 0.95
10 | 300 | 0.3200 | 0.3900 | 0.9805 | 0.9295 | 0.92 | 0.92 | 0.92
30 | 300 | 0.3080 | 0.3795 | 0.9835 | 0.9380 | 0.93 | 0.93 | 0.93
50 | 300 | 0.2890 | 0.3525 | 0.9860 | 0.9450 | 0.94 | 0.94 | 0.94
100 | 300 | 0.2520 | 0.3050 | 0.9918 | 0.9555 | 0.95 | 0.95 | 0.95
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
