Article

Hybrid Algorithms Based on Two Evolutionary Computations for Image Classification

1 School of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
2 School of Software Engineering, Chengdu University of Information Technology, Chengdu 610225, China
3 Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, Chongqing 400714, China
4 Software Automatic Generation and Intelligent Service Key Laboratory of Sichuan Province, Chengdu 610225, China
5 Key Laboratory of Remote Sensing Application and Innovation, Chongqing 401147, China
6 Dazhou Key Laboratory of Government Data Security, Sichuan University of Arts and Science, Dazhou 635000, China
* Author to whom correspondence should be addressed.
Biomimetics 2025, 10(8), 544; https://doi.org/10.3390/biomimetics10080544
Submission received: 2 July 2025 / Revised: 11 August 2025 / Accepted: 13 August 2025 / Published: 19 August 2025

Abstract

Convolutional neural networks (CNNs) and their improved models (such as DenseNet-121) have achieved significant results in image classification tasks. However, the performance of these models is still constrained by issues such as hyperparameter optimization and vanishing and exploding gradients. Owing to their unique exploration and exploitation capabilities, evolutionary algorithms offer new avenues for addressing these problems. To prevent these algorithms from falling into a local optimum during the search process, this study also designs a novel interpolation operation. To achieve better image classification performance, thereby enhancing classification accuracy and boosting model stability, this paper utilizes a hybrid algorithm based on the horned lizard optimization algorithm with quadratic interpolation and the giant armadillo optimization with Newton interpolation (HGAO) to optimize the hyperparameters of DenseNet-121, and applies it to five datasets spanning different domains. Because the learning rate and dropout rate have notable impacts on the outcomes of the DenseNet-121 model, they are chosen as the hyperparameters to be optimized. Experiments are conducted with the HGAO algorithm on five image datasets and compared with nine state-of-the-art algorithms. The performance of the model is evaluated based on accuracy, precision, recall, and F1-score. The experimental results reveal that the combination of hyperparameters becomes more reasonable after optimization with the HGAO algorithm, providing a crucial improvement. In the comparative experiments, the image classification accuracy on the training set increased by up to 0.5%, with a maximum reduction in loss of 0.018. On the test set, the accuracy rose by 0.5%, and the loss decreased by 0.054. The HGAO algorithm provides an effective solution for optimizing the DenseNet-121 model. The designed method boosts classification accuracy and model stability, markedly improves hyperparameter optimization, and alleviates gradient difficulties.

1. Introduction

Over the past decades, CNNs have made significant strides in image classification tasks and have been widely adopted in various image recognition and analysis applications. Conventional CNNs like AlexNet [1], GoogLeNet [2], and ResNet [3] have delivered excellent performance across different image classification tasks, driving continuous advancements in image processing technology. Meanwhile, emerging novel networks, like LSTM [4], ReNet [5], ViT [6], and DeiT [7], have tremendous potential and can commendably handle complex time-series data and cross-domain tasks [8,9,10,11].
Owing to its unique dense connectivity mechanism and its high classification performance on PlantVillage, we adopt DenseNet-121 [12] to address the image classification problem. However, its performance depends on the network design and hyperparameter settings. Most importantly, the learning rate and dropout rate play vital roles in the convergence speed and generalization ability of the model. The learning rate determines the magnitude of the model weight adjustments: a value that is too high results in oscillation and failure to converge, while one that is too low may trap the model in a local optimum and slow convergence [2,12,13,14,15].
The latest research has further expanded the application scope of these networks [16,17,18,19,20]. Ma et al. developed a novel network called GoogLeNet-AL, which has better performance in lung cancer diagnoses, outperforming traditional GoogLeNet and other models [2]. Song et al. proposed a network model named DesTrans, which combines the advantages of DenseNet, ResNet, and Transformer to deal with medical images, and then applied it to multi-exposure, multi-focus, and infrared visible images [14]. Additionally, Chang et al. presented a wheat rust recognition method based on an improved DenseNet, which can effectively extract features from wheat leaf images [12]. Talukder et al. proposed a fine-tuned EfficientNet, which provides radiologists with an effective method for rapid and accurate COVID-19 diagnosis [16].
Although these models perform effectively in their respective applications, their hyperparameters are fixed values, which limits their full potential. Evolutionary algorithms have excellent potential in optimizing hyperparameters to achieve better results on specific tasks [1,4,21,22,23,24,25,26,27].
To address the above-mentioned issues, we propose an adaptive parameter optimization method, which was adopted to further enhance the performance of DenseNet-121 on image classification tasks. This method combines the following two different evolutionary algorithms: quadratic interpolation-based horned lizard optimization algorithm (QIHLOA) [8,17] and Newton interpolation-based giant armadillo optimization algorithm (NIGAO) [9,18], forming a hybrid algorithm called the “hybrid algorithm based on horned lizard optimization algorithm and giant armadillo optimization (HGAO)”. The proposed algorithm integrates the advantages of the HLOA and GAO algorithms, thus incorporating quadratic interpolation and Newton interpolation operations to enhance search capability and accuracy.
Evolutionary algorithms, biomimetic optimizers that simulate natural selection and biological evolution, improve deep learning performance via their bio-inspired properties. Notably, QIHLOA and NIGAO represent emerging biomimetic evolutionary methodologies that provide distinctive solutions for neural network-based image classification tasks. Their biomimetic traits crucially balance model complexity and generalization in deep learning. By emulating organisms’ adaptive survival strategies, these algorithms enable neural networks to dynamically tune topologies and optimize hyperparameters. This drives image classification from “data-driven” to “intelligently evolutionary”, unlocking the potential in few-shot learning and cross-domain recognition. Furthermore, the main contributions of this study are as follows:
(a)
This study proposes an adaptive hyperparameter optimization method that combines QIHLOA and NIGAO algorithms. This algorithm is embedded into DenseNet-121, thus achieving efficient image classification. Notably, HGAO can quickly search for the optimal solution by adjusting the ratio of different weights β1 and β2, thereby improving the accuracy and speed of image classification.
(b)
Four public image classification datasets are gathered, covering fields such as healthcare, agriculture, remote sensing, and disaster management. Moreover, we construct a self-developed traditional Chinese medicine image dataset. Together, these datasets are sufficient for our experiments.
(c)
Extensive experimental results indicate that the proposed method has high accuracy and efficiency in multi-domain image classification tasks. In particular, the proposed HGAO algorithm with DenseNet-121 outperforms several state-of-the-art algorithms—including HLOA, ESOA, PSO, and WOA—in multi-domain image classification tasks.
The rest of this paper is organized as follows: Section 2 presents the proposed method. Section 3 indicates the experimental results and analyzes the critical findings. Finally, Section 4 offers conclusions and future work.

2. Related Theoretical Description

2.1. DenseNet-121

The major structure of DenseNet-121 is shown in Figure 1. This network consists of multiple dense blocks, transition layers, global average pooling layers, and fully connected layers. Each dense block is composed of several convolutional layers, which enhance the efficiency of information flow by adopting dense connections [5,6,7,28,29,30,31,32].
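To make the setup concrete, the following minimal sketch (assuming PyTorch and the torchvision implementation of DenseNet-121; the helper name build_densenet121 is ours, not the paper's exact implementation) shows how a tunable dropout rate can be attached ahead of the final classifier, which is how the two hyperparameters optimized later enter the model:

```python
# Minimal sketch, assuming PyTorch/torchvision; build_densenet121 is an
# illustrative helper name, not the paper's exact implementation.
import torch.nn as nn
from torchvision import models

def build_densenet121(num_classes: int, dropout_rate: float) -> nn.Module:
    model = models.densenet121(weights=None)      # optionally pretrained weights
    in_features = model.classifier.in_features    # 1024 for DenseNet-121
    # Insert the tunable dropout rate in front of the final fully connected layer.
    model.classifier = nn.Sequential(
        nn.Dropout(p=dropout_rate),
        nn.Linear(in_features, num_classes),
    )
    return model
```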

2.2. HLOA Algorithm

The horned lizard optimization algorithm (HLOA) is a novel bioinspired optimization algorithm [8] that simulates various defensive behaviors of the horned lizard, including crypsis, skin color changes, blood squirting, and escaping.

2.2.1. Crypsis Behavior Strategy

Crypsis is a behavior by which organisms mimic characteristics of their environment to blend into their surroundings. This paper adopts color evaluation systems, namely the Cartesian-coordinate Lab system and the polar-coordinate LCh system, to quantify colors.
In the Lab system, L represents luminosity, and a and b are chromaticity coordinates, which can be written as follows:
$$ a = \begin{cases} +a, & \text{indicates Red}, \\ -a, & \text{indicates Green}, \end{cases} \qquad b = \begin{cases} +b, & \text{indicates Yellow}, \\ -b, & \text{indicates Blue}, \end{cases} \tag{1} $$
In the LCh system, L defines brightness, C specifies color intensity, and h represents the hue angle. The crypsis strategy updates an individual's position as follows:
$$ x_i^{t+1} = x_{best}^{t} + \frac{t}{Max_{iter}} \times \left( c_1 \sin x_{r_1}^{t} - \cos x_{r_2}^{t} \right) - (-1)^{\sigma} \left( c_2 \cos x_{r_3}^{t} - \sin x_{r_4}^{t} \right), \tag{2} $$
where $x_i^{t+1}$ is the new individual position in the population space at generation t + 1; $x_{best}^{t}$ represents the best individual in generation t; r1, r2, r3, and r4 are integer random numbers; $Max_{iter}$ is the maximum number of iterations; σ is set at 2; and c1 and c2 are random numbers.
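As a concrete illustration, a minimal NumPy sketch of this crypsis update is given below; the sign placement follows our reconstruction of Eq. (2), since the operators were lost in extraction:

```python
# Sketch of the crypsis update, Eq. (2); sign placement is our reconstruction.
import numpy as np

def crypsis_update(pop, best, t, max_iter, sigma=2, rng=None):
    rng = rng or np.random.default_rng()
    P = len(pop)
    new_pop = np.empty_like(pop)
    for i in range(P):
        r1, r2, r3, r4 = rng.integers(0, P, size=4)   # four random peers
        c1, c2 = rng.random(2)                        # random scalars
        new_pop[i] = (best
                      + (t / max_iter) * (c1 * np.sin(pop[r1]) - np.cos(pop[r2]))
                      - (-1) ** sigma * (c2 * np.cos(pop[r3]) - np.sin(pop[r4])))
    return new_pop
```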

2.2.2. Skin Darkening or Lightening Strategy

Horned lizards can lighten or darken their skin, depending on whether they need to reduce or increase solar heat gain [8]. The skin color-changing strategy of the horned lizard can be represented as
$$ x_{worst}^{t} = x_{best}^{t} + \frac{1}{2} Light_1 \sin\left( x_{r_1}^{t} - x_{r_2}^{t} \right) - (-1)^{\sigma} \frac{1}{2} Light_2 \sin\left( x_{r_3}^{t} - x_{r_4}^{t} \right), \tag{3} $$
$$ x_{worst}^{t} = x_{best}^{t} + \frac{1}{2} Dark_1 \sin\left( x_{r_1}^{t} - x_{r_2}^{t} \right) - (-1)^{\sigma} \frac{1}{2} Dark_2 \sin\left( x_{r_3}^{t} - x_{r_4}^{t} \right), \tag{4} $$
where Light1, Light2, Dark1, and Dark2 are random numbers; $x_{worst}^{t}$ and $x_{best}^{t}$ are the worst and best individuals, respectively; r1, r2, r3, and r4 are integer random numbers; $x_{r_1}^{t}$, $x_{r_2}^{t}$, $x_{r_3}^{t}$, and $x_{r_4}^{t}$ are the individuals selected at positions r1, r2, r3, and r4; and σ is a number.

2.2.3. Blood-Squirting Strategy

Horned lizards defend themselves by squirting blood from their eyes. This behavior can be written as
$$ x_i^{t+1} = \left( v_0 \cos\left( \alpha \frac{t}{Max_{iter}} \right) + \varepsilon \right) x_{best}^{t} + \left( v_0 \sin\left( \alpha \frac{t}{Max_{iter}} \right) - g + \varepsilon \right) x_i^{t}, \tag{5} $$
where $x_i^{t+1}$ and $x_i^{t}$ are the positions of the i-th individual at generations t + 1 and t, respectively; $x_{best}^{t}$ is the best individual; $Max_{iter}$ is the maximum number of iterations; $v_0$ is set at 1 m/s; α is set at π/2; ε is set at 1 × 10−6; and g is the gravitational acceleration of Earth.

2.2.4. Move and Escape Strategy

In this strategy, the horned lizard moves quickly in the environment to evade predators.
$$ x_i^{t+1} = x_{best}^{t} + walk \left( \frac{1}{2} - \varepsilon \right) x_i^{t}, \tag{6} $$
where $x_i^{t+1}$ and $x_i^{t}$ are the positions of the individual in the population space at generations t + 1 and t, respectively; $x_{best}^{t}$ is the best individual in generation t; walk and ε are random numbers; and σ∈(0, 1).

2.2.5. Melanophore-Stimulating Hormone Rate Strategy

The rapid color change observed on the skin of horned lizards is attributed to the effect of temperature on melanophores. The melanophore rate of the horned lizard is defined as follows:
$$ melanophore(i) = \frac{F_{max} - F(i)}{F_{max} - F_{min}}, \tag{7} $$
where Fmin and Fmax are the best and worst fitness values in current generation t, respectively. F(i) is the current fitness value of the i-th individual. Thereafter, we obtain the following:
$$ x_i^{t} = x_{best}^{t} + \frac{1}{2} \left( x_{r_1}^{t} - (-1)^{\sigma} x_{r_2}^{t} \right), \tag{8} $$
where $x_i^{t}$ is the current individual, $x_{best}^{t}$ is the best individual, r1 and r2 are integer random numbers, and σ∈(0, 1).
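A short sketch of this restart mechanism, combining Eqs. (7) and (8), could look as follows (minimization assumed; the small epsilon guard against division by zero is our addition):

```python
# Sketch of the melanophore-triggered restart, Eqs. (7)-(8); minimization assumed.
import numpy as np

def melanophore_restart(pop, fitness, best, sigma, rng, threshold=0.3):
    F = np.asarray(fitness)
    rate = (F.max() - F) / (F.max() - F.min() + 1e-12)             # Eq. (7)
    P = len(pop)
    for i in np.flatnonzero(rate <= threshold):                    # low-rate individuals
        r1, r2 = rng.integers(0, P, size=2)
        pop[i] = best + 0.5 * (pop[r1] - (-1) ** sigma * pop[r2])  # Eq. (8)
    return pop
```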

2.3. Quadratic Interpolation Method

Quadratic interpolation is a local search operator [17] that can search for the optimal solution of the population in a known search space. Its learning rule is as follows:
$$ x_h = \frac{1}{2} \times \frac{\left( c_h^2 - b_h^2 \right) f(A) + \left( a_h^2 - c_h^2 \right) f(B) + \left( b_h^2 - a_h^2 \right) f(C)}{\left( c_h - b_h \right) f(A) + \left( a_h - c_h \right) f(B) + \left( b_h - a_h \right) f(C)}, \tag{9} $$
where A, B, and C are three interpolation points; $a_h$, $b_h$, and $c_h$ denote their h-th components; and f(A), f(B), and f(C) are the fitness values of A, B, and C, respectively.
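In code, Eq. (9) can be applied dimension-wise to three candidate solutions, for example as in the following sketch (the epsilon guard is our addition):

```python
# Sketch of dimension-wise quadratic interpolation, Eq. (9).
import numpy as np

def quadratic_interpolation(a, b, c, fA, fB, fC, eps=1e-12):
    """a, b, c: position vectors of points A, B, C; fA, fB, fC: their fitness."""
    num = (c**2 - b**2) * fA + (a**2 - c**2) * fB + (b**2 - a**2) * fC
    den = (c - b) * fA + (a - c) * fB + (b - a) * fC
    return 0.5 * num / (den + eps)   # vertex of the fitted parabola, per dimension
```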
To further enhance the search accuracy of the horned lizard optimization algorithm, we adopted a quadratic interpolation operator to modify its learning rule; its pseudocode is illustrated in Algorithm 1.
Algorithm 1: QIHLOA
Input: number of search agents D, population size P, maximum number of iterations T
/* Initialization */
1. Initialize D, P, T
2. Initialize the population
/* Training starts */
3. for t = 1 to T do
4.   for i = 1 to P do
5.     if Crypsis? then
6.       Strategy 1: Update $x_i^{t+1}$ with (2).
7.     else
8.       if Flee? then
9.         Strategy 4: Update $x_i^{t+1}$ with (6).
10.      else
11.        Strategy 3: Compute $x_i^{t+1}$ with (5).
12.      end if
13.    end if
14.    Strategy 2: Replace the worst population individual with (3) or (4).
15.    if melanophore(i) ≤ 0.3 then
16.      Strategy 5: Generate a new position $x_i^{t}$ with (8).
17.    end if
18.    Calculate F(i) for the new individual $x_i^{t+1}$.
19.    Use (9) to obtain a new individual $x_h$.
20.    Update $x_{best}^{t}$ with (15).
21.    if F(i) < Fbest then
22.      Fbest = F(i)
23.      $x_{best}^{t}$ = $x_i^{t}$
24.    end if
25.  end for
26. end for
/* Operation ending */
Output: $x_{best}^{t}$, Fbest

2.4. GAO Algorithm

The GAO algorithm is a bionic metaheuristic algorithm [9] that simulates the natural behavior of wild giant armadillos.

2.4.1. The Stage of Exploration

In the first phase of the GAO algorithm, the positions of population members in the problem-solving space are updated by simulating giant armadillos attacking termite mounds during hunting. Specifically, the formula for updating the new position of each member in the population is as follows:
$$ X_{new} = X_{i,j} + r \cdot \left( STM_i - I \cdot X_i \right), \tag{10} $$
$$ X_i = \begin{cases} X_{new}, & F_{new} \le F_i, \\ X_i, & F_{new} > F_i, \end{cases} \tag{11} $$
where $STM_i$ is the termite mound selected by the i-th giant armadillo; $X_{new}$ is the new position calculated for the i-th giant armadillo in the attack phase of the GAO algorithm; $F_{new}$ is the corresponding objective function value; r∈(0, 1); and I is a random number.

2.4.2. Digging in Termite Mounds

In the second phase of the GAO algorithm, the positions of population members in the solution space are updated by simulating giant armadillos digging into termite mounds; its learning rule is as follows:
$$ X_{new} = X_i + (1 - 2r) \times \frac{ub - lb}{t}, \tag{12} $$
$$ X_i = \begin{cases} X_{new}, & F_{new} \le F_i, \\ X_i, & F_{new} > F_i, \end{cases} \tag{13} $$
where t is the iteration counter, and ub and lb are the upper and lower bounds of the search space, respectively.
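Putting the two phases together, one GAO iteration can be sketched as follows (greedy replacement per Eqs. (11) and (13); the rule of choosing termite mounds among better-scoring individuals and the range of I are assumptions based on the GAO description, not the paper's exact implementation):

```python
# Sketch of one GAO iteration, Eqs. (10)-(13); minimization assumed.
import numpy as np

def gao_step(pop, fitness, obj, ub, lb, t, rng):
    P = len(pop)
    for i in range(P):
        # Phase 1 (exploration): attack a termite mound, i.e., a better individual.
        better = np.flatnonzero(fitness < fitness[i])
        stm = pop[rng.choice(better)] if better.size else pop[i]
        I = rng.integers(1, 3)                                   # assumed I in {1, 2}
        x_new = pop[i] + rng.random() * (stm - I * pop[i])       # Eq. (10)
        f_new = obj(x_new)
        if f_new <= fitness[i]:                                  # Eq. (11)
            pop[i], fitness[i] = x_new, f_new
        # Phase 2 (exploitation): dig in a neighborhood that shrinks with t.
        x_new = pop[i] + (1 - 2 * rng.random()) * (ub - lb) / t  # Eq. (12)
        f_new = obj(x_new)
        if f_new <= fitness[i]:                                  # Eq. (13)
            pop[i], fitness[i] = x_new, f_new
    return pop, fitness
```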

2.5. Newton Interpolation Method

Newton interpolation is a local optimization method [18], which is able to identify the optimal solution. By gradually adding data points, Newton interpolation can approximate the objective function, thereby improving computational accuracy. Moreover, its updating rule is written as follows:
$$ X_b = \frac{X_{last} + X_i}{2} - \frac{f[X_{last}, X_i]}{2 \cdot f[X_{last}, X_i, X_{best}]}, \tag{14} $$
$$ X_i = \begin{cases} X_i, & f(X_i) < f(X_b), \\ X_b, & f(X_i) \ge f(X_b), \end{cases} \tag{15} $$
where $f[X_{last}, X_i]$ and $f[X_{last}, X_i, X_{best}]$ are the first- and second-order divided differences, which play the roles of the first and second derivatives, respectively. $X_{last}$ and $X_{best}$ are the best individual from the last iteration and the global best individual, respectively. Furthermore, the major pseudocode of the Newton interpolation optimization algorithm for the giant armadillo is shown in Algorithm 2.
Algorithm 2: NIGAO
Input: number of search agents D, population size P, maximum number of iterations T
/* Initialization */
1. Initialize D, P, T
2. Initialize the population
/* Training starts */
3. for t = 1 to T do
4.   for i = 1 to P do
5.     Phase 1: Attack on termite mounds
6.       Determine the termite mound set for the i-th population member $X_i$.
7.       Randomly select the termite mound of $X_i$.
8.       Obtain a new position $X_{new}$ with (10).
9.       Calculate the fitness of $X_{new}$ and update $X_i$ according to (11).
10.    Phase 2: Digging in termite mounds
11.      Update the position $X_{new}$ based on (12).
12.      Calculate the fitness of $X_{new}$ and update $X_i$ according to (13).
13.      Use (14) to obtain a new individual $X_b$.
14.      Update $X_{best}$ with (15).
15.      if $F_i$ < Fbest then
16.        Fbest = $F_i$
17.        $X_{best}$ = $X_i$
18.      end if
19.  end for
20. end for
/* Operation ending */
Output: $X_{best}$, Fbest
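For intuition, the divided-difference step of Eq. (14) is shown below in one dimension; the vertex formula is exact for a quadratic interpolant, and the epsilon guards are our addition:

```python
# Sketch of the Newton interpolation step, Eq. (14), in one dimension.
def newton_vertex(x_last, x_i, x_best, f_last, f_i, f_best, eps=1e-12):
    d1 = (f_i - f_last) / (x_i - x_last + eps)         # f[x_last, x_i]
    d1b = (f_best - f_i) / (x_best - x_i + eps)        # f[x_i, x_best]
    d2 = (d1b - d1) / (x_best - x_last + eps)          # f[x_last, x_i, x_best]
    return 0.5 * (x_last + x_i) - d1 / (2 * d2 + eps)  # minimum of the interpolant
```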

2.6. Association Optimization Algorithm (HGAO)

In this section, we integrate the quadratic interpolation horned lizard optimization algorithm (QIHLOA) and the Newton interpolation giant armadillo algorithm (NIGAO) to form an HGAO algorithm. The overall framework of the HGAO algorithm is shown in Figure 2. It is important to note that both the QIHLOA and NIGAO algorithms produce the current optimal individual after each iteration. Moreover, new individuals are generated by using the following formula:
$$ X_t^{new} = \beta_1 X_i + \beta_2 Y_i, \tag{16} $$
where β1 and β2 are the weights of the NIGAO and QIHLOA algorithms, respectively; $X_i$ and $Y_i$ represent the i-th population individuals in the t-th iteration of the NIGAO and QIHLOA algorithms, respectively; and $X_t^{new}$ is the new individual after integration.
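The combination itself is a one-line operation; a sketch is given below. In the experiments, the weights satisfy β1 + β2 = 1 (e.g., 0.3 and 0.7):

```python
# Sketch of the HGAO combination step, Eq. (16).
def hgao_combine(x_nigao, y_qihloa, beta1, beta2):
    return beta1 * x_nigao + beta2 * y_qihloa   # beta1 + beta2 = 1 in the experiments
```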
Newton interpolation is widely applied in fields like engineering, physics, and scientific computing for tasks including curve fitting, simulations, and solving differential equations efficiently.

3. Experiments and Analyses

3.1. Experimental Setting

3.1.1. Evaluation Metrics

In this section, we adopt several evaluation metrics, like accuracy, precision, and recall, to evaluate the effectiveness of the proposed algorithm [33,34,35,36,37,38,39,40,41,42], which are as follows:
$$ Accuracy = \frac{TP + TN}{TP + TN + FP + FN}, \quad Precision = \frac{TP}{TP + FP}, \quad Recall = \frac{TP}{TP + FN}, \quad F1\text{-}Score = \frac{2 \times Precision \times Recall}{Precision + Recall}, \tag{17} $$
where TP denotes that the prediction is 1 and the actual value is 1 (a correct prediction); FP indicates that the prediction is 1 but the actual value is 0 (an incorrect prediction); FN denotes that the prediction is 0 but the actual value is 1 (an incorrect prediction); and TN denotes that the prediction is 0 and the actual value is 0 (a correct prediction).
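These four metrics follow directly from the confusion matrix counts; a straightforward sketch:

```python
# Sketch of Eq. (17): metrics from binary confusion matrix counts.
def classification_metrics(tp, tn, fp, fn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1
```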

3.1.2. Datasets

We conducted experiments using five different datasets; their detailed information is shown in Table 1. Some example images from these datasets are illustrated in Figure 3 to provide a visual understanding of their characteristics.
(a)
UC Merced Land Use Dataset (UCM) [22]: This is a standard dataset for remote sensing image classification released by the University of California, Merced. It contains 2100 high-resolution aerial images of 256 × 256 pixels, covering 21 different land use types, including farmland, forests, highways, and parking lots.
(b)
LC25000 [20]: This consists of 25,000 color histopathological images, which can be used for classification tasks on lung and colon cancers. The dataset is evenly divided into five categories: lung adenocarcinoma, lung squamous cell carcinoma, benign lung tissue, colon adenocarcinoma, and benign colon tissue.
(c)
PlantVillage [21]: This is an image dataset that contains 39 types of plant diseases, which is widely used in agricultural AI research. Moreover, this dataset includes 61,486 color images with a resolution of 256 × 256 pixels, which covers healthy and diseased leaf samples from common crops like vegetables, fruits, and grains.
(d)
CMI5 (Chinese Medicine Identification 5 Dataset): We gathered this dataset, which contains images of traditional Chinese medicinal herbs. It consists of five categories: mint, fritillaria, honeysuckle, ophiopogon, and ginseng. Each category contains 2020 images, for a total of 10,100 images.
(e)
AIDER (Aerial Image Dataset for Emergency Response applications) [23]: This is an aerial image dataset that is primarily used for classification tasks in emergency response scenarios. This dataset contains 5500 images with 5 categories, e.g., fire or smoke, floods, building collapse or debris, traffic accidents, and normal situations.
To fairly evaluate the proposed algorithm, 80% of each dataset was used for training and the remaining 20% for testing; 10% of the training set was further held out for validation. Regarding input, all images were resized to 224 × 224 pixels. The batch size was set to 16, and stochastic gradient descent (SGD) was used as the optimizer. All experiments were conducted on a workstation (sourced from China) equipped with two NVIDIA RTX A5000 GPUs (NVIDIA, Santa Clara, CA, USA) and 128 GB of RAM (Kingston, Fountain Valley, CA, USA).
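A minimal sketch of this preprocessing and split, assuming PyTorch/torchvision and a placeholder dataset path:

```python
# Sketch of the data pipeline described above; the dataset path is a placeholder.
import torch
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),   # all images resized to 224 x 224
    transforms.ToTensor(),
])
full = datasets.ImageFolder("path/to/dataset", transform=transform)

n_train = int(0.8 * len(full))                                 # 80% / 20% split
train_set, test_set = random_split(full, [n_train, len(full) - n_train])
n_val = int(0.1 * len(train_set))                              # 10% of train for validation
train_set, val_set = random_split(train_set, [len(train_set) - n_val, n_val])

train_loader = DataLoader(train_set, batch_size=16, shuffle=True)
model = models.densenet121(weights=None)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)       # lr is later tuned by HGAO
```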

3.2. Compared Algorithms

In this section, we compare the proposed HGAO algorithm with several state-of-the-art image classification algorithms to verify its experimental performance. The comparison algorithms are summarized as follows:
(a)
M1: Horned Lizard Optimization Algorithm (HLOA) [8].
(b)
M2: Giant Armadillo Optimization Algorithm (GAO) [9].
(c)
M3: Particle Swarm Optimization (PSO) [29].
(d)
M4: Egret Swarm Optimization Algorithm (ESOA) [30].
(e)
M5: Black Widow Optimization (BWO) [31].
(f)
M6: Transient Search Optimization Algorithm (TSO) [32].
(g)
M7: Whale Optimization Algorithm (WOA) [33].
(h)
M8: Catch fish Optimization Algorithm (CFOA) [34].
(i)
M9: Goose Optimization Algorithm (GO) [35].

3.3. Combination Parameter Selection

In this section, we test two critical parameters of the HGAO algorithm based on DenseNet-121, β1 and β2, which represent the weights of the NIGAO and QIHLOA optimization algorithms, respectively. The performance is compared against different algorithms on five image classification datasets. Our goal is to validate the performance of HGAO when optimizing hyperparameters such as the learning rate and dropout rate.
The learning rate is searched within the range of [0.00001, 0.1], and the dropout rate is within [0.1, 0.6]. To ensure fairness, all models were initialized with identical hyperparameters as follows: a population size of P = 30, a maximum of T = 10 optimization iterations, and a training process of 60 epochs.
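As an illustration, each search agent can be decoded into the two hyperparameters by clipping its position to the stated ranges; the helper below is a hedged sketch rather than the exact implementation:

```python
# Sketch: map a 2-D search agent to (learning rate, dropout rate).
import numpy as np

LR_BOUNDS = (1e-5, 0.1)        # learning-rate search range
DROPOUT_BOUNDS = (0.1, 0.6)    # dropout-rate search range

def decode_agent(agent):
    lr = float(np.clip(agent[0], *LR_BOUNDS))
    dropout = float(np.clip(agent[1], *DROPOUT_BOUNDS))
    return lr, dropout
```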
We conducted five tests by changing the values of β1 and β2. β1 is set to 0.1, 0.3, 0.5, 0.7, and 0.9. β2 is set to 0.9, 0.7, 0.5, 0.3, and 0.1, respectively. These adjustments allow us to observe the effect of different weight configurations on the algorithm’s performance on various datasets. The specific results are listed in Table 2, Table 3, Table 4, Table 5 and Table 6.
In the experimental analysis, the behavior of the HGAO algorithm differs across the five datasets: a different combination of β1 and β2 performs best on each. On the LC25000 and PlantVillage datasets, the combination (β1 = 0.3, β2 = 0.7) has the best performance, achieving high test accuracy and low test loss. It is important to note that a moderate NIGAO weight combined with a high QIHLOA weight effectively prevents overfitting during optimization and maintains stable performance on complex data. On the UC Merced Land Use Dataset, the combination (β1 = 0.7, β2 = 0.3) obtains the best results; high NIGAO weights are beneficial for remote sensing image classification tasks. In particular, the optimization capabilities of NIGAO enhance the model's classification performance on complex data. For the Chinese herbal medicine dataset, the combination (β1 = 0.1, β2 = 0.9) shows excellent results, indicating that high QIHLOA weights can exploit local features in high-dimensional datasets to enhance the model's generalization ability.
Finally, for the AIDER dataset, the combination (β1 = 0.3, β2 = 0.7) once again shows its advantages; it obtains the highest test accuracy and lowest test loss, which indicates that this combination is highly effective for classification tasks.
In summary, the experimental results from the five datasets show that different combinations (β1 and β2) have a significant effect on the performance of the HGAO algorithm on various datasets. Among them, the configuration β1 = 0.3, β2 = 0.7 obtains the best performance on most datasets, which can balance the weights of NIGAO and QIHLOA, thus leading to excellent performance in both the training and testing phases.
The average accuracy of HGAO and the nine compared algorithms during the computation process is displayed in Figure 4. Due to the density of the 10 lines, the results are split into two graphs; the convergence graphs for the remaining datasets can be accessed via the hyperlink in the Data Availability Statement. In the early epochs, the HLOA algorithm outperforms the HGAO algorithm, as it finds an initial solution faster. However, HGAO gradually surpasses HLOA with its optimization strategy, and as training progresses, it shows stronger global optimization capabilities. The advantage of HGAO lies in its ability to effectively escape local optima, thus obtaining higher accuracy and lower loss in the later stages of training.
Table 7, Table 8, Table 9, Table 10 and Table 11 present the performance of the compared algorithms on the five datasets. Across these tables, the HGAO algorithm obtains the lowest training and testing losses. The convergence curves in Figure 4 illustrate the accuracy of HGAO on the training and testing sets. For the LC25000 dataset, HGAO achieves excellent performance, with a training accuracy of 99.3% (training loss 0.014) and a test accuracy of 95.1% (test loss 0.140). Its precision, recall, and F1-score reach 0.95, outperforming the other compared algorithms.
Although HLOA and GAO perform splendidly during training, the performance of HGAO in the testing phase is 0.52% higher than the accuracy of HLOA, which achieves a better balance between training and testing performance. For the other four datasets, i.e., PlantVillage, UC Merced Land Use Dataset, Chinese Medicinal Materials, and AIDER, HGAO also obtains outstanding performance. In particular, it achieves a test accuracy of 99.06% and a test loss of 0.0333 on the CMI5 dataset. These findings indicate that the HGAO algorithm with accurate hyperparameters can maintain high accuracy during the training process, and it also significantly improves generalization during the testing process, thus showing its exceptional performance in various image classification tasks.

4. Conclusions

This paper proposes an innovative HGAO algorithm to enhance the accuracy and recognition performance in multi-domain image classification tasks. It combines two different evolutionary algorithm strategies, thus enhancing the local search capability and global search precision by integrating quadratic interpolation and Newton interpolation. A novel linear combination strategy is also employed to enhance overall performance. To validate the effectiveness of this algorithm, extensive experiments were conducted on five multi-domain image datasets.
Extensive experiments conducted on five diverse image classification datasets—spanning remote sensing, medical imaging, plant disease detection, traditional Chinese medicine, and emergency response—demonstrate that HGAO consistently outperforms multiple baseline optimization methods. In particular, it achieves superior classification accuracy, improved stability, and stronger global optimization capability. These results validate the robustness, adaptability, and strong generalization ability of the proposed algorithm across heterogeneous data domains.
In the future, we plan to further expand the HGAO algorithm by incorporating various advanced evolutionary algorithms and optimization strategies, thus exploring its potential applications in other complex optimization issues. Additionally, we plan to investigate how adaptively adjusting the weight parameters of the combination strategy for different types of tasks can further enhance the algorithm’s generalization ability and robustness to promote the practical application of the HGAO algorithm in multi-domain optimization problems.

Author Contributions

Conceptualization, P.W.; methodology, R.Z.; software, Z.L.; validation, P.W.; formal analysis, P.W.; investigation, R.Z.; resources, J.G.; data curation, J.G.; writing—original draft preparation, P.W. and Z.L.; writing—review and editing, P.W. and Z.L.; visualization, P.W.; supervision, J.G.; project administration, J.G.; funding acquisition, P.W. and Z.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

We collect five different datasets. CMI5 is self-constructed, which can be obtained from the following link: https://github.com/aliwa8168/HGAO/tree/main/Datasets/Chinese%20herbal%20medicine%20Datasets (accessed on 16 August 2025). The remaining four datasets can be obtained from relevant literature.

Acknowledgments

This research was funded by the National Funded Postdoctoral Research Program (GZC20241900); the Natural Science Foundation Program of Xinjiang Uygur Autonomous Region (2024D01A141); the Sichuan University students innovation and entrepreneurship training program (S202410621082); the Chengdu University of Information Technology key project of education reform (JYJG2024206); the open project of Dazhou Key Laboratory of Government Data Security under grants ZSAQ202501, ZSAQ202502, and ZSAQ202507; and the Key Laboratory of Remote Sensing Application and Innovation (LRSAI-2025004).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Siuly, S.; Khare, S.K.; Kabir, E.; Sadiq, M.T.; Wang, H. An efficient Parkinson’s disease detection framework: Leveraging time-frequency representation and AlexNet convolutional neural network. Comput. Biol. Med. 2024, 174, 108462. [Google Scholar] [CrossRef]
  2. Ma, L.; Wu, H.; Samundeeswari, P. GoogLeNet-AL: A fully automated adaptive model for lung cancer detection. Pattern Recognit. 2024, 155, 110657. [Google Scholar] [CrossRef]
  3. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  4. Yadav, H.; Thakkar, A. NOA-LSTM: An efficient LSTM cell architecture for time series forecasting. Expert Syst. Appl. 2024, 238, 122333. [Google Scholar] [CrossRef]
  5. Visin, F.; Ciccone, M.; Romero, A.; Kastner, K.; Cho, K.; Bengio, Y.; Matteucci, M.; Courville, A. Reseg: A recurrent neural network-based model for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Las Vegas, NV, USA, 27–30 June 2016; pp. 41–48. [Google Scholar]
  6. Ovadia, O.; Kahana, A.; Stinis, P.; Turkel, E.; Givoli, D.; Karniadakis, G.E. Vito: Vision transformer-operator. Comput. Methods Appl. Mech. Eng. 2024, 428, 117109. [Google Scholar] [CrossRef]
  7. Touvron, H.; Cord, M.; Douze, M.; Massa, F.; Sablayrolles, A.; Jégou, H. Training data-efficient image transformers & distillation through attention. In Proceedings of the International Conference on Machine Learning, Kunming, China, 16–18 July 2021; pp. 10347–10357. [Google Scholar]
  8. Peraza-Vázquez, H.; Peña-Delgado, A.; Merino-Treviño, M.; Morales-Cepeda, A.B.; Sinha, N. A novel metaheuristic inspired by horned lizard defense tactics. Artif. Intell. Rev. 2024, 57, 59. [Google Scholar] [CrossRef]
  9. Alsayyed, O.; Hamadneh, T.; Al-Tarawneh, H.; Alqudah, M.; Gochhait, S.; Leonova, I.; Malik, O.P.; Dehghani, M. Giant Armadillo optimization: A new bio-inspired metaheuristic algorithm for solving optimization problems. Biomimetics 2023, 8, 619. [Google Scholar] [CrossRef]
  10. Chen, T.; Li, S.; Qiao, Y.; Luo, X. A robust and efficient ensemble of diversified evolutionary computing algorithms for accurate robot calibration. IEEE Trans. Instrum. Meas. 2024, 73, 7501814. [Google Scholar] [CrossRef]
  11. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708. [Google Scholar]
  12. Wei, P.; Hu, C.; Hu, J.; Li, Z.; Qin, W.; Gan, J.; Chen, T.; Shu, H.; Shang, M. A Novel Black Widow Optimization Algorithm Based on Lagrange Interpolation Operator for ResNet18. Biomimetics 2025, 10, 361. [Google Scholar] [CrossRef]
  13. Chang, S.; Yang, G.; Cheng, J.; Feng, Z.; Fan, Z.; Ma, X.; Li, Y.; Yang, X.; Zhao, C. Recognition of wheat rusts in a field environment based on improved DenseNet. Biosyst. Eng. 2024, 238, 10–21. [Google Scholar] [CrossRef]
  14. Li, Z.; Li, S.; Luo, X. An overview of calibration technology of industrial robots. IEEE/CAA J. Autom. Sin. 2021, 8, 23–36. [Google Scholar] [CrossRef]
  15. Song, Y.; Dai, Y.; Liu, W.; Liu, Y.; Liu, X.; Yu, Q.; Liu, X.; Que, N.; Li, M. DesTrans: A medical image fusion method based on transformer and improved DenseNet. Comput. Biol. Med. 2024, 174, 108463. [Google Scholar] [CrossRef] [PubMed]
  16. Wu, D.; Ying, Y.; Zhou, M.; Pan, J.; Cui, D. Improved ResNet-50 deep learning algorithm for identifying chicken gender. Comput. Electron. Agric. 2023, 205, 107622. [Google Scholar] [CrossRef]
  17. Talukder, M.A.; Layek, M.A.; Kazi, M.; Uddin, M.A.; Aryal, S. Empowering COVID-19 detection: Optimizing performance through fine-tuned efficientnet deep learning architecture. Comput. Biol. Med. 2024, 168, 107789. [Google Scholar] [CrossRef]
  18. Shafik, W.; Tufail, A.; De Silva Liyanage, C.; Apong, R.A.A.H.M. Using transfer learning-based plant disease classification and detection for sustainable agriculture. BMC Plant Biol. 2024, 24, 136. [Google Scholar] [CrossRef]
  19. Hamza, A.; Khan, M.A.; Rehman, S.U.; Al-Khalidi, M.; Alzahrani, A.I.; Alalwan, N. A novel bottleneck residual and self-attention fusion-assisted architecture for land use recognition in remote sensing images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 2995–3009. [Google Scholar] [CrossRef]
  20. Lee, G.Y.; Dam, T.; Ferdaus, M.M.; Poenar, D.P.; Duong, V.N. Watt-effnet: A lightweight and accurate model for classifying aerial disaster images. IEEE Geosci. Remote Sens. Lett. 2023, 20, 6005205. [Google Scholar] [CrossRef]
  21. Deng, Y.; Hou, X.; Li, B.; Wang, J.; Zhang, Y. A highly powerful calibration method for robotic smoothing system calibration via using adaptive residual extended Kalman filter. Robot. Comput.-Integr. Manuf. 2024, 86, 102660. [Google Scholar] [CrossRef]
  22. She, S.; Meng, T.; Zheng, X.; Shao, Y.; Hu, G.; Yin, W. Evaluation of defects depth for metal sheets using 4-coil excitation array eddy current sensor and improved ResNet18 network. IEEE Sens. J. 2024, 24, 18955–18967. [Google Scholar] [CrossRef]
  23. Qaraad, M.; Amjad, S.; Hussein, N.K.; Farag, M.A.; Mirjalili, S.; Elhosseini, M.A. Quadratic interpolation and a new local search approach to improve particle swarm optimization: Solar photovoltaic parameter estimation. Expert Syst. Appl. 2024, 236, 121417. [Google Scholar] [CrossRef]
  24. Li, Z.; Li, S.; Luo, X. Using quadratic interpolated beetle antennae search to enhance robot arm calibration accuracy. IEEE Robot. Autom. Lett. 2022, 7, 12046–12053. [Google Scholar] [CrossRef]
  25. Almutairi, N.; Saber, S. Application of a time-fractal fractional derivative with a power-law kernel to the Burke-Shaw system based on Newton’s interpolation polynomials. MethodsX 2024, 12, 102510. [Google Scholar] [CrossRef]
  26. Li, Z.; Li, S.; Bamasag, O.O.; Alhothali, A.; Luo, X. Diversified regularization enhanced training for effective manipulator calibration. IEEE Trans. Neural Netw. Learn. Syst. 2023, 34, 8778–8790. [Google Scholar] [CrossRef]
  27. Wang, D.; Zhai, L.; Fang, J.; Li, Y.; Xu, Z. psoResNet: An improved PSO-based residual network search algorithm. Neural Netw. 2024, 172, 106104. [Google Scholar] [CrossRef]
  28. Kong, Z.; Le, D.N.; Pham, T.H.; Poologanathan, K.; Papazafeiropoulos, G.; Vu, Q.V. Hybrid machine learning with optimization algorithm and resampling methods for patch load resistance prediction of unstiffened and stiffened plate girders. Expert Syst. Appl. 2024, 249, 123806. [Google Scholar] [CrossRef]
  29. Lee, J.H.; Song, J.; Kim, D.; Kim, J.W.; Kim, Y.J.; Jung, S.Y. Particle swarm optimization algorithm with intelligent particle number control for optimal design of electric machines. IEEE Trans. Ind. Electron. 2018, 65, 1791–1798. [Google Scholar] [CrossRef]
  30. Wei, P.; Shang, M.; Zhou, J.; Shi, X. Efficient Adaptive Learning Rate for Convolutional Neural Network Based on Quadratic Interpolation Egret Swarm Optimization Algorithm. Heliyon 2024, 10, e37814. [Google Scholar] [CrossRef]
  31. Hayyolalam, V.; Kazem, A.A. Black widow optimization algorithm: A novel meta-heuristic approach for solving engineering optimization problems. Eng. Appl. Artif. Intell. 2020, 87, 103249. [Google Scholar] [CrossRef]
  32. Shaheen, M.A.; Ullah, Z.; Hasanien, H.M.; Tostado-Véliz, M.; Ji, H.; Qais, M.H.; Alghuwainem, S.; Jurado, F. Enhanced transient search optimization algorithm-based optimal reactive power dispatch including electric vehicles. Energy 2023, 277, 127711. [Google Scholar] [CrossRef]
  33. Sapnken, F.E.; Tazehkandgheshlagh, A.K.; Diboma, B.S.; Hamaidi, M.; Noumo, P.G.; Wang, Y.; Tamba, J.G. A whale optimization algorithm-based multivariate exponential smoothing grey-holt model for electricity price forecasting. Expert Syst. Appl. 2024, 255, 124663. [Google Scholar] [CrossRef]
  34. Jia, H.; Wen, Q.; Wang, Y.; Mirjalili, S. Catch fish optimization algorithm: A new human behavior algorithm for solving clustering problems. Clust. Comput. 2024, 27, 13295–13332. [Google Scholar] [CrossRef]
  35. Hamad, R.K.; Rashid, T.A. GOOSE algorithm: A powerful optimization tool for real-world engineering challenges and beyond. Evol. Syst. 2024, 15, 1249–1274. [Google Scholar] [CrossRef]
  36. Mahmood, K.; Shamshad, S.; Saleem, M.A.; Kharel, R.; Das, A.K.; Shetty, S.; Rodrigues, J.J.P.C. Blockchain and PUF-based secure key establishment protocol for cross-domain digital twins in industrial Internet of Things architecture. J. Adv. Res. 2024, 62, 155–163. [Google Scholar] [CrossRef]
  37. Peng, L.; Cai, Z.; Heidari, A.A.; Zhang, L.; Chen, H. Hierarchical Harris hawks optimizer for feature selection. J. Adv. Res. 2023, 53, 261–278. [Google Scholar] [CrossRef]
  38. Khan, A.T.; Li, S.; Zhou, X. Trajectory optimization of 5-link biped robot using beetle antennae search. IEEE Trans. Circuits Syst. II Express Briefs 2021, 68, 3276–3280. [Google Scholar] [CrossRef]
  39. Zhou, S.; Xing, L.; Zheng, X.; Du, N.; Wang, L.; Zhang, Q. A self-adaptive differential evolution algorithm for scheduling a single batch-processing machine with arbitrary job sizes and release times. IEEE Trans. Cybern. 2021, 51, 1430–1442. [Google Scholar] [CrossRef] [PubMed]
  40. Li, S.; He, J.; Li, Y.; Rafique, M.U. Distributed recurrent neural networks for cooperative control of manipulators: A game-theoretic perspective. IEEE Trans. Neural Netw. Learn. Syst. 2017, 28, 415–426. [Google Scholar] [CrossRef] [PubMed]
  41. Li, S.; Zhang, Y.; Jin, L. Kinematic control of redundant manipulators using neural networks. IEEE Trans. Neural Netw. Learn. Syst. 2017, 28, 2243–2254. [Google Scholar] [CrossRef]
  42. Giammarco, M.D.; Martinelli, F.; Santone, A.; Cesarelli, M.; Mercaldo, F. Colon cancer diagnosis by means of explainable deep learning. Sci. Rep. 2024, 14, 15334. [Google Scholar] [CrossRef]
Figure 1. The structure of DenseNet-121, where each dense block consists of batch normalization (BN), ReLU, Conv(1 × 1), and Conv(3 × 3). Conv(1 × 1) represents a convolution operation with a kernel size of 1 × 1, and Conv(3 × 3) is a convolution operation with a kernel size of 3 × 3.
Figure 2. The overall framework of the HGAO algorithm.
Figure 3. Representative sample images from the UCM, LC25000, PlantVillage, CMI5, and AIDER datasets.
Figure 4. The training accuracy on LC25000.
Table 1. Detailed information for relevant datasets.

Dataset | Categories | Size | Image Count
UC Merced Land Use Data | 21 | 418 MB | 2100
LC25000 | 5 | 1.75 GB | 25,000
AIDER | 5 | 263 MB | 6433
PlantVillage | 39 | 898 MB | 61,486
Chinese Medicinal Materials | 5 | 117 MB | 10,100
Table 2. Experimental results on LC25000.

β1 | β2 | Training Accuracy | Training Loss | Test Accuracy | Test Loss | Precision | Recall | F1-Score
0.1 | 0.9 | 0.992 | 0.029 | 0.950 | 0.153 | 0.950 | 0.950 | 0.950
0.3 | 0.7 | 0.993 | 0.014 | 0.951 | 0.140 | 0.950 | 0.950 | 0.950
0.5 | 0.5 | 0.983 | 0.054 | 0.948 | 0.189 | 0.950 | 0.950 | 0.950
0.7 | 0.3 | 0.989 | 0.039 | 0.949 | 0.223 | 0.950 | 0.950 | 0.950
0.9 | 0.1 | 0.985 | 0.046 | 0.942 | 0.206 | 0.940 | 0.940 | 0.940
Table 3. Experimental results on PlantVillage.

β1 | β2 | Training Accuracy | Training Loss | Test Accuracy | Test Loss | Precision | Recall | F1-Score
0.1 | 0.9 | 0.998 | 0.006 | 0.969 | 0.104 | 0.970 | 0.970 | 0.970
0.3 | 0.7 | 0.998 | 0.004 | 0.972 | 0.092 | 0.970 | 0.970 | 0.970
0.5 | 0.5 | 0.995 | 0.016 | 0.969 | 0.099 | 0.970 | 0.970 | 0.970
0.7 | 0.3 | 0.998 | 0.006 | 0.970 | 0.104 | 0.970 | 0.970 | 0.970
0.9 | 0.1 | 0.997 | 0.084 | 0.967 | 0.111 | 0.970 | 0.970 | 0.970
Table 4. Experimental results on UC Merced land use dataset.

β1 | β2 | Training Accuracy | Training Loss | Test Accuracy | Test Loss | Precision | Recall | F1-Score
0.1 | 0.9 | 0.999 | 0.005 | 0.991 | 0.033 | 0.990 | 0.990 | 0.990
0.3 | 0.7 | 0.998 | 0.008 | 0.985 | 0.052 | 0.990 | 0.990 | 0.990
0.5 | 0.5 | 0.998 | 0.007 | 0.987 | 0.390 | 0.990 | 0.990 | 0.990
0.7 | 0.3 | 0.997 | 0.009 | 0.980 | 0.066 | 0.980 | 0.980 | 0.980
0.9 | 0.1 | 0.996 | 0.010 | 0.979 | 0.069 | 0.980 | 0.980 | 0.980
Table 5. Experimental results on CMI5.

β1 | β2 | Training Accuracy | Training Loss | Test Accuracy | Test Loss | Precision | Recall | F1-Score
0.1 | 0.9 | 0.999 | 0.005 | 0.990 | 0.033 | 0.990 | 0.990 | 0.990
0.3 | 0.7 | 0.998 | 0.008 | 0.985 | 0.052 | 0.990 | 0.990 | 0.990
0.5 | 0.5 | 0.999 | 0.006 | 0.987 | 0.039 | 0.990 | 0.990 | 0.990
0.7 | 0.3 | 0.997 | 0.009 | 0.980 | 0.066 | 0.980 | 0.980 | 0.980
0.9 | 0.1 | 0.996 | 0.009 | 0.979 | 0.069 | 0.980 | 0.980 | 0.980
Table 6. Experimental results on AIDER.

β1 | β2 | Training Accuracy | Training Loss | Test Accuracy | Test Loss | Precision | Recall | F1-Score
0.1 | 0.9 | 0.991 | 0.021 | 0.910 | 0.367 | 0.910 | 0.910 | 0.910
0.3 | 0.7 | 0.998 | 0.004 | 0.917 | 0.339 | 0.920 | 0.920 | 0.920
0.5 | 0.5 | 0.994 | 0.010 | 0.909 | 0.364 | 0.910 | 0.910 | 0.910
0.7 | 0.3 | 0.994 | 0.008 | 0.914 | 0.349 | 0.910 | 0.910 | 0.910
0.9 | 0.1 | 0.992 | 0.016 | 0.905 | 0.386 | 0.910 | 0.900 | 0.900
Table 7. Algorithm comparison on LC25000.

Compared Algorithm | Training Accuracy | Training Loss | Test Accuracy | Test Loss | Precision | Recall | F1-Score
HLOA | 0.988 | 0.036 | 0.947 | 0.194 | 0.950 | 0.950 | 0.950
GAO | 0.986 | 0.033 | 0.936 | 0.220 | 0.940 | 0.940 | 0.940
PSO | 0.984 | 0.035 | 0.934 | 0.230 | 0.930 | 0.930 | 0.930
ESOA | 0.985 | 0.032 | 0.932 | 0.250 | 0.930 | 0.930 | 0.930
BWO | 0.981 | 0.082 | 0.918 | 0.254 | 0.920 | 0.920 | 0.920
TSO | 0.982 | 0.048 | 0.925 | 0.240 | 0.930 | 0.930 | 0.930
WOA | 0.979 | 0.061 | 0.903 | 0.313 | 0.900 | 0.900 | 0.900
CFOA | 0.982 | 0.058 | 0.896 | 0.339 | 0.900 | 0.900 | 0.900
GO | 0.976 | 0.063 | 0.923 | 0.258 | 0.920 | 0.920 | 0.920
HGAO | 0.993 | 0.014 | 0.951 | 0.140 | 0.950 | 0.950 | 0.950
Table 8. Algorithm comparison on PlantVillage.

Compared Algorithm | Training Accuracy | Training Loss | Test Accuracy | Test Loss | Precision | Recall | F1-Score
HLOA | 0.997 | 0.009 | 0.970 | 0.102 | 0.970 | 0.970 | 0.970
GAO | 0.996 | 0.011 | 0.969 | 0.105 | 0.970 | 0.970 | 0.970
PSO | 0.991 | 0.026 | 0.964 | 0.112 | 0.960 | 0.960 | 0.960
ESOA | 0.994 | 0.017 | 0.965 | 0.111 | 0.970 | 0.960 | 0.960
BWO | 0.994 | 0.019 | 0.964 | 0.113 | 0.960 | 0.960 | 0.960
TSO | 0.989 | 0.030 | 0.961 | 0.135 | 0.960 | 0.960 | 0.960
WOA | 0.988 | 0.034 | 0.967 | 0.106 | 0.970 | 0.970 | 0.970
CFOA | 0.991 | 0.022 | 0.965 | 0.109 | 0.960 | 0.960 | 0.960
GO | 0.989 | 0.028 | 0.961 | 0.135 | 0.960 | 0.960 | 0.960
HGAO | 0.998 | 0.004 | 0.972 | 0.092 | 0.970 | 0.970 | 0.970
Table 9. Algorithm comparison on UC Merced land use dataset.

Compared Algorithm | Training Accuracy | Training Loss | Test Accuracy | Test Loss | Precision | Recall | F1-Score
HLOA | 0.995 | 0.016 | 0.921 | 0.252 | 0.920 | 0.920 | 0.920
GAO | 0.991 | 0.039 | 0.917 | 0.384 | 0.920 | 0.920 | 0.920
PSO | 0.974 | 0.1888 | 0.911 | 0.451 | 0.910 | 0.910 | 0.910
ESOA | 0.993 | 0.028 | 0.882 | 0.620 | 0.880 | 0.880 | 0.880
BWO | 0.984 | 0.104 | 0.890 | 0.575 | 0.890 | 0.890 | 0.890
TSO | 0.974 | 0.172 | 0.871 | 0.670 | 0.870 | 0.870 | 0.870
WOA | 0.966 | 0.241 | 0.852 | 0.783 | 0.850 | 0.850 | 0.850
CFOA | 0.985 | 0.101 | 0.894 | 0.576 | 0.890 | 0.890 | 0.890
GO | 0.978 | 0.136 | 0.898 | 0.521 | 0.900 | 0.900 | 0.900
HGAO | 0.998 | 0.008 | 0.923 | 0.224 | 0.920 | 0.920 | 0.920
Table 10. Algorithm comparison on CMI5.

Compared Algorithm | Training Accuracy | Training Loss | Test Accuracy | Test Loss | Precision | Recall | F1-Score
HLOA | 0.997 | 0.008 | 0.987 | 0.446 | 0.990 | 0.990 | 0.990
GAO | 0.994 | 0.030 | 0.973 | 0.092 | 0.970 | 0.970 | 0.970
PSO | 0.991 | 0.064 | 0.957 | 0.131 | 0.960 | 0.960 | 0.960
ESOA | 0.991 | 0.050 | 0.962 | 0.108 | 0.960 | 0.960 | 0.960
BWO | 0.989 | 0.054 | 0.972 | 0.085 | 0.970 | 0.970 | 0.970
TSO | 0.985 | 0.077 | 0.957 | 0.132 | 0.960 | 0.960 | 0.960
WOA | 0.988 | 0.068 | 0.966 | 0.108 | 0.970 | 0.970 | 0.970
CFOA | 0.985 | 0.073 | 0.970 | 0.080 | 0.970 | 0.970 | 0.970
GO | 0.987 | 0.095 | 0.944 | 0.168 | 0.940 | 0.940 | 0.940
HGAO | 0.999 | 0.005 | 0.991 | 0.033 | 0.990 | 0.990 | 0.990
Table 11. Algorithm comparison on AIDER.

Compared Algorithm | Training Accuracy | Training Loss | Test Accuracy | Test Loss | Precision | Recall | F1-Score
HLOA | 0.994 | 0.010 | 0.912 | 0.352 | 0.910 | 0.910 | 0.910
GAO | 0.992 | 0.014 | 0.908 | 0.374 | 0.910 | 0.910 | 0.910
PSO | 0.987 | 0.039 | 0.896 | 0.556 | 0.900 | 0.900 | 0.900
ESOA | 0.987 | 0.035 | 0.904 | 0.392 | 0.900 | 0.900 | 0.900
BWO | 0.978 | 0.063 | 0.872 | 0.613 | 0.870 | 0.870 | 0.870
TSO | 0.978 | 0.068 | 0.893 | 0.555 | 0.890 | 0.890 | 0.890
WOA | 0.983 | 0.057 | 0.870 | 0.745 | 0.870 | 0.870 | 0.870
CFOA | 0.986 | 0.044 | 0.887 | 0.649 | 0.890 | 0.890 | 0.890
GO | 0.977 | 0.061 | 0.873 | 0.687 | 0.870 | 0.870 | 0.870
HGAO | 0.998 | 0.004 | 0.917 | 0.339 | 0.920 | 0.920 | 0.920
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
