A Feature Selection Based on Improved Artificial Hummingbird Algorithm Using Random Opposition-Based Learning for Solving Waste Classification Problem

Recycling is the most effective method for reducing waste generation, protecting the environment, and boosting the national economy. The productivity and effectiveness of the recycling process strongly depend on the cleanliness and precision of the processed primary sources. However, recycling operations are often labor intensive, and computer vision and deep learning (DL) techniques aid in automatically detecting and classifying trash types during recycling. Due to the dimensional challenge posed by pre-trained CNN networks, the scientific community has developed numerous techniques inspired by biology, swarm intelligence theory, physics, and mathematical rules. This research applies a new meta-heuristic algorithm called the artificial hummingbird algorithm (AHA) to solving the waste classification problem based on feature selection. However, the performance of the AHA is barely satisfactory; it may become stuck in local optima or converge slowly. To overcome these limitations, this paper develops two improved versions of the AHA, called the AHA-ROBL and the AHA-OBL. These two versions enhance the exploitation stage by using random opposition-based learning (ROBL) and opposition-based learning (OBL) to prevent local optima and accelerate the convergence. The main purpose of this paper is to apply the AHA-ROBL and AHA-OBL to select the relevant deep features provided by two pre-trained CNN models (VGG19 & ResNet20) for waste classification. The TrashNet dataset is used to verify the performance of the two proposed approaches (the AHA-ROBL and AHA-OBL).
The effectiveness of the suggested methods (the AHA-ROBL and AHA-OBL) is compared with that of 12 modern and competitive optimizers, namely the artificial hummingbird algorithm (AHA), Harris hawks optimizer (HHO), salp swarm algorithm (SSA), aquila optimizer (AO), Henry gas solubility optimizer (HGSO), particle swarm optimizer (PSO), grey wolf optimizer (GWO), Archimedes optimization algorithm (AOA), manta ray foraging optimizer (MRFO), sine cosine algorithm (SCA), marine predators algorithm (MPA), and search and rescue optimization algorithm (SAR). A fair evaluation of the proposed algorithms' performance is achieved using the same dataset, and the performance of the two proposed algorithms is analyzed in terms of different measures. The experimental results confirm the two proposed algorithms' superiority over the comparative algorithms: the AHA-ROBL and AHA-OBL produce the optimal number of selected features with the highest degree of precision.


• In this study, the AHA is enhanced for the first time for solving the feature selection problem.
• An enhanced version of the AHA is proposed based on two operators: random opposition-based learning (ROBL) and opposition-based learning (OBL).
• The two proposed models are compared with the original algorithm and 12 different algorithms.
• The study applies the modified algorithms AHA-ROBL and AHA-OBL to the TrashNet database by using two pre-trained networks: VGG19 and ResNet.
• The two proposed algorithms each demonstrate greater robustness and stability than other recent algorithms.
Our paper is structured as follows: Section 2 conducts a literature review, while Section 3 discusses the fundamentals of the AHA-ROBL optimization technique for pretrained neural networks. Section 4 summarizes the acquired results regarding fitness, accuracy, and feature selection.

Literature Review
In recent years, considerable research has been conducted on garbage image classification. This paper presents the work of domestic and international scholars in the fields of image recognition and waste classification.

Waste Recycling Using Traditional Machine-Learning Algorithms
Different machine-learning algorithms have been applied to the TrashNet data. Yang et al. achieved an accuracy of 63% using the SVM algorithm [101], and Costa et al. achieved an accuracy of 88% using the kNN algorithm [102]. Satvilkar classified garbage images from the TrashNet dataset with an accuracy of 62.61% using the RF algorithm and with an accuracy of 70.1% using the XGBoost algorithm [103].

Waste Recycling Using Deep-Learning Algorithms
Deep- and machine-learning models have been combined to classify trash types. Researchers in [103] conducted an experiment in which they examined solely recyclable waste material classified into five distinct categories. The CNN, k-nearest neighbor (kNN), random forest (RF), and SVM models were all used, with the CNN model achieving the highest classification accuracy of 89.91%. The authors of [104] evaluated the kNN, RF, SVM, and VGG16 models in combination; a processed dataset was created from photos of four distinct recycling materials, with a success rate of 93%. Zhu et al. [105] established an identification approach for plastic solid waste (PSW) chemicals classified into six types based on near-infrared (NIR) reflectance spectroscopy, principal component analysis (PCA), and the support vector machine (SVM) model, with a 97.5% classification accuracy. Özkan et al. [106] classified garbage into plastic and non-plastic categories.

Waste Recycling Using Deep-Transfer Learning
Several studies utilizing the TrashNet dataset to evaluate proposed solutions to the trash classification problem are summarized in [3,8,107]; detailed descriptions of this dataset are provided in a later section.
First, Aral et al. classified trash from the TrashNet dataset using different transfer learning models. According to the experimental findings, the DenseNet121 model had the highest accuracy, achieving 95% [107].
Then, Ruiz et al. used different CNN models and achieved an average accuracy of 88.66% on the TrashNet dataset, producing the best performance results. This method, denoted ResNet-Ruiz, was reimplemented in our experiments [8].
Several well-known CNN models for image classification, such as ResNext [108], ImageNet [109], VGG [110], ResNet [111], and DenseNet [112], can also be used as base models for trash classification. This study determined that among the CNN models listed above, ResNext is the best model for transfer learning to classify trash.
The AHA demonstrates an extremely competitive performance and effectiveness on optimization problems. Moreover, this algorithm has advantages over other algorithms: its straightforward procedure, low computational cost, significant convergence speed, relatively close solutions, problem independence, and gradient-free nature make it a desirable algorithm [113-116]. In the present paper, enhanced AHA algorithms are used to select the most relevant features in the waste classification problem. We propose two new enhanced approaches based on the AHA for FS, namely the AHA-ROBL and AHA-OBL, built on the kNN classifier. Figure 1 illustrates the proposed framework for an improved artificial hummingbird algorithm using random opposition-based learning for solving waste classification problems based on feature selection, which contains the following significant steps:
• Data collection
• Data pre-processing
• Feature extraction using pre-trained deep-learning models (VGG19 and ResNet20)
• Waste classification with the AHA-ROBL: AHA initialization, followed by AHA scoring and AHA updating, using an exploration mode based on the AHA and an exploitation mode based on ROBL
• Prediction and evaluation metrics

Dataset Description
The dataset used and implemented in this research is the TrashNet dataset, which includes 2527 images classified into six categories: cardboard, glass, metal, paper, plastic, and rubbish. This study augmented the original dataset to build a larger one: the augmentation produced 2527 horizontally flipped images, 2527 vertically flipped images, and 2527 random 25° rotations, resulting in 10,108 waste images in total. Additionally, this study compared the outcomes obtained using the 2527 original photos and the 10,108 augmented photos. The dataset was partitioned, with 90% and 10% of each class randomly assigned to the training and testing sets, respectively [117]. Figure 2 shows examples of each category.
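The augmentation arithmetic above (2527 originals plus three augmented copies each) can be sketched as follows; NumPy arrays stand in for the photos, and the random 25° rotation is left as a marked placeholder since an arbitrary-angle rotation would require an imaging library such as PIL:

```python
import numpy as np

def augment(images):
    """Per the augmentation described above: add a horizontally flipped,
    a vertically flipped, and a (placeholder) rotated copy of each image."""
    out = list(images)                        # 2527 originals
    out += [np.fliplr(im) for im in images]   # 2527 horizontal flips
    out += [np.flipud(im) for im in images]   # 2527 vertical flips
    # Placeholder for the random 25-degree rotation: rotating by an
    # arbitrary angle needs an imaging library (e.g., PIL's Image.rotate).
    out += [im.copy() for im in images]       # 2527 "rotated" copies
    return out

rng = np.random.default_rng(0)
dataset = [rng.random((8, 8, 3)) for _ in range(2527)]
augmented = augment(dataset)
print(len(augmented))  # 10108
```

The count matches the paper's total of 10,108 waste images.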

Feature Extraction Using Pre-Trained CNN
The process of feature extraction using a pre-trained CNN is introduced in this section. CNNs are composed of three layer types: convolutional, pooling, and fully connected layers. The most critical layers are the convolutional and pooling layers. A convolution layer extracts features by convolving an image area with numerous filters; with a higher layer count, a CNN can interpret the features in its input image more precisely. The pooling layer compresses the output map of the convolution. Several pre-trained networks can be used in computer vision tasks, such as image generation, image classification, and image captioning; examples include VGG19, Inception V3, GoogLeNet, ResNet50, and AlexNet.
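A minimal sketch of the two critical operations described above, using a single hand-written filter rather than learned ones: the convolution extracts a feature map, and max pooling compresses it:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution of a single-channel image with one filter."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling: compresses the convolution output."""
    H, W = fmap.shape
    H, W = H - H % size, W - W % size
    return fmap[:H, :W].reshape(H // size, size, W // size, size).max(axis=(1, 3))

image = np.arange(36, dtype=float).reshape(6, 6)
edge = np.array([[1.0, -1.0]])        # a tiny horizontal-edge filter
fmap = conv2d(image, edge)            # feature map extracted by the filter
pooled = max_pool(fmap)               # compressed map
print(fmap.shape, pooled.shape)       # (6, 5) (3, 2)
```

A real CNN stacks many such layers with learned filters, which is why deeper networks interpret their input more precisely.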
In this research, two of these pre-trained networks were used: VGG19 and ResNet50. Their benefits contributed to an improved prediction performance while avoiding overfitting traditional ANN models. The following section will explain the two pre-trained models used in this paper.

VGG19
The VGG19 neural network is a 19-layer convolutional neural network developed and trained by Simonyan and Zisserman at the University of Oxford in 2014; the details can be found in their 2015 paper "Very Deep Convolutional Networks for Large-Scale Image Recognition". The VGG19 network was trained on more than 1 million images from the ImageNet collection, and the model can be imported with its ImageNet training weights. This pre-trained network can classify up to 1000 object categories and was trained on color images with a resolution of 224 × 224 pixels (Figure 3) [118].

ResNet50
ResNet50 is a 50-layer convolutional neural network. As with VGG19, it can classify up to 1000 object categories and was trained on 224 × 224 pixel color images; the model was likewise trained on over 1 million photos from the ImageNet collection. Microsoft developed and trained the model in 2015, and its performance results are available in the publication titled "Deep Residual Learning for Image Recognition". Figure 4 illustrates a ResNet residual block. As illustrated in the figure, the stacked layers realize the residual mapping, while shortcut connections perform the identity mapping x; their outputs are added to the output of the residual function F(x) computed by the stacked layers.
During backpropagation training of a deep network, an error gradient is determined and propagated back to the shallow layers. This gradient becomes smaller and smaller as it travels deeper through the layers until it eventually vanishes; this phenomenon is referred to as the vanishing gradient problem in very deep networks. As illustrated in Figures 4 and 5, the problem can be handled via residual learning [119]. The initial residual branch, or unit l, is depicted in Figure 5 within the residual network; weights, batch normalization (BN), and a rectified linear unit (ReLU) are depicted in the figure. The input and output of a residual unit are determined using Equation (1): y_l = h(x_l) + F(x_l, W_l), where h(x_l) represents the identity mapping, F represents the residual function, x_l represents the input, and W_l represents the weight coefficients. The identity mapping, denoted by h(x_l) = x_l, is the foundation of the ResNet architecture. Residual networks were created with layer counts of 34, 50, 101, and 152; ResNet50, which consists of 50 layers, was employed in this investigation.
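The residual unit of Equation (1) can be sketched as follows; the two-layer residual branch and the placement of the ReLU activations are illustrative assumptions, not the exact ResNet50 block:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def residual_unit(x, W1, W2):
    """One residual unit: the stacked layers compute the residual F(x);
    the shortcut carries the identity mapping h(x) = x, and the two are
    summed before the final activation (Equation (1))."""
    F = relu(x @ W1) @ W2          # residual branch (weights W_l)
    y = F + x                      # identity shortcut: h(x) = x
    return relu(y)

rng = np.random.default_rng(1)
x = rng.standard_normal(4)
W1 = rng.standard_normal((4, 8))
W2 = rng.standard_normal((8, 4))
out = residual_unit(x, W1, W2)
# With zero weights the residual branch vanishes and the unit reduces to
# the identity on non-negative inputs -- the property that eases gradient flow.
print(np.allclose(residual_unit(relu(x), W1 * 0, W2 * 0), relu(x)))  # True
```

Because the shortcut is an identity, gradients can flow through it unattenuated, which is what mitigates the vanishing gradient problem described above.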

Artificial Hummingbird Algorithm (AHA)
The AHA is a recently proposed bioinspired meta-heuristic algorithm that simulates the amazing flying abilities and intelligent feeding methods of hummingbirds in the wild. The technique uses three flight skills during foraging: axial, diagonal, and omnidirectional flight. In addition, three foraging strategies (guided, territorial, and migratory) and a visiting table are employed to simulate the memory function of hummingbirds concerning food sources. The technique is straightforward and has few pre-defined parameters to be tuned. Each hummingbird in the AHA is assigned a unique food source from which it can be nourished; it can memorize the location and nectar-replenishment rate of this particular food source, and it can also recall the time between visits to each food source. These exceptional skills afford the AHA an exceptional capability for locating ideal solutions.
This section describes the steps of the AHA (Algorithm 1), which simulate the behavior of hummingbirds. Three types of flight skill, referred to as axial, diagonal, and omnidirectional flight, are employed in the foraging strategies [120]. In addition, three search strategies are used, namely guided foraging, territorial foraging, and migration foraging, and a visiting table is created to simulate the memory function of hummingbirds. As aforementioned, the AHA is a new bioinspired optimizer proposed by Zhao et al. [70] for solving optimization problems; it was inspired by the unique flight capabilities and intelligent foraging strategies of hummingbirds. The core loop of Algorithm 1 proceeds as follows:

    while t_p ≤ maximum number of iterations do
        for each hummingbird do
            if rand ≤ 0.5 then
                Exploration operation: implement the guided foraging using Equation (7)
            else
                Exploitation operation: implement the territorial foraging using Equation (9)
            end if
            if t_p mod 2n = 0 then
                Implement the migration foraging using Equation (10)
            end if
        end for
        Update the positions and record the best fitness value
        t_p = t_p + 1
    end while

The mathematical formulation of the AHA begins by constructing the initial population X of N hummingbirds, as shown in Equation (2), where L and U, respectively, represent the lower and upper bounds of the D-dimensional search space and r is a random vector in the range [0, 1]. Additionally, a visiting table of food sources is created using Equation (3), where for i = j, the value of VT_{i,j} is null and indicates a hummingbird feeding at its own specific food source, while for i ≠ j, VT_{i,j} = 0 indicates that hummingbird i has just visited food source j.

Guided Foraging
In this stage, three flight skills are utilized during foraging, including omnidirectional, diagonal, and axial flight.
The axial flight is defined using Equation (4), the diagonal flight using Equation (5), and the omnidirectional flight using Equation (6), where randi([1, d]) represents a random integer between 1 and d, randperm(k) represents a random permutation of the integers from 1 to k, and r_1 ∈ [0, 1] is a random number. The guided foraging behavior is formulated using Equation (7), where x_i(t) denotes the position of food source i at iteration t and x_{i,tar}(t) is the target food source that the ith hummingbird intends to visit. The value of x_i can be updated using Equation (8), where f is the fitness value.
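The three flight-direction vectors can be sketched as follows; the rule for the diagonal subset size is simplified here (the paper ties it to the random number r_1):

```python
import numpy as np

rng = np.random.default_rng(42)

def axial_direction(d):
    """Axial flight (Equation (4)): a single randomly chosen axis is active."""
    D = np.zeros(d, dtype=int)
    D[rng.integers(d)] = 1
    return D

def diagonal_direction(d):
    """Diagonal flight (Equation (5)): a random subset of axes is active,
    chosen via a random permutation (subset-size rule simplified here)."""
    k = rng.integers(2, d)
    idx = rng.permutation(d)[:k]
    D = np.zeros(d, dtype=int)
    D[idx] = 1
    return D

def omnidirectional_direction(d):
    """Omnidirectional flight (Equation (6)): every axis is active."""
    return np.ones(d, dtype=int)

d = 6
print(axial_direction(d).sum(), omnidirectional_direction(d).sum())  # 1 6
```

The direction vector D is then used to restrict which dimensions of a candidate move are perturbed during guided and territorial foraging.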

Territorial Foraging
A hummingbird is more likely to search for a new food source within its own territory after the flower nectar at its target food source has been consumed, rather than visiting other existing food sources. Consequently, it can readily move to a nearby location within its territory, where a possibly superior food source may be found. This behavior is modeled using Equation (9).

Migration Foraging
In the last phase, the AHA determines the migration coefficient. If a hummingbird's preferred feeding territory runs out of food, the bird migrates to a more distant feeding location, abandoning the previous food source in favor of the new one and causing the visiting table to be modified. The migration of a hummingbird from the food source with the lowest nectar-refilling rate to a new, randomly produced food source is described by Equation (10), where x_w represents the food source with the lowest fitness value. A crucial component of the AHA is the visiting table, which is updated for each hummingbird using Equations (11)-(13).
This visiting table records the time elapsed since the same hummingbird last visited each food source. A long interval since the last visit corresponds to a high visit level, giving that food source a higher priority to be visited.
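A simplified sketch of the visiting-table bookkeeping described above (the full AHA also uses the table to pick the target source with the highest visit level during guided foraging):

```python
import numpy as np

def update_visit_table(VT, i, target):
    """After hummingbird i feeds at `target`, every other source it knows
    becomes one visit 'older', while the visited source resets to zero."""
    n = VT.shape[0]
    others = [j for j in range(n) if j != i]
    VT[i, others] += 1          # time since last visit grows
    VT[i, target] = 0           # just visited: most recent
    return VT

n = 4
VT = np.zeros((n, n), dtype=int)
np.fill_diagonal(VT, -1)        # -1 marks the unused 'null' diagonal
VT = update_visit_table(VT, i=0, target=2)
print(VT[0])
```

After the update, source 2 has the lowest (most recent) entry in row 0, while sources 1 and 3 have aged by one visit.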

Opposition-Based Learning (OBL)
In the first proposed approach (the AHA-OBL), OBL is applied. OBL is an effective search strategy for avoiding stagnation at possible solutions [121]. OBL, which was proposed by Tizhoosh, improves the exploitation capability of a search mechanism [122]. In meta-heuristic algorithms, convergence occurs rapidly when the initial solutions are relatively close to the optimal location; otherwise, a late convergence is expected. Here, the OBL strategy generates new solutions by considering search regions that may be nearer to the global optimal solution.
To better understand OBL, consider a real number x ∈ [lb, ub]; its opposite is calculated as Opp = (ub + lb) − x, where Opp denotes the opposite of x. Consequently, for N-dimensional real vectors, this formulation is generalized per dimension as demonstrated by Equation (14): Opp_j = (ub_j + lb_j) − x_j, for j = 1, ..., N.
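A direct sketch of the opposite-point computation of Equation (14):

```python
import numpy as np

def obl(x, lb, ub):
    """Opposition-based learning (Equation (14)): the opposite point
    of x within [lb, ub], computed per dimension."""
    return (ub + lb) - x

lb, ub = np.zeros(3), np.full(3, 10.0)
x = np.array([1.0, 5.0, 9.0])
print(obl(x, lb, ub))  # [9. 5. 1.]
```

A solution near one boundary maps to the mirror position near the other, so evaluating both the solution and its opposite covers the search region more evenly.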

Random Opposition-Based Learning (ROBL)
In the second approach, ROBL [123] is applied to enhance the exploitation ability of the search mechanism and improve the convergence speed. Different from the original OBL, this paper utilizes the improved OBL strategy of [123], which is defined using Equation (15): x̂_j = l_j + u_j − rand · x_j, where x̂_j is the jth component of the opposite solution, l_j and u_j are the lower and upper bounds of the problem in the jth dimension, and rand is a random number within (0, 1).
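Equation (15) differs from plain OBL only in the per-dimension random factor; a sketch:

```python
import numpy as np

def robl(x, lb, ub, rng):
    """Random opposition-based learning (Equation (15)):
    x_hat_j = l_j + u_j - rand * x_j, with a fresh rand per dimension."""
    r = rng.random(x.shape)          # rand in (0, 1)
    return lb + ub - r * x

rng = np.random.default_rng(0)
lb, ub = np.zeros(4), np.ones(4)
x = rng.random(4)
x_opp = robl(x, lb, ub, rng)
# Unlike plain OBL, the random factor scatters the opposite point,
# which helps the search escape local optima.
print(x_opp.shape)  # (4,)
```
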

AHA-ROBL-and AHA-OBL-Based FS for Waste Classification
In this study, we used two improved versions of the AHA, based on ROBL and OBL combined with the kNN classifier, to select an optimal set of features. To improve the exploitation phase of the original AHA and avoid convergence to local minima, we developed two new approaches: the AHA-ROBL, which incorporates ROBL, and the AHA-OBL, which incorporates OBL. These operators ensure a more balanced trade-off between exploration and exploitation, and their incorporation with the AHA provides a good mechanism for escaping local optima. The design of waste classification based on the AHA-ROBL and AHA-OBL is depicted in Figure 1; it contains five basic processes, which are detailed as follows:

Pre-processing data
This stage consists of loading the TrashNet dataset, which is divided into k-folds. All images must be resized to 224 × 224 × 3 for ResNet and VGG19.

Deep feature extraction
In this stage, two pre-trained CNNs are used to extract trainable features, which are more efficient than other descriptors: VGG19 extracts 4096 features, while ResNet extracts 2048.

Initialization
As is the case for the majority of computational algorithms, the AHA begins by generating an initial population of N objects; each object has the dimension Dim in a search space constrained by the upper and lower bounds, together with a maximum number of iterations, as defined by Equation (2). The process of FS requires converting the real values into binary values using a sigmoidal transfer function, S(x_d) = 1/(1 + e^(−x_d)), with each position set to 1 when S(x_d) ≥ 0.5 and to 0 otherwise.
Any solution is represented as a one-dimensional vector whose length equals the number of deep features. Each cell has one of two values, 0 or 1, where 1 indicates that the corresponding feature is selected and 0 that it is not.
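The sigmoid binarization of a real-valued solution vector can be sketched as follows; the 0.5 threshold is an assumption based on common binary-FS practice:

```python
import numpy as np

def to_binary(x):
    """Map a real-valued position to a 0/1 feature mask via a sigmoid
    transfer function with a 0.5 threshold (thresholding variant assumed)."""
    s = 1.0 / (1.0 + np.exp(-x))
    return (s >= 0.5).astype(int)

x = np.array([-2.0, -0.1, 0.0, 0.3, 4.0])
mask = to_binary(x)          # 1 = feature selected, 0 = discarded
print(mask, int(mask.sum()), "features selected")
```

Each 1 in the mask keeps the corresponding deep feature; each 0 discards it.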

Score evaluation
Generally, feature selection seeks to decrease both the number of features and the classification error rate. In other words, the classification accuracy is maximized by deleting superfluous and redundant features and maintaining only the most pertinent ones. The kNN classifier was used in this investigation due to its ease of score evaluation. Thus, the score of each object was evaluated as a weighted combination of the classification quality and the selection ratio, where Cr and Sel_f are the accuracy obtained using kNN (k = 5) and the number of selected deep features, respectively, and Tot_f is the total number of trainable features provided by VGG19/ResNet.
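The score can be sketched as a weighted combination of the classification error and the selection ratio; the exact weighted form and alpha = 0.99 are assumptions standing in for Equation (18), which is not reproduced in the text (lower is better):

```python
def fs_score(accuracy, n_selected, n_total, alpha=0.99):
    """Wrapper-FS fitness combining the kNN accuracy Cr with the
    selected-feature ratio Sel_f / Tot_f (weighted form assumed)."""
    error = 1.0 - accuracy
    return alpha * error + (1.0 - alpha) * (n_selected / n_total)

# Fewer features at equal accuracy -> better (lower) score.
a = fs_score(accuracy=0.95, n_selected=500, n_total=4096)
b = fs_score(accuracy=0.95, n_selected=2000, n_total=4096)
print(a < b)  # True
```

The dominant alpha weight keeps accuracy the primary objective, with feature-count reduction as a tiebreaker.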

Updating process
First, the AHA seeks to update the guided foraging by using one of the three flight skills, namely axial, diagonal, and omnidirectional flight, using Equations (4)-(6), respectively. If r ≤ 1/3, the diagonal flight (Equation (5)) is followed; if 1/3 < r ≤ 2/3, the axial flight (Equation (4)); otherwise, the omnidirectional flight (Equation (6)).
Second, the updating of objects is realized by using the exploration mode (when r ≤ 0.5), which applies the adjustment of acceleration using Equation (7). Otherwise, follow territorial foraging using Equation (9) (exploitation operation). The migration foraging is applied when t p = 2n by using Equation (10).
The exploitation mode is enhanced by the integration of ROBL or OBL, which ensures a good balance between the exploration and exploitation modes using Equation (15) or (14), respectively. This integration deeply enhances the convergence to the global solution. The third step consists of evaluating the score of each object using Equation (18) to find the best candidate. The evaluation and updating stages are repeated until a termination condition is satisfied; in this study, that condition is the given maximum number of iterations, within which the quality of the suggested approach for locating the optimal subset of features is determined.
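The overall evaluate-update loop, with ROBL refining the exploitation step, can be sketched on a toy continuous objective (a stand-in for the binary FS score of Equation (18); the foraging moves are deliberately simplified):

```python
import numpy as np

rng = np.random.default_rng(7)

def sphere(x):                      # toy objective standing in for the FS score
    return float(np.sum(x ** 2))

def robl(x, lb, ub):                # Equation (15)
    return lb + ub - rng.random(x.shape) * x

N, D, T = 10, 5, 60                 # population, dimension, iterations
lb, ub = -5.0, 5.0
X = lb + rng.random((N, D)) * (ub - lb)
fit = np.array([sphere(x) for x in X])

for t in range(T):
    for i in range(N):
        if rng.random() <= 0.5:     # exploration: move toward a random target
            tgt = X[rng.integers(N)]
            cand = tgt + rng.standard_normal(D) * (X[i] - tgt)
        else:                       # exploitation: local move refined by ROBL
            cand = X[i] + rng.standard_normal(D) * X[i] * 0.1
            opp = np.clip(robl(cand, lb, ub), lb, ub)
            if sphere(opp) < sphere(cand):
                cand = opp          # keep the better of the move and its opposite
        cand = np.clip(cand, lb, ub)
        if sphere(cand) < fit[i]:   # greedy replacement
            X[i], fit[i] = cand, sphere(cand)

print(fit.min())
```

Because the opposite candidate is accepted only when it improves the score, the ROBL step can never worsen a solution, which is what accelerates convergence toward the global optimum.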

Space Complexity
Space complexity determines the total amount of memory occupied by the two proposed algorithms. The AHA-OBL and AHA-ROBL have a space complexity of O(N × D), where N is the population size and D is the problem dimension.

Experimental Results
In order to conduct a fair analysis, the effectiveness of the AHA-ROBL and AHA-OBL was compared with that of different recent computational algorithms, namely the AHA, HHO, SSA, AO, HGSO, PSO, GWO, AOA, MRFO, SCA, MPA, and SAR. The performance was tested on the TrashNet dataset under identical conditions utilizing two deep descriptors, namely VGG19 and ResNet20. In this section, the results of the two developed FS approaches are compared with those of the other 12 methods. Overall, 90% of the dataset was used for training the classification algorithm and 10% for validation; kNN was used as the classification algorithm.
In this study, we set the maximum number of iterations to 200. Due to the stochastic nature of the computational algorithms, each algorithm was run 30 times separately. The computer's CPU was an Intel Core i7-5500U processor running at 2.40 GHz, and the RAM was 32 GB.

Parameter Settings for the Comparative Algorithms
This section defines the parameters for each optimizer. To ensure a fair comparison, it is necessary to list the waste recognition algorithms that were implemented. The parameter settings of the two suggested methods (the AHA-ROBL and AHA-OBL) and the other 12 computational algorithms are specified in Table 1.

Performance Metrics
The following evaluation metrics were computed for the proposed methods developed for waste-analysis-based FS: mean accuracy (µ_Acc), recall (Re), precision (Pr), F-score (F_sc), fitness score, sensitivity, specificity, average execution time, and selection ratio. All metrics are expressed in terms of the mean and standard deviation and are characterized as follows:
• Mean accuracy (µ_Acc): calculated as in Equation (19), where M represents the number of runs, N_s the number of samples in the test dataset, and C_r and L_r the classifier output label and the reference class label of sample r, respectively.
• Mean fitness value (µ_Fit): evaluates the performance of the algorithms, as expressed in Equation (20), where M is the number of runs and Fit*_k is the best fitness value of the kth run.
• Average recall (µ_Re): indicates the percentage of correctly predicted positive patterns, as defined in Equation (21); µ_Re is calculated from the best object (O_best) using Equation (22).
• Average precision (µ_Pr): indicates the frequency of true predicted samples, as in Equation (23); the mean precision (µ_Pr) is calculated using Equation (24).
• Mean F-score (µ_FScore): a metric commonly used for balanced data, calculated using Equation (25); the mean F-score is calculated using Equation (26).
• Mean feature selection size (µ_Size): indicates the average size of the selected attributes, as expressed in Equation (27).
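The per-run metrics above reduce to confusion-matrix arithmetic, sketched here for the binary case:

```python
def prf(tp, fp, fn, tn):
    """Precision, recall (= sensitivity), specificity, F-score, and
    accuracy from confusion-matrix counts, matching the definitions above."""
    precision   = tp / (tp + fp)
    recall      = tp / (tp + fn)          # sensitivity
    specificity = tn / (tn + fp)
    f_score     = 2 * precision * recall / (precision + recall)
    accuracy    = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, specificity, f_score, accuracy

p, r, sp, f, acc = prf(tp=80, fp=10, fn=20, tn=90)
print(round(p, 3), round(r, 3), round(f, 3))  # 0.889 0.8 0.842
```

In the multi-class TrashNet setting, these quantities are computed per class and averaged, and the means and standard deviations are then taken over the 30 independent runs.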

Results and Discussion
• Fitness: Table 2 displays the results of comparing the two proposed models (the AHA-ROBL and AHA-OBL) and the competing algorithms. Based on the obtained results, it is evident that the AHA-ROBL model provides superior results, followed by the AHA-OBL. Two pre-trained CNN models (VGG19 and ResNet20) and the TrashNet dataset were chosen. The deep analysis of the dataset revealed that the quantitative results obtained using the proposed AHA-ROBL approach were better with the two pre-trained CNN models (VGG19 and ResNet20) than those of the comparison optimization algorithms, namely the basic AHA, HHO, SSA, AO, HGS, PSO, GWO, AOA, MRFO, SCA, MPA, and SAR. The results based on VGG19 are significantly superior to those based on ResNet20, and the AHA-OBL achieved the next-lowest fitness value. The standard deviation was computed to evaluate the stability of the fitness value of each FS method; according to these results, the AHA-ROBL, AHA, and PSO approaches are more stable than the other algorithms, whereas HGS is the least stable. It is important to note that the AHA-OBL obtained the second-best position using VGG19; likewise, for the ResNet20 deep features, the AHA-OBL ranked second compared to the remaining 12 algorithms.
• Accuracy: The following observations can be drawn from the data presented in Table 3. First, the results demonstrate that the two proposed approaches (the AHA-ROBL and AHA-OBL) outperformed the comparison optimization algorithms, namely the basic AHA, HHO, SSA, AO, HGS, PSO, GWO, AOA, MRFO, SCA, MPA, and SAR, in terms of quantitative results using the two pre-trained CNN models (VGG19 and ResNet20). The results based on VGG19 were significantly superior to those based on ResNet. Among the 12 comparison optimization algorithms, the MPA achieved the highest accuracy value.
• Recall and precision: Tables 4 and 5 list the recall and precision of the two proposed methods (the AHA-ROBL and AHA-OBL) and the 12 wrapper FS algorithms employing the two deep descriptors (VGG19 and ResNet20). By examining the average recall and precision values for the TrashNet dataset, it is evident that the AHA-ROBL outperformed all advanced competitor algorithms based on both deep features (VGG19 and ResNet20). Moreover, the average recall and precision obtained using the AHA-ROBL based on VGG19 were superior to those obtained using the AHA-ROBL based on ResNet20. The AHA-ROBL based on the deep descriptors also shows a strong stability for the TrashNet dataset due to the lower standard deviation values in terms of the precision and recall metrics. In addition, the AHA-OBL based on the deep VGG19 descriptor ranked in the second position in terms of average recall and precision for the TrashNet dataset, and the MPA based on the same descriptor ranked third.
• Sensitivity and specificity: Tables 6 and 7 list the sensitivity and specificity of the two proposed methods (the AHA-ROBL and AHA-OBL) and the 12 wrapper FS algorithms employing the two deep descriptors (VGG19 and ResNet20). By examining the average sensitivity and specificity values for the TrashNet dataset, it is evident that the AHA-ROBL outperformed all advanced competitor algorithms based on both deep features (VGG19 and ResNet20). Moreover, the average sensitivity and specificity obtained using the AHA-ROBL based on VGG19 are superior to those obtained using the AHA-ROBL based on ResNet20.
It can be seen that the AHA-ROBL based on the deep descriptors has a strong stability for the TrashNet dataset due to the lower standard deviation values in terms of the sensitivity and specificity metrics. In addition, the AHA-OBL based on the deep VGG19 descriptor ranked in the second position in terms of the average sensitivity and specificity for the TrashNet dataset, followed by the conventional AHA based on the same descriptor in the third position.
• F-score: In terms of the F-score, Table 8 reveals that the two proposed methods (the AHA-ROBL and AHA-OBL) based on the pre-trained CNNs (VGG19 and ResNet20) outperformed all the other competitors. In addition, fierce competition existed between the MPA based on ResNet20 and that based on VGG19 for the third position, while the GWO based on the deep features achieved the lowest F-score values.
• Selection ratio: According to the results of Table 9, which reports the mean selection ratio and its standard deviation, the AHA-ROBL exhibited excellent performance in selecting the relevant deep features from the TrashNet dataset, providing an optimal set of relevant deep features. The quantitative results obtained using the proposed AHA-ROBL approach were better with the two pre-trained CNN models (VGG19 and ResNet20) than those of the comparison optimization algorithms, namely the basic AHA, HHO, SSA, AO, HGS, PSO, GWO, and AOA. Clearly, the results based on VGG19 are significantly superior to those based on ResNet20.
It is important to mention that the second-best place was obtained by the AHA using VGG19.
• Average execution time: Table 10 reveals that the two proposed methods (the AHA-ROBL and AHA-OBL) based on the pre-trained CNNs (VGG19 and ResNet20) outperformed 75% of the other competitors; the AHA also outperformed most of the other competitors.

The Wilcoxon Test
A statistical analysis was necessary to compare the efficiency of the AHA-ROBL and AHA-OBL with that of the other competitive algorithms. Thus, the Wilcoxon rank sum test was used to compare the accuracy values obtained using the two proposed approaches (the AHA-ROBL and AHA-OBL) with those obtained using the other algorithms, namely the basic AHA, HHO, SSA, AO, HGSO, PSO, GWO, AOA, MRFO, SCA, MPA, and SAR, for the TrashNet dataset in the cases of the VGG19 and ResNet20 deep descriptors. Table 11 contains the results of the Wilcoxon signed-rank test, which was used to evaluate the statistical performance differences between the two proposed algorithms and the other 12 algorithms. A p-value of less than 0.05 indicates a statistically significant difference between the two compared algorithms. Following this criterion, the AHA-ROBL outperformed all the other algorithms to varying degrees, indicating that the AHA-ROBL benefits from extensive exploitation. In general, the AHA-ROBL based on the deep descriptor VGG19 had a statistically significant p-value in comparison with 85.7% of the algorithms. Figure 6 depicts the fitness curves obtained using the various optimizers based on VGG19 and ResNet20 for the TrashNet dataset. Analyzing the convergence behavior of the two proposed algorithms (the AHA-ROBL and AHA-OBL) based on the VGG19 deep descriptor reveals a faster convergence with an increasing number of iterations compared with the other 12 algorithms.
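The significance test can be sketched without external libraries via the large-sample normal approximation to the Wilcoxon rank-sum statistic (scipy.stats.ranksums offers the standard implementation; tie correction is omitted here):

```python
import math

def rank_sum_p(a, b):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation,
    a pure-Python stand-in for scipy.stats.ranksums, usable for 30-run
    accuracy comparisons like those in Table 11 (assumes no tied values)."""
    n1, n2 = len(a), len(b)
    combined = sorted(a + b)
    rank_of = {v: i + 1 for i, v in enumerate(combined)}  # no-tie assumption
    R1 = sum(rank_of[v] for v in a)                       # rank sum of sample a
    mu = n1 * (n1 + n2 + 1) / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (R1 - mu) / sigma
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

acc_a = [0.95 + 0.001 * i for i in range(10)]   # consistently higher accuracies
acc_b = [0.90 + 0.001 * i for i in range(10)]
p = rank_sum_p(acc_a, acc_b)
print(p < 0.05)  # True: a significant difference at the 0.05 level
```

When the p-value falls below 0.05, as here, the accuracy difference between the two compared optimizers is declared statistically significant.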

Graphical Analysis
For the TrashNet dataset, we can see that the AHA-OBL and the conventional AHA based on the VGG19 descriptor competed closely in the first iterations. However, after 20 iterations, the AHA-ROBL and AHA-OBL became more efficient. This behavior can be attributed to the ROBL and OBL operators, which deeply enhance the exploitation process. Additionally, as shown in Figure 7, we plotted a boxplot of the two proposed methods (the AHA-ROBL and AHA-OBL) against the 12 other algorithms, namely the conventional AHA, HHO, SSA, AO, HGSO, PSO, GWO, AOA, MRFO, SCA, MPA, and SAR, in terms of accuracy. As illustrated in the figure, the two proposed methods based on the deep features achieved greater mean and median accuracy values than the other advanced algorithms for the TrashNet dataset. The collected results demonstrate the proposed methods' efficacy in maintaining the highest classification accuracy, especially with the VGG19 deep features.
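The exploitation-enhancing operators mentioned above follow the standard OBL and ROBL formulations: OBL reflects a candidate solution to its deterministic opposite within the search bounds, while ROBL randomizes that reflection. The sketch below is a minimal illustration under assumed bounds and a hypothetical sphere fitness for minimization; it is not the authors' exact implementation.

```python
import random

def obl(x, lb, ub):
    """Opposition-based learning: deterministic opposite of each dimension."""
    return [lb + ub - xi for xi in x]

def robl(x, lb, ub):
    """Random opposition-based learning: randomized opposite position."""
    return [lb + ub - random.random() * xi for xi in x]

def greedy_keep(x, x_opp, fitness):
    """Keep whichever of the pair has the better (lower) fitness."""
    return x if fitness(x) <= fitness(x_opp) else x_opp

# Hypothetical fitness: sphere function (minimization), for illustration only
sphere = lambda v: sum(vi * vi for vi in v)

random.seed(0)
x = [0.8, -0.3, 0.5]
x_new = greedy_keep(x, robl(x, -1.0, 1.0), sphere)   # never worse than x
```

The greedy comparison is what lets both variants sharpen exploitation without degrading the current solution: the opposite candidate is only accepted if it improves the fitness.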
To summarize the results, Figures 8-11 display the mean values of accuracy, fitness, precision, recall, F-score, sensitivity, specificity, and average execution time for the two proposed approaches (the AHA-ROBL and AHA-OBL) based on the pre-trained CNNs (VGG19 and ResNet20) and for the other computational methods, namely the AHA, HHO, SSA, AO, HGSO, PSO, GWO, AOA, MRFO, SCA, MPA, and SAR, for the TrashNet dataset. The results indicate that the two proposed approaches have a superior performance and outperform all the competitors. As shown in Figures 8 and 9, the AHA-ROBL and AHA-OBL approaches based on the deep features produced higher mean accuracy, recall, precision, and fitness values than the other advanced algorithms for the TrashNet dataset. Moreover, as shown in Figures 10 and 11, they produced higher mean sensitivity and specificity values than the other advanced algorithms, together with competitive average execution times.
In terms of average accuracy, Figure 8 shows that the two proposed approaches, the AHA-ROBL and AHA-OBL, outperformed the 12 other optimization techniques, namely the basic AHA, HHO, SSA, AO, HGS, PSO, GWO, AOA, MRFO, SCA, MPA, and SAR, utilizing the two pre-trained CNN models (VGG19 and ResNet20).
In terms of average fitness, Figure 8 indicates that the two proposed approaches, the AHA-ROBL and AHA-OBL, have a superior performance and outperform all the competitors. Regarding average precision and recall, the values in Figure 9 for the TrashNet dataset make it clear that the two proposed approaches exceed all the advanced rival techniques based on both deep descriptors (VGG19 and ResNet20). Additionally, the average recall and precision values obtained by the two proposed approaches using VGG19 are superior to those obtained using ResNet20.
In terms of the average F-score, Figure 10 demonstrates that the two proposed techniques, the AHA-ROBL and AHA-OBL, based on the pre-trained CNNs (VGG19 and ResNet20), outperform all the other alternatives.
In terms of average sensitivity and specificity, Figures 10 and 11 indicate that the two proposed approaches have a superior performance and outperform all the competitors.
In terms of average execution time, Figure 11 indicates that the two proposed approaches outperform 75% of the other competitors.

Comparative Study with the Existing Works
Previous studies on waste classification focused on applying different traditional and non-traditional mining techniques. This section summarizes the results of previous studies on the TrashNet dataset (from 2016 to 2022). To demonstrate the efficiency of the proposed techniques (the AHA-ROBL and AHA-OBL), numerous algorithms from the literature, including machine-learning and deep-learning algorithms, were chosen for a fair comparison. Table 12 reports the correct classification rates achieved on the TrashNet dataset.
Many state-of-the-art approaches used pre-trained networks [8,102,107] or fine-tuned pre-trained networks [102,107,132,133]. However, these methods did not achieve a good performance on the classification problem. In our research, we focused on improving the performance of these pre-trained networks by using modified optimization techniques. Our two proposed methods, the AHA-ROBL and AHA-OBL, which rely on pre-trained networks (i.e., VGG19 and ResNet20) combined with feature selection, achieved higher results than the WasNet method, reaching accuracies of 98.81% and 98.60%, respectively.

Conclusions and Future Work
Waste classification is a difficult task overall, and the high number of attributes produced by pre-trained CNNs prompted us to integrate meta-heuristics to select the optimal set of deep-learning attributes. The majority of meta-heuristics suffer from weak exploitation. We addressed this problem in the AHA by incorporating ROBL and OBL and applied the resulting algorithms to waste classification using the two pre-trained CNN networks VGG19 and ResNet20. Analysis of the obtained results shows that the proposed AHA-ROBL and AHA-OBL algorithms improve the performance of waste classification and are more competitive than the other algorithms, namely the AHA, HHO, SSA, AO, HGSO, PSO, GWO, AOA, MRFO, SCA, MPA, and SAR, in terms of accuracy, recall, precision, fitness, F-score, and statistical tests for the TrashNet dataset.
In future work, self-tuning of the AHA parameters may be considered. Moreover, the processing of larger datasets and the choice of different architectures may be taken into consideration.

Conflicts of Interest:
The authors declare no conflict of interest.