Correction published on 6 May 2023, see Mathematics 2023, 11(9), 2195.
Article

A Feature Selection Based on Improved Artificial Hummingbird Algorithm Using Random Opposition-Based Learning for Solving Waste Classification Problem

by Mona A. S. Ali 1,2,*,†, Fathimathul Rajeena P. P. 1,† and Diaa Salama Abd Elminaam 2,3,4,*,†

1 Computer Science Department, College of Computer Science and Information Technology, King Faisal University, Al Ahsa 400, Saudi Arabia
2 Faculty of Computers and Artificial Intelligence, Benha University, Benha 12311, Egypt
3 Computer Science Department, Faculty of Computer Science, Misr International University, Cairo 12585, Egypt
4 Faculty of Information Technology, Middle East University, Amman 11831, Jordan
* Authors to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2022, 10(15), 2675; https://doi.org/10.3390/math10152675
Submission received: 7 June 2022 / Revised: 17 July 2022 / Accepted: 21 July 2022 / Published: 29 July 2022 / Corrected: 6 May 2023
(This article belongs to the Special Issue Advanced Optimization Methods and Applications)

Abstract

Recycling is the most effective method for reducing waste generation, protecting the environment, and boosting the national economy. The productivity and effectiveness of the recycling process depend strongly on the cleanliness and precision of the processed primary sources. Recycling operations are often labor intensive, however, and computer vision and deep learning (DL) techniques help to automatically detect and classify trash types during recycling chores. Due to the dimensionality challenge posed by pre-trained CNNs, the scientific community has developed numerous techniques inspired by biology, swarm intelligence theory, physics, and mathematical rules. This research applies a new meta-heuristic algorithm, the artificial hummingbird algorithm (AHA), to the waste classification problem based on feature selection. However, the performance of the AHA is barely satisfactory; it may become stuck in local optima or converge slowly. To overcome these limitations, this paper develops two improved versions of the AHA, the AHA-ROBL and the AHA-OBL, which enhance the exploitation stage using random opposition-based learning (ROBL) and opposition-based learning (OBL), respectively, to escape local optima and accelerate convergence. The main purpose of this paper is to apply the AHA-ROBL and AHA-OBL to select the relevant deep features provided by two pre-trained CNN models (VGG19 and ResNet20) for waste classification. The TrashNet dataset is used to verify the performance of the two proposed approaches. The effectiveness of the suggested methods is compared with that of 12 modern and competitive optimizers, namely the artificial hummingbird algorithm (AHA), Harris hawks optimizer (HHO), salp swarm algorithm (SSA), aquila optimizer (AO), Henry gas solubility optimizer (HGSO), particle swarm optimizer (PSO), grey wolf optimizer (GWO), Archimedes optimization algorithm (AOA), manta ray foraging optimizer (MRFO), sine cosine algorithm (SCA), marine predators algorithm (MPA), and search and rescue optimization algorithm (SAR). A fair evaluation of the proposed algorithms' performance is achieved using the same dataset, and their performance is analyzed in terms of several different measures. The experimental results confirm the two proposed algorithms' superiority over the comparative algorithms: the AHA-ROBL and AHA-OBL produce the optimal number of selected features with the highest degree of precision.

1. Introduction

Modern consumption and manufacturing have made trash a global problem. Between 1950 and 2015, the world manufactured 8.3 billion metric tons of plastic, 6.3 billion tons of which was discarded as waste. According to research [1], only about 9% of this waste has been recycled and 12% incinerated, while 79% has accumulated in landfills or the natural environment. Plastic takes roughly 400 years to decompose, and a glass bottle can take a million years. Failing to recycle garbage (plastic, glass, and metal) harms the environment too much to be taken lightly: every item of trash dumped in oceans, farmland, or other vital areas endangers life, and the economy suffers as well [2].
Recycling and reusing garbage are now common practice, and many countries have introduced recycling legislation, which has shaped each country's socio-economic culture [3]. Recycling is key to conserving resources, and waste categorization is a critical, time-consuming step in determining how much waste is recyclable. Historically, trash was sorted manually; as urban waste volumes grew, untrained sorters were hired [4]. These difficulties made automated waste recycling necessary [5]. Municipalities, ministries, and non-profits therefore require a fast, straightforward trash classification system.
CNNs improve rubbish classification accuracy and resource recycling efficiency [6,7,8]. Deeper representations yield more semantically rich characteristics, and connections between network layers have produced powerful features; convolution kernels (filters) extract these features, so the receptive field should be chosen to balance accuracy and efficiency. However, multiple studies have revealed that this technique lacks contextual information. CNNs are feed-forward artificial neural networks inspired by the animal visual cortex. Deep-learning algorithms are considered among the most dependable, and their implementations suit modern real-time applications [9].
Real-world problems involve large amounts of data, which makes processing difficult. Datasets consist of attributes (features), and not all of them are needed for data extraction: redundant features can degrade a model's performance. Feature reduction shrinks each dataset while maintaining accuracy, and feature selection is one form of it. Feature extraction creates new features from the existing ones, whereas feature selection retains only the needed ones, as the sketch below illustrates.
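To make the distinction concrete, the following minimal sketch (with hypothetical data and a hypothetical selection mask) contrasts the two: selection keeps a subset of the original columns, while extraction, here PCA, builds new columns as combinations of the originals.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))          # 100 samples, 8 original features

# Feature selection: keep a subset of the ORIGINAL columns (hypothetical mask).
mask = np.array([1, 0, 1, 0, 0, 1, 0, 1], dtype=bool)
X_selected = X[:, mask]                # shape (100, 4); columns keep their meaning

# Feature extraction: build NEW columns as combinations of the originals.
X_extracted = PCA(n_components=4).fit_transform(X)   # shape (100, 4); new axes
```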
Optimization and meta-heuristic algorithms are currently two of the hottest topics in computer science due to their presence in several domains, such as feature selection problems [10,11,12], facial recognition [13,14], opinion mining [15,16], the identification of parameters in photovoltaic applications [17,18], economic load dispatch problems [19,20], bin packing problems [21,22], software cost estimations [23], traveling salesman problems [24], constrained engineering problems [25], and continuous optimization problems [26,27]. According to the no free lunch (NFL) theory [28], no algorithm can discover the optimal solution to all problems; hence, numerous optimization approaches exist in the literature. In other words, if an algorithm can determine the optimal answer for a particular problem, it will fail for other types. This theorem permits researchers to build new methods and to enhance existing ones.
Based on their sources of inspiration, meta-heuristic algorithms can be separated into three subcategories, namely swarm-based (biogeography-based [29], social network [30], and biology-based), physics-based, and differential-evolution-based [31] optimizers.
Swarm-inspired meta-heuristics include algorithms that replicate the social and biological characteristics of organisms, such as mating, labor division, foraging, navigation, and self-organization. Examples of social network optimizers include the multi-swarm whale optimization algorithm [32], genetic algorithm [33], multitracker optimization algorithm [34], and parallel multiobjective evolutionary algorithm [35].
Moreover, examples of biogeography-based optimizers include the evolutionary optimization algorithms (EOAs) [36], cuckoo search algorithm (CSA) [37], krill herd algorithm (KHA) [38], hybrid PSO-GA algorithm [39], shuffled frog leaping algorithm (SFLA) [40], swarm intelligence optimization algorithms (SIOAs) [41], Laplacian biogeography-based optimization algorithm (LBOA) [42], biogeography-based optimization algorithm (BOA) [43], and population-based algorithms (PBAs) [44].
There are also different modified versions of BBOs, such as modified versions of BBOs using migration-based modifications [45,46,47], mutation-based modifications [29,48,49], and others [50,51]. Furthermore, there are different modified versions of BBO-based hybridization, such as hybridizations with local search algorithms [52,53,54] and hybridizations with other population-based algorithms [55,56,57].
Examples of biology-based optimizers include the grey wolf optimizer (GWO) [58], whale optimization algorithm (WOA) [59,60], firefly algorithm (FA) [61], Salp swarm algorithm (SSA) [62,63,64], emperor penguin colony optimizer [65], squirrel search algorithm [66,67], slime mold algorithm (SMA) [68], barnacles mating optimizer (BMO) algorithm [69], tunicate swarm algorithm (TSA), and artificial hummingbird algorithm (AHA) [70].
The second category, physics-inspired meta-heuristics, contains algorithms influenced by scientific facts or principles. Examples include simulated annealing [71], big bang–big crunch [72], the gravitational search algorithm (GSA) [73], the lightning search algorithm [74], the black hole algorithm [75], the sine cosine algorithm (SCA) [76], the artificial electric field algorithm [77], the arithmetic optimization algorithm (AOA) [78], the multi-verse optimizer (MVO), and the Henry gas solubility optimizer [79].
The third category, differential evolution algorithms, is motivated by concepts of biological evolution, such as the genetic algorithm (GA) [80], evolutionary programming [81], biogeography-based optimization [82], memetic algorithm [83], multipopulation differential evolution algorithm (MPDE) [84], self-adaptive mutation differential evolution (SaMDE)  [85], fitness-adaptive differential evolution (FiADE) [86], modified differential evolution (MDE) and MDE with a pbest crossover (MDE-pBX) [87], teaching-and-learning-based self-adaptive differential evolution (TLBSaDE) [88], modified differential evolution algorithm (MDE) [89], adaptive differential evolution [90], and differential evolution with a crossover rate repair [91].
Moreover, hybrid differential evolution algorithms use recent swarm intelligence algorithms, such as hybrids of differential evolution and the artificial bee colony (ABC) algorithm [92], differential evolution and the ant colony optimizer (ACO) [93], differential evolution and the bacterial foraging-based optimizer (BFO) [94], differential evolution and the gravitational search algorithm (GSA) [95], differential evolution and the invasive weed optimizer (IWO) [96], differential evolution and the firefly algorithm (FFA) [97], and differential evolution and the fireworks algorithm (FWA) [98].
The previously proposed meta-heuristics for the feature selection (FS) problem all suffer from slow convergence, poor scalability [99,100], and a lack of precision and consistency. These limitations motivated the current study, which suggests a novel AHA-based algorithm for FS tasks.
Therefore, in this study, we propose a modification to a current optimization method known as the AHA, an innovative bioinspired meta-heuristic algorithm that simulates wild hummingbirds' incredible flight capabilities and cunning feeding strategies. An adaptive opposition strategy based on two operators (ROBL and OBL) is proposed to enable the original algorithm to achieve more precise results on harder challenges.
The main contributions of this paper can be summarized as follows:
  • The AHA is enhanced and applied to the feature selection problem for the first time.
  • An enhanced version of the AHA is proposed based on two operators: random opposition-based learning (ROBL) and opposition-based learning (OBL).
  • The two proposed models are compared with the original algorithm and 12 different algorithms.
  • The study applies the modified algorithms AHA-ROBL and AHA-OBL to the TrashNet database by using two pre-trained networks: VGG19 and ResNet.
  • The two proposed algorithms each demonstrate a greater robustness and stability than other recent algorithms.
Our paper is structured as follows: Section 2 conducts a literature review, while Section 3 discusses the fundamentals of the AHA-ROBL optimization technique for pre-trained neural networks. Section 4 summarizes the acquired results regarding fitness, accuracy, and feature selection. Section 5 compares the proposed methods with existing works, and Section 6 concludes the paper and outlines future work.

2. Literature Review

In recent years, considerable research has been conducted on garbage image classification. This section presents the work of domestic and international scholars in the fields of image recognition and waste classification.

2.1. Waste Recycling Using Traditional Machine-Learning Algorithms

Different machine-learning algorithms have been applied to the TrashNet data. Yang et al. achieved an accuracy rate of 63% using the SVM algorithm [101], and Costa et al. achieved an accuracy of 88% using the kNN algorithm [102]. Satvilkar [103] classified garbage images from the TrashNet dataset with 62.61% accuracy using the RF algorithm and with 70.1% accuracy using the XGBoost algorithm.

2.2. Waste Recycling Using Deep-Learning Algorithms

Deep- and machine-learning models have been combined to classify trash types. The researchers in [103] conducted an experiment examining solely recyclable waste material classified into five distinct categories; the CNN, k-nearest neighbor (kNN), random forest (RF), and SVM models were all used, with the CNN model achieving the highest classification accuracy of 89.91%. The authors of [104] evaluated the kNN, RF, SVM, and VGG16 models in combination, creating a processed dataset from photos of four distinct recycling materials and reaching a success rate of 93%. Zhu et al. [105] established an identification approach for plastic solid waste (PSW) classified into six types based on near-infrared (NIR) reflectance spectroscopy, principal component analysis (PCA), and the support vector machine (SVM) model, with a 97.5% classification accuracy. Özkan et al. [106] classified garbage into plastic and non-plastic categories.

2.3. Waste Recycling Using Deep-Transfer Learning

In the following section, detailed descriptions of this dataset are provided. Several studies utilizing the TrashNet dataset to evaluate proposed solutions to the trash classification problem are summarized in [3,8,107].
First, Aral et al. classified trash from the TrashNet dataset using different transfer learning models. According to the experimental findings, the DenseNet121 model had the highest accuracy, achieving 95% [107].
Then, Ruiz et al. used different CNN models and achieved an average accuracy of 88.66% for the TrashNet dataset, producing the best performance results. This method, denoted ResNet-Ruiz, was reimplemented in our experiments [8].
Several well-known CNN models for image classification, such as ResNext [108], ImageNet [109], VGG [110], ResNet [111], and DenseNet [112], can also be used as base models for trash classification. This study determined that among the CNN models listed above, ResNext is the best model for transfer learning to classify trash.
The AHA demonstrates an extremely competitive performance and has proven effective on optimization problems. Moreover, this algorithm has advantages over other algorithms: its straightforward procedure, low computational cost, fast convergence, consistently close solutions, problem independence, and gradient-free nature make it desirable [113,114,115,116]. In the present paper, enhanced AHA algorithms are used to select the most relevant features for the waste classification problem. We propose two new AHA-based approaches for FS, namely the AHA-ROBL and AHA-OBL, built on the kNN classifier.

3. Procedure and Methodology

Figure 1 illustrates the proposed framework for an improved artificial hummingbird algorithm using random opposition-based learning for solving waste classification problems based on feature selection. It comprises the following significant steps:
  • Data collection
  • Data pre-processing
  • Feature extraction techniques using pre-trained deep-learning models (VGG19 and ResNet20)
  • Waste classification with the AHA-ROBL: AHA initialization, followed by AHA scoring and AHA updating, with exploration performed by the AHA and exploitation enhanced by ROBL
  • Prediction and evaluation metrics

3.1. Dataset Description

The dataset used and implemented in this research is the TrashNet dataset, which includes 2527 images classified into six categories: cardboard, glass, metal, paper, plastic, and rubbish. This study augmented the original dataset to build a larger one: adding 2527 horizontally flipped copies, 2527 vertically flipped copies, and 2527 random 25° rotations produced 10,108 waste images in total. Additionally, this study compared the outcomes obtained with the 2527 original photos and with the 10,108 augmented photos. The dataset was partitioned, with 90% and 10% of each class randomly assigned to the training and testing sets, respectively [117]. Figure 2 shows examples of each category.
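A minimal sketch of this augmentation protocol is shown below, assuming torchvision; whether the 25° rotation is fixed or sampled up to 25° is an assumption here.

```python
import random
import torchvision.transforms.functional as TF
from PIL import Image

def augment(img: Image.Image) -> list:
    """Produce the three extra variants described above: a horizontal flip,
    a vertical flip, and a 25-degree rotation (sampled up to 25 degrees here,
    which is an assumption about the exact protocol)."""
    h_flip = TF.hflip(img)
    v_flip = TF.vflip(img)
    rotated = TF.rotate(img, angle=random.uniform(-25.0, 25.0))
    return [img, h_flip, v_flip, rotated]    # 2527 originals -> 10,108 images
```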

3.2. Feature Extraction Using Pre-Trained CNN

The process of feature extraction using a pre-trained CNN is introduced in this section. CNNs are composed of three layer types: convolutional, pooling, and fully connected layers. The most critical are the convolutional and pooling layers. A convolution layer extracts features by convolving an image region with numerous filters; with a higher layer count, a CNN can interpret the features in its input image more precisely. The pooling layer compresses the output mapping of the convolution. Several pre-trained networks can be used in computer vision tasks such as image generation, image classification, and image captioning, including VGG19, Inception V3, GoogLeNet, ResNet50, and AlexNet.
In this research, two of these pre-trained networks were used: VGG19 and ResNet50. Their benefits contributed to an improved prediction performance while avoiding the overfitting of traditional ANN models. The following sections explain the two pre-trained models used in this paper.
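The sketch below shows how both backbones can serve as fixed feature extractors, assuming a recent torchvision: truncating VGG19's classifier at its first fully connected layer yields 4096-dimensional features, and replacing ResNet50's final layer with the identity yields 2048-dimensional features.

```python
import torch
import torchvision.models as models

# Load both backbones with ImageNet weights (torchvision >= 0.13 API assumed).
vgg19 = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
resnet50 = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

# Truncate the classification heads so the networks emit deep features:
# VGG19's first fully connected layer gives 4096-D vectors; ResNet50's
# global-average-pooled output gives 2048-D vectors.
vgg19.classifier = torch.nn.Sequential(*list(vgg19.classifier.children())[:2])
resnet50.fc = torch.nn.Identity()

vgg19.eval(); resnet50.eval()
with torch.no_grad():
    batch = torch.randn(8, 3, 224, 224)    # placeholder pre-processed images
    f_vgg = vgg19(batch)                   # shape (8, 4096)
    f_res = resnet50(batch)                # shape (8, 2048)
```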

3.2.1. VGG19

The VGG19 neural network is a 19-layer convolutional neural network developed and trained in 2014 by Simonyan and Zisserman at the University of Oxford; the details can be found in their 2015 paper “Very Deep Convolutional Networks for Large-Scale Image Recognition”. The VGG19 network was trained on more than 1 million images from the ImageNet collection, and the model can be imported with its ImageNet-trained weights. This pre-trained network can classify up to 1000 object classes. The network was trained on color images with a resolution of 224 × 224 pixels (Figure 3 [118]).

3.2.2. ResNet50

ResNet50 is a 50-layer convolutional neural network. As with VGG19, it can classify up to 1000 objects and was trained on 224 × 224 pixel colored images. Additionally, this model was trained on over 1 million photos from the ImageNet collection. Microsoft developed and trained the model in 2015, and the model’s performance results are available in their publication titled “Deep Residual Learning for Image Recognition”.
Figure 4 illustrates a ResNet residual block. As shown in the figure, the stacked layers realize the residual mapping, while shortcut connections perform the identity mapping of the input $x$; the shortcut output is added to the output of the residual function $F(x)$ computed by the stacked layers.
During backpropagation training of a deep network, an error gradient is computed and propagated back to the shallow layers. This gradient becomes smaller and smaller as it travels deeper through the layers until it eventually vanishes, a phenomenon referred to as the vanishing gradient problem in very deep networks. As illustrated in Figure 4 and Figure 5, the problem can be handled via residual learning [119].
Figure 5 depicts residual unit $l$ within the residual network, showing the weights, batch normalization ($BN$), and rectified linear unit ($ReLU$). The input and output of a residual unit are determined by Equation (1):
$y_l = h(x_l) + F(x_l, W_l), \qquad x_{l+1} = f(y_l)$
where $h(x_l)$ represents the identity mapping, $F$ the residual function, $x_l$ the input, and $W_l$ the weight coefficients. The identity mapping, denoted by $h(x_l) = x_l$, is the foundation of the ResNet architecture. Residual networks were created with layer counts of 34, 50, 101, and 152; ResNet50, the 50-layer variant, was employed in this investigation.
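A minimal PyTorch sketch of Equation (1) with an identity shortcut is given below; it is an illustrative residual unit, not the exact bottleneck block used in ResNet50.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Implements y_l = h(x_l) + F(x_l, W_l) and x_{l+1} = f(y_l),
    with h as the identity mapping and f as ReLU (a sketch only)."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = self.bn2(self.conv2(self.relu(self.bn1(self.conv1(x)))))  # F(x_l, W_l)
        return self.relu(x + residual)   # f(h(x_l) + F(x_l, W_l))
```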

3.3. Artificial Hummingbird Algorithm (AHA)

The AHA is a recent bioinspired meta-heuristic algorithm that simulates the amazing flying abilities and intelligent feeding methods of hummingbirds in the wild. The technique uses three flight skills during foraging: axial, diagonal, and omnidirectional flight. In addition, three foraging strategies (guided, territorial, and migratory) and a visit table are employed to simulate the memory function of hummingbirds concerning food sources. The technique is straightforward and has few pre-defined parameters to tune. Each hummingbird in the AHA is assigned a unique food source from which it can be nourished; it can memorize the location and nectar-replenishment rate of this particular food source, and it can also recall the time between visits to each food source. These exceptional skills afford the AHA an exceptional capability for locating ideal solutions.
This section describes the steps of the AHA (Algorithm 1), which simulate the behavior of hummingbirds. Three types of flight skills, referred to as axial, diagonal, and omnidirectional flights, are employed in the foraging strategies [120]. In addition, there are various search strategies, namely guided foraging, territorial foraging, and migratory foraging, and a visit table is created to simulate the memory function of hummingbirds. As aforementioned, the AHA is a new bioinspired optimizer proposed by Zhao et al. [70] for solving optimization problems, inspired by the unique flight capabilities and intelligent foraging strategies of hummingbirds.
Algorithm 1 AHA pseudo-code.
  • Define $N_{pop} = n$, the size of the population
  • Define $N_{iter,max}$, the maximum number of iterations
  • Define the upper and lower population limits
  • Initialize the population using Equation (2)
  • while ($t_p \le N_{iter,max}$) do
  •     for (each population member, determine the direction change vector $D$) do
  •         if ($rand \le 1/3$) then
  •            Implement the diagonal flight using Equation (5)
  •         else
  •            if ($rand \le 2/3$) then
  •                Implement the omnidirectional flight using Equation (6)
  •            else
  •                Implement the axial flight using Equation (4)
  •            end if
  •         end if
  •     end for
  •     for (each population member, update the foraging behavior) do
  •         if ($rand \le 0.5$) then                              ▹ Exploration operation
  •            Implement the guided foraging using Equation (7)
  •         else                            ▹ Exploitation operation
  •            Implement the territorial foraging using Equation (9)
  •         end if
  •         if ($t_p = 2n$) then
  •            Implement the migration foraging using Equation (10)
  •         end if
  •     end for
  •     Update positions
  •     Return the highest fitness value
  •     $t_p = t_p + 1$
  • end while
The mathematical formulation of the AHA begins by constructing the initial population X of N hummingbirds, as shown in Equation (2):
$X_i = L + r \times (U - L), \quad i = 1, 2, \ldots, N$
where $L$ and $U$ represent the lower and upper bounds of the $D$-dimensional search space, respectively, and $r$ is a random vector in the range $[0, 1]$. Additionally, a visit table of food sources is created using Equation (3):
$VT_{i,j} = \begin{cases} 0, & \text{if } i \neq j \\ \text{null}, & \text{if } i = j \end{cases} \qquad i = 1, \ldots, N, \; j = 1, \ldots, N$
where, for $i = j$, the value of $VT_{i,j}$ is null, indicating that a hummingbird feeds at its own specific food source; for $i \neq j$, $VT_{i,j} = 0$ indicates that hummingbird $i$ has just visited food source $j$.
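A small initialization sketch of Equations (2) and (3), assuming scalar bounds for simplicity (the helper name is hypothetical):

```python
import numpy as np

def initialize_aha(n: int, dim: int, lb: float, ub: float, seed: int = 0):
    """Initialize n hummingbird positions (Equation (2)) and the visit
    table (Equation (3))."""
    rng = np.random.default_rng(seed)
    X = lb + rng.random((n, dim)) * (ub - lb)    # X_i = L + r * (U - L)
    VT = np.zeros((n, n))                        # VT_ij = 0 for i != j
    np.fill_diagonal(VT, np.nan)                 # "null" (here NaN) for i == j
    return X, VT
```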

3.3.1. Guided Foraging

In this stage, three flight skills are utilized during foraging, including omnidirectional, diagonal, and axial flight.
The axial flight is defined using Equation (4):
$D(i) = \begin{cases} 1, & \text{if } i = \text{randi}([1, d]) \\ 0, & \text{else} \end{cases} \qquad i = 1, \ldots, d$
The diagonal flight can be expressed using Equation (5):
$D(i) = \begin{cases} 1, & \text{if } i = P_p(j),\; j \in [1, k],\; P_p = \text{randperm}(K_p),\; K_p \in \left[2, \lceil r_1 \cdot (d - 2) \rceil + 1\right] \\ 0, & \text{else} \end{cases} \qquad i = 1, \ldots, d$
The omnidirectional flight is represented using Equation (6):
$D(i) = 1, \qquad i = 1, \ldots, d$
where $\text{randi}([1, d])$ represents a random integer between 1 and $d$, $\text{randperm}(k)$ represents a random permutation of the integers from 1 to $k$, and $r_1 \in [0, 1]$ is a random number. The guided foraging behavior is formulated using Equation (7):
$V_i(t+1) = X_{i,t}(t) + a \times D \times \left(X_i(t) - X_{i,t}(t)\right), \qquad a \sim N(0, 1)$
where $X_i(t)$ denotes the position of food source $i$ at iteration $t$ and $X_{i,t}(t)$ is the target food source that the $i$th hummingbird intends to visit. The position of $X_i$ can be updated using Equation (8):
$X_i(t+1) = \begin{cases} X_i(t), & \text{if } f\left(X_i(t)\right) \le f\left(V_i(t+1)\right) \\ V_i(t+1), & \text{if } f\left(X_i(t)\right) > f\left(V_i(t+1)\right) \end{cases}$
where f is the fitness value.
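A sketch of the flight-vector choice (Equations (4)-(6)) and the guided-foraging update with its greedy replacement (Equations (7) and (8)) follows; the helper names are hypothetical, and in the full algorithm the target source would be chosen via the visit table rather than at random.

```python
import numpy as np

rng = np.random.default_rng(1)

def direction_vector(d: int) -> np.ndarray:
    """Pick one of the three flight skills: diagonal (Equation (5)),
    omnidirectional (Equation (6)), or axial (Equation (4))."""
    r = rng.random()
    D = np.zeros(d)
    if r <= 1/3:                                  # diagonal flight
        k = rng.integers(2, max(3, int(rng.random() * (d - 2)) + 2))
        D[rng.permutation(d)[:k]] = 1.0
    elif r <= 2/3:                                # omnidirectional flight
        D[:] = 1.0
    else:                                         # axial flight
        D[rng.integers(d)] = 1.0
    return D

def guided_foraging(x_i, x_target, fitness):
    """Equation (7) followed by the greedy update of Equation (8)."""
    a = rng.normal()                              # a ~ N(0, 1)
    v = x_target + a * direction_vector(x_i.size) * (x_i - x_target)
    return v if fitness(v) < fitness(x_i) else x_i   # keep the better solution
```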

3.3.2. Territorial Foraging

A hummingbird is more likely to search for a new food source after the flower nectar at its target food source has been consumed than to visit other existing food sources. Consequently, it may readily move to a nearby location within its own territory, where a possibly superior food source could be found. This behavior is modeled using Equation (9):
$V_i(t+1) = X_i(t) + b \times D \times X_i(t), \qquad b \sim N(0, 1)$

3.3.3. Migration Foraging

In the last phase, the AHA algorithm determines the migration coefficient. If a hummingbird’s preferred feeding location runs out of food, it migrates to a more distant feeding location. This hummingbird will abandon the previous food source in favor of the new one, causing the visit table to be modified. The following is a description of a hummingbird’s migration from a nectar source with the lowest nectar-refilling rate to one with a random rate of nectar production (see Equation (10)):
$X_w(t+1) = L + r \times (U - L)$
Here, $X_w$ represents the food source with the lowest fitness value.
A crucial component of the AHA algorithm is the visiting table. Using Equations (11)–(13), the visiting table is updated for each hummingbird.
$VT_{i,k} = VT_{i,k} + 1, \quad \text{if } k \neq i \text{ and } k \neq \text{target}, \; k = 1, 2, \ldots, hn$
$VT_{i,\text{target}} = 0$
$VT_{i,k} = \max_{L \neq i,\, L \in hn} VT_{i,L} + 1, \quad \text{if } k \neq i, \; k = 1, 2, \ldots, hn$
The visit table records how long it has been since a hummingbird last visited each food source: a long interval since the last visit corresponds to a high visit level, marking that source as a priority for the next visit.

3.4. Opposition-Based Learning (OBL)

In the first proposed approach (the AHA-OBL), OBL is applied.
OBL is an effective search strategy for avoiding stagnation in possible solutions [121]. OBL, which was proposed by Tizhoosh, improves the exploitation capability of a search mechanism [122]. In meta-heuristic algorithms, a convergence occurs rapidly when initial solutions are relatively close to an optimal location; otherwise, a late convergence is predicted. Here, the OBL strategy generates new solutions by considering search regions that may be nearer to the global optimal solution.
To better understand OBL, note that the opposite of a real number $x \in [lb, ub]$ can be calculated as $Opp = (ub + lb) - x$, where $Opp$ is the opposite of the variable $x$. Consequently, for $N$-dimensional real vectors, the previous formulation can be generalized as demonstrated by Equation (14):
$Opp_i = (ub_i + lb_i) - X_i$

3.5. Random Opposition-Based Learning (ROBL)

In the second approach, ROBL [123] is applied to enhance the exploitation ability of a search mechanism and improve the convergence speed. Different from the original OBL, this paper utilizes this improved OBL strategy [123], which is defined using Equation (15):
$\hat{x}_j = l_j + u_j - \text{rand} \times x_j, \quad j = 1, 2, \ldots, n$
where $\hat{x}_j$ is the opposite solution, $x_j$ is the current solution, $l_j$ and $u_j$ are the lower and upper bounds of the problem in the $j$th dimension, and rand is a random number within (0, 1).
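Both operators are one-liners; the sketch below implements Equations (14) and (15) (the function names are hypothetical). In the proposed variants, the opposite candidate would replace the current solution only when it improves the fitness.

```python
import numpy as np

_rng = np.random.default_rng(2)

def obl(x: np.ndarray, lb: np.ndarray, ub: np.ndarray) -> np.ndarray:
    """Opposition-based learning (Equation (14)): the exact opposite point."""
    return (ub + lb) - x

def robl(x: np.ndarray, lb: np.ndarray, ub: np.ndarray) -> np.ndarray:
    """Random opposition-based learning (Equation (15)): the opposite point
    perturbed by a random factor in (0, 1), which adds search diversity."""
    return lb + ub - _rng.random(x.shape) * x
```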

3.6. AHA-ROBL- and AHA-OBL-Based FS for Waste Classification

In this study, we used two improved versions of the AHA, combined with the kNN classifier, to select an optimal set of features based on ROBL and OBL. To improve the exploitation phase of the original AHA method and avoid convergence to local minima, we developed two new approaches: the first, the AHA-ROBL, incorporates ROBL, and the second, the AHA-OBL, incorporates OBL. These operators ensure a more balanced trade-off between exploration and exploitation, and their incorporation into the AHA provides a good means of escaping local optima. The design of waste classification based on the AHA-ROBL and AHA-OBL is depicted in Figure 1 and contains five basic processes, which are detailed as follows:
1.
Pre-processing data
This stage consists of loading the TrashNet dataset, which is divided into k-folds. All images must be resized to 224 × 224 × 3 for ResNet and VGG19.
2.
Deep feature extraction
In this stage, two pre-trained CNNs are used to extract trainable features, which are more efficient than other descriptors. VGG19 extracts 4096 features, while ResNet extracts 2048.
3.
Initialization
As is the case for the majority of computational algorithms, the AHA begins by generating an initial population of N objects; each object has dimension $Dim$ in the search space, which is constrained by the upper and lower population bounds and the maximum number of iterations, as defined by Equation (2). The process of FS requires converting the real values into binary values using a sigmoidal function, defined by the following equations:
$x_i^{It+1} = \begin{cases} 0, & \text{if } rand < Sig\left(x_i^{It}\right) \\ 1, & \text{if } rand \ge Sig\left(x_i^{It}\right) \end{cases}$
where
$Sig\left(x_i^{It}\right) = \dfrac{1}{1 + e^{-10\left(x_i^{It} - 0.5\right)}}$
Each solution is represented as a one-dimensional vector whose length equals the number of deep features. Each cell takes one of two values, 0 or 1, where 1 indicates that the corresponding feature is selected and 0 that it is not (the binarization is sketched in code after this list).
4.
Score evaluation
Generally, feature selection seeks to decrease both the number of features and the classification error rate. In other words, classification accuracy is maximized by deleting superfluous and redundant features and keeping only the most pertinent ones. The kNN classifier was used in this investigation due to the ease with which it evaluates the score. Thus, the score for each object was evaluated using the following (see the sketch after this list):
$Sc = 0.99 \times (1 - C_r) + 0.01 \times \dfrac{|Sel_f|}{|Tot_f|}$
where $C_r$ and $Sel_f$ are the accuracy obtained using kNN ($k = 5$) and the number of selected deep features, respectively, and $Tot_f$ is the total number of trainable features provided by VGG19/ResNet.
5.
Updating process
First, the AHA updates the guided foraging by using the three flight skills, namely omnidirectional, diagonal, and axial flight, defined in Equations (4)-(6). If $r \le 1/3$, the diagonal flight of Equation (5) is followed; else, if $r \le 2/3$, the omnidirectional flight of Equation (6) is followed; otherwise, the axial flight of Equation (4) is followed.
Second, the updating of objects is realized using the exploration mode (when $r \le 0.5$), which applies the acceleration adjustment of Equation (7); otherwise, territorial foraging follows Equation (9) (the exploitation operation). Migration foraging is applied when $t_p = 2n$, using Equation (10).
The exploitation mode is strengthened by the integration of ROBL or OBL (Equations (14) and (15)), which ensures a good balance between the exploration and exploitation modes and deeply enhances convergence to the global solution. The third step consists of evaluating the score of each object using Equation (18) to find the best candidate. The evaluation and updating stages are repeated until the termination condition is satisfied; this condition determines the quality of the suggested approach in locating the optimal subset of features within the given number of iterations.
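As a concrete illustration of steps 3-5, the sketch below binarizes a solution with Equations (16) and (17), evaluates the score of Equation (18) with a scikit-learn kNN wrapper, and performs one AHA-ROBL update; it reuses the hypothetical direction_vector, guided_foraging, and robl helpers sketched in Sections 3.3 and 3.5 and is a high-level approximation, not the authors' exact implementation.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

def binarize(x):
    """Equations (16)-(17): map a real-valued solution to a 0/1 feature mask
    (1 = feature selected) via the sigmoidal transfer function."""
    sig = 1.0 / (1.0 + np.exp(-10.0 * (x - 0.5)))
    return (rng.random(x.shape) >= sig).astype(int)   # 0 if rand < Sig, else 1

def score(x, features, labels):
    """Equation (18): Sc = 0.99*(1 - Cr) + 0.01*|Sel_f|/|Tot_f|,
    with Cr the kNN (k = 5) accuracy on the selected columns."""
    mask = binarize(x).astype(bool)
    if not mask.any():
        return 1.0                                    # empty subsets are invalid
    knn = KNeighborsClassifier(n_neighbors=5)
    cr = cross_val_score(knn, features[:, mask], labels, cv=5).mean()
    return 0.99 * (1.0 - cr) + 0.01 * mask.sum() / features.shape[1]

def aha_robl_step(X, lb, ub, fitness):
    """One update iteration: guided foraging for exploration (Equation (7)),
    territorial foraging plus a ROBL refinement for exploitation
    (Equations (9) and (15))."""
    n, d = X.shape
    for i in range(n):
        if rng.random() <= 0.5:                       # exploration
            target = rng.integers(n)                  # stand-in for the visit-table choice
            X[i] = guided_foraging(X[i], X[target], fitness)
        else:                                         # exploitation
            v = X[i] + rng.normal() * direction_vector(d) * X[i]   # Equation (9)
            if fitness(v) < fitness(X[i]):
                X[i] = v
            opp = robl(X[i], lb, ub)                  # ROBL refinement
            if fitness(opp) < fitness(X[i]):
                X[i] = opp
    return X

# Usage: fitness = lambda x: score(x, deep_features, labels)
#        X = aha_robl_step(X, lb, ub, fitness)
```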

3.7. The Computational Complexity of the Two Proposed Algorithms (the AHA-OBL and AHA-ROBL)

This section explains the time and space costs maintained by the proposed methods.

3.7.1. Time Complexity

First, the two proposed approaches (the AHA-OBL and AHA-ROBL) generate $N$ search agents, each of size $D$, so the initialization has a time complexity of $O(N \times D)$. Moreover, the AHA-OBL and AHA-ROBL calculate the fitness of each search agent with a complexity of $O(t_{max} \times N \times D)$, where $t_{max}$ denotes the cumulative number of iterations. In addition, the AHA-OBL and AHA-ROBL need $O(T)$ time to perform their $T$ main operations (phase 1, phase 2, and phase 3; memory saving; and OBL or ROBL). Hence, the total time complexity of the two proposed approaches can be expressed as $O(t_{max} \times T \times N \times D)$.

3.7.2. Space Complexity

Space complexity measures the total amount of memory occupied by the two proposed algorithms. The AHA-OBL and AHA-ROBL have a space complexity of $O(N \times D)$.

4. Experimental Results

In order to conduct a fair analysis, the effectiveness of the AHA-ROBL and AHA-OBL was compared to that of different and recent computational algorithms, namely AHA, HHO, SSA, AO, HGSO, PSO, GWO, AOA, MRFO, SCA, MPA, and SAR. The performance was tested on the TrashNet dataset under identical conditions utilizing two deep descriptors, namely VGG19 and ResNet20. In this section, the comparison between the results of the two developed FS approaches and the other 12 methods is performed. Overall, 90% of the dataset was used for training the classification algorithm and 10% of the dataset was used for the validation. As a classification algorithm, kNN was used.
In this study, we set the maximum number of iterations to 200. Due to the stochastic nature of the computational algorithms, each algorithm was run 30 times separately. The computer’s CPU was an Intel Core i7-5500U processor running at 2.40 GHz, and the RAM was 32 GB.

4.1. Parameter Settings for the Comparative Algorithms

This section defines the parameters of each optimizer. To ensure a fair comparison, it is necessary to list the waste recognition algorithms that were implemented. The parameter settings of the two suggested methods (the AHA-ROBL and AHA-OBL) and the other 12 computational algorithms are specified in Table 1.

4.2. Performance Metrics

The following evaluation metrics were computed for the proposed methods developed for waste-analysis-based FS. The metrics quantifying the correct rate of waste classification comprise the mean accuracy ($\mu_{Acc}$), recall ($Re$), precision ($Pr$), F-score ($F_{sc}$), fitness score, sensitivity, specificity, average execution time, and selection ratio. All metrics are expressed in terms of the mean and standard deviation and are characterized as follows (a small sketch of these computations appears after the list):
  • Mean accuracy ($\mu_{Acc}$): the $\mu_{Acc}$ metric is calculated as in Equation (19):
    $\mu_{Acc} = \frac{1}{M} \frac{1}{N_s} \sum_{k=1}^{M} \sum_{r=1}^{N_s} (C_r == L_r)$
    where $M$ represents the number of runs, $N_s$ the number of samples in the test dataset, and $C_r$ and $L_r$ the classifier output label and the reference label of sample $r$, respectively.
  • Mean fitness value ($\mu_{Fit}$): the fitness value metric, which evaluates the performance of the algorithms, is expressed as in Equation (20):
    $\mu_{Fit} = \frac{1}{M} \sum_{k=1}^{M} Fit_k$
    where $M$ is the number of runs and $Fit_k$ is the best fitness value for the $k$th run.
  • Average recall ($\mu_{Re}$): this indicates the percentage of positive patterns that are correctly predicted, as defined in Equation (21):
    $Re = \frac{Tr_p}{Tr_p + Fa_n}$
    The $\mu_{Re}$ is calculated from the best object ($O_{best}$) using Equation (22):
    $\mu_{Re} = \frac{1}{30} \sum_{r=1}^{30} Re_r^{best}$
  • Average precision ($\mu_{Pr}$): this indicates the proportion of predicted positive samples that are truly positive, as in Equation (23):
    $Pr = \frac{Tr_p}{Fa_p + Tr_p}$
    The mean precision ($\mu_{Pr}$) can be calculated using Equation (24):
    $\mu_{Pr} = \frac{1}{30} \sum_{r=1}^{30} Pr_r^{best}$
  • Mean F-score ($\mu_{F_{Score}}$): this metric, used for balanced data, can be calculated using Equation (25):
    $F_{Score} = \frac{2 \times Re \times Pr}{Re + Pr}$
    The mean F-score can be calculated using Equation (26):
    $\mu_{F_{Score}} = \frac{1}{30} \sum_{r=1}^{30} F_{Score,r}^{best}$
  • Mean feature selection size ($\mu_{Size}$): this indicates the average number of selected attributes, expressed as in Equation (27):
    $\mu_{Size} = \frac{1}{M} \sum_{k=1}^{M} d_k$
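A sketch of how such run-averaged metrics can be computed, assuming scikit-learn; the function name and the structure of run_predictions (one prediction vector per independent run) are assumptions.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

def summarize_runs(y_true, run_predictions):
    """Average the per-run metrics of Equations (19)-(27) over M runs and
    report (mean, standard deviation) for each."""
    acc = [accuracy_score(y_true, p) for p in run_predictions]
    pre = [precision_score(y_true, p, average="macro") for p in run_predictions]
    rec = [recall_score(y_true, p, average="macro") for p in run_predictions]
    f1s = [f1_score(y_true, p, average="macro") for p in run_predictions]
    return {name: (float(np.mean(v)), float(np.std(v)))
            for name, v in [("accuracy", acc), ("precision", pre),
                            ("recall", rec), ("f_score", f1s)]}
```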

4.3. Results and Discussion

  • Fitness: Table 2 displays the results of comparing the two proposed models (the AHA-ROBL and AHA-OBL) and the competing algorithms. Based on the obtained results, it is evident that the AHA-ROBL provides superior results, followed by the AHA-OBL. Two pre-trained CNN models (VGG19 and ResNet20) and the TrashNet dataset were used. A deep analysis of the dataset revealed that the quantitative results of the proposed AHA-ROBL approach with the two pre-trained CNN models (VGG19 and ResNet20) surpassed those of the comparison optimization algorithms, namely the basic AHA, HHO, SSA, AO, HGS, PSO, GWO, AOA, MRFO, SCA, MPA, and SAR. The results based on VGG19 are significantly superior to those based on ResNet20, and the AHA-OBL followed with the next-lowest fitness value. The standard deviation was computed to evaluate the stability of the fitness value for each FS method; according to these results, the AHA-ROBL, AHA, and PSO approaches are more stable than the other algorithms, while HGS is the least stable. It is important to note that the AHA-OBL obtained the second-best position using VGG19, and for the ResNet20 deep features, the AHA-OBL also ranked second compared to the remaining 12 algorithms.
  • Accuracy: The following observations can be drawn from the data presented in Table 3. First, the results demonstrate that the two proposed approaches (the AHA-ROBL and AHA-OBL) outperformed the comparison optimization algorithms, namely the basic AHA, HHO, SSA, AO, HGS, PSO, GWO, AOA, MRFO, SCA, MPA, and SAR, in terms of quantitative results using the two pre-trained CNN models (VGG19 and ResNet20). The results based on VGG19 were significantly superior to those based on ResNet. Among the 12 comparison algorithms, the MPA achieved the highest accuracy value.
  • Recall and precision: Table 4 and Table 5 list the recall and precision of the two proposed methods (the AHA-ROBL and AHA-OBL) and the 12 wrapper FS algorithms employing the two deep descriptors (VGG19 and ResNet20). The average recall and precision values for the TrashNet dataset show that the AHA-ROBL outperformed all advanced competitor algorithms with both deep feature sets (VGG19 and ResNet20). Moreover, the average recall and precision obtained by the AHA-ROBL based on VGG19 were superior to those obtained by the AHA-ROBL based on ResNet20. The AHA-ROBL based on the deep descriptors also shows strong stability for the TrashNet dataset, given its low standard deviations in the precision and recall metrics. In addition, the AHA-OBL based on the deep VGG19 descriptor ranked second in average recall and precision for the TrashNet dataset, and the MPA based on the deep VGG19 descriptor ranked third.
  • Sensitivity and specificity: Table 6 and Table 7 list the sensitivity and specificity of the two proposed methods (the AHA-ROBL and AHA-OBL) and the 12 wrapper FS algorithms employing the two deep descriptors (VGG19 and ResNet20). The average sensitivity and specificity values for the TrashNet dataset show that the AHA-ROBL outperformed all advanced competitor algorithms with both deep feature sets (VGG19 and ResNet20). Moreover, the average sensitivity and specificity obtained by the AHA-ROBL based on VGG19 are superior to those obtained by the AHA-ROBL based on ResNet20. In addition, the AHA-OBL based on the deep VGG19 descriptor ranked second in average sensitivity and specificity for the TrashNet dataset, followed closely by the AHA based on the same descriptor. The AHA-ROBL based on the deep descriptors also shows strong stability for the TrashNet dataset, given its low standard deviations in the sensitivity and specificity metrics.
  • F-score: In terms of the F-score, Table 8 reveals that the two proposed methods (the AHA-ROBL and AHA-OBL) based on the pre-trained CNNs (VGG19 and ResNet20) outperformed all the other competitors. In addition, there was fierce competition for third position between the MPA based on ResNet20 and the MPA based on VGG19. Moreover, the GWO based on the deep features achieved lower F-score values.
  • Selection ratio: According to the results in Table 9, which report the mean selection ratio and its standard deviation, the AHA-ROBL exhibited excellent performance in selecting the relevant deep features from the TrashNet dataset, displaying excellent behavior in identifying the optimal set of relevant deep features. A deep analysis revealed that the quantitative results of the proposed AHA-ROBL approach with the two pre-trained CNN models (VGG19 and ResNet20) surpassed those of the comparison optimization algorithms, namely the basic AHA, HHO, SSA, AO, HGS, PSO, GWO, and AOA. Clearly, the results based on VGG19 are significantly superior to those based on ResNet20. It is important to mention that second place was obtained by the AHA using VGG19.
  • Average execution time: Table 10 reveals that the two proposed methods (the AHA-ROBL and AHA-OBL) based on the pre-trained CNNs (VGG19 and ResNet20) outperformed 75% of the other competitors. In addition, the AHA outperformed most of the other competitors.

4.4. The Wilcoxon Test

A statistical analysis was necessary to compare the efficiency of the AHA-ROBL and AHA-OBL to the efficiency of other competitive algorithms. Thus, the Wilcoxon rank sum test was used to compare the accuracy values acquired by using the two proposed approaches (the AHA-ROBL and AHA-OBL) and those obtained by using the other algorithms, namely the basic AHA, HHO, SSA, AO, HGSO, PSO, GWO, AOA, MRFO, SCA, MPA, and SAR for the TrashNet dataset in the cases of the VGG19 and ResNet20 deep descriptors. Table 11 contains the results of the Wilcoxon signed-rank test, which was used to evaluate the statistical performance differences between the two proposed algorithms and the other 12 algorithms. A p-value of less than 0.05 indicated a statistically significant difference between the two compared algorithms. Following this criterion, the AHA-ROBL outperformed all the other algorithms to varying degrees, indicating that the AHA-ROBL benefits from extensive exploitation. In general, the AHA-ROBL based on the deep descriptor VGG19 had a statistically significant p-value in comparison with 85.7% of the algorithms.
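For reference, the test can be reproduced with scipy as sketched below; the paired accuracy values shown are hypothetical placeholders, not the paper's results.

```python
import numpy as np
from scipy.stats import wilcoxon

# Paired per-run accuracies (hypothetical placeholders) for the proposed
# method and one competitor over the same independent runs.
acc_aha_robl = np.array([0.988, 0.987, 0.989, 0.986, 0.988,
                         0.990, 0.985, 0.989, 0.987, 0.988])
acc_competitor = np.array([0.975, 0.978, 0.974, 0.976, 0.977,
                           0.979, 0.973, 0.976, 0.975, 0.978])

# Wilcoxon signed-rank test on the paired differences; p < 0.05 marks a
# statistically significant difference between the two algorithms.
stat, p_value = wilcoxon(acc_aha_robl, acc_competitor)
print(f"p = {p_value:.4f} -> "
      f"{'significant' if p_value < 0.05 else 'not significant'}")
```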

4.5. Graphical Analysis

Figure 6 depicts the fitness curves obtained by the various optimizers based on VGG19 and ResNet20 for the TrashNet dataset. The convergence behavior of the two proposed algorithms (the AHA-ROBL and AHA-OBL) on the TrashNet dataset with the VGG19 deep descriptor shows a faster convergence, as the number of iterations increases, than the other 12 algorithms.
For the TrashNet dataset, we can see that the AHA-OBL and the conventional AHA based on the VGG19 descriptor highlighted a great competition in the first iterations. Still, after 20 iterations, the AHA-ROBL and AHA-OBL became more efficient. This behavior can be interpreted through the use of operators, which allows for deeply enhancing the exploitation process.
Additionally, as shown in Figure 7, we plotted a boxplot of the two proposed methods (the AHA-ROBL and AHA-OBL) against the 12 other algorithms, namely the conventional AHA, HHO, SSA, AO, HGS, PSO, GWO, AOA, MRFO, SCA, MPA, and SAR, in terms of accuracy. As illustrated in the figure, the two suggested methods based on the deep features achieved greater mean and median accuracy values than the other advanced algorithms for the TrashNet dataset. The collected results demonstrate the proposed methods' efficacy in maintaining the highest classification accuracy, especially for the VGG19 deep features.
To summarize the results, Figure 8, Figure 9, Figure 10 and Figure 11 display the mean values for the accuracy, fitness, precision, recall, F-score, sensitivity, specificity, and average execution time of the two proposed approaches (the AHA-ROBL and AHA-OBL) based on the pre-trained CNNs (VGG19 and ResNet20) and of the other computational methods, namely the AHA, HHO, SSA, AO, HGS, PSO, GWO, AOA, MRFO, SCA, MPA, and SAR, for the TrashNet dataset. The results indicate that the two suggested approaches perform best and outperform all the competitors. As shown in Figure 8 and Figure 9, the AHA-ROBL and AHA-OBL approaches based on the deep features produced higher mean and median accuracy, recall, precision, and fitness values than the other advanced algorithms for the TrashNet dataset. Moreover, as shown in Figure 10 and Figure 11, they produced higher mean sensitivity and specificity values than the other advanced algorithms, with competitive average execution times.
In terms of average accuracy, Figure 8 shows that the two proposed approaches, the AHA-ROBL and AHA-OBL, outperformed the 12 other optimization techniques, namely the basic AHA, HHO, SSA, AO, HGS, PSO, GWO, AOA, MRFO, SCA, MPA, and SAR, utilizing the two pre-trained CNN models (VGG19 and ResNet20).
In terms of average fitness, in Figure 8, the results indicate that the two proposed approaches, the AHA-ROBL and AHA-OBL, have a superior performance and outperform all the competitors.
Regarding the average precision and recall, the values in Figure 9 for the TrashNet dataset make it clear that the two proposed approaches, the AHA-ROBL and AHA-OBL, exceed all advanced rival techniques with both sets of deep features (VGG19 and ResNet20). Additionally, the average recall and precision values obtained by the two proposed approaches using VGG19 are superior to those obtained using ResNet20.
In terms of the average F-score, Figure 10 demonstrates that the two suggested techniques, the AHA-ROBL and AHA-OBL, based on the pre-trained CNNs (VGG19 and ResNet20) beat all the other alternatives.
In terms of average sensitivity and specificity, in Figure 10 and Figure 11, the results indicate that the two proposed approaches, the AHA-ROBL and AHA-OBL, have a superior performance and outperform all the competitors.
In terms of average execution time, in Figure 11, the results indicate that the two suggested approaches, the AHA-ROBL and AHA-OBL, have a superior performance and outperform 75% of the other competitors.

5. Comparative Study with the Existing Works

Previous studies on waste classification focused on applying different traditional/non-traditional mining techniques. This section summarizes all the previous studies’ results for the TrashNet dataset (from 2016 to 2022). In order to demonstrate the efficiency of the suggested techniques (the AHA-ROBL and AHA-OBL), numerous algorithms from the literature were chosen for a fair comparison, including machine-learning and deep-learning algorithms. Table 12 illustrates the proper rate of classification’s performance across the TrashNet dataset.
Many state-of-the-art algorithms used pre-trained networks [8,102,107] or fine-tuned pre-trained networks [102,107,132,133]; however, these methods did not achieve a good performance on the classification problem. In our research, we focused on improving the performance of these pre-trained networks by using modified optimization techniques. Our two proposed methods, the AHA-ROBL and AHA-OBL, which build on pre-trained networks (i.e., VGG19 and ResNet) and apply feature selection, achieved higher results than the WasNet method, reaching 98.81% and 98.60%, respectively.

6. Conclusions and Future Work

Waste classification is a difficult task overall, and the high number of attributes produced by pre-trained CNNs prompted us to integrate meta-heuristics to select the optimal set of deep-learning attributes. The majority of meta-heuristics suffer from weak exploitation. We addressed this problem at the level of the AHA by using ROBL and OBL and applied the resulting algorithms to waste classification using the two pre-trained CNN networks VGG19 and ResNet20. Analyzing the obtained results, we noted that the proposed AHA-ROBL and AHA-OBL algorithms improve the performance of waste classification and are more competitive than the other algorithms, namely the AHA, HHO, SSA, AO, HGSO, PSO, GWO, AOA, MRFO, SCA, MPA, and SAR, in terms of accuracy, recall, precision, fitness, F-score, and statistical tests for the TrashNet dataset.
In the future, self-checking the parameters of the AHA algorithm may be considered. Moreover, the processing of large datasets and the choice of a different architecture may be taken into consideration.

Author Contributions

Conceptualization, D.S.A.E.; Data curation, M.A.S.A. and D.S.A.E.; Formal analysis, M.A.S.A.; Funding acquisition, M.A.S.A.; Investigation, M.A.S.A. and D.S.A.E.; Methodology, M.A.S.A., F.R.P.P. and D.S.A.E.; Project administration, D.S.A.E.; Resources, D.S.A.E.; Software, D.S.A.E.; Supervision, D.S.A.E.; Validation, F.R.P.P.; Writing—original draft, M.A.S.A., F.R.P.P. and D.S.A.E.; Writing–review & editing, M.A.S.A. and D.S.A.E. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Deanship of Scientific Research, Vice Presidency for Graduate Studies and Scientific Research, King Faisal University, Saudi Arabia (Project No. AN000550).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Geyer, R.; Jambeck, J.; Law, K. Production, use, and fate of all plastics ever made. Sci. Adv. 2017, 3, 1207–1221. [Google Scholar]
  2. Kumar, S.; Smith, S.; Fowler, G.; Velis, C.; Rena; Kumar, R.; Cheeseman, C. Challenges and opportunities associated with waste management in India. R. Soc. Open Sci. 2017, 4, 160764. [Google Scholar] [CrossRef] [PubMed]
  3. Bircanoğlu, C.; Atay, M.; Beşer, F.; Genç, Ö.; Kızrak, M.A. RecycleNet: Intelligent waste sorting using deep neural networks. In Proceedings of the 2018 Innovations in Intelligent Systems and Applications (INISTA), Thessaloniki, Greece, 3–5 July 2018; pp. 1–7. [Google Scholar]
  4. Borowski, P.F. Environmental pollution as a threats to the ecology and development in Guinea Conakry. Environ. Prot. Nat. Resour. Środowiska I Zasobów Nat. 2017, 28, 27–32. [Google Scholar] [CrossRef]
  5. Zelazinski, T.; Ekielski, A.; Tulska, E.; Vladut, V.; Durczak, K. Wood dust application for improvement of selected properties of thermoplastic starch. Inmateh. Agric. Eng. 2019, 58, 37–44. [Google Scholar]
  6. Tiyajamorn, P.; Lorprasertkul, P.; Assabumrungrat, R.; Poomarin, W.; Chancharoen, R. Automatic Trash Classification using Convolutional Neural Network Machine Learning. In Proceedings of the 2019 IEEE International Conference on Cybernetics and Intelligent Systems (CIS) and IEEE Conference on Robotics, Automation and Mechatronics (RAM), Bangkok, Thailand, 18–20 November 2019; pp. 71–76. [Google Scholar]
  7. Yu, Y. A Computer Vision Based Detection System for Trash Bins Identification during Trash Classification. In Proceedings of the Journal of Physics: Conference Series, 2nd International Conference on Electronic Engineering and Informatics, Lanzhou, China, 17–19 July 2020; IOP Publishing: Bristol, UK, 2020; Volume 1617, p. 012015. [Google Scholar]
  8. Ruiz, V.; Sánchez, Á.; Vélez, J.F.; Raducanu, B. Automatic image-based waste classification. In Proceedings of the International Work-Conference on the Interplay Between Natural and Artificial Computation, Almería, Spain, 3–7 June 2019; Springer: Berlin/Heidelberg, Germany, 2019; pp. 422–431. [Google Scholar]
  9. Singh, A.; Rai, N.; Sharma, P.; Nagrath, P.; Jain, R. Age, Gender Prediction and Emotion recognition using Convolutional Neural Network. In Proceedings of the International Conference on Innovative Computing & Communication (ICICC), New Delhi, India, 20–21 February 2021. [Google Scholar]
  10. Mohmmadzadeh, H.; Gharehchopogh, F.S. An efficient binary chaotic symbiotic organisms search algorithm approaches for feature selection problems. J. Supercomput. 2021, 77, 9102–9144. [Google Scholar] [CrossRef]
  11. Naseri, T.S.; Gharehchopogh, F.S. A Feature Selection Based on the Farmland Fertility Algorithm for Improved Intrusion Detection Systems. J. Netw. Syst. Manag. 2022, 30, 1–27. [Google Scholar] [CrossRef]
  12. Abd Elminaam, D.S.; Nabil, A.; Ibraheem, S.A.; Houssein, E.H. An efficient marine predators algorithm for feature selection. IEEE Access 2021, 9, 60136–60153. [Google Scholar] [CrossRef]
  13. Faramarzi, A.; Heidarinejad, M.; Stephens, B.; Mirjalili, S. Equilibrium optimizer: A novel optimization algorithm. Knowl.-Based Syst. 2020, 191, 105190. [Google Scholar] [CrossRef]
  14. Elgamal, Z.M.; Yasin, N.M.; Sabri, A.Q.M.; Sihwail, R.; Tubishat, M.; Jarrah, H. Improved equilibrium optimization algorithm using elite opposition-based learning and new local search strategy for feature selection in medical datasets. Computation 2021, 9, 68. [Google Scholar] [CrossRef]
  15. AbdElminaam, D.S.; Neggaz, N.; Gomaa, I.A.E.; Ismail, F.H.; Elsawy, A. AOM-MPA: Arabic Opinion Mining using Marine Predators Algorithm based Feature Selection. In Proceedings of the 2021 International Mobile, Intelligent and Ubiquitous Computing Conference (MIUCC), Cairo, Egypt, 26—27 May 2021; pp. 395–402. [Google Scholar]
  16. Abd Elminaam, D.S.; Neggaz, N.; Ahmed, I.A.; Abouelyazed, A.E.S. Swarming behavior of Harris hawks optimizer for Arabic opinion mining. Comput. Mater. Contin. 2021, 69, 4129–4149. [Google Scholar] [CrossRef]
  17. Shaban, H.; Houssein, E.H.; Pérez-Cisneros, M.; Oliva, D.; Hassan, A.Y.; Ismaeel, A.A.; AbdElminaam, D.S.; Deb, S.; Said, M. Identification of Parameters in Photovoltaic Models through a Runge Kutta Optimizer. Mathematics 2021, 9, 2313. [Google Scholar] [CrossRef]
  18. Abdelminaam, D.S.; Said, M.; Houssein, E.H. Turbulent flow of water-based optimization using new objective function for parameter extraction of six photovoltaic models. IEEE Access 2021, 9, 35382–35398. [Google Scholar] [CrossRef]
  19. Deb, S.; Abdelminaam, D.S.; Said, M.; Houssein, E.H. Recent Methodology-Based Gradient-Based Optimizer for Economic Load Dispatch Problem. IEEE Access 2021, 9, 44322–44338. [Google Scholar] [CrossRef]
  20. Deb, S.; Houssein, E.H.; Said, M.; AbdElminaam, D.S. Performance of Turbulent Flow of Water Optimization on Economic Load Dispatch Problem. IEEE Access 2021, 9, 77882–77893. [Google Scholar] [CrossRef]
  21. El-Ashmawi, W.H.; Abd Elminaam, D.S. A modified squirrel search algorithm based on improved best fit heuristic and operator strategy for bin packing problem. Appl. Soft Comput. 2019, 82, 105565. [Google Scholar] [CrossRef]
  22. Abdul-Minaam, D.S.; Al-Mutairi, W.M.E.S.; Awad, M.A.; El-Ashmawi, W.H. An adaptive fitness-dependent optimizer for the one-dimensional bin packing problem. IEEE Access 2020, 8, 97959–97974. [Google Scholar] [CrossRef]
  23. Dizaji, Z.A.; Gharehchopogh, F.S. A hybrid of ant colony optimization and chaos optimization algorithms approach for software cost estimation. Indian J. Sci. Technol. 2015, 8, 128. [Google Scholar] [CrossRef]
  24. Gharehchopogh, F.S.; Abdollahzadeh, B. An efficient harris hawk optimization algorithm for solving the travelling salesman problem. Clust. Comput. 2022, 25, 1981–2005. [Google Scholar] [CrossRef]
  25. Gharehchopogh, F.S.; Farnad, B.; Alizadeh, A. A modified farmland fertility algorithm for solving constrained engineering problems. Concurr. Comput. Pract. Exp. 2021, 33, e6310. [Google Scholar] [CrossRef]
  26. Zaman, H.R.R.; Gharehchopogh, F.S. An improved particle swarm optimization with backtracking search optimization algorithm for solving continuous optimization problems. Eng. Comput. 2021, 2021, 1–35. [Google Scholar] [CrossRef]
  27. Goldanloo, M.J.; Gharehchopogh, F.S. A hybrid OBL-based firefly algorithm with symbiotic organisms search algorithm for solving continuous optimization problems. J. Supercomput. 2022, 78, 3998–4031. [Google Scholar] [CrossRef]
  28. Lan, P.; Xia, K.; Pan, Y.; Fan, S. An improved equilibrium optimizer algorithm and its application in LSTM neural network. Symmetry 2021, 13, 1706. [Google Scholar] [CrossRef]
  29. Ma, H.; Simon, D.; Siarry, P.; Yang, Z.; Fei, M. Biogeography-based optimization: A 10-year review. IEEE Trans. Emerg. Top. Comput. Intell. 2017, 1, 391–407. [Google Scholar] [CrossRef]
  30. Niccolai, A.; Bettini, L.; Zich, R. Optimization of electric vehicles charging station deployment by means of evolutionary algorithms. Int. J. Intell. Syst. 2021, 36, 5359–5383. [Google Scholar] [CrossRef]
  31. Das, S.; Mullick, S.S.; Suganthan, P.N. Recent advances in differential evolution–an updated survey. Swarm Evol. Comput. 2016, 27, 1–30. [Google Scholar] [CrossRef]
  32. Saidala, R.K.; Devarakonda, N. Multi-swarm whale optimization algorithm for data clustering problems using multiple cooperative strategies. Int. J. Intell. Syst. Appl. 2018, 11, 36. [Google Scholar] [CrossRef]
  33. Mirjalili, S. Genetic algorithm. In Evolutionary Algorithms and Neural Networks; Springer: Berlin/Heidelberg, Germany, 2019; pp. 43–55. [Google Scholar]
  34. Elsisi, M. Optimal design of nonlinear model predictive controller based on new modified multitracker optimization algorithm. Int. J. Intell. Syst. 2020, 35, 1857–1878. [Google Scholar] [CrossRef]
  35. Massobrio, R.; Toutouh, J.; Nesmachnow, S.; Alba, E. Infrastructure deployment in vehicular communication networks using a parallel multiobjective evolutionary algorithm. Int. J. Intell. Syst. 2017, 32, 801–829. [Google Scholar] [CrossRef]
  36. Simon, D. Evolutionary Optimization Algorithms; John Wiley & Sons: Hoboken, NJ, USA, 2013. [Google Scholar]
  37. Garg, H. Multi-objective optimization problem of system reliability under intuitionistic fuzzy set environment using Cuckoo Search algorithm. J. Intell. Fuzzy Syst. 2015, 29, 1653–1669. [Google Scholar] [CrossRef]
  38. Bolaji, A.L.; Al-Betar, M.A.; Awadallah, M.A.; Khader, A.T.; Abualigah, L.M. A comprehensive review: Krill Herd algorithm (KH) and its applications. Appl. Soft Comput. 2016, 49, 437–446. [Google Scholar] [CrossRef]
  39. Garg, H. A hybrid PSO-GA algorithm for constrained optimization problems. Appl. Math. Comput. 2016, 274, 292–305. [Google Scholar] [CrossRef]
  40. Eusuff, M.M.; Lansey, K.E. Water distribution network design using the shuffled frog leaping algorithm. In Proceedings of the Bridging the Gap: Meeting the World’s Water and Environmental Resources Challenges, Orlando, FL, USA, 20–24 May 2001; pp. 1–8. [Google Scholar]
  41. Ma, H.; Ye, S.; Simon, D.; Fei, M. Conceptual and numerical comparisons of swarm intelligence optimization algorithms. Soft Comput. 2017, 21, 3081–3100. [Google Scholar] [CrossRef]
  42. Garg, V.; Deep, K. Performance of Laplacian Biogeography-Based Optimization Algorithm on CEC 2014 continuous optimization benchmarks and camera calibration problem. Swarm Evol. Comput. 2016, 27, 132–144. [Google Scholar] [CrossRef]
  43. Yang, G.P.; Liu, S.Y.; Zhang, J.K.; Feng, Q.X. Control and synchronization of chaotic systems by an improved biogeography-based optimization algorithm. Appl. Intell. 2013, 39, 132–143. [Google Scholar] [CrossRef]
  44. García-Torres, J.M.; Damas, S.; Cordón, O.; Santamaría, J. A case study of innovative population-based algorithms in 3D modeling: Artificial bee colony, biogeography-based optimization, harmony search. Expert Syst. Appl. 2014, 41, 1750–1762. [Google Scholar] [CrossRef]
  45. Ma, H. An analysis of the equilibrium of migration models for biogeography-based optimization. Inf. Sci. 2010, 180, 3444–3464. [Google Scholar] [CrossRef]
46. Ma, H.; Ni, S.; Sun, M. Equilibrium species counts and migration model tradeoffs for biogeography-based optimization. In Proceedings of the 48th IEEE Conference on Decision and Control (CDC) Held Jointly with 2009 28th Chinese Control Conference, Shanghai, China, 15–18 December 2009; pp. 3306–3310. [Google Scholar]
  47. Ma, H.; Fei, M.; Ding, Z.; Jin, J. Biogeography-based optimization with ensemble of migration models for global numerical optimization. In Proceedings of the 2012 IEEE Congress on Evolutionary Computation, Brisbane, Australia, 10–15 June 2012; pp. 1–8. [Google Scholar]
  48. Gong, W.; Cai, Z.; Ling, C.X.; Li, H. A real-coded biogeography-based optimization with mutation. Appl. Math. Comput. 2010, 216, 2749–2758. [Google Scholar] [CrossRef]
  49. Niu, Q.; Zhang, L.; Li, K. A biogeography-based optimization algorithm with mutation strategies for model parameter estimation of solar and fuel cells. Energy Convers. Manag. 2014, 86, 1173–1185. [Google Scholar] [CrossRef]
  50. Roy, P.; Mandal, D. Quasi-oppositional biogeography-based optimization for multi-objective optimal power flow. Electr. Power Components Syst. 2011, 40, 236–256. [Google Scholar] [CrossRef]
  51. Kim, S.S.; Byeon, J.H.; Lee, S.; Liu, H. A grouping biogeography-based optimization for location area planning. Neural Comput. Appl. 2015, 26, 2001–2012. [Google Scholar] [CrossRef]
  52. Feng, Q.; Liu, S.; Wu, Q.; Tang, G.; Zhang, H.; Chen, H. Modified biogeography-based optimization with local search mechanism. J. Appl. Math. 2013, 2013, 960524. [Google Scholar] [CrossRef]
  53. Lim, W.L.; Wibowo, A.; Desa, M.I.; Haron, H. A biogeography-based optimization algorithm hybridized with tabu search for the quadratic assignment problem. Comput. Intell. Neurosci. 2016, 2016, 27. [Google Scholar] [CrossRef]
  54. Yang, Y. A modified biogeography-based optimization for the flexible job shop scheduling problem. Math. Probl. Eng. 2015, 2015, 184643. [Google Scholar] [CrossRef]
  55. Li, X.; Yin, M. Hybrid differential evolution with biogeography-based optimization for design of a reconfigurable antenna array with discrete phase shifters. Int. J. Antennas Propag. 2011, 2011, 685629. [Google Scholar] [CrossRef]
  56. Sinha, S.; Bhola, A.; Panchal, V.; Singhal, S.; Abraham, A. Resolving mixed pixels by hybridization of biogeography based optimization and ant colony optimization. In Proceedings of the 2012 IEEE Congress on Evolutionary Computation, Brisbane, Australia, 10–15 June 2012; pp. 1–6. [Google Scholar]
  57. Wang, G.G.; Gandomi, A.H.; Alavi, A.H. An effective krill herd algorithm with migration operator in biogeography-based optimization. Appl. Math. Model. 2014, 38, 2454–2462. [Google Scholar] [CrossRef]
  58. Heidari, A.A.; Abbaspour, R.A.; Jordehi, A.R. An efficient chaotic water cycle algorithm for optimization tasks. Neural Comput. Appl. 2017, 28, 57–85. [Google Scholar] [CrossRef]
  59. Krithiga, R.; Ilavarasan, E. A Novel Hybrid Algorithm to Classify Spam Profiles in Twitter. Webology 2020, 17, 260–279. [Google Scholar]
  60. Sawhney, R.; Mathur, P.; Shankar, R. A firefly algorithm based wrapper-penalty feature selection method for cancer diagnosis. In Proceedings of the International Conference on Computational Science and Its Applications, Melbourne, Australia, 2–5 July 2018; Springer: Berlin/Heidelberg, Germany, 2018; pp. 438–449. [Google Scholar]
  61. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
  62. Faris, H.; Mafarja, M.M.; Heidari, A.A.; Aljarah, I.; Ala’M, A.Z.; Mirjalili, S.; Fujita, H. An efficient binary salp swarm algorithm with crossover scheme for feature selection problems. Knowl.-Based Syst. 2018, 154, 43–67. [Google Scholar] [CrossRef]
  63. Sayed, G.I.; Khoriba, G.; Haggag, M.H. A novel chaotic salp swarm algorithm for global optimization and feature selection. Appl. Intell. 2018, 48, 3462–3481. [Google Scholar] [CrossRef]
  64. Harifi, S.; Khalilian, M.; Mohammadzadeh, J.; Ebrahimnejad, S. Emperor Penguins Colony: A new metaheuristic algorithm for optimization. Evol. Intell. 2019, 12, 211–226. [Google Scholar] [CrossRef]
  65. Zheng, T.; Luo, W. An improved squirrel search algorithm for optimization. Complexity 2019, 2019, 6291968. [Google Scholar] [CrossRef]
  66. Wang, Y.; Du, T. An improved squirrel search algorithm for global function optimization. Algorithms 2019, 12, 80. [Google Scholar] [CrossRef]
  67. Li, S.; Chen, H.; Wang, M.; Heidari, A.A.; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst. 2020, 111, 300–323. [Google Scholar] [CrossRef]
  68. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine predators algorithm: A nature-inspired Metaheuristic. Expert Syst. Appl. 2020, 152, 113377. [Google Scholar] [CrossRef]
  69. Houssein, E.H.; Abdelminaam, D.S.; Hassan, H.N.; Al-Sayed, M.M.; Nabil, E. A Hybrid Barnacles Mating Optimizer Algorithm With Support Vector Machines for Gene Selection of Microarray Cancer Classification. IEEE Access 2021, 9, 64895–64905. [Google Scholar] [CrossRef]
  70. Zhao, W.; Wang, L.; Mirjalili, S. Artificial hummingbird algorithm: A new bio-inspired optimizer with its engineering applications. Comput. Methods Appl. Mech. Eng. 2022, 388, 114194. [Google Scholar] [CrossRef]
  71. Halim, A.H.; Ismail, I.; Das, S. Performance assessment of the metaheuristic optimization algorithms: An exhaustive review. Artif. Intell. Rev. 2021, 54, 2323–2409. [Google Scholar] [CrossRef]
  72. Liu, M.; Li, Y.; Huo, Q.; Li, A.; Zhu, M.; Qu, N.; Chen, L.; Xia, M. A two-way parallel slime mold algorithm by flow and distance for the travelling salesman problem. Appl. Sci. 2020, 10, 6180. [Google Scholar] [CrossRef]
  73. Premkumar, M.; Jangir, P.; Sowmya, R.; Alhelou, H.H.; Heidari, A.A.; Chen, H. MOSMA: Multi-objective slime mould algorithm based on elitist non-dominated sorting. IEEE Access 2020, 9, 3229–3248. [Google Scholar] [CrossRef]
74. İzci, D.; Ekinci, S. Comparative performance analysis of slime mould algorithm for efficient design of proportional–integral–derivative controller. Electrica 2021, 21, 151–159. [Google Scholar] [CrossRef]
  75. Kumari, S.; Chugh, R. A novel four-step feedback procedure for rapid control of chaotic behavior of the logistic map and unstable traffic on the road. Chaos Interdiscip. J. Nonlinear Sci. 2020, 30, 123115. [Google Scholar] [CrossRef] [PubMed]
  76. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  77. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-verse optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2016, 27, 495–513. [Google Scholar] [CrossRef]
78. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  79. Liu, Y.; Gong, D.; Sun, J.; Jin, Y. A many-objective evolutionary algorithm using a one-by-one selection strategy. IEEE Trans. Cybern. 2017, 47, 2689–2702. [Google Scholar] [CrossRef]
  80. El-Sehiemy, R.; Abou El Ela, A.A.; Shaheen, A. A multi-objective fuzzy-based procedure for reactive power-based preventive emergency strategy. In International Journal of Engineering Research in Africa; Trans Tech Publications Ltd.: Bäch, Switzerland, 2015; Volume 13, pp. 91–102. [Google Scholar]
  81. Shaheen, A.M.; El-Sehiemy, R.A. Application of multi-verse optimizer for transmission network expansion planning in power systems. In Proceedings of the 2019 International Conference on Innovative Trends in Computer Engineering (ITCE), Aswan, Egypt, 2–4 February 2019; pp. 371–376. [Google Scholar]
  82. Shaheen, A.M.; El-Sehiemy, R.A.; Elattar, E.E.; Abd-Elrazek, A.S. A modified crow search optimizer for solving non-linear OPF problem with emissions. IEEE Access 2021, 9, 43107–43120. [Google Scholar] [CrossRef]
  83. Jeddi, B.; Einaddin, A.H.; Kazemzadeh, R. A novel multi-objective approach based on improved electromagnetism-like algorithm to solve optimal power flow problem considering the detailed model of thermal generators. Int. Trans. Electr. Energy Syst. 2017, 27, e2293. [Google Scholar] [CrossRef]
  84. Yu, W.; Zhang, J. Multi-population differential evolution with adaptive parameter control for global optimization. In Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation, Dublin, Ireland, 12–16 July 2011; pp. 1093–1098. [Google Scholar]
  85. Pedrosa Silva, R.C.; Lopes, R.A.; Guimarães, F.G. Self-adaptive mutation in the differential evolution. In Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation, Dublin, Ireland, 12–16 July 2011; pp. 1939–1946. [Google Scholar]
  86. Gao, X.Z.; Wang, X.; Ovaska, S.J.; Zenger, K. A hybrid optimization method based on differential evolution and harmony search. Int. J. Comput. Intell. Appl. 2014, 13, 1450001. [Google Scholar] [CrossRef]
  87. Islam, S.M.; Das, S.; Ghosh, S.; Roy, S.; Suganthan, P.N. An adaptive differential evolution algorithm with novel mutation and crossover strategies for global numerical optimization. IEEE Trans. Syst. Man Cybern. Part B Cybern. 2011, 42, 482–500. [Google Scholar] [CrossRef]
88. Biswas, S.; Kundu, S.; Das, S.; Vasilakos, A.V. Teaching and learning best differential evolution with self adaptation for real parameter optimization. In Proceedings of the 2013 IEEE Congress on Evolutionary Computation, Cancun, Mexico, 20–23 June 2013; pp. 1115–1122. [Google Scholar]
  89. Zou, D.; Wu, J.; Gao, L.; Li, S. A modified differential evolution algorithm for unconstrained optimization problems. Neurocomputing 2013, 120, 469–481. [Google Scholar] [CrossRef]
  90. Bujok, P.; Tvrdík, J.; Poláková, R. Differential evolution with rotation-invariant mutation and competing-strategies adaptation. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC), Beijing, China, 6–11 July 2014; pp. 2253–2258. [Google Scholar]
  91. Gong, W.; Cai, Z.; Wang, Y. Repairing the crossover rate in adaptive differential evolution. Appl. Soft Comput. 2014, 15, 149–168. [Google Scholar] [CrossRef]
  92. Tran, D.H.; Cheng, M.Y.; Cao, M.T. Hybrid multiple objective artificial bee colony with differential evolution for the time–cost–quality tradeoff problem. Knowl.-Based Syst. 2015, 74, 176–186. [Google Scholar] [CrossRef]
  93. Chang, L.; Liao, C.; Lin, W.; Chen, L.L.; Zheng, X. A hybrid method based on differential evolution and continuous ant colony optimization and its application on wideband antenna design. Prog. Electromagn. Res. 2012, 122, 105–118. [Google Scholar] [CrossRef]
  94. Biswal, B.; Behera, H.S.; Bisoi, R.; Dash, P.K. Classification of power quality data using decision tree and chemotactic differential evolution based fuzzy clustering. Swarm Evol. Comput. 2012, 4, 12–24. [Google Scholar] [CrossRef]
  95. Chakraborti, T.; Chatterjee, A.; Halder, A.; Konar, A. Automated emotion recognition employing a novel modified binary quantum-behaved gravitational search algorithm with differential mutation. Expert Syst. 2015, 32, 522–530. [Google Scholar] [CrossRef]
  96. Basak, A.; Maity, D.; Das, S. A differential invasive weed optimization algorithm for improved global numerical optimization. Appl. Math. Comput. 2013, 219, 6645–6668. [Google Scholar] [CrossRef]
  97. Abdullah, A.; Deris, S.; Anwar, S.; Arjunan, S.N. An evolutionary firefly algorithm for the estimation of nonlinear biological model parameters. PLoS ONE 2013, 8, e56310. [Google Scholar] [CrossRef]
  98. Zheng, Y.J.; Xu, X.L.; Ling, H.F.; Chen, S.Y. A hybrid fireworks optimization method with differential evolution operators. Neurocomputing 2015, 148, 75–82. [Google Scholar] [CrossRef]
  99. Sharma, M.; Kaur, P. A Comprehensive Analysis of Nature-Inspired Meta-Heuristic Techniques for Feature Selection Problem. Arch. Comput. Methods Eng. 2021, 28, 1103–1127. [Google Scholar] [CrossRef]
100. Xue, Y.; Xue, B.; Zhang, M. Self-Adaptive particle swarm optimization for large-scale feature selection in classification. ACM Trans. Knowl. Discov. Data 2019, 13, 1–27. [Google Scholar] [CrossRef]
101. Zhang, K.; Lan, L.; Wang, Z.; Moerchen, F. Scaling up kernel SVM on limited resources: A low-rank linearization approach. In Proceedings of the Artificial Intelligence and Statistics, PMLR, La Palma, Spain, 21–23 April 2012; pp. 1425–1434. [Google Scholar]
  102. Costa, B.S.; Bernardes, A.C.; Pereira, J.V.; Zampa, V.H.; Pereira, V.A.; Matos, G.F.; Soares, E.A.; Soares, C.L.; Silva, A.F. Artificial intelligence in automated sorting in trash recycling. In Proceedings of the Anais do XV Encontro Nacional de Inteligência Artificial e Computacional, Sao Paulo, Brazil, 22–25 October 2018; pp. 198–205. [Google Scholar]
  103. Satvilkar, M. Image Based Trash Classification Using Machine Learning Algorithms for Recyclability Status. Ph.D. Thesis, National College of Ireland, Dublin, Ireland, 2018. [Google Scholar]
  104. Sousa, J.; Rebelo, A.; Cardoso, J.S. Automation of waste sorting with deep learning. In Proceedings of the 2019 XV Workshop de Visão Computacional (WVC), Sao Paulo, Brazil, 9–11 September 2019; pp. 43–48. [Google Scholar]
  105. Zhu, S.; Chen, H.; Wang, M.; Guo, X.; Lei, Y.; Jin, G. Plastic solid waste identification system based on near infrared spectroscopy in combination with support vector machine. Adv. Ind. Eng. Polym. Res. 2019, 2, 77–81. [Google Scholar] [CrossRef]
  106. Özkan, K.; Ergin, S.; Işık, Ş.; Işıklı, İ. A new classification scheme of plastic wastes based upon recycling labels. Waste Manag. 2015, 35, 29–35. [Google Scholar] [CrossRef] [PubMed]
  107. Aral, R.A.; Keskin, Ş.R.; Kaya, M.; Hacıömeroğlu, M. Classification of trashnet dataset based on deep learning models. In Proceedings of the 2018 IEEE International Conference on Big Data (Big Data), Seattle, WA, USA, 10–13 December 2018; pp. 2058–2062. [Google Scholar]
  108. Xie, S.; Girshick, R.; Dollár, P.; Tu, Z.; He, K. Aggregated residual transformations for deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1492–1500. [Google Scholar]
  109. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25. [Google Scholar] [CrossRef]
  110. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  111. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  112. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708. [Google Scholar]
  113. Hamida, M.A.; El-Sehiemy, R.A.; Ginidi, A.R.; Elattar, E.; Shaheen, A.M. Parameter identification and state of charge estimation of Li-Ion batteries used in electric vehicles using artificial hummingbird optimizer. J. Energy Storage 2022, 51, 104535. [Google Scholar] [CrossRef]
  114. Abid, M.S.; Apon, H.J.; Morshed, K.A.; Ahmed, A. Optimal Planning of Multiple Renewable Energy-Integrated Distribution System with Uncertainties Using Artificial Hummingbird Algorithm. IEEE Access 2022, 10, 40716–40730. [Google Scholar] [CrossRef]
  115. Ramadan, A.; Kamel, S.; Hassan, M.H.; Ahmed, E.M.; Hasanien, H.M. Accurate Photovoltaic Models Based on an Adaptive Opposition Artificial Hummingbird Algorithm. Electronics 2022, 11, 318. [Google Scholar] [CrossRef]
  116. Sadoun, A.M.; Najjar, I.R.; Alsoruji, G.S.; Abd-Elwahed, M.; Elaziz, M.A.; Fathy, A. Utilization of improved machine learning method based on artificial hummingbird algorithm to predict the tribological behavior of Cu-Al2O3 nanocomposites synthesized by in situ method. Mathematics 2022, 10, 1266. [Google Scholar] [CrossRef]
  117. Yang, M.; Thung, G. Classification of trash for recyclability status. CS229 Proj. Rep. 2016, 2016, 3. [Google Scholar]
  118. Zheng, Y.; Yang, C.; Merkulov, A. Breast cancer screening using convolutional neural network and follow-up digital mammography. In Computational Imaging III; International Society for Optics and Photonics: Bellingham, WA, USA, 2018; Volume 10669, p. 1066905. [Google Scholar]
  119. He, K.; Zhang, X.; Ren, S.; Sun, J. Identity mappings in deep residual networks. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; Springer: Berlin/Heidelberg, Germany, 2016; pp. 630–645. [Google Scholar]
  120. Zayed, M.E.; Zhao, J.; Li, W.; Elsheikh, A.H.; Abd Elaziz, M. A hybrid adaptive neuro-fuzzy inference system integrated with equilibrium optimizer algorithm for predicting the energetic performance of solar dish collector. Energy 2021, 235, 121289. [Google Scholar] [CrossRef]
121. Aarts, E.H.; Lenstra, J.K. Local Search in Combinatorial Optimization; Princeton University Press: Princeton, NJ, USA, 2003. [Google Scholar]
122. Tizhoosh, H.R. Opposition-based learning: A new scheme for machine intelligence. In Proceedings of the International Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, Web Technologies and Internet Commerce (CIMCA-IAWTIC’06), Vienna, Austria, 28–30 November 2005; Volume 1, pp. 695–701. [Google Scholar]
  123. Long, W.; Jiao, J.; Liang, X.; Cai, S.; Xu, M. A random opposition-based learning grey wolf optimizer. IEEE Access 2019, 7, 113810–113825. [Google Scholar] [CrossRef]
  124. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  125. Abualigah, L.; Yousri, D.; Abd Elaziz, M.; Ewees, A.A.; Al-Qaness, M.A.; Gandomi, A.H. Aquila optimizer: A novel meta-heuristic optimization algorithm. Comput. Ind. Eng. 2021, 157, 107250. [Google Scholar] [CrossRef]
  126. Hashim, F.A.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W.; Mirjalili, S. Henry gas solubility optimization: A novel physics-based algorithm. Future Gener. Comput. Syst. 2019, 101, 646–667. [Google Scholar] [CrossRef]
  127. Poli, R.; Kennedy, J.; Blackwell, T. Particle swarm optimization. Swarm Intell. 2007, 1, 33–57. [Google Scholar] [CrossRef]
  128. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  129. Hashim, F.A.; Hussain, K.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W. Archimedes optimization algorithm: A new metaheuristic algorithm for solving optimization problems. Appl. Intell. 2021, 51, 1531–1551. [Google Scholar] [CrossRef]
  130. Zhao, W.; Zhang, Z.; Wang, L. Manta ray foraging optimization: An effective bio-inspired optimizer for engineering applications. Eng. Appl. Artif. Intell. 2020, 87, 103300. [Google Scholar] [CrossRef]
  131. Shabani, A.; Asgarian, B.; Salido, M.; Gharebaghi, S.A. Search and rescue optimization algorithm: A new optimization method for solving constrained engineering optimization problems. Expert Syst. Appl. 2020, 161, 113698. [Google Scholar] [CrossRef]
132. Rabano, S.L.; Cabatuan, M.K.; Sybingco, E.; Dadios, E.P.; Calilung, E.J. Common garbage classification using mobilenet. In Proceedings of the 2018 IEEE 10th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment and Management (HNICEM), Baguio City, Philippines, 29 November–2 December 2018; pp. 1–4. [Google Scholar]
  133. Kennedy, T. OscarNet: Using transfer learning to classify disposable waste. CS230 Report: Deep Learning; Stanford University: Stanford, CA, USA, 2018. [Google Scholar]
  134. Zhang, Q.; Zhang, X.; Mu, X.; Wang, Z.; Tian, R.; Wang, X.; Liu, X. Recyclable waste image recognition based on deep learning. Resour. Conserv. Recycl. 2021, 171, 105636. [Google Scholar] [CrossRef]
  135. Yang, Z.; Li, D. WasNet: A Neural Network-Based Garbage Collection Management System. IEEE Access 2020, 8, 103984–103993. [Google Scholar] [CrossRef]
  136. Shi, C.; Xia, R.; Wang, L. A Novel Multi-Branch Channel Expansion Network for Garbage Image Classification. IEEE Access 2020, 8, 154436–154452. [Google Scholar] [CrossRef]
  137. Szegedy, C.; Ioffe, S.; Vanhoucke, V.; Alemi, A.A. Inception-v4, inception-resnet and the impact of residual connections on learning. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017. [Google Scholar]
Figure 1. The design framework of the AHA-ROBL based on FS for waste classification.
Figure 2. TrashNet dataset with one example of each class: (a) metal, (b) glass, (c) cardboard, (d) paper, (e) plastic, and (f) trash.
Figure 3. VGG19 architecture.
Figure 4. ResNet residual block.
Figure 5. Original residual unit.
Figure 6. (a) Convergence curve of the AHA-ROBL and AHA-OBL versus other algorithms using VGG19 and (b) convergence curve of the AHA-ROBL and AHA-OBL versus other algorithms using ResNet20.
Figure 7. (a) Boxplot of the AHA-ROBL and AHA-OBL approaches versus other swarm intelligence algorithms using VGG19 and (b) boxplot of the AHA-ROBL and AHA-OBL approaches versus other swarm intelligence algorithms using ResNet20.
Figure 8. (a) Avg. accuracy of the proposed approaches versus other algorithms using VGG19 and ResNet20 and (b) avg. fitness of the proposed approaches versus other algorithms using VGG19 and ResNet20.
Figure 9. (a) Avg. precision of the proposed approaches versus other algorithms using VGG19 and ResNet20 and (b) avg. recall of the proposed approaches versus other algorithms using VGG19 and ResNet20.
Figure 10. (a) Avg. F-score of the proposed approaches versus other algorithms using VGG19 and ResNet20 and (b) avg. sensitivity of the proposed approaches versus other algorithms using VGG19 and ResNet20.
Figure 11. (a) Avg. specificity of the proposed approaches versus other algorithms using VGG19 and ResNet20 and (b) avg. execution time of the proposed approaches versus other algorithms using VGG19 and ResNet20.
Table 1. Parameter settings of the AHA-ROBL and other computational algorithms.

Algorithm | Parameter Settings
Common settings | Max. no. of iterations It_Max = 200; number of runs = 30; population size N = 30; problem dimensions Dim = 3; B = 0.1; social effect parameter SE = 0.05; maximal limit fixed to 1; minimal limit fixed to 0
AHA [70] | r = rand, r ∈ [0, 1]; migration coefficient = 2n
HHO [124] | β = 1.5 (default)
SSA [61] | c1 = 2·exp(−(4t/It_Max)²); c2 = rand; c3 = rand
AO [125] | U = 0.00565; r1 = 10; ω = 0.005; α = 0.1; δ = 0.1; G1 ∈ [−1, 1]; G2 decreases from 2 to 0
HGSO [126] | Number of clusters = 2; M1 = 0.1; M2 = 0.2; α = β = 1; K = 1; l1 = 5 × 10⁻³; l2 = 1 × 10²; l3 = 1 × 10⁻²
PSO [127] | W_max = 0.9; W_min = 0.2; c1 = 2; c2 = 2
GWO [128] | a decreases linearly from 2 to 0
AOA [129] | c1 = 2; c2 = 6; λ = 0.9; μ = 0.1
MRFO [130] | b = 1; a decreases linearly from −1 to −2 (default); maximum count of iterations = 100
SCA [76] | P_a ∈ [0, 1]
MPA [68] | FADs = 0.2; P = 0.5
SAR [131] | MU = 2 × D for infeasible solutions; MU = 30 × D for feasible solutions
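For reproducibility, the shared settings in Table 1 map directly onto a small configuration object. The sketch below is a minimal illustration; the names and structure are our own assumptions, not taken from the authors' implementation.

```python
# Illustrative encoding of the common settings in Table 1; all names here
# are assumptions rather than the authors' actual configuration code.
COMMON_SETTINGS = {
    "max_iterations": 200,   # It_Max
    "num_runs": 30,          # independent runs per optimizer
    "population_size": 30,   # N
    "lower_bound": 0.0,      # minimal limit of the search space
    "upper_bound": 1.0,      # maximal limit of the search space
}

# Per-algorithm overrides follow the same pattern, e.g.:
ALGORITHM_PARAMS = {
    "HHO": {"beta": 1.5},
    "PSO": {"w_max": 0.9, "w_min": 0.2, "c1": 2.0, "c2": 2.0},
}
```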
Table 2. Comparison of the performance of deep feature models of the AHA-ROBL with other recent optimizers versus avg. fitness.

Algorithm | VGG19 Best | VGG19 Mean | VGG19 STD | ResNet Best | ResNet Mean | ResNet STD
AHA-ROBL | 0.0129 | 0.0219 | 0.0045 | 0.0533 | 0.0720 | 0.0085
AHA-OBL | 0.0148 | 0.0257 | 0.007707 | 0.0643 | 0.0842 | 0.01407
AHA | 0.0250 | 0.0391 | 0.0053 | 0.0972 | 0.1120 | 0.0072
HHO | 0.0291 | 0.0459 | 0.0051 | 0.1089 | 0.1280 | 0.0065
SSA | 0.0475 | 0.0532 | 0.0033 | 0.1067 | 0.1246 | 0.0090
AO | 0.0401 | 0.0485 | 0.0053 | 0.0968 | 0.1267 | 0.0109
HGS | 0.0517 | 0.0612 | 0.0045 | 0.1222 | 0.1407 | 0.0080
PSO | 0.0476 | 0.0508 | 0.0020 | 0.1104 | 0.1241 | 0.0043
GWO | 0.0522 | 0.0590 | 0.0031 | 0.1288 | 0.1404 | 0.0065
AOA | 0.0478 | 0.0568 | 0.0041 | 0.1264 | 0.1393 | 0.0070
MRFO | 0.04079 | 0.0322 | 0.0047 | 0.1142 | 0.1002 | 0.0088
SCA | 0.00195 | 0.0553 | 0.0475 | 0.0043 | 0.1302 | 0.1103
MPA | 0.0378 | 0.0300 | 0.0049 | 0.1051 | 0.0935 | 0.0082
SAR | 0.0513 | 0.0457 | 0.0025 | 0.1220 | 0.1155 | 0.0040
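The fitness reported in Table 2 is minimized, so lower values are better. In wrapper-based feature selection studies of this kind, fitness is typically a weighted sum of the classification error and the fraction of selected features; the following minimal sketch assumes that form, with the weight alpha and the 5-NN evaluator being our assumptions rather than the authors' documented setup.

```python
# Minimal sketch of a wrapper-style FS fitness; alpha = 0.99 and the
# k-NN evaluator are assumptions, not necessarily the paper's choices.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def fitness(mask: np.ndarray, X: np.ndarray, y: np.ndarray,
            alpha: float = 0.99) -> float:
    """Lower is better: weighted error plus selected-feature ratio."""
    if mask.sum() == 0:                      # guard: empty subsets are worst
        return 1.0
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=5),
                          X[:, mask.astype(bool)], y, cv=5).mean()
    return alpha * (1.0 - acc) + (1.0 - alpha) * mask.sum() / mask.size

# Toy usage on random placeholder data:
rng = np.random.default_rng(1)
X, y = rng.random((90, 20)), rng.integers(0, 3, 90)
print(round(fitness(rng.random(20) > 0.5, X, y), 4))
```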
Table 3. Comparison of the performance of deep feature models of the AHA-ROBL with other recent optimizers versus accuracy.

Algorithm | VGG19 Best | VGG19 Mean | VGG19 STD | ResNet Best | ResNet Mean | ResNet STD
AHA-ROBL | 0.9881 | 0.9791 | 0.0045 | 0.9486 | 0.9295 | 0.0085
AHA-OBL | 0.9860 | 0.9770 | 0.0007 | 0.9369 | 0.9180 | 0.0133
AHA | 0.9763 | 0.9626 | 0.0052 | 0.9051 | 0.8909 | 0.0070
HHO | 0.9723 | 0.9559 | 0.0049 | 0.8933 | 0.8750 | 0.0064
SSA | 0.9565 | 0.9510 | 0.0034 | 0.8972 | 0.8791 | 0.0091
AO | 0.9605 | 0.9536 | 0.0050 | 0.9051 | 0.8755 | 0.0109
HGS | 0.9526 | 0.9430 | 0.0045 | 0.8814 | 0.8627 | 0.0080
PSO | 0.9565 | 0.9535 | 0.0020 | 0.8933 | 0.8796 | 0.0044
GWO | 0.9526 | 0.9445 | 0.0035 | 0.8775 | 0.8631 | 0.0072
AOA | 0.9565 | 0.9474 | 0.0042 | 0.8775 | 0.8655 | 0.0064
MRFO | 0.9762 | 0.9695 | 0.0047 | 0.9288 | 0.9025 | 0.0087
SCA | 0.9684 | 0.9591 | 0.0045 | 0.8932 | 0.8749 | 0.0078
MPA | 0.9802 | 0.9710 | 0.0049 | 0.9288 | 0.9080 | 0.0079
SAR | 0.9644 | 0.9582 | 0.0026 | 0.8972 | 0.8880 | 0.0041
Table 4. Comparison of the performance of deep feature models of the AHA-ROBL with other recent optimizers versus avg. recall.

Algorithm | VGG19 Best | VGG19 Mean | VGG19 STD | ResNet Best | ResNet Mean | ResNet STD
AHA-ROBL | 0.9861 | 0.9774 | 0.0048 | 0.9418 | 0.9022 | 0.0172
AHA-OBL | 0.9853 | 0.9744 | 0.0077 | 0.9365 | 0.8980 | 0.0272
AHA | 0.9740 | 0.9595 | 0.0068 | 0.8866 | 0.8588 | 0.0128
HHO | 0.9703 | 0.9506 | 0.0081 | 0.8774 | 0.8402 | 0.0150
SSA | 0.9546 | 0.9440 | 0.0062 | 0.8682 | 0.8440 | 0.0140
AO | 0.9613 | 0.9471 | 0.0080 | 0.8623 | 0.8362 | 0.0161
HGS | 0.9469 | 0.9344 | 0.0061 | 0.8528 | 0.8247 | 0.0142
PSO | 0.9548 | 0.9470 | 0.0039 | 0.8716 | 0.8444 | 0.0126
GWO | 0.9447 | 0.9352 | 0.0061 | 0.8622 | 0.8261 | 0.0163
AOA | 0.9564 | 0.9388 | 0.0065 | 0.8471 | 0.8228 | 0.0132
MRFO | 0.9771 | 0.9674 | 0.0069 | 0.9229 | 0.8717 | 0.0171
SCA | 0.9650 | 0.9533 | 0.0075 | 0.8609 | 0.8338 | 0.0136
MPA | 0.9809 | 0.9683 | 0.0068 | 0.9226 | 0.8755 | 0.0194
SAR | 0.9615 | 0.9533 | 0.0044 | 0.8797 | 0.8580 | 0.0125
Table 5. Comparison of the performance of deep feature models of the AHA-ROBL with other recent optimizers versus avg. precision.

Algorithm | VGG19 Best | VGG19 Mean | VGG19 STD | ResNet Best | ResNet Mean | ResNet STD
AHA-ROBL | 0.9909 | 0.9756 | 0.0080 | 0.9520 | 0.9167 | 0.0155
AHA-OBL | 0.9875 | 0.9732 | 0.0101 | 0.9430 | 0.9075 | 0.0251
AHA | 0.9752 | 0.9544 | 0.0086 | 0.9008 | 0.8757 | 0.0125
HHO | 0.9663 | 0.9478 | 0.0076 | 0.8733 | 0.8560 | 0.0087
SSA | 0.9499 | 0.9393 | 0.0047 | 0.8970 | 0.8640 | 0.0147
AO | 0.9591 | 0.9449 | 0.0070 | 0.8902 | 0.8563 | 0.0170
HGS | 0.9453 | 0.9303 | 0.0061 | 0.8657 | 0.8439 | 0.0103
PSO | 0.9476 | 0.9368 | 0.0052 | 0.8747 | 0.8502 | 0.0116
GWO | 0.9445 | 0.9344 | 0.0041 | 0.8642 | 0.8476 | 0.0115
AOA | 0.9458 | 0.9346 | 0.0054 | 0.8776 | 0.8471 | 0.0128
MRFO | 0.9789 | 0.9630 | 0.0073 | 0.9195 | 0.8865 | 0.0156
SCA | 0.9699 | 0.9544 | 0.0082 | 0.9045 | 0.8632 | 0.0168
MPA | 0.9827 | 0.9660 | 0.0071 | 0.9219 | 0.8948 | 0.0140
SAR | 0.9554 | 0.9468 | 0.0045 | 0.8876 | 0.8692 | 0.0096
Table 6. Comparison of the performance of deep feature models of the AHA-ROBL with other recent optimizers versus avg. sensitivity.

Algorithm | VGG19 Best | VGG19 Mean | VGG19 STD | ResNet Best | ResNet Mean | ResNet STD
AHA-ROBL | 0.9922 | 0.9879 | 0.0030 | 0.8874 | 0.8836 | 0.0026
AHA-OBL | 0.9885 | 0.9869 | 0.0011 | 0.8980 | 0.8876 | 0.0073
AHA | 0.9740 | 0.9595 | 0.0068 | 0.8866 | 0.8588 | 0.01285
HHO | 0.9703 | 0.9506 | 0.0081 | 0.8774 | 0.8402 | 0.0150
SSA | 0.9546 | 0.9440 | 0.0062 | 0.8682 | 0.8440 | 0.0140
AO | 0.9613 | 0.9471 | 0.0080 | 0.8623 | 0.8362 | 0.01612
HGS | 0.9469 | 0.9344 | 0.0061 | 0.8528 | 0.8247 | 0.0142
PSO | 0.9548 | 0.9470 | 0.0039 | 0.8716 | 0.8444 | 0.01262
GWO | 0.9445 | 0.9352 | 0.00612 | 0.8622 | 0.8261 | 0.0163
AOA | 0.9564 | 0.9388 | 0.0065 | 0.8471 | 0.8228 | 0.0132
MRFO | 0.9772 | 0.9675 | 0.0069 | 0.9230 | 0.8718 | 0.0171
SCA | 0.9650 | 0.9534 | 0.0075 | 0.86093 | 0.8338 | 0.01365
MPA | 0.9809 | 0.9684 | 0.0068 | 0.9226 | 0.8755 | 0.0195
SAR | 0.9616 | 0.9533 | 0.0045 | 0.8797 | 0.8581 | 0.0126
Table 7. Comparison of the performance of deep feature models of the AHA-ROBL with other recent optimizers versus avg. specificity.

Algorithm | VGG19 Best | VGG19 Mean | VGG19 STD | ResNet Best | ResNet Mean | ResNet STD
AHA-ROBL | 0.9980 | 0.9971 | 0.0006 | 0.9934 | 0.9922 | 0.0008
AHA-OBL | 0.9967 | 0.9963 | 0.0002 | 0.9882 | 0.9873 | 0.0006
AHA | 0.9951 | 0.9925 | 0.0010 | 0.9805 | 0.9775 | 0.0015
HHO | 0.9945 | 0.9912 | 0.0010 | 0.9782 | 0.9742 | 0.0014
SSA | 0.9914 | 0.9903 | 0.0007 | 0.9788 | 0.9750 | 0.0019
AO | 0.9922 | 0.9907 | 0.0010 | 0.9805 | 0.9744 | 0.0022
HGS | 0.9907 | 0.9887 | 0.0009 | 0.9756 | 0.9717 | 0.0017
PSO | 0.9915 | 0.9908 | 0.0004 | 0.9780 | 0.9752 | 0.0009
GWO | 0.9906 | 0.9906 | 0.0008 | 0.9748 | 0.9717 | 0.0015
AOA | 0.9915 | 0.9896 | 0.0008 | 0.9749 | 0.9722 | 0.0013
MRFO | 0.9953 | 0.9939 | 0.00095 | 0.9854 | 0.9799 | 0.0018
SCA | 0.9939 | 0.9917 | 0.0009 | 0.9780 | 0.9742 | 0.0016
MPA | 0.9959 | 0.9942 | 0.0010 | 0.9854 | 0.98105 | 0.00166
SAR | 0.9929 | 0.9917 | 0.0005 | 0.9788 | 0.9770 | 0.0009
Table 8. Comparison of the performance of deep feature models of the AHA-ROBL with other recent optimizers versus F-score.

Algorithm | VGG19 Best | VGG19 Mean | VGG19 STD | ResNet Best | ResNet Mean | ResNet STD
AHA-ROBL | 0.9883 | 0.9759 | 0.0063 | 0.9418 | 0.9076 | 0.0153
AHA-OBL | 0.9860 | 0.9720 | 0.0098 | 0.9375 | 0.8995 | 0.0268
AHA | 0.9736 | 0.9555 | 0.0072 | 0.8846 | 0.8648 | 0.0118
HHO | 0.9660 | 0.9477 | 0.0075 | 0.8708 | 0.8448 | 0.0110
SSA | 0.9503 | 0.9397 | 0.0052 | 0.8756 | 0.8507 | 0.0130
AO | 0.9582 | 0.9445 | 0.0069 | 0.8723 | 0.8426 | 0.0154
HGS | 0.9451 | 0.9304 | 0.0059 | 0.8486 | 0.8306 | 0.0114
PSO | 0.9481 | 0.9429 | 0.0029 | 0.8680 | 0.8495 | 0.0097
GWO | 0.9426 | 0.9329 | 0.0044 | 0.8569 | 0.8330 | 0.0128
AOA | 0.9474 | 0.9346 | 0.0055 | 0.8508 | 0.8310 | 0.0118
MRFO | 0.9768 | 0.9642 | 0.0068 | 0.9193 | 0.8765 | 0.0150
SCA | 0.9625 | 0.9526 | 0.0065 | 0.8619 | 0.8435 | 0.0125
MPA | 0.9814 | 0.9662 | 0.0065 | 0.9192 | 0.8822 | 0.0158
SAR | 0.9572 | 0.9484 | 0.0041 | 0.8771 | 0.8609 | 0.0097
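Tables 3–8 report standard confusion-matrix metrics (recall is the same quantity as sensitivity). As a reference for how such values are obtained, the sketch below computes them on placeholder labels; the macro averaging and the one-vs-rest specificity reduction are our assumptions about the aggregation, since the text does not spell them out.

```python
# Confusion-matrix metrics of the kind reported in Tables 3-8
# (toy labels only; the averaging scheme is an assumption).
import numpy as np
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             precision_score, recall_score)

y_true = np.array([0, 1, 2, 2, 1, 0, 2, 1])
y_pred = np.array([0, 1, 2, 1, 1, 0, 2, 2])

print("accuracy   :", accuracy_score(y_true, y_pred))
print("precision  :", precision_score(y_true, y_pred, average="macro"))
print("recall     :", recall_score(y_true, y_pred, average="macro"))  # = sensitivity
print("f-score    :", f1_score(y_true, y_pred, average="macro"))

# Macro-averaged specificity via one-vs-rest counts from the confusion matrix.
cm = confusion_matrix(y_true, y_pred)
tn = cm.sum() - (cm.sum(axis=0) + cm.sum(axis=1) - np.diag(cm))
fp = cm.sum(axis=0) - np.diag(cm)
print("specificity:", np.mean(tn / (tn + fp)))
```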
Table 9. Comparison of the performance of deep feature models of the AHA-ROBL with other recent optimizers versus number of selected features.

Algorithm | VGG19 Best | VGG19 Mean | VGG19 STD | ResNet Best | ResNet Mean | ResNet STD
AHA-ROBL | 94.0000 | 117.9000 | 18.7458 | 138.0000 | 224.9000 | 40.8034
AHA-OBL | 73.0000 | 237.9000 | 90.5882 | 218.0000 | 385.5667 | 79.8156
AHA | 80.0000 | 210.8333 | 70.7487 | 294.0000 | 404.1333 | 61.8952
HHO | 99.0000 | 223.3667 | 70.9844 | 193.0000 | 425.5667 | 126.6580
SSA | 439.0000 | 464.2000 | 18.2575 | 449.0000 | 482.7667 | 14.1998
AO | 98.0000 | 260.5333 | 98.0597 | 232.0000 | 339.4667 | 59.3498
HGS | 422.0000 | 469.4333 | 21.9430 | 437.0000 | 475.6000 | 19.5141
PSO | 445.0000 | 471.6333 | 16.5143 | 455.0000 | 488.9000 | 16.1295
GWO | 222.0000 | 413.6667 | 121.3977 | 273.0000 | 491.7333 | 119.3106
AOA | 442.0000 | 477.2333 | 18.8634 | 463.0000 | 609.7333 | 128.2274
MRFO | 89.0000 | 210.2333 | 61.4142 | 250.0000 | 376.8333 | 71.0110
SCA | 36 | 69.6 | 27.8086 | 135.3 | 46.81 | –
MPA | 78.0000 | 135.7000 | 41.3139 | 105.0000 | 251.0333 | 95.6839
SAR | 411.0000 | 437.2000 | 16.0675 | 440.0000 | 469.2667 | 18.1278
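The counts in Table 9 presuppose a rule for turning an optimizer's continuous position vector into a binary feature mask. A common convention, assumed here rather than quoted from the paper, is to keep a feature whenever its position component exceeds 0.5.

```python
# Sketch of the usual 0.5-threshold binarization (the threshold and the
# 512-D feature dimension are illustrative assumptions).
import numpy as np

def feature_mask(position: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Keep the dimensions whose position value exceeds the threshold."""
    return position > threshold

rng = np.random.default_rng(0)
position = rng.random(512)                 # hypothetical deep-feature vector
print(int(feature_mask(position).sum()))   # number of selected features
```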
Table 10. Comparison of the performance of deep feature models of the AHA-ROBL with other recent optimizers versus avg. time (s).

Algorithm | VGG19 Best | VGG19 Mean | VGG19 STD | ResNet Best | ResNet Mean | ResNet STD
AHA-ROBL | 235.5352 | 342.9767 | 66.5081 | 364.7488 | 475.6399 | 58.0576
AHA-OBL | 179.1031 | 335.8094 | 75.5996 | 297.8248 | 428.1630 | 59.2680
AHA | 175.2588 | 313.5663 | 55.8632 | 342.3833 | 434.8414 | 47.3143
HHO | 339.1078 | 536.8270 | 121.4785 | 477.5016 | 902.9543 | 242.9377
SSA | 465.2132 | 487.6391 | 16.6067 | 474.1626 | 500.4554 | 12.7761
AO | 518.7569 | 601.9692 | 39.6790 | 531.2182 | 588.2628 | 30.3227
HGS | 5.4335 | 12.7695 | 4.9629 | 5.4464 | 13.0864 | 6.9104
PSO | 488.9208 | 499.3343 | 5.5008 | 492.8176 | 506.3241 | 5.2705
GWO | 401.2708 | 579.5520 | 90.2559 | 429.0840 | 640.4219 | 100.0631
AOA | 855.3195 | 915.3140 | 27.9433 | 906.2781 | 955.8382 | 25.4345
MRFO | 519.2461 | 687.2089 | 70.7906 | 669.6235 | 842.0057 | 70.53934
SCA | 70.3634 | 100.0193 | 29.2336 | 103.6779 | 153.3768 | 38.5945
MPA | 303.4597 | 372.0496 | 52.9391 | 395.3910 | 564.2305 | 100.2369
SAR | 463.6103 | 478.8563 | 6.7850 | 482.5578 | 499.3555 | 7.5595
Table 11. Wilcoxon rank-sum statistical test (p-values of the AHA-ROBL versus each competitor on the TrashNet dataset).

AHA-ROBL vs. | VGG19 | ResNet
AHA | 7.44 × 10⁻¹¹ | 2.65 × 10⁻¹¹
HHO | 2.44 × 10⁻¹¹ | 2.39 × 10⁻¹¹
SSA | 1.76 × 10⁻¹¹ | 2.63 × 10⁻¹¹
AO | 2.12 × 10⁻¹¹ | 2.64 × 10⁻¹¹
HGS | 2.04 × 10⁻¹¹ | 2.60 × 10⁻¹¹
PSO | 8.45 × 10⁻¹¹ | 2.21 × 10⁻¹¹
GWO | 1.75 × 10⁻¹¹ | 2.60 × 10⁻¹¹
AOA | 2.02 × 10⁻¹¹ | 2.60 × 10⁻¹¹
MRFO | 3.04 × 10⁻¹¹ | 3.720 × 10⁻¹¹
SCA | 3.852 × 10⁻¹¹ | 3.942 × 10⁻¹¹
MPA | 2.52 × 10⁻¹¹ | 2.73 × 10⁻¹¹
SAR | 2.08 × 10⁻¹¹ | 2.70 × 10⁻¹¹
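The entries in Table 11 are p-values of the Wilcoxon rank-sum test computed over the per-run results, and values this far below 0.05 indicate that the AHA-ROBL's advantage is statistically significant. A minimal sketch of such a comparison is given below; the data are synthetic placeholders, not the paper's run logs.

```python
# Wilcoxon rank-sum comparison of two optimizers across 30 runs
# (synthetic placeholder data, not the paper's actual results).
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(42)
aha_robl_runs = 0.022 + 0.004 * rng.standard_normal(30)
aha_runs      = 0.039 + 0.005 * rng.standard_normal(30)

stat, p_value = ranksums(aha_robl_runs, aha_runs)
print(f"Wilcoxon rank-sum p-value: {p_value:.2e}")
```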
Table 12. A comparative study of the AHA-ROBL based on pre-trained CNNs with existing algorithms for the TrashNet dataset.

Article | Methodology | Accuracy
[134] | Self-monitoring ResNet module | 95.80%
[135] | WasNet | 96%
[136] | Mb Xception | 94.34%
[8] | Inception ResNet | 88.60%
[107] | Fine-tuned DenseNet121 | 95%
[107] | DenseNet121 | 89%
[107] | Inception-ResNet V2 | 89%
[102] | Fine-tuned VGG16 | 93%
[102] | Fine-tuned AlexNet | 91%
[102] | ResNet | 88.66%
[137] | Inception-ResNet | 88.34%
[8] | Inception | 87.71%
[133] | Fine-tuned OscarNet | 88.42%
[102] | Modified kNN | 88%
[132] | Fine-tuned MobileNet | 87.20%
[103] | Modified XGB | 70.10%
[103] | Modified RF | 62%
[117] | Modified SVM | 63%
Our proposed model | AHA-ROBL (VGG19 and ResNet) | 98.81%
Our proposed model | AHA-OBL (VGG19 and ResNet) | 98.60%