Bio-Inspired Spotted Hyena Optimizer with Deep Convolutional Neural Network-Based Automated Food Image Classification

Food image classification, an interesting subdomain of Computer Vision (CV) technology, focuses on the automatic classification of food items represented through images. This technology has gained immense attention in recent years thanks to its widespread applications, spanning dietary monitoring and nutrition studies to restaurant recommendation systems. By leveraging the developments in Deep-Learning (DL) techniques, especially the Convolutional Neural Network (CNN), food image classification has evolved into an effective means of interacting with and understanding the nuances of the culinary world. A deep CNN-based automated food image classification method uses DL approaches, particularly CNNs, for the automatic categorization of images of distinct kinds of foods. The current research article develops a Bio-Inspired Spotted Hyena Optimizer with a Deep Convolutional Neural Network-based Automated Food Image Classification (SHODCNN-FIC) approach. The main objective of the SHODCNN-FIC method is to recognize and classify food images into distinct types. The presented SHODCNN-FIC technique exploits the DL model with a hyperparameter tuning approach for the classification of food images. To accomplish this objective, the SHODCNN-FIC method exploits the DCNN-based Xception model to derive the feature vectors. Furthermore, the SHODCNN-FIC technique uses the SHO algorithm for optimal hyperparameter selection of the Xception model. The SHODCNN-FIC technique uses the Extreme Learning Machine (ELM) model for the detection and classification of food images. A detailed set of experiments was conducted to demonstrate the better food image classification performance of the proposed SHODCNN-FIC technique. The wide range of simulation outcomes confirmed the superior performance of the SHODCNN-FIC method over other DL models.


Introduction
Food image detection and identification are active research subjects in the domain of Computer Vision (CV). "Food" is a developing area of interest for the CV and multimedia communities [1], while image detection and identification also remain highly significant problems in the medical field. In the literature, a food recording tool called "FoodLog" has been developed that supports users in recording their daily meals with the aid of an image retrieval technique [2]. However, it is extremely challenging to perform food image analyses. For instance, the identification of food products in images is still a challenging process due to low inter-class variance and high intra-class variance [3]. Furthermore, many food classes have not yet been effectively classified. Therefore, automated food detection is a developing area of research not only in the image recognition domain but also in social media research [4]. A significant number of researchers have paid attention to this domain due to its advantages from a medical viewpoint [5]. Automated food identification tools can support and facilitate decision-making in terms of calorie calculation, food quality detection, diet monitoring systems to overcome obesity, etc. [6]. In general, food is naturally deformable and varies broadly in appearance [7]. Owing to the high intra-class and low inter-class variances of food images, standard techniques may not be able to detect the complex features in them, which makes food identification a challenging task for conventional methods [8].
Recently, several developments have occurred in the domain of dietary assessment based on multimedia approaches, e.g., food image analysis [9]. In the literature, an automated image-based nutritional assessment technique was proposed with the following key stages: food image identification, recognition of food products, weight or quantity estimation, and lastly, nutritional and caloric value assessment [10]. In recent years, developments in Machine Learning (ML), image processing, and specifically Convolutional Neural Networks (CNNs) and Deep-Learning (DL) techniques have heavily benefited image classification and detection, including the problem of food image identification [11]. Researchers have developed diverse food detection systems, despite which it remains challenging to find a satisfactory and efficient solution for food identification and classification with high accuracy. This is because there exist extensive types of food products and extremely complicated hybrid food products in food images [12]. Therefore, it is tremendously challenging to detect all food items accurately, since a variety of food items can appear similar in terms of shape, color, or context, and are sometimes not even differentiable to the human eye [13].
Given this background, the current research article develops the Bio-Inspired Spotted Hyena Optimizer with a Deep Convolutional Neural Network-based Automated Food Image Classification (SHODCNN-FIC) approach. The presented SHODCNN-FIC method exploits the DL model with hyperparameter tuning approaches for the classification of food images. To achieve this, the SHODCNN-FIC method exploits the DCNN-based Xception model to derive the feature vectors. In addition, the SHODCNN-FIC technique uses the SHO algorithm for the optimal hyperparameter selection of the Xception model, and the ELM model for the final detection and classification of food images. The rest of the paper is organized as follows. Section 2 discusses the related works, and Section 3 details the proposed model. Then, Section 4 provides the analytical results, and Section 5 concludes the paper.

Related Works
Shah and Bhavsar [14] introduced the Depth-Restricted CNN (DRCNN) method, in which the Transfer Learning (TL) technique was applied to a few frameworks, such as AlexNet, ResNet-50, Inception-v3, VGG16, and VGG19. The method incorporated a Batch Normalization (BN) approach that considerably enhances performance with a lower number of parameters. Chopra and Purwar [15] introduced a food image detection system composed of a CNN, GA, and PSO to improve the outcomes. The CNN was utilized in this study for the classification of food images, and it was supplemented with GA and PSO to ensure an efficient classification outcome. In the literature [16], an enhanced VGG16 framework was proposed as a food classification technique. This approach employed the Asymmetric Convolution Block (ACB) to change the convolution kernels and enhance the effectiveness of the standard technique. This technique also involved BN and pooling layers to enrich the normalization. The authors noted that an attention mechanism should be integrated with the CNN technique to cope with complications such as high texture similarity, complex backgrounds, and contextual interference.
Chopra and Purwar [17] developed the Squirrel Search Algorithm (SSA) to provide optimum solutions for multiple thresholds. This technique implemented the CNN method to identify food images. The study then suggested that the Enhanced SSA (ESSA) increases food detection accuracy. Yadav and Chand [18] recommended automatic food classification techniques with the help of the DL algorithm. In this study, both the VGG-16 and SqueezeNet CNNs were exploited for the classification of food images. These networks demonstrated significantly high effectiveness owing to two tasks, namely fine-tuning of the hyperparameters and data augmentation. The developed VGG16 framework thus enhanced the performance of the automated food image classification process. In an earlier study [19], a CNN approach was introduced and employed to recognize and classify food images. A pre-trained Inception-v3 CNN algorithm was implemented using TL to initialize the customized CNN model. By utilizing the pre-trained method, learning was accelerated, and therefore, more proficient results were achieved. The study also showed that data augmentation should be executed on the training set, since it enhances the performance.
Pan et al. [20] recommended a novel classification technique based on the DL approach for the automatic identification of food items. A combinational CNN (CBNet) was created with a subnet integrating method in this study. First, two different NNs were employed to learn the important features. Then, a feature fusion module combined the features from the sub-networks. Shermila et al. [21] introduced a new DL-based Food Item Classification (DEEPFIC) method in which the image was processed using the sigmoid stretching algorithm to improve the quality of the images and eliminate the noise. Afterward, the preprocessed image was segmented by employing the Improved Watershed Segmentation (IWS2) technique. In this study, the RNN approach was utilized for the extraction of the features, which were then normalized through the dragonfly algorithm. Finally, a Bi-LSTM was employed for the classification of food items.
Though the existing automatic food image classification algorithms are valuable, these methods have critical shortcomings that need to be resolved. One important limitation is that these methods often have a narrow scope, identifying food items only from certain cultural contexts or cuisines and therefore generalizing poorly whenever they encounter unconventional or diverse dishes. Furthermore, these models struggle when handling variations in food presentation, including changes in angles, lighting, or plating styles, which are common in real-time scenarios. Therefore, a research need exists to develop a highly effective and efficient hyperparameter optimization method, particularly one customized for food image classification tasks, since the hyperparameter tuning process is a crucial aspect of enhancing model performance. This involves the exploration of novel techniques or the adaptation of existing ones to overcome the unique challenges posed by food image datasets. The hyperparameter tuning process ultimately affects the generalization ability and performance of the models. DL models are extremely complicated and have various hyperparameters, namely batch sizes, learning rates, regularization strengths, and layer depths, among others. These hyperparameters considerably affect the model's ability to learn from data and to fit patterns while preventing over-fitting. Without accurate tuning, a DL model may converge slowly, become trapped in a sub-optimal solution, or fail to adapt to certain features of the dataset. By systematically adjusting the hyperparameters through techniques such as random search, grid search, or metaheuristic optimization algorithms, DL algorithms can be fine-tuned to accomplish high accuracy, a fast convergence rate, and the best generalization. These outcomes make the models highly effective in different applications. Addressing these research gaps can advance the field of automatic food image classification using hyperparameter tuning and contribute to the development of highly efficient, accurate, and interpretable models with real-time applications in fields such as food waste reduction, dietary analysis, and restaurant menu management.
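To make the tuning strategies named above concrete, the loop below sketches a plain random search over a hypothetical hyperparameter space. The space, the `train_eval` callback, and all names are illustrative assumptions, not the configuration used in this work:

```python
import random

def random_search(train_eval, space, n_trials=20, seed=0):
    """Randomly sample configurations from `space` and keep the best one.

    train_eval: callback that trains/evaluates a model for one configuration
                and returns a validation score (higher is better).
    space:      dict mapping hyperparameter names to lists of candidate values.
    """
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        # Sample one value per hyperparameter, independently and uniformly.
        cfg = {name: rng.choice(values) for name, values in space.items()}
        score = train_eval(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```

Metaheuristics such as the SHO used in this work replace the blind sampling step with a guided position update, at the cost of a more involved search loop.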

The Proposed Model
The current research article is focused on the design and development of an automated food image detection and classification algorithm named the SHODCNN-FIC approach. The main objective of the SHODCNN-FIC method is to recognize and classify food images into distinct types. The presented SHODCNN-FIC technique exploits the DL model with hyperparameter tuning strategies for the classification of food images. It involves different stages of operations, namely Xception-based feature extraction, SHO-based hyperparameter tuning, and ELM-based classification. Figure 1 shows the entire procedure of the SHODCNN-FIC algorithm.

Feature Extraction Using Xception Model
The SHODCNN-FIC technique uses the DCNN-based Xception model to derive the feature vectors. The CNN model has proved to be an extraordinary performer on different image-classification problems in various fields [22]. The concept of weight sharing in the CNN eases the task of finding the high-level components in the images and diminishes the vanishing gradient problem. The CNN architecture incorporates the fully connected layer, the convolution layer, and the pooling layer. The convolution layer applies filters whose chief aim is to extract the features from the images. The pooling and convolution layers alone, however, yield low performance in capturing and holding the basic data in food images. The final layer is the fully connected layer, which uses ReLU and takes the high-level components of the food image to group them into different labeled classes.
In the XceptionNet model, the conventional convolution layers are replaced with depth-wise separable convolution layers. These decouple the cross-channel and spatial correlations in the feature maps, and the separate mapping of both correlations is added to the basic operations of the network. XceptionNet thereby replaces the main structure of the Inception model. XceptionNet, with its 36 convolution layers, is divided into 14 modules. First, the cross-channel correlations of the input are mapped through 1 × 1 convolutions across the different input channels to arrive at unified feature maps; depth-wise convolution layers are then applied over these maps.
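The core building block can be illustrated with a naive NumPy sketch of a depth-wise separable convolution (valid padding and stride 1, purely didactic; the real Xception layers are optimized GPU kernels):

```python
import numpy as np

def depthwise_separable_conv(x, depth_kernels, point_weights):
    """Didactic depth-wise separable convolution (valid padding, stride 1).

    x:             (H, W, C_in) input feature map
    depth_kernels: (k, k, C_in) one spatial filter per input channel
    point_weights: (C_in, C_out) 1x1 convolution weights
    """
    H, W, C_in = x.shape
    k = depth_kernels.shape[0]
    out_h, out_w = H - k + 1, W - k + 1
    depth_out = np.zeros((out_h, out_w, C_in))
    # Depth-wise step: spatial correlations, one channel at a time.
    for c in range(C_in):
        for i in range(out_h):
            for j in range(out_w):
                depth_out[i, j, c] = np.sum(
                    x[i:i + k, j:j + k, c] * depth_kernels[:, :, c])
    # Point-wise step: a 1x1 convolution that mixes the channels.
    return depth_out @ point_weights
```

Compared with a standard convolution, the spatial filtering and the channel mixing are factorized into two cheaper operations, which is the efficiency argument behind Xception.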

Hyperparameter Tuning Using the SHO Algorithm
The SHO algorithm is applied for the hyperparameter tuning process. This technique is based on the hunting strategy of the spotted hyena [23]. It comprises four phases: searching, encircling, hunting, and attacking. After recognizing the location of the prey, the hyenas continuously approach and encircle it. The best-searched individual is regarded as the current optimum, and the rest update their positions accordingly [24]. The distance between the prey and a spotted hyena is given in Equation (1):

D_h = |B · P_p(t) − P(t)| (1)

where B denotes a coefficient vector, D_h signifies the distance between the prey and the searching individual, t denotes the iteration count, and P_p and P denote the positions of the prey and of the searching individual at iteration t. The position update is given in Equation (2):

P(t + 1) = P_p(t) − E · D_h (2)

where E signifies a coefficient vector; the location of a searching individual at iteration t + 1 thus depends on the target point and the distance to it. The coefficient vectors in Equations (1) and (2) are expressed as follows:

B = 2 · rd_1 (3)

E = 2h · rd_2 − h (4)

where rd_1 and rd_2 are random numbers that lie in the range [0, 1], and h is a control factor that drops linearly from 5 to 0:

h = 5 − I_iter · (5 / M_iter) (5)

where M_iter denotes the maximal iteration count and I_iter indexes the iterations (natural numbers excluding 0). Spotted hyenas frequently hunt in groups to encircle the target. Assuming that the best-searched individual is closest to the target, the remaining individuals take its location as the target location, form a cluster, and cooperatively move toward the optimum:

D_h = |B · P_h − P_k|, P_k = P_h − E · D_h (6)

C_h = P_k + P_{k+1} + ... + P_{k+N}, N = C_nos(P_h, P_{h+1}, P_{h+2}, ..., P_h + M) (7)

where P_h indicates the position of the best hyena in the group, P_k the positions of the remaining hyenas, C_h the set of N optimal solutions, N the number of spotted hyenas, and C_nos the number of candidate solutions counted. The coefficient vector E changes continuously while the control factor h gradually decreases. Once the absolute value of E becomes less than 1, the attack moment begins and the positions are updated toward the cluster, P(t + 1) = C_h / N; otherwise (|E| > 1), the individuals disperse and continue to search for prey. Extending the search phase helps find a better hunting position and ensures an effective global search. Figure 2 depicts the steps involved in the SHO algorithm.
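Assuming the update rules above are applied to a real-valued search space, with the explore/attack switch driven by |E| as described, the SHO loop can be sketched as follows. The population size, the bounds, and the random-peer choice in the exploration branch are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def spotted_hyena_optimizer(obj, dim, n_hyenas=10, max_iter=50,
                            lb=-5.0, ub=5.0, seed=0):
    """Minimal SHO sketch; minimizes obj over a box-constrained space."""
    rng = np.random.default_rng(seed)
    P = rng.uniform(lb, ub, (n_hyenas, dim))      # searching individuals
    best = min(P, key=obj).copy()                 # best-searched position so far
    for t in range(max_iter):
        h = 5.0 - t * (5.0 / max_iter)            # control factor, drops 5 -> 0
        for i in range(n_hyenas):
            B = 2.0 * rng.random(dim)             # coefficient vector B
            E = 2.0 * h * rng.random(dim) - h     # coefficient vector E
            D_h = np.abs(B * best - P[i])         # Eq. (1): distance to the prey
            if np.linalg.norm(E) < 1.0:           # attack phase (|E| < 1)
                P[i] = best - E * D_h             # Eq. (2): move toward the prey
            else:                                 # search phase: move w.r.t. a random peer
                peer = P[rng.integers(n_hyenas)]
                P[i] = peer - E * np.abs(B * peer - P[i])
            P[i] = np.clip(P[i], lb, ub)
            if obj(P[i]) < obj(best):             # keep the best-searched individual
                best = P[i].copy()
    return best
```

In the SHODCNN-FIC setting, each position vector would encode a candidate set of Xception hyperparameters and `obj` would be the classification-error fitness of Equation (8).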
The SHO system derives a Fitness Function (FF) to attain the best classification performance. It assigns a positive value to represent the quality of each candidate solution. The reduction in classifier errors is taken as the FF, as given in Equation (8):

fitness = (No. of misclassified instances / Total no. of instances) × 100 (8)
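Equation (8) amounts to the classification error rate in percent; a direct transcription:

```python
def fitness(y_true, y_pred):
    """Classification error rate in percent, used as the SHO fitness value."""
    misclassified = sum(t != p for t, p in zip(y_true, y_pred))
    return misclassified / len(y_true) * 100
```

Lower fitness values therefore correspond to better hyperparameter candidates.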

Image Classification Using the ELM Model
In this study, the ELM algorithm has been applied to the food image classification process. ELM is a Feed-Forward Neural Network (FFNN) model for ML that provides various benefits compared to other techniques, including RBFNN and BPNN [25]. It does not need adjustment of the structural parameters, which makes it simpler and highly effective. In ELM, the weights connecting the input and hidden layers, along with the thresholds of the Hidden Layer (HL) neurons, are randomly generated and do not require adjustment during training. Consider N training samples, where x_i refers to the input vector of the i-th sample and t_i its target. The network output for L hidden neurons is

Σ_{f=1..L} β_f · g(w_f · x_i + b_f) = o_i, i = 1, ..., N (9)

In Equation (9), g(x) denotes the activation function, w_f the input weights, b_f the threshold of the f-th HL neuron, and β_f the output weight. The objective of the ELM technique is to minimize the output error:

min_β ||Hβ − T|| (10)

where H is the hidden-layer output matrix and T ∈ R^{N×m} is the target matrix. The output weight β is attained by resolving the least-squares solution

β = H^+ T (11)

where H^+ represents the generalized (Moore-Penrose) inverse of the output matrix H.
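Equations (9)-(11) translate into a few lines of NumPy: the input-to-hidden weights and thresholds are drawn once at random, and only the output weights are solved for in closed form. The sigmoid activation and Gaussian initialization here are assumptions, not the paper's stated choices:

```python
import numpy as np

def train_elm(X, T, n_hidden=50, seed=0):
    """ELM training sketch: random hidden layer, least-squares output weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # input-to-hidden weights (fixed)
    b = rng.normal(size=n_hidden)                 # hidden thresholds b_f (fixed)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))        # hidden-layer output matrix H
    beta = np.linalg.pinv(H) @ T                  # beta = H^+ T, Eq. (11)
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Forward pass of Eq. (9) with the learned output weights."""
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```

Because training reduces to a single pseudoinverse, ELM fits the Xception feature vectors far faster than iterative back-propagation would.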

Figure 2. Steps involved in the SHO algorithm.

Results and Discussion
The proposed model was simulated using the Python 3.8.5 release. It was executed on a PC with the following configuration: i5-8600K CPU, GeForce GTX 1050 Ti 4 GB GPU, 16 GB RAM, 250 GB SSD, and 1 TB HDD. The food classification outcomes of the SHODCNN-FIC algorithm were tested using the Indian food classification dataset [26]. The dataset includes a total of 1800 samples under six classes, as defined in Table 1; Figure 3 represents some of the sample images.
The food classification results of the SHODCNN-FIC technique with a 60:40 TR set/TS set split are reported in Table 2 and Figure 5. The outcomes infer the proficient performance of the SHODCNN-FIC technique on the different food classes. On the 60% TR set, the SHODCNN-FIC technique attained average accuracy, precision, recall, F-score, and MCC values of 85.90%, 60.55%, 57.95%, 57.54%, and 50.42%, respectively. On the 40% TS set, the method accomplished average accuracy, precision, recall, F-score, and MCC values of 85.69%, 60.39%, 57.10%, 57.49%, and 49.79%, respectively.
The food classification results of the SHODCNN-FIC technique with a 70:30 TR set/TS set split are reported in Table 3 and Figure 9. On the 70% TR set, the SHODCNN-FIC technique achieved average accuracy, precision, recall, F-score, and MCC values of 85.98%, 60.95%, 57.79%, 58.68%, and 50.76%, respectively. On the 30% TS set, it yielded average values of 84.81%, 58.08%, 54.51%, 55.32%, and 46.88%, respectively.
In Table 4 and Figure 12, the overall comparative analysis outcomes between the proposed SHODCNN-FIC system and other approaches are given. The outcomes show that the ResNet50 model achieved the worst results, whereas the NASNetLarge, MobileNet, ResNet101, and ResNet152 models obtained slightly closer performances. Meanwhile, the InceptionResNet model gained a considerably high performance. However, the SHODCNN-FIC technique demonstrated promising performance with maximum accuracy, precision, recall, F-score, and MCC values of 85.98%, 60.95%, 57.79%, 58.68%, and 50.76%, respectively, together with a minimal Computation Time (CT) of 2.03 s. Overall, the SHODCNN-FIC technique exhibits an enhanced food image classification outcome.
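For reference, the per-class metrics reported in Tables 2-4 can be computed from one-vs-rest confusion-matrix counts as follows (a generic sketch, not the authors' evaluation code):

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, F-score, and MCC from one-vs-rest counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_score = 2 * precision * recall / (precision + recall)
    # Matthews Correlation Coefficient, robust to class imbalance.
    mcc_num = tp * tn - fp * fn
    mcc_den = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    mcc = mcc_num / mcc_den
    return accuracy, precision, recall, f_score, mcc
```

Averaging these per-class values over the six food classes yields the table entries quoted above.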

Conclusions
This paper designed an automated food image detection and classification algorithm named SHODCNN-FIC. The main objective of the SHODCNN-FIC technique is to recognize and classify distinct types of food images. The presented SHODCNN-FIC technique exploits the DL model with hyperparameter tuning strategies for the classification of food images. The SHODCNN-FIC technique uses the Extreme Learning Machine (ELM) model for the detection and classification of food images. A detailed set of experiments was conducted to illustrate the better food image classification performance of the SHODCNN-FIC technique. The key contributions of the current study are summarized below.
(a) The development of an automated SHODCNN-FIC algorithm, including Xception-based feature extraction, SHO-based hyperparameter tuning, and ELM-based classification for food image classification. To the best of the authors' knowledge, the SHODCNN-FIC approach has never been reported in the literature.
(b) The development of a new technique, i.e., SHODCNN-FIC, that combines bio-inspired optimization and DL for automatic food image classification. The proposed technique is highly useful in many real-time applications involving dietary analysis and restaurant menu management.
(c) The SHODCNN-FIC approach leverages the power of deep learning, using the DCNN-based Xception model to extract the feature vectors from food images. Furthermore, the optimum fine-tuning of the hyperparameters of the Xception model using the SHO technique improves the performance of the DL model.
(d) The application of the ELM model for the actual detection and classification of food images. ELM is known for its high accuracy and fast training in different machine-learning tasks.


Figure 4 illustrates the classification outcomes of the SHODCNN-FIC method on 60:40 of the TR set/TS set. Figure 4a,b depict the confusion matrices generated by the SHODCNN-FIC approach. The outcome indicates that the SHODCNN-FIC method detected and categorized all six class labels.

Figure 5. Average values of the SHODCNN-FIC algorithm at 60:40 TR set/TS set. To evaluate the performance of the SHODCNN-FIC method on the 60:40 TR set/TS set, the TR and TS accuracy curves were plotted and are shown in Figure 6. The TR and TS accuracy values illustrate the performance of the SHODCNN-FIC technique over various numbers of epochs. The figure shows meaningful insights into the learning task and the generalization abilities of the SHODCNN-FIC method. With an increase in the number of epochs, both the TR and TS accuracy curves improved. The SHODCNN-FIC technique attained improved testing accuracy, indicating that it can detect patterns in the TR and TS datasets.



Figure 6. Accuracy curve of the SHODCNN-FIC algorithm at 60:40 of the TR set/TS set.

Figure 7. Loss curve of the SHODCNN-FIC algorithm at 60:40 of the TR set/TS set.



Figure 7 displays the overall TR and TS loss values of the SHODCNN-FIC method on 60:40 of the TR set/TS set over a different number of epochs. The TR loss outcomes show that the model's loss reduced over an increasing number of epochs. Primarily, the loss values were reduced as the model modified the weights to minimize the prediction error on the TR and TS datasets. The loss curves illustrate the extent to which the model fits the training data. Both the TR and TS loss values steadily decreased, and this shows that the SHODCNN-FIC technique effectually learned the patterns exhibited in the TR and TS datasets. The SHODCNN-FIC approach adjusted the parameters to minimize the discrepancy between the prediction and the original training label.


Figure 8 shows the classification outcomes of the SHODCNN-FIC method at 70:30 of the TR set/TS set. Figure 8a,b show the confusion matrices generated by the SHODCNN-FIC technique. The outcome indicates that the SHODCNN-FIC method detected and categorized all six class labels. Likewise, Figure 8c demonstrates the PR examination outcomes of the SHODCNN-FIC method. The figure infers that the SHODCNN-FIC technique attained the maximum PR performance under all six classes. Lastly, Figure 8d depicts the ROC examination outcomes of the SHODCNN-FIC approach. The figure portrays the promising performance of the SHODCNN-FIC approach with maximum ROC values under all six class labels.

Figure 9. Average values of the SHODCNN-FIC algorithm at 70:30 TR set/TS set. To assess the performance of the SHODCNN-FIC method on the 70:30 TR set/TS set, the TR and TS accuracy curves were determined and are shown in Figure 10. The TR and TS accuracy curves illustrate the performance of the SHODCNN-FIC technique over several epochs. The figure offers meaningful insights into the learning task and the generalization abilities of the SHODCNN-FIC method.

Figure 11 shows the overall TR and TS loss values of the SHODCNN-FIC method at 70:30 of the TR set/TS set over a varying number of epochs. The TR loss values illustrate that the model loss reduced over an increasing number of epochs. Primarily, the loss values were reduced as the technique modified the weights to minimize the prediction error on the TR and TS data. The loss curves show the extent to which the model fits the training data. Both the TR and TS loss values steadily reduced, which shows that the SHODCNN-FIC model effectually learned the patterns displayed in both the TR and TS data. The SHODCNN-FIC method adjusted the parameters to minimize the discrepancy between the predicted and the original training label.

Figure 10. Accuracy curve of the SHODCNN-FIC algorithm at 70:30 of the TR set/TS set.


Figure 12. Comparative analysis outcomes of the SHODCNN-FIC algorithm and other recent methods.

The Computation Time (CT) analysis outcomes of the SHODCNN-FIC technique and other existing DL approaches are demonstrated in Table 5 and Figure 13. The outcomes show the enhanced classification results of the SHODCNN-FIC technique with a minimal CT of 2.03 s.

Figure 13. CT outcomes of the SHODCNN-FIC algorithm and other recent methods.


Table 1. Details on the database.

Table 2. Food classification outcomes of the SHODCNN-FIC algorithm at 60:40 TR set/TS set.

Table 3. Food classification outcomes of the SHODCNN-FIC algorithm at 70:30 TR set/TS set.

Table 4. Comparative analysis outcomes of the SHODCNN-FIC algorithm and other recent approaches.

Table 5. CT outcomes of the SHODCNN-FIC algorithm and other recent methods.