Bioinspired Garra Rufa Optimization-Assisted Deep Learning Model for Object Classification on Pedestrian Walkways

Object detection in pedestrian walkways is a crucial area of research that is widely used to improve pedestrian safety. Manually examining and labeling abnormal actions is not only challenging but also tedious, owing to the broad application of video surveillance systems and the large number of videos captured. Thus, an automatic surveillance system that identifies anomalies has become indispensable for computer vision (CV) researchers. Recent advancements in deep learning (DL) algorithms have attracted wide attention for CV tasks such as object detection and object classification based on supervised learning, which requires labels. The current research study designs the bioinspired Garra rufa optimization-assisted deep learning model for object classification (BGRODL-OC) technique on pedestrian walkways. The objective of the BGRODL-OC technique is to recognize the presence of pedestrians and objects in the surveillance video. To achieve this goal, the BGRODL-OC technique primarily applies the GhostNet feature extractor to produce a set of feature vectors. In addition, the BGRODL-OC technique makes use of the GRO algorithm for the hyperparameter tuning process. Finally, object classification is performed via the attention-based long short-term memory (ALSTM) network. A wide range of experimental analyses was conducted to validate the superior performance of the BGRODL-OC technique, and the experimental values established its superiority over other existing approaches.


Introduction
Recent technological developments such as computer vision (CV) and surveillance cameras (CCTV) have been utilized to protect pedestrians and support safer walking practices. For this purpose, the characteristics of risk constituents must be identified to save pedestrians from accidents [1]. Numerous CV techniques have been developed by focusing on processes such as activity learning, feature extraction, data acquisition, behavioral learning, and scene learning. The main objective of such techniques is to support operations such as video processing, traffic monitoring, scene identification, multicamera-based challenges and methods, human behavior learning, activity analysis, vehicle prediction and monitoring, and anomaly prediction [2]. The current study focuses on anomaly prediction, a subfield of behavioral learning from captured visual images. Anomaly prediction methods learn common behavior with the help of training processes. The presence of significant deviations from the standard process is defined as "anomalous" [3]. Specific instances of anomalies include cross-walking, the presence of vehicles on footpaths, individuals collapsing while walking, signal avoidance at traffic junctions, vehicles making U-turns at red signals, and the unexpected distribution of people in a crowd [4].
Pedestrian detection involves the automated identification of walking persons from video and image sequences, as well as the accurate localization of the pedestrian region [5]. However, pedestrians are nonrigid objects that appear in difficult environments, in various poses, under varying light conditions, and with changing levels of occlusion in real road situations. These scenarios increase the difficulty of accurate pedestrian identification [6]. With the fast growth of artificial intelligence (AI) technology, pedestrian identification has become a significant research area in CV. Pedestrian identification approaches are commonly categorized into two types, namely, conventional and deep learning (DL)-based identification techniques [7].
The DL technique is an advanced domain in the machine learning (ML) field that aims to learn complex models from modest representations. DL algorithms commonly depend on artificial neural networks (ANNs) that contain numerous hidden layers with nonlinear processing components [8]. The term "deep" corresponds to the presence of several hidden layers that are employed to transform the representation of the data. By applying the idea of feature learning, each hidden layer of the neural network maps its input data to a new representation [9]. Each layer achieves a higher level of abstraction than the representation in the preceding layer. In DL frameworks, this hierarchy of feature learning at numerous levels is ultimately mapped to the output of the ML technique in one architecture [10]. Like ML algorithms, DL approaches can be categorized into two classes, namely, unsupervised and supervised learning techniques, comprising deep neural networks (DNNs).
The current research paper outlines the design of the bioinspired Garra rufa optimization-assisted deep learning model for object classification (BGRODL-OC) technique on pedestrian walkways. The BGRODL-OC technique primarily applies the GhostNet feature extractor to produce a set of feature vectors. Moreover, the BGRODL-OC technique makes use of the GRO algorithm in the hyperparameter tuning process. Finally, the object classification process is performed using the attention-based long short-term memory (ALSTM) network. A wide range of experimental analyses was conducted to validate the superior performance of the BGRODL-OC method. In short, the key contributions of the paper are summarized herewith.

•
An effective BGRODL-OC technique is developed in this study, comprising GhostNet feature extraction, GRO-based hyperparameter tuning, and ALSTM-based classification for pedestrian walkway detection. To the best of the authors' knowledge, the BGRODL-OC technique has never been reported in the literature.

•
The GhostNet model is employed to produce a collection of feature vectors. This model is known for its efficiency and effectiveness in deep-learning-based image analysis and in improving the accuracy of object detection.

•
The GRO algorithm is employed for the hyperparameter tuning process, which helps in fine-tuning the model's parameters to improve its performance in object classification.

•
The ALSTM model is presented for the object classification process, as it can capture long-term dependencies in video data. The attention mechanism enhances the model's ability to focus on relevant information, thus further improving the accuracy.

Related Works
Abdullah and Jalal [11] presented a new technique using the DL framework and conditional random field (CRF). In this study, preprocessing was executed first, and superpixels were then produced utilizing an enhanced watershed transform. Then, the objects were segmented using a CRF. The relevant field was localized by employing the conditional probability, while a temporal relationship was applied to find the areas. At last, a DL-based hierarchical network method was exploited for the identification and classification of the objects. In [12], the authors proposed the automatic DL-based anomaly detection technology in pedestrian walkways (DLADT-PW) technique for vulnerable transport users' protection. The suggested technique comprised preprocessing as the main phase, implemented to eliminate the noise and increase the quality of the image. Similarly, the mask region convolutional neural network (Mask-RCNN) with the DenseNet technique was utilized for the detection process. Harrou et al. [13] developed an innovative deep hybrid learning approach with a fully-directed attention module. The presented technique increased the modeling ability of the variational autoencoder (VAE) by combining it with the LSTM algorithm and employing a self-attention module at multiple phases of the VAE method.
Al Sulaie [14] introduced a novel golden jackal optimizer with DL-based anomaly detection in pedestrian walkways (GJODL-ADPW). In the developed GJODL-ADPW method, the Xception technique was utilized for efficient feature extraction, and the GJO technique was employed for the optimal selection of the hyperparameters. Lastly, the bidirectional LSTM (Bi-LSTM) methodology was implemented with the aim of detecting the anomalies. Jayasuriya [15] suggested a method with the help of the convolutional neural network (CNN) approach. In this study, localization was performed on a predesigned map, and an extended Kalman filter (EKF) combined the annotations. In addition, an omnidirectional camera was added to the technique to enhance the effective field of view (FoV) of the landmark detection method. An information-theoretic approach was also exploited to select a better viewpoint. Alia et al. [16] designed a hybrid DL technique along with a visualization model. This architecture had two key mechanisms: firstly, deep optical flow and wheel visualization were utilized to produce the motion data maps; secondly, a false-reduction method and an EfficientNet-B0-based classifier were incorporated.
Kolluri and Das [17] implemented a technique employing a hybrid metaheuristic optimizer with DL (IPDC-HMODL), which operates in three phases. Primarily, the IPDC-HMODL approach employed multiple-modal object detectors based on the RetinaNet and YOLO-v5 frameworks. Secondarily, the IPDC-HMODL technique implemented the kernel extreme learning machine (KELM) method for the classification of pedestrians. Lastly, the hybrid salp swarm optimization (HSSO) method was utilized to optimally adapt the parameters. Alsolai et al. [18] introduced the innovative sine cosine algorithm with DL-based anomaly detection in pedestrian walkways (SCADL-ADPW) technique. This approach employed the VGG-16 framework to produce the feature vectors. In addition, the SCA algorithm was used for optimal hyperparameter tuning. In this study, the LSTM approach was exploited for anomaly detection.

The Proposed Model
In the current manuscript, an automatic object classification technique on pedestrian walkways, termed the BGRODL-OC method, is developed. The objective of the BGRODL-OC algorithm is to recognize the presence of pedestrians and objects in the surveillance video. It encompasses several processes, namely the GhostNet feature extractor, GRO-based hyperparameter selection, and ALSTM-based classification. Figure 1 illustrates the workflow of the BGRODL-OC algorithm.

Feature Extraction: GhostNet Model
In this phase, the GhostNet model is applied for the feature extraction process. The fundamental breakthrough of the GhostNet model is the introduction of the Ghost modules, which reduce the number of convolution kernels and the amount of computation through cheap linear transformations that generate the redundant feature maps [19]. It replaces a typical convolution with a primary convolution followed by cheap convolutions. The input is X ∈ R(C × H × W), in which H, W, and C correspond to the height, width, and number of channels; once the network is trained, X first passes through the 1 × 1 primary convolution:

Y = X ∗ F 1×1. (1)

Here, F 1×1 refers to the pointwise convolution and Y ∈ R(H × W × C out) denotes the intrinsic features. Next, the cheap convolution is used to generate further features, which are concatenated with the output of the primary convolution:

O = Concat([Y, Y ∗ F dp]). (2)

In Equation (2), F dp refers to the depthwise convolution. Though the GhostNet model reduces the computational cost, its ability to capture spatial information is reduced. Thus, the GhostNet-V2 model adds an attention mechanism, namely DFC attention, based on the FC layer, which has low hardware requirements. By capturing the dependencies among long-distance pixels, it enhances the representation while keeping the inference fast. The computation of the DFC attention is as follows.
Assume Z ∈ R(H × W × C) is regarded as HW tokens with z i ∈ R C, Z = {z 11, z 12, . . ., z HW}. The features are aggregated along the vertical and horizontal directions, respectively, as formulated in the following equations:

a′ hw = ∑ h′=1..H F H h,h′w ⊙ z h′w, h = 1, . . ., H, w = 1, . . ., W, (3)

a hw = ∑ w′=1..W F W w,hw′ ⊙ a′ hw′, h = 1, . . ., H, w = 1, . . ., W. (4)

Here, F H and F W denote the transformation weights, ⊙ indicates the elementwise multiplication, and A = {a 11, a 12, . . ., a HW} represents the generated attention map. The original feature Z is taken as the input, while the long-range dependency along both directions is captured. Equations (3) and (4) are the representations of the DFC attention that aggregate the pixels along the 2D vertical and horizontal directions, respectively. These equations partially share the transformation weights and are implemented with convolutions to increase the inference speed, which also avoids time-consuming tensor operations. Two depthwise convolutions, sized 1 × K H and K W × 1, are used, independent of the feature map sizes, so as to adapt to input images of various resolutions.

Hyperparameter Tuning: GRO Algorithm
The current study uses the GRO algorithm to adjust the hyperparameters of the GhostNet architecture. The GRO algorithm is a procedure that employs mathematical rules to identify better solutions for a given problem [20]. The procedure starts by determining an objective function that is normally connected to an engineering problem. Then, a group of parameters is defined and the constraints are handled to attain the required outcomes. Once these are determined, the optimization procedure begins, employing mathematical models to identify the most efficient and effective parameter values for resolving the issue. The optimization procedure is iterative, i.e., modifying the distribution of resources increases the performance. The GRO technique is performed in three parts: GRO initialization, leader crossover, and follower crossover.

Procedure 1. GRO initialization
The basic theory of GRO is to split the particles into several groups; each group takes a unique leader at either the global or the local optimum position of the group. The GRO system also deploys major rules, such as the notion that all the fishes can act as followers, while the leaders rely on the connected global optimum point of each group. A portion of the followers can switch from very weak leaders to stronger leaders who accomplished a better optimal value in the preceding iteration; it is essential to first adopt this maximal follower portion as a percentage. Further, the primary parameters considered are the acceleration coefficients (c1, c2) and the inertia weight (ω).
followers number = (total number of particles − number of groups) / number of groups (5)
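As a quick illustration of Equation (5), a small helper (hypothetical, for checking the arithmetic only) can compute the follower count per group:

```python
def followers_per_group(total_particles, num_groups):
    """Eq. (5): after assigning one leader per group, the remaining fishes
    are split evenly among the groups as followers."""
    return (total_particles - num_groups) / num_groups
```

For instance, 55 particles split into 5 groups leaves 10 followers per group.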

Procedure 2. Leaders' crossover of GRO
The GRO approach contains two leader crossover procedures that need to be considered. The primary one involves the selection of new leaders for all the groups, whereas the secondary one involves the election of a great leader who leads the maximal number of followers. These stages serve as the guiding rules of the approach and constitute a vital element, offering flexibility to the method.

Procedure 3. Followers' crossover of GRO
There is a high probability of finding the optimal solution in the problem space owing to the flexible motion among the groups. Some optimization approaches are not flexible enough to move from one search region to another, which can cause failures on the most difficult problems; this arises from the presence of numerous parameters and higher-order differential equations in such problems. The GRO model deploys a process to search a large problem space by utilizing follower crossovers among the groups. First, an arbitrarily chosen fish in each group moves to a stronger leader. Second, one step is taken in the direction of each leader by evaluating the position (X) and velocity (v), as employed in Equations (6) and (7), respectively.
The fitness function (FF) of each group configuration is recomputed, covering every follower and leader. Equations (8) and (9) define the next phase of the GRO method. Here, f refers to the maximal feasibility of the moving fish. The pseudocode of the GRO algorithm is given in Algorithm 1.

Algorithm 1: Pseudocode of the GRO algorithm
Select the primary values (number of particles, number of leaders, FF limits)
Followers number = n / number of leaders
Compute the FF for the n particles and sort them by FF
While t < iterations do
    For i = 1 to number of leaders
        Update the follower particles of leader(i) using the optimizer system
    End for
    For i = 2 to number of leaders
        mobile_fishes(i) = Random × followers(i)
        Followers(i) = Max(0, followers(i) − mobile_fishes(i))
        Total number of mobile_fishes = total number of mobile_fishes + mobile_fishes(i)
    End for
    Followers(1) = total number of mobile_fishes + Followers(1)
    Define the sub-global solution for all the leaders
    Compute the global solution
End while

In the GRO approach, fitness selection is a vital factor. The encoded solution is applied to measure the goodness of a candidate solution. Here, the accuracy value remains the primary criterion in designing the FF.
Here, TP and FP represent the true positive and false positive values, respectively.
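The overall procedure can be sketched as a small, self-contained optimizer. This is a simplified illustration under assumptions: the exact update rules of Equations (6)-(9) are not reproduced, follower migration is approximated through re-ranking each iteration, and a toy sphere function stands in for the accuracy-based FF (minimizing a loss is equivalent to maximizing accuracy).

```python
import random

def gro_minimize(fitness, dim, n_particles=30, n_leaders=3, iters=300,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Simplified GRO-style search: the best-ranked fishes lead the groups,
    followers take inertia-damped steps toward their group leader and the
    global best, and weak followers effectively migrate to stronger leaders
    through the per-iteration re-ranking."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    best = min(pos, key=fitness)[:]            # global best found so far
    for _ in range(iters):
        ranked = sorted(range(n_particles), key=lambda i: fitness(pos[i]))
        leaders = [pos[i][:] for i in ranked[:n_leaders]]
        if fitness(leaders[0]) < fitness(best):
            best = leaders[0][:]
        for rank, i in enumerate(ranked[n_leaders:]):
            lead = leaders[rank % n_leaders]   # round-robin follower assignment
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (lead[d] - pos[i][d])
                             + c2 * random.random() * (best[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
    return best
```

When tuning GhostNet hyperparameters, `fitness` would evaluate a candidate configuration (e.g., learning rate, dropout) and return a loss derived from the validation accuracy.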

Object Classification: ALSTM Model
The ALSTM model is used for the object classification process in this study. The underlying concept of the LSTM network is to control the data flow through gates and to utilize memory cells (units) for storing and transferring the data [21,22]. Particularly, the LSTM network includes a memory cell along with input, forget, and output gates.
The input gate defines the amount of data fed as input to the memory units, while the forget gate decides whether to remove the prior memory or not. Finally, the output gate decides the output of the hidden layer. The memory units are accountable for storing and transmitting long-term data, which can be updated and controlled by the gate units. Now, x t refers to the input data at time t; C t−1 denotes the memory value at time t − 1; h t−1 indicates the output value of the LSTM network at time t − 1. The three items x t, C t−1, and h t−1 constitute the input. C t corresponds to the memory value at time t, h t represents the output value of the LSTM network at time t, and the two items C t and h t constitute the output information.
The control functions of the forget, input, and the output gates are as follows.
where b o, U o, and W o refer to the bias, the cyclic (recurrent) weights, and the input weights of the output gate, respectively.
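The gate equations above can be written as a single step function. This is a generic LSTM cell sketch in NumPy, not the authors' exact implementation; the parameter dictionary keys (`Wf`, `Uf`, `bf`, and so on) are naming conventions introduced here.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, p):
    """One LSTM step; p holds the input weights W*, recurrent weights U*,
    and biases b* of the forget (f), input (i), and output (o) gates and
    the candidate memory (c)."""
    f = sigmoid(p['Wf'] @ x_t + p['Uf'] @ h_prev + p['bf'])       # forget gate
    i = sigmoid(p['Wi'] @ x_t + p['Ui'] @ h_prev + p['bi'])       # input gate
    o = sigmoid(p['Wo'] @ x_t + p['Uo'] @ h_prev + p['bo'])       # output gate
    c_cand = np.tanh(p['Wc'] @ x_t + p['Uc'] @ h_prev + p['bc'])  # candidate memory
    c_t = f * c_prev + i * c_cand                                 # memory cell update
    h_t = o * np.tanh(c_t)                                        # hidden output
    return h_t, c_t
```

Running the step over a whole sequence yields the hidden states h 1, . . ., h T that the attention module below operates on.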
The attention mechanism is an important component used in NN models. The core principle is to allocate attention weights to dissimilar parts of the input data, thus reducing the role of irrelevant parts. During processing and learning tasks, this allows the model to focus more on the crucial data, which eventually improves the performance. The attention weights are used to compute the context vectors that capture the fittest information from the inputs.
To improve the model's performance, the relevant equation is given below.
Here, α j denotes the score of the feature vector, and a high score designates great attention. s(h i, h t) shows the weight value of the ith input feature in the attention module, viz., the score ratio of the feature vector to the entire population. Next, each vector is added and averaged to attain the final vector, α.
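Since the exact scoring function s(h i, h t) is not fully specified here, the following sketch assumes a dot-product score; the softmax-normalized weights α j then average the hidden states into a context vector.

```python
import numpy as np

def attention_context(hidden, query):
    """Dot-product attention sketch over LSTM hidden states.

    hidden: (T, d) states h_1..h_T; query: (d,) state h_t.
    The scores s(h_i, h_t) are softmax-normalized into weights alpha,
    and the context vector is their weighted average of the states."""
    scores = hidden @ query                  # s(h_i, h_t) for each time step
    scores = scores - scores.max()           # subtract max for numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()   # attention weights, sum to 1
    context = alpha @ hidden                 # alpha-weighted average of states
    return alpha, context
```

The context vector would then feed the final classification layer in an ALSTM-style network.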

Results and Discussion
The proposed model was simulated using Python 3.6.5 on a PC configured with an i5-8600K CPU, a GeForce GTX 1050 Ti 4 GB GPU, 16 GB RAM, a 250 GB SSD, and a 1 TB HDD. The parameter settings used for the study were as follows: learning rate, 0.01; dropout, 0.5; batch size, 5; epoch count, 50; and activation, ReLU. In this section, the performance of the BGRODL-OC technique is evaluated using the UCSD dataset [23], comprising images from surveillance videos. Figure 3 depicts the sample images.

Table 1 shows the comparative accuracy (accu y) examination outcomes achieved by the BGRODL-OC technique on the test 004 and 007 datasets [12,24,25]. The results infer that the MDT and FR-CNN models attained ineffectual performance, whereas the CBODL-RPD, DLADT-PW, and RS-CNN methodologies exhibited significant performance. However, the BGRODL-OC technique accomplished the maximum performance on all the frames.

Table 2 shows the average accu y analysis outcomes accomplished by the BGRODL-OC technique and other recent models on the two datasets. Figure 4 portrays the comparative average accu y analysis results of the BGRODL-OC method and the existing systems on the test-004 dataset. The experimental values denote that the FR-CNN and MDT systems reached the least average accu y values of 87.42% and 85.17%, respectively. In addition, the DLADT-PW and RS-CNN techniques achieved a moderate performance, with average accu y values of 98.35% and 97.90%, respectively. Although the CBODL-RPD model attained a considerable accu y of 99.06%, the BGRODL-OC technique exhibited the maximum performance, with an average accu y of 99.32%.

Figure 5 shows the comparative average accu y analysis outcomes of the BGRODL-OC technique and the existing methods on the test-007 dataset. The experimental values specify that the FR-CNN and MDT techniques accomplished the least average accu y values of 80.51% and 74.61%, respectively. Also, the DLADT-PW and RS-CNN methodologies achieved a modest performance, with average accu y values of 89.50% and 84.78%, respectively. While the CBODL-RPD algorithm attained a significant accu y of 92.27%, the BGRODL-OC system reached the maximum performance, with an average accu y of 93.18%.

Table 3 and Figure 6 portray the comparative TPR results of the BGRODL-OC approach on test sequence 004. The results show that the DLADT-PW and MDT models obtained poor performance, and the CBODL-RPD technique reported slightly decreased performance. Simultaneously, the RS-CNN and FR-CNN methods accomplished considerable results. However, the BGRODL-OC technique outperformed the other models with the maximum TPR values.

Table 4 and Figure 7 portray the comparative TPR analysis outcomes of the BGRODL-OC system on test sequence 007. The observational data denote that the DLADT-PW and MDT algorithms acquired inferior performance, while the CBODL-RPD approach achieved a moderately low performance. Concurrently, the RS-CNN and FR-CNN methodologies attained notable experimental outcomes. However, the BGRODL-OC system outperformed the rest of the techniques with better TPR values.

Table 5 shows the comparative area under the ROC curve (AUC) and computation time (CT) results of the BGRODL-OC technique. Figure 8 shows the comparative AUC score results. The DLADT-PW, FR-CNN, RS-CNN, and MDT systems exhibited the worst performance, with the lowest AUC scores of 89.24%, 89.88%, 90.03%, and 89.28%, respectively. Though the CBODL-RPD technique attained a slightly enhanced performance with an AUC score of 96.54%, the BGRODL-OC technique surpassed the compared methods by achieving a maximum AUC score of 97.80%.

Conclusions
In the current study, an automatic object classification technique for pedestrian walkways, termed the BGRODL-OC technique, was developed. The objective of the BGRODL-OC technique is to recognize the presence of pedestrians and objects in the surveillance video. It encompasses several processes, namely the GhostNet feature extractor, GRO-based hyperparameter selection, and ALSTM-based classification. To achieve the objective, the BGRODL-OC technique primarily applies the GhostNet feature extractor to produce a set of feature vectors. In addition, the BGRODL-OC technique makes use of the GRO algorithm for hyperparameter tuning. Finally, the object classification is performed using the ALSTM network. A wide range of experimental analyses was conducted to validate the superior performance of the BGRODL-OC algorithm. The experimental values exhibited the better performance of the BGRODL-OC approach over other current techniques, with a maximum AUC score of 97.80%. Future work on the BGRODL-OC technique can enhance its scalability for handling real-time video streams and extend its applicability to different environmental conditions and camera perspectives, so as to further bolster pedestrian safety and object classification accuracy.

Figure 2. Architecture of the ALSTM network.

Figure 4. Average accu y analysis results of the BGRODL-OC approach on test 004 dataset.


Figure 5. Average accu y analysis outcomes of the BGRODL-OC approach on test 007 dataset.


Figure 6. Comparative TPR outcomes of the BGRODL-OC technique on test sequence 004.


Figure 7. Comparative TPR outcomes of the BGRODL-OC technique on test sequence 007.


Figure 8. AUC score analysis outcomes of the BGRODL-OC technique and other methods.

Figure 9 shows the comparative CT outcomes achieved by the BGRODL-OC technique and other recent approaches. The results imply that the RS-CNN and MDT algorithms achieved ineffectual outcomes, with maximum CT values of 3.19 s and 3.56 s, respectively. At the same time, the CBODL-RPD, DLADT-PW, and FR-CNN methods accomplished moderate performance, with CT values of 2.90 s, 2.75 s, and 2.90 s, respectively. However, the BGRODL-OC technique achieved an effectual performance with a minimal CT of 1.08 s. These results confirm the enhanced performance of the BGRODL-OC technique.

Figure 9. CT analysis outcomes of the BGRODL-OC technique and other techniques.


Table 1. Accu y analysis outcomes of the BGRODL-OC approach on test 004 and 007 datasets.


Table 2. Average accu y analysis results of the BGRODL-OC approach on test 004 and 007 datasets.



Table 3. Comparative TPR outcomes of the BGRODL-OC technique and other existing methods on test sequence 004.

Table 4. Comparative TPR outcomes of the BGRODL-OC technique and other existing methods on test sequence 007.


Table 5. AUC score and CT analysis outcomes of the BGRODL-OC technique and other algorithms.
