An Efficient Applications Cloud Interoperability Framework Using I-ANFIS

Abstract: Cloud interoperability allows cloud services, such as Software as a Service (SaaS) offerings or customer systems, to communicate across cloud providers. However, one of the most important barriers in existing research has been adapting applications or data to cloud computing environments so as to obtain efficient cloud interoperability. This paper focuses on reliable cloud interoperability in a heterogeneous cloud computing resource environment, with the objective of providing unilateral provisioning of a cloud server's computing capabilities without human interaction and allowing proper utilization of applications and services across various domains by using an effective cloud environment available at runtime. The framework uses a hybrid squirrel search genetic algorithm (HSSGA) to select the relevant features from a set of extracted features in order to eliminate irrelevant data, which provides the advantages of low computational time and low memory usage. Thereafter, for proper selection of a cloud server with respect to the selected features, the system employs an improved adaptive neuro-fuzzy inference system (I-ANFIS), which provides accurate server selection and guards against uncertainties caused by servers or applications. The experimental results show that the proposed framework achieves an accuracy of 94.24% and remains more efficient than existing frameworks.


Introduction
The increasing need for computing resources such as servers, storage, CPUs, applications, databases and networks has enabled a new computing paradigm named cloud computing. Cloud infrastructure available in data centers is shared among organizations through pay-as-you-go models, allowing services to be used anytime, anywhere [1]. Cloud services can be provisioned rapidly and released with minimal human intervention in a cost-effective manner [2], and can be scaled based on user needs. The intensive need for cloud computing services, such as network-demanding web applications, computation-intensive applications [3] and storage-dependent applications [4], drives this technology.
Vendors such as Amazon [5,6], IBM Cloud, OpenStack, Google [7], Pivotal Web, Microsoft Azure [8] and Salesforce [9] have seen rapid growth in their cloud business over the past 10 years. Each cloud provider has its own format for accessing the cloud, which results in consumers opting for a single cloud provider; there are no widely accepted standards for accessing and deploying cloud services. Multi-clouds need to work seamlessly with other cloud services in an interoperable manner [10]. Using the service level agreement (SLA) format and authorization and authentication tokens [11], it is possible for the data and workloads available in a public or private cloud to move from one cloud provider to another, enabling interoperability between clouds. Interoperability in cloud computing covers three kinds of interoperable cloud services: platform, application and management interoperability, corresponding to the management, application and platform components [12].
In recent years, industry and academia have highlighted the need for cloud-interoperable services, and many cloud standards address interoperability due to this increasing demand. Cloud standards address the issue of interoperability when data, applications and virtual machines (VMs) are exchanged between different cloud providers [13]. The underlying incompatibilities can be technical, for example incompatible virtualization implementations (VMware, Xen and KVM) or incompatible programming code (Java-based, PHP-based), or they can be semantic. In addition, data synchronization is a particular issue when resource components in various clouds interact, regardless of whether they are identical [14]. Such resource components regularly keep duplicates of similar information, and these duplicates must be maintained in a consistent state. Interactions between clouds ordinarily have high latency, which makes synchronization difficult [15].
Many cloud interoperability standards have been proposed based on state-of-the-art efforts, such as the Distributed Management Task Force (DMTF) [16], the Storage Networking Industry Association (SNIA) [17], the Open Virtual Machine Format (OVF) standards [18] and the NIST cloud computing-related standards [19], to enable cloud interoperability. However, technological challenges remain in terms of the different hypervisors used for virtualization in cloud environments and the platforms used for developing applications, along with non-technological challenges in terms of policy and management in cloud interoperability [20]. In order to overcome the challenges faced by existing research, this work develops an efficient applications interoperability framework using the I-ANFIS algorithm.
The rest of the paper is organized as follows: Section 2 reviews various frameworks developed for cloud interoperability; Section 3 explains the proposed methodology; Section 4 discusses the results achieved by the proposed work based on various performance metrics; and, finally, Section 5 concludes the work with future scope.

Literature Review
This section reviews the various existing works and frameworks for cloud interoperability and the corresponding challenges faced by each.
Huedo et al. [21] developed the BEACON framework, which allows cloud applications and services to be deployed automatically across different data centers and different clouds, with emphasis on inter-cloud networking and security issues, and enables the provision of federated cloud infrastructures. BEACON, an open-source project, has contributed to the OpenNebula and OpenStack cloud management platforms so as to manage data interoperability. The management of data interoperability was modeled well; however, repeated data and imbalanced data distribution were big challenges for the framework.
Castane et al. [22] extended the mOSAIC ontology for inter-cloud interoperability to create the cloud lightning ontology (CL-ontology), which incorporates heterogeneous resources and high-performance computing (HPC) environments in the cloud. To support the CL-ontology, a conventional architecture was introduced as a driver to address heterogeneity in the cloud and, as a use-case illustration of the architecture, the internal processes of the cloud lightning framework were updated to show the plausibility of adding a semantic engine to reduce interoperability issues and to encourage the consolidation of HPC in the cloud. The approach was developed for inter-cloud interoperability; however, across various platforms, the execution of an application did not always match the corresponding server environment.
Nodehi et al. [23] discussed a framework that supports interaction between different cloud resources for inter-cloud interoperability, with the objective of dispatching the remaining workloads to the best cloud service accessible at runtime. Additionally, a genetic algorithm (GA)-based job scheduler was created as part of the interoperability system, offering job migration with the best execution at any given cost. The cloud resource selection model [24] was evaluated with an agent-based simulation. The approach used feature selection techniques to provide accurate placement of the data or applications on the respective cloud server environment, but the selection of features remained complex and time-consuming.
Chandrasekaran et al. [25] used service level agreements to implement interoperable cloud services. To provide sufficient cloud resources when one cloud runs short, it supported services agreed upon by the client. An adaptable resource management methodology was created for interoperability-based cloud services. First, the service level agreement (SLA) layouts of private and public clouds were mapped with the soft term frequency-inverse document frequency (TF-IDF) metric using a case-based reasoning (CBR) approach. Then, in light of the mapped SLAs, various groups of cloud suppliers were formed with the help of the K-means clustering technique. Lastly, when one of the clouds in a cluster faced a resource deficiency, adaptable cloud resource assignment was provided through the adaptive dimensional search algorithm. However, the developed SLA-based interoperable cloud still tended to provision unneeded resources for an application's execution.
Huedo et al. [26] used SLA-based service virtualization, in which a self-manageable architecture provides an approach for easing interoperable service execution in a diverse, heterogeneous, distributed and virtualized universe of cloud services. To achieve reliable and efficient service operation in a distributed environment, a model combining negotiation, brokering and deployment with SLA-aware extensions and autonomic computing principles was used. The method struggled with large files or applications that required the installation of extensive setup files.
Loulloudes et al. [27] used a utility-based approach for provisioning on-demand cloud computing resources for IT solutions. Currently, cloud computing consumers who wish to deploy cloud applications across different cloud resource providers have no automated application programming interface (API) or support tools. The approach presented current efforts to develop an open-source cloud application management framework (CAMF) based on the Eclipse Rich Client Platform. The framework facilitated cloud application lifecycle management in a vendor-neutral way to avoid interoperability issues, but technological issues reduced its efficiency.
Sehgal et al. [28] addressed interoperability at the application level, which paves the way to understanding the allocation of cloud resources at the distributed infrastructure level. The approach was widely used and captured the primary challenges in developing distributed applications. It used MapReduce as the fundamental exemplar and developed an interoperable implementation of MapReduce with an application programming interface called the Simple API for Grid Applications (SAGA) to support distributed programming. Thereafter, the work provided a canonical word-count application that used SAGA-based MapReduce. Finally, performance measurements and analyses of SAGA-MapReduce were carried out on multiple, different, heterogeneous infrastructures concurrently for the same problem instance. The framework was stated to be efficient for mapping applications onto the respective servers, but non-technological issues remained an obstacle to efficient interoperability. Thus, the above investigation illustrates various existing works that address cloud interoperability and aim to provide smooth application-based computing. However, technological as well as non-technological challenges remain, causing inefficient data or application interoperability in the cloud. In order to overcome these challenges, this work has initiated efficient application interoperability in cloud computing, which meets current requirements and scopes towards future needs.

Proposed System
For the various platforms on the source and target clouds, a separate transfer procedure has to be started to pack, copy, download, install, deploy and configure in order to allow interoperability. Nevertheless, there are open problems in how systems communicate with each other when the originating cloud does not accept additional cloud services or when a specific system in the original cloud does not support the particular cloud operating system. Such issues may create unwanted congestion of data or applications and may affect the whole process of interaction with the cloud. In order to overcome these challenges, this work proposes an efficient application and data interoperability framework comprising the following phases: feature extraction, feature selection, cloud server arrangement and cloud server selection.
The proposed framework, illustrated in Figure 1, provides a simple exchange of data and services among different applications hosted on different platforms and infrastructures in the cloud, and allows the utilization of applications and services across various domains. A detailed discussion of each phase is given in the following sections.


Feature Extraction
This is the initial stage of the proposed framework. Feature extraction provides various features extracted from the application or data and from the cloud server. Feature extraction helps the work capture the important relationships among the data without losing any relevant information, and also identifies redundant features so as to reduce the computational time. Some of the features extracted from the task manager and cloud server are stated below.


Task Manager
The task manager reports the applications currently running on a system. The seasonal requests of multiple users are divided into a series of tasks, which the task manager handles. The task manager (T_m) contains several features, such as task speed (I_1), task cost (I_2), task weight (I_3) and data size (I_4), which are extracted for the resource-allocation process. Some of these features are explained below:
(a) Task speed: the ratio of the turnaround time for allocating cloud resources to the time taken to respond to the cloud consumer from the moment of the request, expressed as I_1 = Γ_a / ω, where I_1 is the task speed, Γ_a is the turnaround time for allocating cloud resources and ω is the time taken to respond to the cloud consumer from the time of the request.
(b) Task cost: the amount required to be paid before the request can be served, determined as I_2 = τ × Γ_d, where I_2 is the task cost, τ is the rate at which cloud resources are allocated to the cloud consumer and Γ_d is the time taken to respond to the cloud consumer from the time of the request.
(c) Task weight: the weight depends on the speed and cost of the demands, where x denotes a constant value and x ∈ (0, 1).
(d) Data size: the data size of a request is estimated from the total size of the user's request, where I_4 denotes the user's data size, T_s represents the total size of the user's request and e denotes the allowed probability of committing an error in selecting a small representative of the user's request.
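The task-manager features above can be sketched as simple helper functions. This is an illustrative sketch, not the paper's implementation: the forms of I_1 and I_2 follow the definitions in the text, while the combination used for I_3 and the Slovin-style estimate for I_4 are assumptions, since the paper's exact equations are not reproduced here.

```python
# Illustrative sketch of the task-manager features I1..I4 described above.
# Symbol names follow the text; the I3/I4 forms are assumptions.

def task_speed(turnaround, response_time):
    """I1: turnaround time for allocating resources over the response time."""
    return turnaround / response_time

def task_cost(rate, duration):
    """I2: allocation rate multiplied by the time taken to serve the request."""
    return rate * duration

def task_weight(speed, cost, x=0.5):
    """I3: weight combining speed and cost; x is a constant in (0, 1).
    The convex combination below is an assumed form."""
    assert 0 < x < 1
    return x * speed + (1 - x) * cost

def data_size(total_size, e=0.05):
    """I4: representative size of the request for error probability e.
    Slovin-style estimate -- an assumption, not the paper's formula."""
    return total_size / (1 + total_size * e ** 2)
```

With turnaround 10 s and response time 5 s, `task_speed` returns 2.0; the remaining helpers follow the same one-line-per-feature pattern.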

Cloud Server Features
The features of cloud servers, such as memory resources (I 5 ), bandwidths (I 6 ), disk space (I 7 ) and number of requests (I 8 ) are extracted [29].The explanations of some of the cloud server features are given below:

Memory
The cloud memory in a server is used to store, retain and retrieve the user's information. It is calculated as I_5 = S_L × S_s, where I_5 denotes the memory of the cloud server, S_L represents the number of storage locations and S_s denotes the size of each storage location.

Bandwidth
The data transfer rate is known as bandwidth. It is calculated as the product of the number of tasks and the usage weight, I_6 = N_t × u_w, where I_6 denotes the bandwidth of the cloud server, N_t represents the number of tasks and u_w denotes the usage weight.

Disk Space
The disk space of the cloud server is the sum of the free space available and the used space, expressed as I_7 = f_s + u_s, where I_7 denotes the disk space of the cloud server, and f_s and u_s represent the free space and the used space of the cloud server.

Number of Requests
The total number of requests to the cloud server is denoted as I_8 = R_1 + R_2 + ... + R_n, where I_8 denotes the number of requests sent and R_1, R_2, ..., R_n denote the requests from the users. The extracted features are expressed as I_(A,C) = {I_1, I_2, ..., I_n}, where I_(A,C) denotes the features extracted, from the first to the n-th value, from the applications and cloud servers.
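Putting the server-side formulas together, the feature vector I(A, C) is simply the concatenation of the application features I_1..I_4 and the server features I_5..I_8. A minimal sketch, with illustrative (not measured) input values:

```python
# Sketch: assembling the extracted feature vector I(A, C) from the
# application (task) features I1..I4 and the server features I5..I8.

def memory(locations, size_each):        # I5 = S_L * S_s
    return locations * size_each

def bandwidth(num_tasks, usage_weight):  # I6 = N_t * u_w
    return num_tasks * usage_weight

def disk_space(free, used):              # I7 = f_s + u_s
    return free + used

def num_requests(requests):              # I8 = R_1 + R_2 + ... + R_n
    return sum(requests)

def feature_vector(task_features, server_features):
    # I(A, C): concatenation of application and cloud-server features
    return list(task_features) + list(server_features)

# Hypothetical example values for the four task features and four server features
I_AC = feature_vector(
    [2.0, 6.0, 4.0, 9.9],
    [memory(4, 256), bandwidth(3, 1.5), disk_space(40, 60), num_requests([10, 20, 5])],
)
```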

Proposed Algorithm
The feature selection process contributes to the proposed work by selecting the most important features from the extracted features for cloud server selection, eliminating irrelevant features that might reduce model accuracy and increase computational time. In order to optimize the extracted features, the work develops the hybrid squirrel search genetic (HSSG) algorithm.

HSSG Algorithm
The HSSG algorithm [30] provides a complete summary of all the features and of the important features required to suit the cloud server for meeting all application needs. This reduces the error rate and tends to achieve a good accuracy rate. The HSSG algorithm is judged by its convergence time, i.e., the time taken to reach an optimal or best solution. The HSSG algorithm takes the extracted features as input and finds the best fitness value between the corresponding squirrel (feature) and the server (food). Finally, the minimum value is chosen as the best fitness value, identifying an important feature for mapping to the cloud server. The HSSG algorithm converges quickly and obtains relevant features so as to provide a good accuracy rate. Its steps are discussed in depth below.
Step 1: Random initialization. There are n flying squirrels (extracted features) in a forest (working space), and the location of the i-th flying squirrel is specified by a vector [30]. The locations of all flying squirrels can be represented by a matrix whose entry I_{i,j} is the j-th dimension of the i-th flying squirrel. A uniform distribution is used to allocate the initial location of each flying squirrel in the forest:
I_{i,j} = I_L + U(0, 1) × (I_U − I_L),
where I_L and I_U are the lower and upper bounds, respectively, of the i-th flying squirrel in the j-th dimension and U(0, 1) is a uniformly distributed random number in the range (0, 1).
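The initialization step above can be sketched directly from the uniform-distribution rule; the population size, dimensionality and bounds below are illustrative values, not the paper's settings.

```python
import random

# Sketch of Step 1: each of n flying squirrels (feature candidates) gets an
# initial position I[i][j] = I_L + U(0, 1) * (I_U - I_L) in every dimension j.

def initialize_population(n, dims, lower, upper, seed=0):
    rng = random.Random(seed)  # seeded for reproducibility of the sketch
    return [
        [lower + rng.random() * (upper - lower) for _ in range(dims)]
        for _ in range(n)
    ]

pop = initialize_population(n=10, dims=4, lower=0.0, upper=1.0)
```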
Step 2: Fitness evaluation. The fitness value of each flying squirrel is calculated by substituting the decision variables into the user-defined fitness function, and the respective values are stored in an array [30], F_EVAL = (F_EVAL1, F_EVAL2, ..., F_EVALn), where F_EVALi is the fitness value of the i-th flying squirrel's location.
Step 3: Sorting, declaration and random selection. The quality of the food sources (cloud servers) is sorted in ascending order of fitness value. Based on this sorting of each flying squirrel's food source, the optimization algorithm distinguishes three tree types: the hickory tree (hickory nuts food source), the oak tree (acorn nuts food source) and the normal tree, which together determine the best food source located by the flying squirrels. The hickory tree represents the best solution, i.e., the minimal fitness value, denoted I_HT; the next three best flying squirrels are considered to be on acorn nut trees (I_AT); and the rest are considered to be on normal trees (I_NT).
Step 4: Generate new locations. Under dynamic foraging conditions, i.e., while searching for an efficient solution, three scenarios arise. In each scenario it is assumed that the predator is absent, which encourages the flying squirrel to search for a better solution over the whole forest (working space) [31]. The three scenarios are:
Scenario 1: a flying squirrel on an acorn nut tree moves towards a hickory nut tree. Its new location is generated as
I_AT^new = I_AT + λ_g × κ × (I_HT − I_AT) if U_1 ≥ P_AB,
where I_AT^new denotes the newly updated location of a flying squirrel moving from an acorn nut tree to a hickory nut tree, λ_g represents the gliding distance, κ states the gliding constant, U_1 is a uniformly distributed random number in the interval (0, 1) and P_AB denotes the absence of predators; otherwise, a random location is assigned.
Scenario 2: a flying squirrel on a normal tree moves towards an acorn nut tree:
I_NT^new = I_NT + λ_g × κ × (I_AT − I_NT) if U_2 ≥ P_AB,
where U_2 is a uniformly distributed random number in the interval (0, 1).
Scenario 3: a flying squirrel on a normal tree that has already fulfilled its daily energy requirement moves directly towards a hickory tree:
I_NT^new = I_NT + λ_g × κ × (I_HT − I_NT) if U_3 ≥ P_AB,
where U_3 is a uniformly distributed random number in the interval (0, 1).
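All three scenarios share the same gliding update, differing only in the target tree. A minimal sketch of that shared step, assuming the standard squirrel-search form (glide toward the better tree when no predator is seen, otherwise relocate randomly); the parameter values in the example are illustrative:

```python
import random

# Sketch of Step 4: the gliding update used in all three scenarios. A squirrel
# at `pos` glides toward a better tree `target` with gliding distance lam_g and
# gliding constant kappa when the predator is absent (U >= P_AB); otherwise it
# relocates uniformly at random within the bounds.

def glide(pos, target, lam_g, kappa, p_ab, lower, upper, rng):
    if rng.random() >= p_ab:  # predator absent: move toward the better solution
        return [p + lam_g * kappa * (t - p) for p, t in zip(pos, target)]
    # predator present: random relocation within [lower, upper]
    return [lower + rng.random() * (upper - lower) for _ in pos]

rng = random.Random(0)
new_pos = glide([0.0, 0.0], [1.0, 1.0],
                lam_g=1.0, kappa=0.5, p_ab=0.1, lower=0.0, upper=1.0, rng=rng)
```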
Step 5: Seasonal monitoring condition. Seasonal changes (outliers) can alter the flying squirrels' behavior. To cope with seasonal change and to avoid being trapped in local minima, a seasonal monitoring condition is introduced into the algorithm: the seasonal constant Sea_C^t is checked against a minimum value Sea_min. Under the condition Sea_C^t < Sea_min, winter is over, and the flying squirrels which lost their ability to explore the forest randomly relocate their search positions using a Levy distribution, a powerful mathematical tool for enhancing the global exploration capability of most optimization algorithms:
Levy(x) = 0.01 × (u_a × σ) / |u_b|^(1/ξ),
where u_a and u_b are uniformly distributed random numbers in the interval (0, 1), κ is the constant, ξ is the Levy index and σ is calculated as
σ = [Γ(1 + ξ) × sin(πξ/2) / (Γ((1 + ξ)/2) × ξ × 2^((ξ−1)/2))]^(1/ξ), with Γ(κ) = (κ − 1)!.
Step 6: Crossover and mutation. In order to achieve an accurate location of the flying squirrel in search of food, crossover and mutation are performed. Among the updated new locations of the flying squirrels (I_HT^new), a crossover point is chosen randomly. Within the selected crossover point, the new location of the flying squirrel is updated until the crossover point is reached. These updated new locations are added to the previous locations, and mutation is performed so as to obtain better selection accuracy. Initially, the two squirrels with the best fitness values are selected, after which the crossover probability ň_C and mutation probability ň_m are calculated, where ň_c0 and ň_m0 represent the higher crossover and mutation probabilities, ň_c1 (ň_c1 < ň_c0) and ň_m1 (ň_m1 < ň_m0) are the lower probabilities, I_NT represents the lower fitness values of the flying squirrels, and I_HTMAX and I_avg are the optimal and average fitness values in the population.

1. When the fitness of a flying squirrel is greater than the average one in the population (over all features), the squirrel is better, and ň_C and ň_m should be correspondingly reduced.
2. When the fitness of a flying squirrel is smaller than the average one, the squirrel performs poorly, and ň_C and ň_m should keep their initial configurations.
3. When the fitness of a flying squirrel is close to the maximum one, ň_C and ň_m should be kept as small as possible to retain the best individual.
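The three cases above match the standard adaptive-GA probability rule, sketched below for the crossover probability (the mutation probability follows the same shape). The exact formula and the default probability values are assumptions consistent with the listed behavior, not the paper's stated equation.

```python
# Sketch of Step 6's adaptive probability: f is the squirrel's fitness, f_avg
# the population average (I_avg), f_max the best fitness (I_HTMAX); nc0/nc1
# are the higher/lower crossover probabilities (assumed defaults).

def adaptive_crossover_prob(f, f_avg, f_max, nc0=0.9, nc1=0.6):
    if f < f_avg:              # case 2: below average -> keep the high initial value
        return nc0
    if f_max == f_avg:         # degenerate population: fall back to the small value
        return nc1
    # cases 1 and 3: shrink linearly toward nc1 as f approaches f_max
    return nc0 - (nc0 - nc1) * (f - f_avg) / (f_max - f_avg)
```

For a population with average fitness 2 and maximum 3, a squirrel with fitness 2.5 gets probability 0.75, midway between the two bounds.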
Step 7: Stopping criterion The algorithm terminates if the maximum number of iterations is satisfied.Otherwise, generating new locations and checking seasonal monitoring conditions are repeated.
Thus, the HSSG algorithm provides the features with the best overall fitness values for selecting the respective cloud server. The selected features are stored in an array, F_SELECT, for further processing. The pseudocode of the proposed HSSG algorithm is given in Algorithm 1, which provides an overall outline of the proposed HSSG method.

Cloud Server Arrangements
After selection, the features are sorted according to the fitness values of the features of the applications as well as the cloud servers. The sorting is carried out by the quick-sort algorithm and helps the cloud server improve its reliability.
The key process in quick sort is partitioning. The target array is F_SELECT; the fitness value of one feature of the array is chosen as the pivot, and the quick-sort algorithm puts the pivot at its correct position in the sorted array, with all smaller fitness values placed before the pivot and all greater fitness values after it. The sorting is expressed as:
QuickSort(F_SELECT, low, high):
pivot_index = partition(F_SELECT, low, high)
QuickSort(F_SELECT, low, pivot_index − 1)
QuickSort(F_SELECT, pivot_index + 1, high)
Based on the above procedure, the features are sorted and arranged more reliably so as to avoid errors while selecting the cloud server, which allows full utilization of resources and provides a good accuracy rate. The sorted features are stored for the cloud server selection stage.
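The partitioning step described above can be made concrete with a Lomuto-partition quicksort over the features' fitness values; the fitness numbers in the example are illustrative.

```python
# Sketch of the sorting stage: Lomuto-partition quicksort over the selected
# features' fitness values, mirroring the pseudocode above.

def partition(a, low, high):
    pivot = a[high]                      # pivot: fitness value of one feature
    i = low - 1
    for j in range(low, high):
        if a[j] <= pivot:                # smaller fitness values go before the pivot
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i + 1], a[high] = a[high], a[i + 1]
    return i + 1                         # final position of the pivot

def quick_sort(a, low, high):
    if low < high:
        p = partition(a, low, high)
        quick_sort(a, low, p - 1)        # sort fitness values before the pivot
        quick_sort(a, p + 1, high)       # sort fitness values after the pivot

fitness = [102, 79, 92, 84, 89]
quick_sort(fitness, 0, len(fitness) - 1)
# fitness is now in ascending order
```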

Cloud Server Selection
Cloud server selection is the most important step for cloud interoperability and takes place based on the sorted features. Existing techniques suffered from inappropriate cloud server selection and improper utilization of resources, which caused a high error rate. In order to select the cloud server, the work uses the improved adaptive neuro-fuzzy inference system (I-ANFIS) [32]. The primary goal of I-ANFIS is to allocate an appropriate resource to the application or software with maximum resource utilization and minimum delay. I-ANFIS provides accurate selection of the cloud server due to its Gaussian kernel membership function, which reduces the possibility of error. The I-ANFIS structural layers are presented in Figure 2 and described below.
I-ANFIS is composed of five functional layers implementing a first-order Sugeno fuzzy inference system with rules of the form:
Rule 1: if I_i is a_i and I_{i+1} is ň_i, then Φ_i = a_i × I_i + x_i × I_{i+1} + y_i;
Rule 2: if I_i is a_{i+1} and I_{i+1} is ň_{i+1}, then Φ_{i+1} = a_{i+1} × I_i + x_{i+1} × I_{i+1} + y_{i+1};
where a_i, ň_i, a_{i+1} and ň_{i+1} signify the fuzzy sets; I_i and I_{i+1} denote the input set (task features and VM status details); a_i, x_i, y_i, a_{i+1}, x_{i+1} and y_{i+1} are the parameter set; and Φ is a first-order polynomial representing the output of the first-order Sugeno fuzzy inference system. The function of each layer is given as follows.
Fuzzification layer: this transforms the crisp input, i.e., the collection of the selected features of the application and the VM servers, into linguistic labels with a degree of membership. The output of each node in the first layer is ∏_i^1 = Ω_i(Φ_i), where ∏_i^1 signifies the output of layer 1, Φ_i is the collection of input nodes and Ω_i represents the Gaussian kernel membership function (GKMF). To improve the performance of I-ANFIS, the proposed work uses the GKMF, computed as
Ω_i(I) = exp(−(I − y_i)^2 / (2σ^2)),
where y_i is the central value and σ is the standard deviation, which should be greater than 0 (σ > 0). The smaller the standard deviation, the more accurately the membership value identifies the features. Membership values are computed for each input value in I.
Product layer: every node in layer 2 is a fixed node which admits the first-layer values and represents the fuzzy sets of the respective input variables as membership functions. The output of each node multiplies all incoming signals and represents the firing strength of a rule, w_i.
Normalization layer: the output of each node in this layer represents its normalized position relative to the firing strengths from the previous layer, where E_y(w_i) signifies the normalized firing strength entropy value.
Defuzzification layer: each node of this layer outputs the firing-strength-weighted first-order Sugeno-type if-then consequent, ∏_i^4 = E_y(w_i) × Φ_i.
Output layer: it has only one node, which sums all the outputs of the previous layer to produce the overall I-ANFIS output, ∏_i^5 = Σ_i E_y(w_i) × Φ_i.
Here, ∏_i^5 is the overall output of I-ANFIS; that is, it signifies the best cloud server, regarded as the chosen VM to execute the application or software.
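The five-layer forward pass can be sketched for a two-rule, two-input case. This is an assumed parameterization for illustration only: the rule centers, spreads and linear coefficients below are hypothetical, not the paper's trained values, and the normalization is taken as the standard ratio of firing strengths.

```python
import math

# Sketch of the I-ANFIS forward pass: Gaussian kernel memberships (layer 1),
# product firing strengths (layer 2), normalization (layer 3), first-order
# Sugeno consequents (layer 4) and their weighted sum (layer 5).

def gkmf(value, center, sd):
    """Gaussian kernel membership: exp(-(value - center)^2 / (2 * sd^2)), sd > 0."""
    assert sd > 0
    return math.exp(-((value - center) ** 2) / (2 * sd ** 2))

def ianfis_output(i1, i2, rules):
    # Layers 1-2: membership degrees multiplied into firing strengths w_i
    w = [gkmf(i1, r["c1"], r["sd1"]) * gkmf(i2, r["c2"], r["sd2"]) for r in rules]
    total = sum(w)
    # Layer 3: normalized firing strengths
    w_bar = [wi / total for wi in w]
    # Layers 4-5: weighted first-order consequents a*i1 + x*i2 + y, summed
    return sum(wb * (r["a"] * i1 + r["x"] * i2 + r["y"])
               for wb, r in zip(w_bar, rules))

# Hypothetical two-rule system over two normalized features
rules = [
    {"c1": 0.2, "sd1": 0.3, "c2": 0.4, "sd2": 0.3, "a": 1.0, "x": 0.5, "y": 0.1},
    {"c1": 0.8, "sd1": 0.3, "c2": 0.6, "sd2": 0.3, "a": 0.2, "x": 1.0, "y": 0.0},
]
score = ianfis_output(0.5, 0.5, rules)
```

The resulting score plays the role of ∏_i^5: computed per candidate VM, the best-scoring server would be the one selected.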
Thus, the overall outline of the proposed I-ANFIS is given in the form of pseudo code illustrated in Algorithm 2.

Algorithm 2

Input: application and VM features
Output: cloud server selection
Begin
Initialize the linear parameters a_i, x_i, y_i
For i = 1 to n
Generate the fuzzy output from the crisp input using the GKMF

Discussion and Results
This section provides a detailed analysis of the proposed efficient application cloud interoperability framework based on various performance metrics; thereafter, a comparative analysis with existing methodologies is performed in order to demonstrate its efficiency.

Performance Analysis
The evaluation of the proposed methods is carried out based on performance metrics such as accuracy, precision, F-measure, recall, fitness value vs. iteration, sorting time and training time. The analysis of the proposed work is split into the proposed feature selection, the sorting of features and the selection of the cloud server.

Performance Analysis of Proposed HSSG Algorithm for Selecting Features
The proposed HSSG feature selection method is analyzed based on iteration vs. fitness value and compared with existing techniques, such as the flower pollination algorithm (FPA), cat swarm optimization (CSO), the genetic algorithm (GA) and the squirrel search algorithm (SSA). The evaluation of iteration vs. fitness for the proposed method is tabulated.
Table 1 illustrates the evaluation of the proposed HSSG optimization algorithm based on iteration vs. fitness value for selecting the important features in order to achieve good accuracy and low complexity time. To elaborate its efficiency, it was analyzed against existing techniques, such as FPA, CSO, GA and SSA [33]. Basically, iteration vs. fitness states that the method adheres to the best fitness value and reduces the computational time within a minimal number of iterations. According to that, the analysis illustrated that the proposed HSSG algorithm achieves a fitness value of 102 at the 25th iteration, whereas the existing FPA, CSO, GA and SSA achieve fitness values of 79, 84, 89 and 92, respectively. From the analysis, it is known that the proposed HSSG algorithm achieves the best fitness value compared to existing methods. The graphical analysis of the proposed algorithm with the existing methods is represented in Figure 3.

From Figure 3 it can be seen that the proposed HSSG algorithm achieves better fitness outcomes at each iteration than the existing FPA, CSO, GA and SSA algorithms. Better fitness values within fewer iterations may help to decrease the time complexity and may also help achieve good accuracy without further iteration processes.
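As an illustration of how a hybrid of squirrel-search-style moves and genetic operators can drive binary feature selection, the following toy sketch combines gliding toward the best mask with one-point crossover and bit-flip mutation. The fitness function, per-feature relevance scores and every parameter value are assumptions for demonstration only, not the paper's HSSGA.

```python
import random

def fitness(mask, relevance):
    """Toy fitness: reward relevant features, penalize subset size.
    relevance[i] is an assumed per-feature usefulness score."""
    if not any(mask):
        return 0.0
    return sum(r for m, r in zip(mask, relevance) if m) - 0.1 * sum(mask)

def hssga(relevance, pop_size=20, iters=25, seed=0):
    """Simplified hybrid squirrel-search/genetic feature selector."""
    rng = random.Random(seed)
    n = len(relevance)
    # Population of random binary feature masks
    pop = [[rng.random() < 0.5 for _ in range(n)] for _ in range(pop_size)]
    best = list(max(pop, key=lambda m: fitness(m, relevance)))
    for _ in range(iters):
        pop.sort(key=lambda m: fitness(m, relevance), reverse=True)
        # Squirrel-style gliding: copy bits from the current best mask
        for m in pop[1:]:
            for i in range(n):
                if rng.random() < 0.3:
                    m[i] = pop[0][i]
        # Genetic operators: one-point crossover plus bit-flip mutation
        children = []
        for _ in range(pop_size // 2):
            a, b = rng.sample(pop[:pop_size // 2], 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:
                child[rng.randrange(n)] ^= True
            children.append(child)
        pop = pop[:pop_size - len(children)] + children
        cand = max(pop, key=lambda m: fitness(m, relevance))
        if fitness(cand, relevance) > fitness(best, relevance):
            best = list(cand)
    return best
```

Run on the toy relevance scores, the loop converges to a mask that keeps the high-relevance features and drops the near-irrelevant ones, which mirrors the low-time, low-memory motivation given above.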

Performance Analysis of Proposed I-ANFIS for Selection of Cloud Server
The proposed I-ANFIS is analyzed based on various performance metrics, such as accuracy, precision, recall, training time, memory usage and F-measure, and compared with various existing methods, such as Support Vector Machine (SVM), K-Nearest Neighbor (KNN), Artificial Neural Network (ANN) and ANFIS. The evaluation of the performance metrics for the proposed I-ANFIS and the existing methods is tabulated.
Table 2 illustrates the analysis of various performance metrics, such as accuracy, recall, F-measure and precision, for the proposed method, evaluated against the existing methods SVM, KNN, ANN and ANFIS [34]. The analysis illustrates that the proposed I-ANFIS achieves precision, recall, F-measure and accuracy values of 87.58%, 91.19%, 90.51% and 94.24%, respectively, a range of 87.58-94.24%, whereas the existing methods achieve metric values ranging between 84.33 and 92.55%, which is relatively low compared to the proposed method. The metrics estimate the efficiency of the proposed I-ANFIS based on true positives (TP), false positives (FP), true negatives (TN) and false negatives (FN). Therefore, the proposed method tends to determine the best server that suits the corresponding application features. The graphical analysis of the proposed method along with the existing methods is illustrated in Figure 4.
Figure 4 illustrates the graphical analysis of the metrics accuracy, precision, recall and F-measure for the proposed I-ANFIS [35] technique and the existing SVM, KNN, ANN and ANFIS. The analysis illustrates that the proposed technique tends to detect the cloud server with better accuracy and to reduce the false negative and false positive rates by achieving better recall and precision values, whereas the existing techniques achieve comparatively lower metric values.
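For concreteness, the four metrics follow directly from the TP, FP, TN and FN counts; the sketch below uses hypothetical counts, not the paper's data.

```python
def classification_metrics(tp, fp, tn, fn):
    """Precision, recall, F-measure and accuracy from confusion counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)  # true positive rate
    f_measure = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return {"precision": precision, "recall": recall,
            "f_measure": f_measure, "accuracy": accuracy}

# Hypothetical confusion counts, for illustration only
m = classification_metrics(tp=50, fp=10, tn=35, fn=5)
```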
Training Time and Memory Usage Analysis of Proposed I-ANFIS
Based on training time and memory usage, the proposed I-ANFIS is evaluated against the existing SVM, KNN, ANN and ANFIS. From Figure 5 it can be stated that the proposed I-ANFIS utilizes less training time as well as less CPU and memory usage, and therefore performs more efficiently. Figure 5a illustrates that the proposed technique achieves a training time of 23,457 ms to train the dataset, which is comparatively better than the existing SVM, KNN, ANN and ANFIS methods, whose training times are 40,325 ms, 36,457 ms, 32,457 ms and 28,745 ms, respectively. Figure 5b illustrates that the proposed technique consumes 17,515 kB of memory, whereas the existing SVM, KNN, ANN and ANFIS techniques consume 26,354 kB, 24,568 kB, 21,457 kB and 19,634 kB, respectively, which is comparatively higher than the proposed method. Thus, the proposed I-ANFIS achieves a better selection of server based on the application and cloud server features while requiring low time as well as low memory usage to avoid complexity.
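Training time and memory figures of this kind can be collected with standard-library instrumentation; the sketch below, using Python's time.perf_counter and tracemalloc, illustrates the measurement idea and is not the benchmarking setup used in the paper.

```python
import time
import tracemalloc

def profile_call(fn, *args, **kwargs):
    """Run fn and report wall-clock time (ms) and peak traced heap use (kB)."""
    tracemalloc.start()
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    _, peak_bytes = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return result, elapsed_ms, peak_bytes / 1024.0
```

For example, `profile_call(model.fit, X, y)` would time a hypothetical training routine; `model`, `X` and `y` are placeholders, not names from this framework.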

Performance Analysis of Proposed Quick Sort for Sorting the Selected Features
The performance analysis of the quick-sort algorithm is carried out based on sorting time against the existing heap, bubble, insertion and merge sort algorithms in order to compare the time consumption of the algorithms and to state its efficiency. The graphical analysis of the sorting time for the proposed and existing algorithms is represented in Figure 6.

Figure 6 represents the graphical analysis of the sorting time taken by the proposed quick-sort algorithm and the existing heap, bubble, insertion and merge sort algorithms so as to determine the best algorithm. The sorting algorithm helps the I-ANFIS algorithm to select the cloud server with respect to the applications; a low sorting time is required in order to reduce the computational time. According to that, the proposed method achieves a sorting time of 6322 ms, whereas the existing heap, bubble, insertion and merge sort algorithms achieve sorting times of 11,969 ms, 11,347 ms, 10,457 ms and 9478 ms, respectively, which is relatively high compared to the proposed algorithm.
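For reference, the quick-sort strategy compared here can be sketched as a short recursive implementation (an illustrative version over feature scores, not the paper's code):

```python
def quicksort(items):
    """Return a sorted copy of items using recursive quick sort."""
    if len(items) <= 1:
        return list(items)
    pivot = items[len(items) // 2]           # middle element as pivot
    less = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    greater = [x for x in items if x > pivot]
    return quicksort(less) + equal + quicksort(greater)
```

This list-comprehension form trades some memory for clarity; an in-place partitioning variant would better match the low-memory goal stated above.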
Thus, the proposed framework for cloud interoperability achieves a unilateral provision of computing capabilities of a cloud server without the help of human interaction and allows the utilization of applications and services across various domains.

Conclusions
This work introduces an efficient application cloud interoperability framework, which is used to manage various applications or data and to execute them over the respective cloud server. The framework provides efficient solutions for implementing collaboration among services to deliver better quality of service. In order to overcome the technological and non-technological challenges, the framework develops a hybrid squirrel search genetic algorithm for selecting relevant features and the I-ANFIS technique for selecting the cloud server. It avoids most of the redundant data and utilizes less memory so as to avoid complexity or congestion. The efficiency of the proposed methods is evaluated using various performance metrics; overall, the evaluation states that the framework achieved an accuracy of 94.24%, precision of 87.58%, recall of 91.19% and F-measure of 90.51%, which is highly efficient compared to existing works. In future work, the proposed framework can be improved by incorporating load-balancing techniques for handling multiple requests and deep learning techniques to increase the accuracy of cloud server selection.

Symmetry 2021, 13, x FOR PEER REVIEW

Figure 3. Comparative analysis of proposed HSSG algorithm based on iteration vs. fitness.


Figure 4. Comparative analysis of proposed I-ANFIS technique based on precision, accuracy, recall and F-measure.


Figure 5. Comparative analysis of proposed I-ANFIS technique based on: (a) CPU memory usage and (b) training time.

Figure 6. Comparative analysis of proposed quick sort based on sorting time.


Table 1. Evaluation of proposed hybrid squirrel search genetic (HSSG) algorithm based on iteration vs. fitness.


Table 2. Evaluation of proposed I-ANFIS based on precision, accuracy, recall and F-measure.
