An Efficient Trust-Aware Task Scheduling Algorithm in Cloud Computing Using Firefly Optimization

Task scheduling in the cloud computing paradigm poses a challenge for researchers, as the workloads arriving at cloud platforms are dynamic and heterogeneous; scheduling these heterogeneous tasks to appropriate virtual resources is therefore difficult. The inappropriate assignment of tasks to virtual resources degrades the quality of service, leads to violations of the SLA metrics, and ultimately erodes the cloud user's trust in the cloud provider. Therefore, to preserve trust in the cloud provider and to improve the scheduling process in the cloud paradigm, we propose an efficient task scheduling algorithm that considers the priorities of both tasks and virtual machines, thereby scheduling tasks accurately onto appropriate VMs. The scheduling algorithm is modeled using firefly optimization. The workload for this approach consists of fabricated datasets with different distributions as well as the real-time worklogs of HPC2N and NASA. The algorithm was implemented in the Cloudsim simulation environment, and our proposed approach was compared against the baseline approaches of ACO, PSO, and the GA. The simulation results revealed that our proposed approach significantly outperforms the baseline approaches by minimizing the makespan while improving the availability, success rate, and turnaround efficiency.


Introduction
The cloud computing paradigm provides on-demand services that let users deploy or migrate their existing on-premises infrastructure into the cloud environment, develop cloud-native applications in the cloud environment, or make use of the existing services running there. This is made possible in the cloud environment by provisioning a virtual infrastructure to the corresponding users with the use of virtualization technology. Users can access all types of services, such as memory, computing, storage, and network, on demand whenever they need to by subscribing to those services. This subscription of services between the cloud provider and the user is named the SLA. Based on the SLA made between the user and the cloud provider, the corresponding services need to be delivered to the users. The cloud computing paradigm has several characteristics [1]. Scalability renders services to users by dynamically scaling up or down and scaling in or out based on the situation. Resource pooling, another characteristic of this paradigm, provides services to cloud users from a shared pool of resources that are logically synchronized with each other when a resource is allocated to a user. All these resources are given to users on demand whenever they request them, and, in turn, the cloud provider automatically provisions virtual resources to the customers based on the SLA made between them. All these services are billed according to their usage under a pay-per-usage policy. Whenever a cloud user chooses services in the cloud paradigm, the respective services are provisioned virtually by the cloud provider; to provide these services to customers without any issues, the cloud provider needs a scheduling mechanism that automatically assigns virtual resources to users based on their requests. 
Therefore, to map the corresponding requests to virtual machines and assign those resources to customers, an effective task scheduler is needed in this paradigm, as it is used by many users around the world; to assign resources to these large-scale users, an automatic provisioning mechanism must be present in the cloud paradigm. The requests coming onto the cloud console are of variable sizes and different types, including text, images, video, streaming data, etc., and may come from heterogeneous resources. Therefore, to schedule these dynamic, variable, and heterogeneous requests onto precise virtual machines, an effective task scheduling algorithm is needed. An ineffective task scheduling mechanism degrades the quality of the services by violating the SLA parameters, and thereby the trust between the cloud provider and the user is degraded. Therefore, an effective task scheduling algorithm is needed for this paradigm to provide a high quality of service by maintaining the SLA and thereby improving the trust between cloud providers and users. Many existing authors have proposed task scheduling mechanisms using metaheuristic approaches, i.e., PSO [2], ACO [3], and the GA [4]; metaheuristics are used because identifying the best possible mapping of tasks to precise virtual machines is an NP-hard problem. Therefore, in this research, we propose an efficient trust-aware task scheduling algorithm that calculates the priorities of the dynamically arriving tasks and the priorities of the virtual resources based on the unit cost of electricity. This research uses the firefly optimization algorithm [5] to model scheduling in the cloud paradigm. 
This research mainly concentrates on the trust parameters based on the SLA metrics, i.e., the availability of virtual resources, the success rate of virtual resources, and, finally, the turnaround efficiency. The entire experiment was conducted on Cloudsim and our proposed approach, TAFFA, was evaluated against the baseline approaches for the above specified parameters.

Motivation and Contributions
The cloud computing paradigm provides on-demand, flexible, scalable services to users based on their needs with the help of virtualization. Virtual resources are automatically assigned to customers based on the SLA made between the cloud vendor and the user. To assign these virtual resources precisely based on the type of requests posed to the cloud console, an efficient task scheduling algorithm is needed. An improper and ineffective task scheduler in the cloud paradigm degrades the quality of service by violating the SLA parameters and thereby degrades trust in the cloud provider. This situation motivated us to develop a task scheduling algorithm that considers the priorities of tasks and of VMs (using the unit cost of electricity) and schedules tasks accordingly while minimizing the makespan and improving the availability, success rate, and turnaround efficiency. Below are the highlights and contributions of our research in this manuscript.

•
Developed an efficient trust-aware task scheduling algorithm using firefly optimization (TAFFA).

•
Calculated the priorities of tasks and the priorities of VMs based on electricity unit costs for effective scheduling.

•
Identified the relation between the makespan and trust based on the SLA parameters.

•
Calculated trust based on the SLA metrics, i.e., the success rate of virtual resources, the availability of VMs, and the turnaround efficiency.

•
Used a deadline constraint so that tasks are assigned to virtual resources only after the currently executing pending tasks complete.

•
Carried out extensive simulations on Cloudsim.

•
Used fabricated workloads (with different distributions, i.e., uniform, normal, left-skewed, and right-skewed) and real-time worklogs from HPC2N and NASA.

•
Evaluated the proposed TAFFA against PSO, ACO, and the GA; the simulation results show that TAFFA improves the availability, success rate, and turnaround efficiency while minimizing the makespan.
The rest of the manuscript is organized as follows. Section 2 discusses the related works conducted by various authors, Section 3 discusses the problem definition and system architecture, Section 4 discusses the proposed efficient trust-aware task scheduling algorithm in cloud computing using firefly optimization, Section 5 discusses the simulation and results, and Section 6 discusses the conclusion and proposed future work.

Related Work
In [6], the authors proposed a task scheduling approach modeled on adaptive PSO, which balances the local and global search by modifying the inertia weights of the particles. It mainly addressed the makespan, throughput, and average resource utilization. The entire simulation was conducted on Cloudsim and evaluated against the baseline approaches. The results revealed that linearly descending adaptive PSO outperforms the existing mechanisms for the mentioned parameters. Energy consumption is an important parameter and aspect of the cloud computing paradigm. To address it, the authors in [7] proposed a scheduling mechanism based on a variant of PSO in which the algorithm was made adaptive by changing the inertia weight and acceleration coefficient. This approach was implemented on the Cloudsim toolkit and evaluated against the existing max-min and min-min algorithms. From the simulation results, it is evident that the proposed approach shows a significant improvement over the baseline algorithms for the makespan and energy consumption. The cloud computing paradigm is advantageous only when users achieve high satisfaction and the resources in the cloud paradigm are properly utilized. The authors in [8] proposed a resource- and deadline-aware task scheduling algorithm based on PSO. It was implemented on Cloudsim and compared against the existing max-min and min-min approaches. The results proved that PSO-RADL outperforms the existing baseline algorithms for the makespan, resource utilization, task rejection, penalty, and total cost. In [9], the authors formulated a task scheduling approach using CSO, which is based on the behavior of cats, and aimed to address the makespan, total power cost at datacenters, migration time, and energy consumption. 
The scheduling model was implemented on Cloudsim, and two types of workloads were used to check the efficacy of the algorithm, i.e., a random workload and real-time worklogs. It was compared against the PSO and CS algorithms, and the results revealed a significant improvement over the baseline algorithms for the mentioned parameters. In [10], the authors formulated a task scheduling mechanism for load balancing by hybridizing the lateral wolf and PSO algorithms. The LW-PSO approach works based on the fitness value of LW. It was implemented on Cloudsim with Python, evaluated against the baseline approaches, and showed a significant improvement over them while minimizing the execution time, turnaround time, and response time. In [11], the authors formulated a scheduling mechanism which addresses the execution time and wait time. Tasks are scheduled using the PSO approach while also balancing the load of the tasks. The entire experiment was conducted on Gridsim and evaluated with random and real-time cluster computational workloads. It was compared against different baseline approaches, and the results proved that Ba-PSO outperforms the existing approaches for the specified parameters. In [12], the authors formulated a task scheduling mechanism by combining PSO and opposition-based learning, which accelerates the search process of PSO and avoids premature convergence. It was implemented on Cloudsim and evaluated against different variants of PSO; the proposed OPSO shows a considerable improvement over the existing approaches in minimizing the makespan and energy consumption. In [13], a task scheduling mechanism was formulated from the MVO and PSO approaches, where MVO works as the global search process and PSO acts as the local search. 
It was implemented on the Cloudsim toolkit and compared against the existing MVO and PSO algorithms; the results revealed that EMVO substantially outperforms the baseline algorithms by minimizing the makespan while improving resource utilization. In [14], a task scheduling algorithm was formulated by hybridizing the GA and FPA. It was simulated on the Cloudsim toolkit, and the addressed parameters were resource utilization, the completion time, and the computation cost. It was compared against the existing GA and FPA, and the simulation results revealed that GA-FPA outperformed both for the specified parameters. The authors in [15] designed an energy-efficient task scheduling mechanism for a heterogeneous cloud environment. They used a deadline constraint to execute tasks within the specified time before putting a new task into execution mode. This approach was formulated in two stages: the first stage minimizes the execution time while scheduling tasks, and the second stage reassigns tasks based on their priorities to obtain an optimized execution time and energy consumption. It was implemented on Cloudsim and compared against the existing baseline approaches RC-GA, AMTS, and E-PAGA; from the simulation results, it is evident that EPETS reduces energy consumption by 5 to 20% relative to the baseline algorithms. In [16], a hybrid task scheduling model was developed by the authors to address the makespan, cost, and response time. It combines two algorithms, i.e., the GA and electro search. The entire set of simulations was carried out on Cloudsim, and the model was evaluated against the baseline HPSOGA, ES, and ACO algorithms. The results proved that HSGA was dominant over the existing algorithms for the specified parameters. 
Fault tolerance can be considered one of the important parameters in the cloud paradigm, as the quality of the cloud services depends on it; the quality of service is preserved when the cloud provider follows the SLA promptly. For this purpose, the authors in [17] formulated a multi-phase scheduling mechanism using the GA. In the initial phase, the individual fitness value was calculated and considered as the local fitness value; in the next phase, the global fitness value was identified. After calculating both fitness values, the tasks from the different users were optimally mapped to the virtual resources according to these values. The entire simulation was conducted on Cloudsim and evaluated against the baseline GA and its variants. The results showed that MFTGA dominates the existing baseline approaches for SLA violations, the execution time, memory utilization, cost, and energy consumption. When the number of user requests in the cloud paradigm increases, the computation overhead of scheduling tasks onto the respective virtual resources increases as well. The tasks in this paradigm arise from various heterogeneous resources; therefore, the authors in [18] extracted the features of the tasks and reduced them using the MapReduce framework. Their approach was implemented on Cloudsim and modeled by combining the WAO and GA. It was evaluated against the existing GA and WAO approaches, and the results showed a considerable improvement over the baselines in minimizing the processing cost, computation cost, and processing time. In [19], the authors formulated a task scheduling model conscious of both energy and the cost incurred by the cloud provider. It was modeled using a GA and formulated in two phases. 
In the first phase, the task priorities were calculated; in the second phase, these tasks were allocated to the respective processors using the GA approach. This experiment was conducted on MATLAB. The model was compared against several baseline approaches and was found to outperform them while minimizing the makespan and energy consumption. To achieve high performance in the cloud paradigm, the authors in [20] designed a hybridized task scheduling mechanism in which the GSA is used for the local search and the GA for the global search. It was implemented on Cloudsim, and the workload used was a real-time workflow benchmark dataset of a heterogeneous nature. GSAGA was compared against the Profit-MTC and IWC algorithms; the results show that GSAGA considerably outperforms the compared approaches while maximizing processing capacity and minimizing energy consumption. In [21], a hybridized task scheduling approach was formulated using the FPSO and GA, with the main aim of efficiently balancing tasks among the VMs. It was implemented on Cloudsim and compared against the existing PSO and GA; the results revealed that FPSO-GA, which has a fuzzy nature, outperforms the existing approaches by efficiently balancing the load among the VMs. In [22], the authors formulated an efficient power-aware task scheduling algorithm using a modified ACO approach, modeled on Pareto updating and accelerating convergence with an adaptive probability distribution. All the simulations were carried out on Cloudsim and evaluated against the existing state-of-the-art approaches; the results proved that MOTS-ACO was dominant over the existing approaches in distributing tasks and utilizing power efficiently. 
In [23], the authors designed a hybridized task scheduling algorithm combining ACO and PSO, where PSO was used for the global search and ACO for the local search. The simulations were carried out on Cloudsim, and the algorithm was evaluated against variants of the PSO and ACO approaches. The results show that AC-PSO dominates the existing variants by minimizing the makespan and total cost and increasing resource utilization. In [24], the authors designed a task scheduling approach for minimizing the makespan and improving the performance of the cloud computing paradigm. It was modeled using R-ACO, a reinforced approach through which ant patterns can be rewarded. The experimentation was conducted on MATLAB 2015. It was evaluated against the baseline ACO approach, and R-ACO was found to minimize the makespan by 60%. In [25], a load balancing strategy was developed based on the ACO algorithm, aimed at minimizing the makespan and cost. Cloudsim was used as the simulation tool. It was evaluated against PSO and hybrid approaches, and the results show that Mr-LBA dominates the existing algorithms for the specified parameters. In [26], a hybridized task scheduling approach was developed to address energy consumption and resource utilization. It was developed using the ACO and BF algorithms and implemented on the Cloudsim toolkit. It was compared with the ACO and BF approaches, and the results showed its dominance in terms of the makespan, energy consumption, and resource utilization over the existing state-of-the-art algorithms. In [27], the authors designed a task scheduling approach based on an adaptive variant of the ACO algorithm, in which the adaptivity is generated using polymorphic ants based on the ants' pheromone updates. This algorithm aimed at minimizing the execution cost and the execution time. 
It was implemented on Cloudsim and evaluated against the state-of-the-art approaches. The results show that AACO outperformed the existing algorithms for the specified parameters. In [28], the authors designed a task scheduling approach for challenges such as workflow intensity, task heterogeneity, and task dependencies. It was modeled using the hybrid HEFT-ACO approach, implemented in the AWS cloud environment, and evaluated against existing variants of ACO. The results show a considerable improvement over these variants in minimizing the makespan and cost. In [29], the authors focused on developing a task scheduling algorithm for computational- and delay-sensitive tasks at the mobile edge cloud for resource allocation, mainly aiming at the efficient allocation of radio and computational resources to user requests. It was implemented using MATLAB and modeled as a joint scheduling mechanism using the ACO and GA approaches, with ACO used for the local search and the GA for the global search, and it was compared against the existing GA and ACO. The results show that it utilizes resources more efficiently and allocates them in an optimized manner. In [30], a load balancing mechanism was developed to distribute resources effectively among the VMs in the cloud paradigm. Heterogeneous user requests arrive at the cloud console and, to balance these tasks properly onto the corresponding VMs, the authors in [30] used ACO-NN, which applies a nearest-neighbor criterion among all the ants so that the ants with the best solutions are selected to solve the scheduling problem. It was implemented on Cloudsim and evaluated against the baseline approaches; the results showed that it balances the load among the VMs more efficiently than the existing approaches. 
In [31], the authors used DGWO to model and schedule the dependent tasks in cloud computing. The main aim of this approach was to minimize the computation and transmission costs. They used the largest order value method to model the scheduler in the cloud paradigm. They used WorkflowSim for their experimentation and compared DGWO against the PSO and BPSO algorithms; the results showed a considerable improvement over the baseline approaches for the mentioned parameters.
In Table 1, it can clearly be observed that many existing works on task scheduling addressed various parameters, i.e., the execution time, execution cost, response time, energy consumption, resource utilization, and load balancing of tasks. Earlier authors failed to address the parameters related to trust in the cloud provider, which is an important aspect of the cloud paradigm. An ineffective task scheduler in the cloud paradigm leads to a poorer quality of service, which in turn damages users' trust in a cloud provider. In the proposed approach, the SLA-based trust parameters that impact the quality of service, and in turn the trust in a cloud provider, are evaluated. Therefore, an efficient trust-aware task scheduling algorithm based on the SLA parameters is proposed to preserve trust in the cloud provider. For this scheduling paradigm, the priorities of the incoming tasks and the VMs are considered, and the tasks are then carefully scheduled onto VMs while minimizing the makespan and improving the availability, success rate, and turnaround efficiency.

Problem Definition and System Architecture
This section carefully discusses the problem definition and system architecture in this research. The below subsection discusses the problem definition.

Problem Definition
Consider that a set of tasks m_t = {m_1, m_2, m_3, ..., m_t} is mapped to virtual resources n_v = {n_1, n_2, n_3, ..., n_v}, which reside in a set of hosts ho_q = {ho_1, ho_2, ho_3, ..., ho_q}, which in turn reside in datacenters d_p = {d_1, d_2, d_3, ..., d_p}. In this research, the task scheduling problem is defined in such a way that the m_t tasks are mapped to the n_v virtual machines, which are in turn mapped to the d_p datacenters, by considering the priorities of the tasks and the VMs based on the unit cost of electricity, while minimizing the makespan and improving the SLA-based trust parameters, i.e., the availability, success rate, and turnaround efficiency.

System Architecture
This subsection discusses the system's architecture in detail; Figure 1 depicts it. Initially, requests from various cloud users arrive at the cloud console from heterogeneous resources. These requests are captured by a cloud broker on behalf of all the users and are submitted to the task manager. The task manager examines each user request and, if it is valid as per the SLA made between the cloud provider and the user, the service request dispatcher dispatches the corresponding request to the scheduler. In our architecture, we introduced a mechanism at the task manager level where the priorities of tasks are calculated based on the length, run time, and capacity of the tasks. After calculating the tasks' priorities, the priority of each VM is calculated based on the unit cost of electricity. After calculating the priorities carefully, the corresponding tasks are fed to the execution queue, and the scheduler then sends tasks to the appropriate VMs based on the consumption status of the virtual resources at the resource manager level, while minimizing the makespan and improving the SLA-based trust parameters. In our research, we posed a deadline constraint whereby no other tasks are scheduled onto a VM while a task is currently being executed on that VM. After discussing the system's architecture, we carefully modeled our scheduler using a mathematical model. Table 2 indicates the notations used in the system's architecture.
Initially, in this mathematical modeling, we calculated the current workload of all the VMs. It was calculated using Equation (1).
where load_v indicates the workload on all n VMs considered in our work. After the calculation of the current workload on all the VMs, we calculated the workload on all the physical hosts, as all the VMs are hosted on the considered physical hosts. Therefore, the current workload on all the hosts was calculated using Equation (2).
where load_ho_q indicates the workload on all the considered hosts. After calculating the workload on the VMs and hosts, the priorities of the tasks are computed in order to map tasks onto the VMs; these priorities depend on the length of the tasks and the processing capacities of the VMs on which the tasks run. The processing capacity of a VM was calculated using Equation (3).
After calculating the processing capacity of a VM, the processing capacity of all the VMs was calculated using Equation (4).
After the calculation of the total processing capacity of all the VMs, the size of the task was calculated using Equation (5).
After calculating the size of the task from Equation (5), the priorities of the tasks were calculated using Equation (6).
where pr_n_v indicates the processing capacity of a VM and m_size_t indicates the size of the corresponding task. At this level, we evaluated the priorities of the tasks; we then evaluated the priorities of the VMs based on the electricity unit cost, as incoming tasks on the cloud console should be precisely mapped onto the VMs by considering the task length and the processing capacity of the VMs. Therefore, we calculated the priorities of the VMs based on the electricity cost using Equation (7), defined as the ratio of the highest electricity cost among all the datacenters to the electricity cost at the corresponding datacenter.
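The priority computations described above can be sketched in Python. Since the exact forms of Equations (3)-(7) are not reproduced in the text, the formulas below are plausible assumptions based on the verbal descriptions (e.g., a Cloudsim-style MIPS x processing-elements capacity model), not the paper's exact equations.

```python
# Hypothetical sketch of the priority computations described above.
# The exact forms of Equations (3)-(6) are assumed; Equation (7) follows
# its verbal definition in the text.

def processing_capacity(mips, num_pes):
    # Eq. (3) analogue: capacity of one VM, assumed as MIPS x processing elements
    return mips * num_pes

def task_priority(task_length, vm_capacity):
    # Eq. (6) analogue: priority derived from task size relative to VM capacity
    return task_length / vm_capacity

def vm_priority(dc_costs, dc_index):
    # Eq. (7): ratio of the highest electricity cost among all datacenters
    # to the electricity cost at the datacenter hosting this VM
    return max(dc_costs) / dc_costs[dc_index]

costs = [0.12, 0.08, 0.10]            # unit electricity cost per datacenter
cap = processing_capacity(1000, 2)    # 2000
print(task_priority(40000, cap))      # 20.0
print(vm_priority(costs, 1))          # cheapest datacenter gets the highest priority
```

Note how the cheapest datacenter yields the largest VM priority value, matching the scheduler's goal of routing high-priority tasks to low electricity-price datacenters.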
After the calculation of the priorities of the tasks and the VMs from Equations (6) and (7), the task manager needs to submit these priorities to the scheduler and the scheduler will generate schedules to the corresponding tasks by mapping the high-priority tasks onto the VMs with low electricity price datacenters. Every time the scheduler generates schedules based on these priorities, the scheduler needs to check with the resource manager for the status of the virtual resources to see whether they are busy or idle, i.e., the resource status.
After the calculation of these priorities, we identified the SLA-based trust parameters, i.e., the availability, success rate, and turnaround efficiency, which impact the quality of service of a cloud provider; if these SLA parameters are violated, then it has an impact on the trust in the cloud provider.
Initially, the availability of a virtual machine is calculated and it can be defined as the ratio of the number of tasks accepted by the VM to the number of tasks submitted by the user. It is calculated using Equation (8).
Another SLA-based trust parameter which needs to be discussed is the success rate, which is defined as the ratio of successfully executed requests onto a VM to the number of submitted requests for a specified amount of time.
After calculating the success rate, the turnaround efficiency of a VM is calculated as it is also one of the SLA-based trust parameters. Therefore, it is calculated as a ratio of the estimated turnaround time supplied by a cloud provider to the actual turnaround time generated while scheduling tasks.
It is calculated using the following Equation (10).
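The three ratios above can plausibly be written as follows; the bodies of Equations (8)-(10) are not reproduced in the text, so this reconstruction from their verbal definitions uses symbol names that are our assumptions.

```latex
% Plausible reconstruction of Equations (8)-(10) from their verbal
% definitions; the symbol names are assumptions.
\mathrm{avail}_{n_v} = \frac{m_{\mathrm{accepted}}}{m_{\mathrm{submitted}}} \qquad (8)

\mathrm{succ}_{n_v} = \frac{m_{\mathrm{executed}}}{m_{\mathrm{submitted}}} \qquad (9)

\mathrm{TAeff}_{n_v} = \frac{TAT_{\mathrm{estimated}}}{TAT_{\mathrm{actual}}} \qquad (10)
```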
After the calculation of the SLA-based trust parameters from Equations (8)-(10), then the trust in a cloud service provider can be calculated using Equation (11).
From Equation (11), X = {X1, X2, X3} are the positive weights assigned for the calculation of trust in a cloud service provider, and each X value lies between 0 and 1; these weights are calculated using the method in [32]. In this work, X1 = 0.5, X2 = 0.2, and X3 = 0.1; these weights vary with respect to the requests that arrive on the cloud console. Therefore, with these weights and using Equation (11), the trust in a cloud service provider is calculated.
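The trust computation can be sketched as follows. The linear weighted-sum form of Equation (11) is an assumption based on the description above, using the example weights X1 = 0.5, X2 = 0.2, X3 = 0.1 given in the text.

```python
# Sketch of Equations (8)-(11): SLA-based trust parameters and a weighted
# trust score. The linear combination in trust() is an assumed form.

def availability(accepted, submitted):                 # Eq. (8)
    return accepted / submitted

def success_rate(succeeded, submitted):                # Eq. (9)
    return succeeded / submitted

def turnaround_efficiency(estimated_tat, actual_tat):  # Eq. (10)
    return estimated_tat / actual_tat

def trust(avail, succ, ta_eff, weights=(0.5, 0.2, 0.1)):
    # Eq. (11): X = {X1, X2, X3}, each in [0, 1]
    x1, x2, x3 = weights
    return x1 * avail + x2 * succ + x3 * ta_eff

a = availability(95, 100)                # 0.95
s = success_rate(90, 100)                # 0.9
t = turnaround_efficiency(120.0, 150.0)  # 0.8
print(trust(a, s, t))                    # ~ 0.735
```

A VM that accepts and completes more of the submitted requests, and finishes them close to the estimated turnaround time, scores a higher trust value.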
After the identification of the trust parameters, the makespan for the set of considered tasks is calculated, as the makespan indirectly affects the quality of service which, in turn, affects the trust in a cloud provider. Therefore, we identified that there is a relationship between the makespan and the SLA-based trust parameters. To calculate the makespan, we first need to identify the execution time of a task on the corresponding VM. We posed a deadline constraint in such a way that when a request is made by a user, it should immediately be assigned to a VM or, otherwise, it should wait until the current task on that VM has finished executing. Therefore, the execution time of a task can be calculated using Equation (12).
After the calculation of the execution time, we calculated the finish time of the corresponding task. As we posed a deadline constraint, every time a request is raised by a user, a VM is assigned after the completion of its current task, or immediately if no task is currently running. Because the deadline constraint is posed, the finish time must always be less than the deadline of the task. We calculated the finish time using Equation (13).
To preserve trust in a cloud service provider, it is necessary to finish a task within its deadline; therefore, the finish time of a task should be less than the deadline of the task. This condition is expressed in Equation (14).
Now we can calculate the makespan, which indirectly impacts the trust in a cloud provider; it is defined as the overall time taken to complete the execution of the considered tasks on the VMs. It is calculated using Equation (15).
From Equation (16), ψ_tv = 1 if a task t is assigned to a VM v; otherwise, it is set to 0.
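The timing model above can be sketched in Python. The execution-time form (task length divided by VM capacity) is an assumed Cloudsim-style model, since Equations (12)-(16) are not reproduced in the text.

```python
# Sketch of Equations (12)-(15): execution time, finish time under the
# deadline constraint, and makespan. The execution-time model is assumed.

def execution_time(task_length, vm_capacity):
    # Eq. (12) analogue: task length (instructions) over VM capacity (MIPS)
    return task_length / vm_capacity

def finish_time(start_time, exec_time):
    # Eq. (13): a task starts only after the VM's current task completes
    return start_time + exec_time

def makespan(finish_times):
    # Eq. (15): time at which the last of the considered tasks completes
    return max(finish_times)

# Two tasks queued on one VM (1000 MIPS), one task on another VM (500 MIPS)
ft1 = finish_time(0.0, execution_time(10000, 1000))  # 10.0
ft2 = finish_time(ft1, execution_time(5000, 1000))   # 15.0 (waits for ft1)
ft3 = finish_time(0.0, execution_time(6000, 500))    # 12.0
deadline = 20.0
assert max(ft1, ft2, ft3) <= deadline                # Eq. (14) check
print(makespan([ft1, ft2, ft3]))                     # 15.0
```

The second task on the first VM illustrates the deadline constraint: it cannot start until the current task finishes, so its finish time includes the wait.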
Up to now, we have carefully modeled the scheduler mathematically; to preserve trust and to optimize the parameters, we used the firefly optimization algorithm to improve the SLA-based trust parameters and the makespan.

Fitness Function for Trust-Aware Task Scheduling Algorithm (TAFFA) Using Firefly Optimization
This subsection discusses the fitness function used in our proposed trust-aware task scheduling algorithm. It is calculated using Equation (17), whose terms are defined in Equation (18); from Equations (17) and (18), we calculated the parameters mentioned above. When this fitness function is evaluated, the makespan needs to be minimized and the availability, success rate, and turnaround efficiency need to be improved. The next section discusses the methodology used in our task scheduler and presents the proposed trust-aware task scheduling algorithm using firefly optimization.
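Since the bodies of Equations (17) and (18) are not reproduced in the text, one plausible scalarization that minimizes the makespan while rewarding the SLA-based trust parameters is sketched below; the weighted form and the weight values are assumptions, not the authors' fitness function.

```python
# Hypothetical TAFFA-style fitness: lower is better. It penalizes a large
# makespan and any shortfall of the SLA trust parameters from their ideal
# value of 1.0. The combination and weights are assumptions.

def fitness(makespan, avail, succ, ta_eff, w=(0.4, 0.2, 0.2, 0.2)):
    w_mk, w_av, w_sr, w_ta = w
    return (w_mk * makespan
            + w_av * (1 - avail)
            + w_sr * (1 - succ)
            + w_ta * (1 - ta_eff))

# A schedule with a lower makespan and better SLA values scores lower (better)
good = fitness(10.0, 0.95, 0.9, 0.8)
bad = fitness(25.0, 0.80, 0.7, 0.6)
print(good < bad)  # True
```

In a population-based search, each candidate task-to-VM mapping would be scored this way, and the firefly update would move candidates toward lower-fitness (brighter) solutions.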

Proposed Trust-Aware Task Scheduling Algorithm Using Firefly Optimization
This section discusses the proposed trust-aware task scheduling algorithm using the firefly optimization algorithm. Initially, we discuss the methodology used to model our scheduling algorithm. The proposed approach was modeled using firefly optimization because it requires fewer iterations and the fireflies traverse much of the problem space, making it suitable for solving NP-hard problems, which is the exact analogy for task scheduling in the cloud computing paradigm. Firefly optimization is a nature-inspired algorithm that mimics the behavior of fireflies and their attraction patterns based on their flashing patterns, as discussed in [33]. The basic approach of firefly optimization, and how two flies are attracted to each other via their flashes, is discussed in [34]. Given their nature, fireflies are well suited to solving NP-hard and multi-dimensional problems [35], which are similar to the task scheduling problems in cloud computing; therefore, we chose the firefly optimization algorithm to solve the task scheduling problem in the cloud paradigm. A firefly is attracted to other flies based on their brightness; naturally, flies are attracted to flies of the other gender, which they identify by the intensity of the flashes used for communication. Generally, in the firefly optimization approach, the following cases are considered: (1) all flies are assumed to belong to the same gender and attract each other irrespective of gender; (2) a less-bright firefly is attracted to a brighter firefly, i.e., attractiveness is directly proportional to brightness; and (3) if a fly is brighter than all the other flies in the search space, it moves randomly. Consequently, the less intense a fly's light, i.e., the less bright it is, the greater the distance between it and the others, as attractiveness is directly proportional to brightness.
Therefore, to proceed with this approach, we need to calculate the brightness, i.e., the intensity and attractiveness.
Initially, the intensity of a firefly can be calculated using Equation (19).
Attractiveness depends on how much light is absorbed and on the brightness; therefore, the brightness intensity and the light absorption coefficient are related to each other. This is indicated in Equation (20).
After calculating the intensity of the flies, we then need to identify the distance between two flies. This is calculated using Equation (21).
where r is the distance computed over the dimensions of a firefly. After calculating the distance, we identified the population of fireflies, dispersed them, and defined the movement of a firefly i attracted toward a brighter firefly j over the number of iterations, indicated as u. The movement of the firefly is calculated using Equation (22).
After defining the movement of the firefly, we need to calculate the randomized damping constant, which is calculated using Equation (23).
where θ is a damping constant lying between 0 and 1.
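The firefly mechanics of Equations (19)-(23) can be sketched as follows. This uses the standard firefly-algorithm forms (exponential decay of intensity and attractiveness with squared distance, Euclidean distance, and an α-scaled random step); the parameter values and exact forms are assumptions where the paper's equations are not reproduced.

```python
import math
import random

# Sketch of the firefly mechanics in Equations (19)-(23). beta0 (base
# attractiveness), gamma (light-absorption coefficient), and alpha
# (random step scale) are standard firefly parameters; concrete values
# here are illustrative assumptions.

def intensity(brightness, gamma, r):
    """Light intensity decays with distance r (Equation (19)-style)."""
    return brightness * math.exp(-gamma * r * r)

def attractiveness(beta0, gamma, r):
    """Attractiveness falls off with the absorption coefficient (Eq. (20)-style)."""
    return beta0 * math.exp(-gamma * r * r)

def distance(xi, xj):
    """Euclidean distance between two fireflies (Equation (21)-style)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(xi, xj)))

def move(xi, xj, beta0, gamma, alpha):
    """Move firefly i toward the brighter firefly j (Equation (22)-style),
    with a random perturbation scaled by the damping term (Equation (23))."""
    r = distance(xi, xj)
    beta = attractiveness(beta0, gamma, r)
    return [a + beta * (b - a) + alpha * (random.random() - 0.5)
            for a, b in zip(xi, xj)]
```

With γ = 0 and α = 0, attractiveness stays at β0 and the move lands exactly on the brighter firefly, which makes the roles of the two coefficients easy to see.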

Proposed Trust-Aware Task Scheduling Algorithm (TAFFA) in Cloud Computing Using Firefly Optimization Algorithm
This algorithm starts with a random firefly population and then calculates the priorities of the incoming tasks using Equation (6) and the priorities of the VMs using Equation (7). After calculating the priorities, the fitness function is evaluated using Equation (17). In the next step, the intensity of the fireflies is calculated using Equations (19) and (20), and then the distance and movement of the fireflies are evaluated using Equations (21) and (22). If a firefly has better brightness, the corresponding parameters are calculated using Equations (8)-(10) and (15), respectively. If these parameters are optimized, the firefly is identified as the best solution and its value is recorded as the best value; otherwise, the procedure is repeated over all iterations until the best solution is reached.
Start.
Initialize a random firefly population.
Calculate the priorities of the incoming tasks using Equation (6).
Calculate the priorities of the VMs using Equation (7).
Evaluate the fitness function using Equation (17).
for every firefly do
    Calculate the intensity using Equations (19) and (20).
    Calculate the distance and movement of the fireflies using Equations (21) and (22).
    if a firefly has more brightness then
        Calculate ma_mt, AV(n_v), SR(n_v), and TE(n_v) using Equations (8)-(10) and (15), respectively.
        Return the best solutions.
    else
        Repeat the same procedure until the best solution is reached, for all iterations.
    end if
end for
Stop.
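The control flow of the TAFFA loop can be sketched as a runnable skeleton. This is not the paper's implementation: the priority and fitness computations (Equations (6), (7), and (17)) are stood in for by a caller-supplied fitness function, and the discrete copy/mutate move rules are illustrative assumptions replacing Equations (19)-(23), which are defined over continuous positions.

```python
import random

# Skeleton of the TAFFA loop. A firefly encodes a candidate schedule:
# position t holds the VM index assigned to task t. fitness_fn maps a
# schedule to a score where lower is "brighter" (better).

def taffa(num_tasks, num_vms, fitness_fn, n_fireflies=20, iters=50,
          beta0=0.9, alpha=0.2):
    swarm = [[random.randrange(num_vms) for _ in range(num_tasks)]
             for _ in range(n_fireflies)]
    best = list(min(swarm, key=fitness_fn))
    for _ in range(iters):
        for i in range(n_fireflies):
            for fj in swarm:
                if fitness_fn(fj) < fitness_fn(swarm[i]):  # fj is brighter
                    # Discrete stand-in for the move of Equation (22):
                    # drift each gene toward the brighter firefly ...
                    moved = [gj if random.random() < beta0 else gi
                             for gi, gj in zip(swarm[i], fj)]
                    # ... plus an alpha-scaled random perturbation
                    # (the damped random step of Equation (23)).
                    if random.random() < alpha:
                        moved[random.randrange(num_tasks)] = random.randrange(num_vms)
                    swarm[i] = moved
        cand = min(swarm, key=fitness_fn)
        if fitness_fn(cand) < fitness_fn(best):
            best = list(cand)
    return best
```

A usage example: with a fitness of "maximum number of tasks on any one VM", the loop drives the swarm toward balanced schedules.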

Simulations and Results
In this section, we discuss the extensive simulations conducted using the Cloudsim [36] simulator, which models the entire cloud environment with the support of Java programming. For these simulations, we used two types of workloads: randomized workloads fabricated with different distributions, i.e., uniform, normal, left-skewed, and right-skewed; and real-time worklogs taken from the HPC2N [37] and NASA [38] computing clusters. After taking the workloads from these two sources, we evaluated our proposed TAFFA approach on them and compared it with the existing ACO, GA, and PSO algorithms.

Settings Used for Simulation
We used Cloudsim [36] for the entire simulation, on a machine with 16 GB of RAM, an M1 chip processor, and the macOS operating system, which provides virtualization support. We used randomized fabricated workloads, represented as b01, b02, b03, b04, b05, and b06, with different distributions; in addition to the randomized workloads, real-time parallel worklog archives from HPC2N [37] and NASA [38] were used. The randomized distributions fall into different categories: b01 is a uniform distribution in which all tasks are equally distributed; b02 is a normal distribution with a higher number of medium tasks and a lower number of small and large tasks; b03 is a left-skewed distribution with a higher number of medium tasks and a lower number of small tasks; and b04 is a right-skewed distribution with a higher number of small tasks and a lower number of medium tasks. In our work, the HPC2N worklog is represented as b05 and the NASA worklog as b06. We took the simulation settings from [39].
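Fabricating the b01-b04 task-length distributions described above can be sketched as follows. The length bounds and the power-transform trick for skewing are illustrative assumptions; the real worklogs b05 (HPC2N) and b06 (NASA) are traces and are not synthesized here.

```python
import random

# Sketch of fabricating the b01-b04 task-length distributions.
# lo/hi length bounds are assumed for illustration.

def fabricate(dist, n, lo=1000, hi=10000, seed=None):
    rng = random.Random(seed)
    if dist == "uniform":        # b01: all task sizes equally likely
        draw = lambda: rng.uniform(lo, hi)
    elif dist == "normal":       # b02: mostly medium tasks
        mu, sigma = (lo + hi) / 2, (hi - lo) / 6
        draw = lambda: rng.gauss(mu, sigma)
    elif dist == "left_skewed":  # b03: mass shifted toward larger tasks
        draw = lambda: lo + (hi - lo) * (rng.random() ** 0.5)
    elif dist == "right_skewed": # b04: mass shifted toward smaller tasks
        draw = lambda: lo + (hi - lo) * (rng.random() ** 2)
    else:
        raise ValueError(dist)
    # Clamp draws (the normal case can leave the bounds) to [lo, hi].
    return [min(hi, max(lo, draw())) for _ in range(n)]
```

The square-root transform pushes uniform draws toward the upper bound and the square pushes them toward the lower bound, producing the two skewed shapes.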

Calculation of Makespan
In this work, the makespan was evaluated for the proposed TAFFA scheduler; the makespan must be calculated while designing a scheduler, as the effectiveness of any scheduler depends on the makespan it generates, and minimizing the makespan improves the scheduler's performance. Therefore, we calculated the makespan by providing 100 to 1000 tasks and running the simulation for 50 iterations, using the configuration settings in Table 3 and the workloads b01, b02, b03, b04, b05, and b06, respectively. The proposed TAFFA approach was compared with the baseline ACO, GA, and PSO algorithms. Table 4 reports the makespan generated for the different workloads given as input to the TAFFA scheduler and, from Figure 2, it is clearly evident that the proposed TAFFA scheduler outperforms the baseline approaches in terms of the makespan generated over 50 iterations for the considered tasks.

Calculation of Availability
After calculating the makespan for the proposed TAFFA scheduler, the SLA-based trust parameters were calculated to improve trust, beginning with the availability, as it is an important parameter from the perspectives of both the cloud consumer and the service provider. Improving the availability percentage benefits the cloud provider in terms of the quality of service, which improves the reliability and, thereby, the trust in the cloud provider. Therefore, the availability of the virtual resources was calculated by providing 100 to 1000 tasks and running the simulation for 50 iterations, using the configuration settings in Table 3 and the workloads b01, b02, b03, b04, b05, and b06, respectively. The proposed TAFFA approach was compared with the baseline ACO, GA, and PSO algorithms. Table 5 reports the availability generated for the different workloads given as input to the TAFFA scheduler and, from Figure 3, it is clearly evident that the proposed TAFFA scheduler outperforms the baseline approaches in terms of the availability achieved over 50 iterations for the considered tasks.

Calculation of Success Rate
The success rate of a virtual resource indirectly improves the trust in a cloud provider, as the successful execution of tasks improves the quality of service which, in turn, minimizes SLA violations and, thereby, increases the trust in the cloud provider. Therefore, the success rate of the virtual resources was calculated by providing 100 to 1000 tasks and running the simulation for 50 iterations, using the configuration settings in Table 3 and the workloads b01, b02, b03, b04, b05, and b06, respectively. The proposed TAFFA was compared with the baseline ACO, GA, and PSO algorithms. Table 6 reports the success rate generated for the different workloads given as input to the TAFFA scheduler and, from Figure 4, it is clearly evident that the proposed TAFFA scheduler outperforms the baseline approaches in terms of the success rate achieved over 50 iterations for the considered tasks.

Calculation of Turnaround Efficiency
The turnaround efficiency of the virtual resources impacts the quality of service, as it relates to the successful execution of tasks and the time taken to execute and respond to the users' requests. Improving the turnaround efficiency in turn improves the quality of service and, thereby, increases the trust in the cloud provider. Therefore, the turnaround efficiency of the virtual resources was calculated by providing 100 to 1000 tasks and running the simulation for 50 iterations, using the configuration settings in Table 3 and the workloads b01, b02, b03, b04, b05, and b06, respectively. The proposed TAFFA approach was compared with the baseline ACO, GA, and PSO algorithms. Table 7 reports the turnaround efficiency generated for the different workloads given as input to the TAFFA scheduler and, from Figure 5, it is clearly evident that the proposed TAFFA scheduler outperforms the baseline approaches in terms of the turnaround efficiency achieved over 50 iterations for the considered tasks.
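The three SLA-based trust parameters reported in these subsections can be sketched as simple ratios. The paper defines them in Equations (8)-(10), which are not reproduced here, so the formulas below are common definitions from the trust-aware scheduling literature and should be read as assumptions rather than the paper's exact equations.

```python
# Hedged sketch of the SLA-based trust parameters (cf. Equations (8)-(10)).
# These ratio forms are assumptions, not quotes from the paper.

def availability(accepted, submitted):
    """Fraction of submitted task requests a virtual resource could accept."""
    return accepted / submitted if submitted else 0.0

def success_rate(completed, accepted):
    """Fraction of accepted tasks completed successfully (e.g., within deadline)."""
    return completed / accepted if accepted else 0.0

def turnaround_efficiency(expected_turnaround, actual_turnaround):
    """Ratio of expected to actual turnaround time, capped at 1."""
    return min(1.0, expected_turnaround / actual_turnaround)
```

Each parameter lies in [0, 1], so all three can be folded directly into a fitness function alongside the makespan.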

Discussion of Results
This subsection analyzes the simulated results and the improvement of the parameters measured in the preceding subsections. As mentioned, we used Cloudsim [36] for the extensive simulations. The simulations were carried out with different fabricated datasets and real-time worklogs, denoted b01, b02, b03, b04, b05, and b06, respectively. We compared our proposed TAFFA scheduler with the existing baseline algorithms, i.e., ACO, GA, and PSO, by running the simulation for 50 iterations, and the results for the measured parameters were presented above. From these results, we draw inferences that measure how far the proposed TAFFA improved the parameters and generated schedules effectively. Table 8 presents the improvement of the makespan over the baseline approaches for the different workloads used; Table 9 presents the improvement of the availability; Table 10 presents the improvement of the success rate; and Table 11 presents the improvement of the turnaround efficiency. Finally, from all these results, we can say that the proposed TAFFA scheduler improves the makespan as well as the SLA-based trust parameters, thereby improving the quality of service and the trust in the cloud provider to the satisfaction of the cloud user.
Improvement of Makespan
This subsection discusses the improvement of the makespan, as the effectiveness of a task scheduler primarily relies on the makespan; therefore, minimizing the makespan is the primary consideration. The proposed TAFFA was evaluated against the baseline approaches, i.e., ACO, GA, and PSO. We compared the proposed TAFFA with these algorithms because many earlier authors modeled task schedulers with them, albeit with the disadvantages of the ACO, GA, and PSO algorithms discussed in the related works, and different workloads, i.e., different statistical distributions and real-time worklogs, were considered for the simulation. As shown in Table 8, the proposed TAFFA reduced the makespan over ACO for the workloads b01, b02, b03, b04, b05, and b06 by 25.09%, 27.73%, 25.49%, 11%, 22.55%, and 27.45%, respectively; over the GA by 37.6%, 26.62%, 30.1%, 19.86%, 32.08%, and 35.75%, respectively; and over PSO by 32.38%, 16.92%, 25.65%, 16.74%, 33.48%, and 23.11%, respectively. Therefore, the proposed TAFFA greatly reduces the makespan compared with the algorithms mentioned in Table 8.
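Improvement percentages such as those in Tables 8-11 are relative changes against each baseline. The generic formula below is an assumption about how such figures are typically computed, not a quotation from the paper; for minimized metrics (makespan) the sign convention is reversed relative to maximized metrics (availability, success rate, turnaround efficiency).

```python
# Generic relative-improvement formula (assumed, not quoted from the paper).

def improvement_pct(baseline, proposed, lower_is_better=True):
    """Percentage improvement of `proposed` over `baseline`."""
    if lower_is_better:   # e.g., makespan: a reduction is an improvement
        return 100.0 * (baseline - proposed) / baseline
    # e.g., availability / success rate / turnaround efficiency
    return 100.0 * (proposed - baseline) / baseline
```

For instance, reducing a baseline makespan of 100 to 75 is a 25% improvement, while raising availability from 50 to 60 is a 20% improvement.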

Improvement of Availability
This subsection discusses the improvement of the availability, as trust in a cloud provider is based on the availability of the virtual resources in the cloud paradigm. The improvement of the availability of the virtual resources is therefore related to the quality of service provided by the cloud paradigm; a degradation of the availability damages the quality of service, which in turn affects the trust in the cloud paradigm. Therefore, the availability of the virtual resources was calculated carefully using the proposed TAFFA, with different statistical distributions and real-time worklogs considered for the simulation. As shown in Table 9, the proposed TAFFA improved the availability over ACO for the workloads b01, b02, b03, b04, b05, and b06 by 27.28%, 27.4%, 34.64%, 42.98%, 53.56%, and 58.49%, respectively; over the GA by 19.54%, 15.32%, 17.93%, 34.82%, 49.96%, and 55.29%, respectively; and over PSO by 22.4%, 19.02%, 24.47%, 22.87%, 29.62%, and 30.25%, respectively. Therefore, the proposed TAFFA greatly improves the availability over the algorithms mentioned in Table 9.

Improvement of Success Rate
This subsection discusses the improvement of the success rate, as trust in a cloud provider is based on the success rate of the virtual resources in the cloud paradigm. The improvement of the success rate of the virtual resources is therefore related to the quality of service provided by the cloud paradigm; the success rate of tasks highly impacts the quality of service provided by a cloud provider, which in turn relates to the trust in the cloud provider. Therefore, the success rate of the virtual resources was calculated carefully using the proposed TAFFA, with different statistical distributions and real-time worklogs considered for the simulation. As shown in Table 10, the proposed TAFFA improved the success rate over ACO for the workloads b01, b02, b03, b04, b05, and b06 by 30.48%, 66.7%, 53.56%, 56.82%, 60.08%, and 65.14%, respectively; over the GA by 24.84%, 53.79%, 58.11%, 40.23%, 48.43%, and 72.36%, respectively; and over PSO by 24.86%, 33.42%, 38.89%, 49.48%, 33.78%, and 55.55%, respectively. Therefore, the proposed TAFFA greatly improves the success rate over the algorithms mentioned in Table 10.

Improvement of Turnaround Efficiency
This subsection discusses the improvement of the turnaround efficiency, as trust in a cloud provider is based on the turnaround efficiency of the virtual resources in the cloud paradigm. The improvement of the turnaround efficiency of tasks on the virtual resources is therefore related to the quality of service provided by the cloud paradigm; the turnaround efficiency highly impacts the quality of service of a cloud provider, which in turn impacts the trust in the cloud provider. Therefore, the turnaround efficiency of tasks on the virtual resources was calculated carefully using the proposed TAFFA, with different statistical distributions and real-time worklogs considered for the simulation. As shown in Table 11, the proposed TAFFA improved the turnaround efficiency over ACO for the workloads b01, b02, b03, b04, b05, and b06 by 88.75%, 59.21%, 38.92%, 41.85%, 82.38%, and 37.8%, respectively; over the GA by 71.57%, 51.91%, 36.24%, 26.77%, 73.66%, and 51.41%, respectively; and over PSO by 67.62%, 38.33%, 29.77%, 26.55%, 41.04%, and 33.18%, respectively. Therefore, the proposed TAFFA greatly improves the turnaround efficiency over the algorithms mentioned in Table 11.

Conclusions and Future Work
Task scheduling is a challenging aspect of the cloud computing paradigm, as the incoming workloads onto the cloud console arise from various heterogeneous resources with different types and different runtime processing capacities; therefore, it is difficult for a cloud provider to schedule these tasks onto the appropriate virtual resources. The ineffective mapping of tasks onto virtual resources degrades the quality of service and increases the makespan which, in turn, leads to a violation of the SLA between the cloud provider and the user, damaging the trust in the cloud provider. Therefore, to address the SLA-based trust parameters, we carried out research in which the priorities of all incoming tasks and VMs are calculated and the tasks are then precisely mapped onto the VMs. We chose the firefly optimization approach to model our trust-aware task scheduler (TAFFA), and the entire set of simulations was carried out on Cloudsim. We carried out extensive simulations by generating tasks from different randomized task distributions and, to check the efficacy of our approach, we used the standard benchmark worklogs of the HPC2N and NASA computing clusters. Finally, our proposed TAFFA was evaluated against state-of-the-art algorithms and, from the results, it is evident that TAFFA dominates the existing approaches by minimizing the makespan and improving the availability, success rate, and turnaround efficiency. Our proposed TAFFA approach has some limitations: the tasks are generated from various heterogeneous resources, and the proposed TAFFA is not able to predict upcoming tasks from various users. In the future, we intend to develop our task scheduling algorithm using an artificial intelligence approach.