Building a Reputation Attack Detector for Effective Trust Evaluation in a Cloud Services Environment

Abstract: Cloud computing is a widely used technology that has changed the way people and organizations store and access information. The technology is versatile: extensive amounts of data can be stored in the cloud, and businesses can access various services over the cloud without having to install applications. However, cloud computing services are provided over a public domain, which means that both trusted and non-trusted users can access them. Although cloud computing services offer a number of advantages, especially for business owners, they pose various challenges in terms of the privacy and security of information and online services. A threat that is widely faced in the cloud environment is the on/off attack, in which entities exhibit proper behavior for a given time period to develop a positive reputation and gather trust, after which they turn to deception. Another threat often faced by trust management services is the collusion attack, also known as collusive malicious feedback behavior. This is carried out when a group of people work together to make false recommendations, either with the intention of damaging the reputation of another party (a slandering attack) or of enhancing their own reputation (a self-promoting attack). In this paper, a viable solution is provided with the given trust model for preventing these attacks. The method secures cloud services by identifying malicious and inappropriate behaviors through trust algorithms that can detect on/off attacks and collusion attacks by applying different security criteria. Finally, the results show that the proposed trust model can provide high security by decreasing security risk and improving the quality of the decisions of data owners and cloud operators.


Introduction
Many threats are faced by users of cloud computing services, including trust and reputation attacks. These threats are experienced due to the extremely dynamic, nontransparent, and distributed nature of cloud computing services [1,2]. Cloud service providers and customers face significant challenges in preserving and handling trust in the cloud system [3]. Threats also emerge because the provision of cloud computing services is in the public domain, which means that many users have access to it [4]. There are risks from the actions of malicious users toward other cloud service consumers, who often give feedback about their experiences.
Noor, Sheng, and Alfazi [1] asserted that the feedback of service providers was an effective means of obtaining information that helps in examining the trustworthiness of cloud service customers. Varalakshmi, Judgi, and Balagi [5] also noted that service provider feedback plays a vital part in generating trust for a cloud-based service. Cloud computing providers and consumers use trust management systems and detection systems for reputation attacks to provide improved protection and security of online data.
Although trust management and reputation attack detection systems are available, cloud computing systems continue to face targeted attacks from different sources [6,7].

Problem Statement
The on/off attack is a widely encountered threat in cloud environments. Here, a few entities show proper behavior for some time to create a positive reputation. Once they have generated trust, they start their deception. Malicious entities win the trust of the trust management system (TMS) by exhibiting good behavior during interactions that have small impacts; however, when they come across a suitable opportunity for a major interaction, their malicious intentions become evident. For example, sellers on eBay carry out various small transactions to generate a positive reputation and then, after gaining trust, cheat buyers on a more expensive product. Because these entities change their transactional behavior suddenly, it is difficult for other entities to diminish the attacker's reputation quickly enough. This category also includes oscillatory transaction behavior, in which an entity keeps shifting between honest and dishonest behavior so that the attacker's reputation cannot be updated in a timely manner.
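As a minimal illustration of the difficulty described above (not code from the paper), the sketch below shows how a naive cumulative-average reputation decays only slowly once an attacker switches from honest to dishonest behavior; the 20/5 interaction split and the 0-to-1 rating scale are assumptions chosen for the example.

```python
# Illustrative sketch: a cumulative-average reputation reacts slowly to
# an on/off attacker who behaves well (rating 1.0) for 20 interactions
# and then defects (rating 0.0) for 5 interactions.
def cumulative_average(ratings):
    """Reputation as the running mean of all past ratings."""
    total = 0.0
    scores = []
    for i, r in enumerate(ratings, start=1):
        total += r
        scores.append(total / i)
    return scores

history = [1.0] * 20 + [0.0] * 5   # 20 honest interactions, then 5 malicious
scores = cumulative_average(history)

# Even after 5 consecutive malicious interactions, the reputation is
# still 0.8, so the attacker keeps a largely positive standing.
print(round(scores[-1], 2))
```

This is exactly the sluggishness an on/off attacker exploits, and it motivates the penalty terms introduced later in the paper.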
Collusion attacks are threats that emerge when a group or groups of malicious entities make attempts to destabilize the system. In most cases, when different malicious entities act together, there is a greater danger to the reliability of a reputation system than when every entity exhibits malicious behavior separately. Some specific instances of collusion attacks are described below.
Collusive slandering attack: this is also known as a collusive bad-mouthing attack. Malicious users work together to propagate negative reviews about an honest user to significantly decrease that user's reputation, while also trying to enhance their own reputations by giving each other positive reviews.
Collusive reducing reputation: the reputation of honest users can be reduced through coordinated malicious nodes that criticize only some sections of the entities they deal with. They do so to generate conflicting views regarding the transactional behavior of the victims and the reputation of honest entities recommending the victims. There is very little effect on their own reputation, since they give honest feedback about other entities.
Collusive self-promoting attack: this kind of attack is also referred to as a ballot stuffing attack, in which negative behavior is exhibited by all entities of a group of malicious entities, but they give positive feedback about each other. This attack is carried out in different ways. In one instance, dishonest behavior is exhibited by just one entity of the collusive group, while positive reviews are given by the other entities. In another instance, unfavorable behavior is exhibited by a single entity only some of the time to avoid being identified, while other members of the group give positive reviews for it. In this kind of attack, ratings are given for fake transactions. For instance, a seller in an online auction system can collude with a group of buyers to achieve high ratings that do not reflect actual transactions. This increases the seller's reputation, leading to a greater number of orders from other buyers and allowing the seller to charge a higher price than justified.

Contributions
The objective of this research is to ensure that the cloud storage system is highly secure by developing a trust model system that inhibits reputation attacks on cloud services. On/off attacks are carried out by entities that present themselves as good nodes on an online platform; however, once they have acquired the trust of the system, they become malicious nodes. Nodes that carry out on/off attacks may give false information or feedback to damage the system's reputation. Another type of security issue that threatens reputation systems is the collusion attack. These attacks are threats that emerge when a group or groups of malicious entities make attempts to destabilize the system. In the majority of cases, when different malicious entities act together, there is a greater danger to the reliability of a reputation system than when every entity exhibits malicious behavior separately. A solution for trust model systems for preventing on/off and collusion attacks will be presented in this study. In this regard, we will put forward strategies that should be considered when developing the trust evaluation process. This paper will try to determine the solution that is most appropriate for trust issues regarding access control approaches and will then present trust models that can be used to increase information security in distributed storage frameworks that use cryptographic access control techniques. It was found during the investigation that accurate results should be offered by a trust model while assessing trustworthiness, which is what our plan for the recommended trust-based distributed storage framework is based on. In the plan, trust prototypes can be organized into a framework that uses the cryptographic access control approach.
To ensure its effectiveness, this paper proposes a trust model that offers the following: (1) ensuring that the greatest security is provided to customers of cloud services, because they may be dealing with extremely sensitive data when using trust management services; (2) securing cloud services by effectively detecting malicious and inappropriate behaviors through trust algorithms that can identify on/off and collusion attacks; (3) making sure that trust management service availability is adequate for the dynamic nature of cloud services; and (4) providing dependable solutions to avoid reputation attacks and increase the precision of customers' trust values by considering the significance of each interaction.

Related Works
An increase has been experienced in the number of methods employed for examining and handling trust for online services. Noor, Sheng, and Alfazi [1] asserted that these methods were developed after taking into account the feedback of customers of cloud services. The drawback of this study, however, is that the methods do not stress the occasional and periodic reputation attacks that often hamper the privacy and security of cloud computing.
It is due to the dynamic nature of cloud computing that there has been very little focus on these periodic and occasional reputation attacks. The fact that multiple accounts may be held by a single user to access a single service makes the issue even more complex. The Noor, Sheng, and Alfazi [17] study offers vital information about the efficiency of occasional attack detection models in identifying occasional and periodic reputation attacks, but the emphasis of the study was not on particular attacks, such as the on/off and collusion attacks.
A similar study was carried out by Labraoui, Gueroui, and Sekhri [18] in 2015 to examine whether trust and reputation networks were successful in preventing reputation attacks. They presented an on/off trust mitigation mechanism for trust systems employed in wireless sensor networks. The mitigation mechanism punishes every network node with a history of misbehavior; since these penalties influence the trust value of every network node, the mechanism is considered successful in preventing misbehavior. The advantage of this study is that it offers vital information about the on/off trust approach. Trust management in cloud computing was also examined by Noor, Sheng, and Bouguettaya [19] in 2014, but they did not address how on/off attacks could be prevented by trust management models. Tong, Liang, Lu, and Jin [20] suggested a trust model in 2015 that took into account only score value similarity and the collusion size score, but not the impact of feedback time.
Ghafoorian, Abbasinezhad-Mood, and Shakeri [21] carried out a study in 2018 to examine how the trust-and reputation-based RBAC model was used to provide security for data storage in the cloud. They determined that the RBAC model successfully dealt with security risks to the reputation and trust of cloud-based systems.
Other approaches have been examined to manage the trust and reputation of cloud systems. A study was carried out by Nwebonyi, Martins, and Correia [22] in 2019 to evaluate the effectiveness of various models that prevented these trust and reputation risks. However, the focus of these studies was essentially on the privacy and security of the system as a whole, and they did not deal with particular attacks, such as on/off and collusion attacks.
Many reputation-based models were developed in earlier trust models for the detection of collusion attacks through different methods. A trust model defenseless against a collusion attack was presented in [3], which worked by computing the feedback density of all recommenders. Another was developed in [2] by determining the credibility of feedback and changing it in the system with alternative recommenders. An identity-based data storage mechanism was developed by the authors in [21], in which collusion attacks were inhibited and intra-domain and inter-domain queries were supported. Mahdi et al. [22] suggested that the threat of this attack could be decreased by considering the minimum number of recommenders that should be involved in calculating the recommendation trust; otherwise, the RBAC system can easily use virtual recommenders for the compilation of the recommendation trust.

Design and Methodology
The design methodology for this study is a literature review, which is a systematic research method through which findings from previous studies are collected and integrated [23][24][25]. When a literature review is carried out rigorously, adhering to all the rules and conditions of assessing the quality of research evidence, a sound basis is created for developing knowledge, as well as theory. There is a lot of evidence related to attacks carried out on cloud-based services and the way these attacks can be avoided [26,27]. However, this evidence is fragmented. The focus of this study will be on two types of attacks, which are similar in various ways to other attacks [28,29], despite their differences with respect to their features and how they are carried out in the cloud environment [30].

On/Off Attack
In this type of attack, malicious behavior is exhibited by the user of the cloud system in a short time frame. They begin by exhibiting good conduct so that they can deceive the trust system and gain a good reputation. This kind of attack mostly occurs when proper behavior is initially shown by a malicious cloud service user for some time to attain a good reputation and gain the trust of other users. The user then starts taking advantage of this trust. Malicious users win the trust of the TMS in most cases by behaving properly in minor interactions. However, when they get an opportunity for a major interaction, their malicious intent becomes evident.
The opportunistic but malicious behavior of various nodes is characteristic of attacks that are made through direct trust, including the trust of the system. The attacker switches between good and bad behavior, which creates a disguise, and the node is considered trustworthy despite its inappropriate behavior [4]. For instance, malicious behavior can be exhibited by a node that is considered trustworthy on an e-commerce website. This behavior may not be identified because the node was initially considered trustworthy by the system. The interaction trust (IT) is initially computed by the trust model system, which offers an accurate result for the trust value of every cloud service consumer (CR) by calculating the service provider's (SP's) interaction importance (II). The feedback (F) with respect to an interaction is presented as a percentage. Equation (1) is used to compute the IT.
Here, positive feedback in a given time is denoted by α_t; negative feedback in a given time is denoted by β_t; P denotes the value of the new positive recommender feedback; N represents the value of the new negative recommender feedback; the interaction importance value is denoted by II; and the number of feedback items is denoted by NF.
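Since Equation (1) itself is not reproduced in this excerpt, the following sketch shows one plausible, importance-weighted reading of the IT computation. The names alpha_t, beta_t, P, N, II, and NF follow the paper's notation, but the exact weighting scheme is an assumption, not the paper's verified formula.

```python
# Hedged sketch of Equation (1): one plausible reading in which the
# interaction trust (IT) is the importance-weighted share of positive
# feedback, expressed as a percentage. The weighting is an assumption.
def interaction_trust(alpha_t, beta_t, P, N, II, NF):
    """Return IT as a percentage in [0, 100]."""
    positive = alpha_t + P * II   # accumulated + new positive feedback, weighted by II
    negative = beta_t + N * II    # accumulated + new negative feedback, weighted by II
    if NF == 0 or (positive + negative) == 0:
        return 50.0               # neutral default when there is no feedback yet
    return 100.0 * positive / (positive + negative)

# A consumer with a mostly positive history and important new positive feedback:
print(round(interaction_trust(alpha_t=8, beta_t=2, P=1, N=0, II=0.9, NF=11), 1))
```

The importance factor II makes a single high-stakes interaction move the trust value more than many trivial ones, which is the property the text attributes to the interaction importance.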
The risk of on/off attacks (O2) can be prevented by incorporating the penalty for the on/off attack (P_O2). This penalty ranges from one to n, where one means there is no risk from this user and n represents high interaction importance multiplied by the danger rate (DR).
The penalty curve of the on/off attack (PC_O2) for any customer is calculated with the trust model using a novel process, in which PC_O2 is the smallest value of the greatest interactions.
The penalty of trust decline (P_TD) should be added to prevent the risk of trust decline (TD). The P_TD ranges from one to n, where one signifies no risk from this customer role and n signifies high risk; PC_TD refers to the penalty curve of trust decline and is an integer greater than one; and L_II denotes the low interaction limit.
PC_TD is used to find the value of the penalty of trust decline (P_TD), and to prevent the attacks, the danger rate (DR) and the penalty for the on/off attack (P_O2) are used. Algorithm 1 is given below.
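The penalty logic above can be sketched as follows. This is an assumed reconstruction of Algorithm 1, not the paper's code: the two triggering conditions (II at or above DR for the on/off penalty; F below II for the trust decline penalty) are taken from the text, while the scaling of both penalties from the no-risk baseline of one is an assumption.

```python
# Hedged sketch of Algorithm 1 (assumed reconstruction). P_O2 applies
# when the interaction importance II meets or exceeds the danger rate
# DR; P_TD applies when the feedback F falls below II.
def on_off_penalties(II, DR, F, PC_TD=2):
    """Return (P_O2, P_TD); each is 1 (no penalty) or a larger value."""
    P_O2 = 1.0
    P_TD = 1.0
    if II >= DR:                # high-importance interaction in a risky context
        P_O2 = 1.0 + II * DR    # grows with importance x danger rate, toward n
    if F < II:                  # feedback lower than the interaction's importance
        P_TD = PC_TD            # apply the trust decline penalty curve value
    return P_O2, P_TD

P_O2, P_TD = on_off_penalties(II=0.9, DR=0.8, F=0.5)
print(round(P_O2, 2), P_TD)
```

Keeping both penalties at one for well-behaved customers matches the text's statement that a value of one signifies no risk.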

Collusion Attack
Collusion attacks are also studied in this paper. In the literature, the word collusion refers to an agreement reached by a collection of entities to develop biased or impractical recommendations [5]. This means that the attack takes place when a group of people work together to give false recommendations with the intention of damaging another entity's reputation. This is referred to as the slandering attack. They may also perform self-promoting attacks to enhance their own reputation [6]. The focus of this study will be on a single type of attack, which is similar to some extent to other kinds of attacks. However, they are distinct with respect to their attributes and the way they are carried out on the cloud computing platform.
In a collusion attack, malicious behavior is exhibited that involves giving fake feedback from a single node, with the aim of enhancing or reducing product ratings on an e-commerce website [25]. It is possible for the behavior to be non-collusive, with multiple misleading pieces of feedback given by the nodes to carry out self-promotion or slander another entity. The TMS will not be effective when malicious users make up more than 50% of the trust model system. The accuracy of the recommendation trust values is also threatened by the attack. There are two main types of collusion attacks: a self-promoting attack, in which malicious recommenders collude to enhance the trust value of a particular user in the TMS, and a slandering attack, in which malicious recommenders work together to reduce the trust value of a particular user in the TMS.
A novel solution is presented in this study to avoid collusion attacks; it involves computing three criteria that indicate distinct factors. Malicious recommendation detection (MRD) is the foremost criterion that identifies suspicious recommenders' groups by determining the probability that a group will be the collusion recommender's group. In this criterion, the time range of the collusion attack is calculated using the trust model to identify the time range of all attacks that have been carried out within a short period of time. If malicious recommenders have made any attacks on a particular user, there will be a very small time range of these attacks. The second criterion is then computed by the trust model: malicious recommenders' behavior (MRB). This indicates the similarity of malicious recommenders' behaviors, which are more similar when a particular user is attacked. The following Equation (4) is used to determine malicious recommendation detection (MRD) and malicious recommenders' behavior (MRB).
The time and value of all feedback in the feedback set (FS) provided to a particular user are compared in the TMS. In the first comparison, the time of the last feedback, T(F_n, CR)_FS, is compared by the TMS with the times T(F_i, CR)_FS of all other feedback. The value of the last feedback is then compared in the TMS with the values V(F_i, CR)_FS of all other feedback. The trust model also performs a comparison between the time T(F_n, CR)_FS and value V(F_n, CR)_FS of the last feedback in the feedback set (FS) and the time T(F_i, CR)_SS and value V(F_i, CR)_SS of all feedback in the suspected set (SS). This is how the TMS keeps all suspected feedback in the suspected set (SS). The range of feedback time and the range of feedback value are identified through the two parameters TC and VC, respectively.
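The suspected-set construction described above can be sketched as follows. This is an illustrative assumption, not the paper's code: feedback is modelled as (time, value) pairs, and an item is considered suspicious when it falls within both the time window TC and the value window VC of the most recent feedback; the concrete window sizes are assumed.

```python
# Hedged sketch of the suspected set (SS) construction: feedback close
# to the last feedback in both time (within TC hours) and value (within
# VC) is collected into SS. Data shapes and thresholds are assumptions.
def build_suspected_set(FS, TC=2.0, VC=0.1):
    """FS is a list of (time_hours, value) pairs; returns the suspected set."""
    t_last, v_last = FS[-1]          # time and value of the last feedback
    SS = []
    for t_i, v_i in FS[:-1]:
        if abs(t_last - t_i) <= TC and abs(v_last - v_i) <= VC:
            SS.append((t_i, v_i))    # close in both time and value: suspicious
    SS.append((t_last, v_last))      # the last feedback itself is kept in SS
    return SS

FS = [(0.5, 0.95), (1.0, 0.90), (3.0, 0.40), (1.5, 0.92), (2.0, 0.93)]
print(len(build_suspected_set(FS)))
```

In this example the (3.0, 0.40) item is excluded because its value differs too much from the last feedback, while the three clustered high ratings join the suspected set.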
The third criterion, the collusion attack frequency (CAF) for each recommender that has feedback in the suspected set SS = {SF_1, SF_2, ..., SF_n}, is determined to identify the malicious recommendations, also called the collusion feedback. Here, the strength of the attack is higher when the frequency of attacks is greater.
The trust model initially calculates the collusion attack frequency (CAF), where FN(SR, CR) signifies the number of feedback items that are provided by a particular recommender to a particular user in the suspected set (SS). FN(SS, CR) refers to the number of feedback items in the suspected set (SS) that are provided to the same consumer. When the collusion attack frequency (CAF) is more than the feedback limit (FL), the trust model shifts the suspected feedback to the collusion set (CS). However, if this is not the case, the suspected feedback SF(SR, CR) is shifted by the trust model from a specific recommender to a particular consumer in the feedback set (FS).
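The CAF rule above can be sketched as follows, assuming CAF for a recommender is the ratio FN(SR, CR) / FN(SS, CR) described in the text; the data shapes are illustrative assumptions.

```python
# Hedged sketch of the CAF rule: a recommender whose share of the
# suspected feedback for one consumer exceeds the feedback limit FL is
# moved to the collusion set (CS); otherwise its suspected feedback
# returns to the feedback set (FS).
def classify_suspected(SS, FL=0.10):
    """SS maps recommender id -> number of suspected feedback items
    for one consumer. Returns (collusion_set, feedback_set) of ids."""
    total = sum(SS.values())             # FN(SS, CR): all suspected items
    CS, FS = [], []
    for recommender, count in SS.items():
        CAF = count / total              # FN(SR, CR) / FN(SS, CR)
        (CS if CAF > FL else FS).append(recommender)
    return CS, FS

SS = {"r1": 4, "r2": 3, "r3": 1, "r4": 2}   # 10 suspected items in total
CS, FS = classify_suspected(SS, FL=0.10)
print(sorted(CS), sorted(FS))
```

With FL = 10%, as in the worked example later in the paper, only the recommender contributing a single item out of ten stays at the limit and is returned to the feedback set.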
The trust model determines the size of the collusion set (CS) to determine the attack scale (AS) for a given consumer, where the malicious recommenders in a recommender's community account for a major percentage of all recommenders who attack the trust model and cause harm to it. The following Equation (6) is used to determine the attack scale (AS). Here, RN(CS) refers to the number of malicious recommenders within the collusion community, and FN(CS, CR) refers to the number of malicious feedback items for all consumers within the collusion set (CS).
The trust model then determines the attack target scale (ATS), which gives the malicious feedback rate for a particular user, using Equation (7). Here, FN(CS, CR) refers to the number of feedback items obtained from malicious recommenders, and FN(AFS, CR) is the number of feedback items in the entire feedback set, in which the set of malicious recommenders belongs to all recommenders who have evaluated a particular consumer.
Finally, the strength of all collusions for a given user can be determined by calculating the collusion attack strength (CAS) using the trust model. For this, the value of all attack target scales (ATS) is considered, after which the collusion attack strength (CAS) is computed in Equation (8). The collusion attack algorithm given below (Algorithm 2) is employed by the trust model to identify collusion groups.
The example of a collusion attack given below can be evaluated; it demonstrates how the trust values of the roles are influenced by the collusion attack algorithm. Here, we assume that a matrix holds all feedback times. The different feedback times are compared by the algorithm, beginning with the final feedback time V_n,n. We assume that the time period is two hours and that all the feedback is obtained on the same day. Six feedback items are found in this time period, and their values are compared by the trust model. The following feedback values are assumed. The system shifts the six feedback items to the suspected set (SS); the collusion attack frequency for every recommender that has feedback in the suspected set (SS) is then determined by the trust model. Using the condition given below, when the feedback limit (FL) = 10%, these feedback items are transferred by the trust model to the collusion set (CS); otherwise, they are shifted to the FS.
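Equations (6) to (8) are not reproduced in this excerpt, so the sketch below uses plausible ratio-based readings consistent with the surrounding definitions: AS as the collusion community's share of all recommenders, ATS as the malicious feedback rate for one consumer, and CAS as an aggregate (here, the mean) over all ATS values. All three formulas are assumptions; only the symbol names RN(CS), FN(CS, CR), and FN(AFS, CR) come from the text.

```python
# Hedged sketch of Equations (6)-(8); the exact formulas are assumed.
def attack_scale(RN_CS, RN_all):
    """AS: share of all recommenders that belong to the collusion community."""
    return RN_CS / RN_all

def attack_target_scale(FN_CS_CR, FN_AFS_CR):
    """ATS: malicious feedback rate for one consumer."""
    return FN_CS_CR / FN_AFS_CR

def collusion_attack_strength(ats_values):
    """CAS: aggregate strength over all attack target scales (mean here)."""
    return sum(ats_values) / len(ats_values)

# 5 of 25 recommenders collude; two consumers receive 6/20 and 3/30
# malicious feedback items, respectively (illustrative numbers).
ats = [attack_target_scale(6, 20), attack_target_scale(3, 30)]
print(round(attack_scale(5, 25), 2), round(collusion_attack_strength(ats), 2))
```

Both ratios grow toward one as the collusion community dominates the recommender population or the feedback for a victim, matching the text's statement that attack strength rises with attack frequency and scale.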

Proposed Framework Architecture

The various elements of the trust model are analyzed in this section. The part played by each component in making sure the system functions effectively is also presented. The proposed system architecture is shown in Figure 1.
In the proposed design, the TMS is the element that examines the degree to which cloud service customers rely on cloud service providers. It is responsible for offering cloud services, with the expectation that the quality that cloud service providers have promised is actually delivered. There are different subsections of the TMS, each of which is assigned distinct tasks that ensure that the data available in the cloud storage system are secure. The rules identified within the level of trust are examined based on the feedback given by service providers [19]. It is important to determine the identity of the user and to monitor the activities carried out, to make it easier to track unauthorized customers or attackers and to present evidence of any leaked data. The accounts of registered and unregistered customers are updated using the TMS, and all activities carried out by customers are identified. The TMS monitors the authorization of all those adding feedback into the system; it can recognize invalid feedback and remove it from the system.

Trust Management System (TMS)
Different layers are added to the trust model to increase the effectiveness of the overall system. There are different subsections in the TMS, which are described in the following section.
Central Repository: this functions as the interaction store. It stores all kinds of trust records and interaction histories created by interacting tasks and roles for subsequent use by the decision engine trust for assessing task and role values. The central repository cannot access elements that are not present in the TMS.
Role Behavior Analyzer: this is the component that analyzes functions and roles related to the smallest levels of trust regulations with respect to shared resources. It evaluates those rules that have been identified in the level of trust based on the feedback given by the service providers in the central repository [19]. The roles are linked by the role behavior analyzer to obtain information about them and to identify any leakage that occurs. It is imperative to determine the user's identity and monitor all actions carried out by them so that unauthorized customers or attackers can easily be tracked, and evidence for any kind of data leakage can be presented. The accounts of registered and unregistered customers are also updated by the role behavior analyzer, and all events carried out by a customer are identified.
Task Behavior Analyzer: it is the responsibility of the task behavior analyzer to evaluate tasks and functions with respect to minimum trust level rules when accessing shared resources. The tasks identified within the trust level are analyzed in terms of the feedback of owners by calculating the trust value, and this value is then stored in the central repository. It obtains information from two channels, namely reports from tasks regarding data leakage and reports from the role behavior analyzer, to determine the histories of customers with respect to the stored data. Customers can be identified by the task behavior analyzer, and the tasks performed are tracked. This makes it easier to track attackers or unauthorized customers and to present proof of data leakage if it has occurred. Registered customer accounts are updated, and it is determined whether an incident was carried out by a customer account.
Feedback Collector: it is the responsibility of the feedback collector to manage feedback from the owners of the service before it is automatically allocated to the central repository. The user's trustworthiness is shown by the feedback given on roles and tasks. To ensure security, the feedback collector protects the integrity of task and role feedback and verifies the authorization of anyone uploading feedback into the system. It has the ability to recognize invalid feedback and eliminate it from the system. In addition, the feedback collector acquires information regarding the data assignments of tasks and roles before it is uploaded to the central repository.
Trust Decision Engine: this component analyzes and identifies the trust values of the data owners, roles, and tasks. It obtains all the information regarding the interaction histories found in the central repository and the trust values of a specific customer before determining the kind of response the system should give.
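To make the division of responsibilities concrete, the sketch below wires three of the components described above around the central repository. This is an illustrative assumption, not the paper's implementation: class names mirror the component names, and the trust value is a simple mean over stored records.

```python
# Illustrative sketch of the TMS component architecture (assumed, not
# the paper's code): the feedback collector validates feedback before
# it reaches the central repository, and the trust decision engine
# derives trust values from the stored interaction history.
class CentralRepository:
    """Stores trust records and interaction histories for roles and tasks."""
    def __init__(self):
        self.records = []
    def store(self, record):
        self.records.append(record)

class FeedbackCollector:
    """Rejects feedback from unauthorized owners before storage."""
    def __init__(self, repo, authorized):
        self.repo = repo
        self.authorized = authorized
    def submit(self, owner, feedback):
        if owner not in self.authorized:   # invalid feedback is not stored
            return False
        self.repo.store(feedback)
        return True

class TrustDecisionEngine:
    """Derives a trust value from the stored interaction history."""
    def __init__(self, repo):
        self.repo = repo
    def trust_value(self):
        values = [r["value"] for r in self.repo.records]
        return sum(values) / len(values) if values else 0.5

repo = CentralRepository()
collector = FeedbackCollector(repo, authorized={"owner1"})
collector.submit("owner1", {"role": "analyst", "value": 0.9})
collector.submit("mallory", {"role": "analyst", "value": 0.0})  # rejected
print(TrustDecisionEngine(repo).trust_value())
```

The key design point mirrored here is that components outside the TMS never touch the repository directly; all writes pass through the feedback collector's authorization check.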

Proposed Method
The system is initiated by the administrator, who also establishes the system's hierarchy of roles and tasks. Channel 1 is used to upload the created role and task parameters to the cloud. When consumers want cloud data access rights, they first submit an access request through Channel 2 based on their tasks and roles. If the consumer request is accepted, Channel 5 is used by the role entity to transfer the request to the task entity, and the cloud responds by providing consumers with a cryptographic task-role-based access control (T-RBAC) plan. Encryption and uploading of data through Channel 3 are performed only if the owner feels the role or task can be trusted; the owner also informs the feedback collector of the consumers' identities during this stage. When an owner finds leakage of their data due to an untrustworthy consumer, they report this role or task to the feedback collector through Channel 14. When an approved owner provides new feedback, the feedback collector sends it through Channel 11 to the centralized database, which archives each confidence report and interaction record produced when the roles and tasks interact. The interaction logs stored in the centralized database are used by the trust decision engine to determine the trust values of roles and tasks through Channel 10. At any point, the role entity may request trust assessments for the roles from the TMS, whereupon it communicates with the trust management system through Channel 13 to receive feedback from the trust decision engine. The role entity reviews the role parameters that decide a consumer's cloud role membership once the feedback is obtained from the trust decision engine, and any malicious consumer membership is terminated. When an owner provides negative feedback regarding a role as a result of data leakage, the role entity transfers the leakage details to the role behavior analyzer through Channel 4. Likewise, the task entity may request trust assessments for the tasks from the TMS, whereupon it communicates with the trust management system through Channel 12 to receive feedback from the trust decision engine. The task entity reviews the task parameters that decide a consumer's cloud task membership once the feedback is obtained from the trust decision engine, and any malicious consumer membership is terminated. When an owner provides negative feedback regarding a task as a result of data leakage, the task entity transfers the leakage details to the task behavior analyzer through Channel 7. The analyzers then use Channels 6 and 8 to keep the trust details for the roles and tasks updated in the centralized database. If an owner requests that their data be uploaded and encrypted in the cloud, the TMS performs a trust assessment; once the TMS receives the request, it follows up with the owners through Channel 9. The trust decision engine provides the owners with details of the trust assessment for their roles and tasks, and the data owners then determine whether or not to allow consumers access to their services based on the results.
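The membership-review step above, in which the role and task entities terminate malicious consumer memberships based on trust decision engine feedback, can be sketched as follows. The 0.5 threshold and the data shapes are assumptions for illustration; the paper does not fix a numeric cut-off in this excerpt.

```python
# Illustrative sketch (assumed threshold) of the membership review the
# role/task entities perform: consumers whose TMS trust value falls
# below the threshold have their membership terminated.
TRUST_THRESHOLD = 0.5   # assumed cut-off, not specified by the paper

def review_memberships(members, trust_values, threshold=TRUST_THRESHOLD):
    """Return (kept, terminated) member lists based on TMS trust values."""
    kept, terminated = [], []
    for consumer in members:
        if trust_values.get(consumer, 0.0) >= threshold:
            kept.append(consumer)
        else:
            terminated.append(consumer)   # malicious membership is terminated
    return kept, terminated

members = ["alice", "bob", "mallory"]
trust = {"alice": 0.92, "bob": 0.61, "mallory": 0.12}
print(review_memberships(members, trust))
```

Consumers absent from the trust map default to zero trust and are removed, which is the conservative choice for unregistered or unknown accounts.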

Simulation Results
To verify the results of our system, we built a C#.NET Windows Forms application and used big data to test all the criteria applied to prevent these attacks. Experiments were used to determine the trust model's capability of resisting reputation attacks. The penalties for the on/off attack were determined by the TMS based on two conditions: whether the interaction importance (II) was greater than or equal to the danger rate (DR), and whether the feedback (F) of the recommender was less than the interaction importance. When the recommender's feedback was lower than the interaction importance, the trust decline penalty (P_TD) was applied by the trust model. The effects of the on/off attack penalty and trust decline on the interaction trust values are illustrated in Figure 2.
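The two penalty conditions above can be sketched as follows. The function name, the `decline_factor` parameter, and the multiplicative penalty form are illustrative assumptions, since the paper does not reproduce the exact penalty formula here:

```python
def on_off_penalty(interaction_importance, danger_rate, feedback,
                   trust_value, decline_factor=0.5):
    """Apply the on/off attack penalty described in the text.

    A penalty is triggered when the interaction importance (II) is at
    least the danger rate (DR) and the recommender's feedback (F) is
    below II; the trust decline penalty (P_TD) then reduces the
    interaction trust value. The 0.5 decline factor is an assumption.
    """
    if interaction_importance >= danger_rate and feedback < interaction_importance:
        return trust_value * decline_factor  # trust declines
    return trust_value  # neither condition met: no penalty
```

With II = 0.8, DR = 0.5, and F = 0.3, both conditions hold and a trust value of 1.0 would decline to 0.5 under this sketch; with II below DR, the value is unchanged.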
The trust model's ability to endure reputation attacks is examined in this section by carrying out experiments. The collusion attack frequency (CAF) was calculated by the TMS to prevent collusion attacks. Here, the feedback frequency had a direct relationship with feedback collusion and an inverse relationship with the feedback's credibility. The feedback frequency was determined by the number of recommender feedback items and the total number of feedback items within the suspected set (SS).
The feedback frequency of seven suspected recommenders is presented in Figure 5, under the assumptions given in Table 1. It can be seen that the feedback frequency of five suspected recommenders was more than the feedback limit (FL), which implies that the TMS will transfer the feedback given by these recommenders to the collusion set (CS); otherwise, the suspected feedback by a particular recommender for a particular user will be shifted by the trust model to the feedback set (FS).

Figure 6 shows how the trust model is used to determine the attack scale (AS), which refers to the size of the attack scale for various collusion sets. This is done to find out the size of the collusion set (CS) in which the malicious recommenders in a recommender community constitute a significant percentage of all recommenders seeking to attack and destroy the trust model.
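A minimal sketch of the attack-scale computation follows; treating AS as the proportion of the recommender community belonging to a collusion set is our reading of the description above, since the exact formula is not reproduced here:

```python
def attack_scale(collusion_set, all_recommenders):
    """Attack scale (AS): the share of the recommender community that
    belongs to one collusion set (CS). Both arguments are collections
    of recommender ids; this ratio form is an assumption."""
    return len(collusion_set) / len(all_recommenders)
```

Under this sketch, a collusion set of 2 recommenders in a community of 4 yields AS = 0.5, i.e. the malicious recommenders make up half of all recommenders.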
The value of the attack target scale (ATS), which gives the malicious feedback rate from one collusion set (CS) for a particular user, is illustrated in Figure 7.
The value of the collusion attack strength (CAS), determined by computing the rate of all malicious feedback from distinct collusion sets (CS) for a given user, is illustrated in Figure 8.
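The attack target scale (ATS) and collusion attack strength (CAS) can likewise be sketched as rates of malicious feedback. Treating ATS as the per-collusion-set rate of feedback aimed at one user, and CAS as the same rate over all distinct collusion sets combined, is our assumption from the descriptions above, as is the pair-based data layout:

```python
def attack_target_scale(collusion_set_feedback, user):
    """ATS: rate of malicious feedback aimed at `user` from one
    collusion set (CS). `collusion_set_feedback` is a list of
    (recommender, target_user) pairs, an assumed representation."""
    if not collusion_set_feedback:
        return 0.0
    hits = sum(1 for _, target in collusion_set_feedback if target == user)
    return hits / len(collusion_set_feedback)

def collusion_attack_strength(all_sets_feedback, user):
    """CAS: rate of malicious feedback against `user` computed over
    the feedback of all distinct collusion sets (CS) combined."""
    combined = [fb for cs in all_sets_feedback for fb in cs]
    return attack_target_scale(combined, user)
```

For example, if one collusion set directs two of its three feedback items at a user, its ATS for that user is 2/3; pooling it with a second set that targets the same user once in one item gives a CAS of 3/4.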

Comparison of Security and Accuracy
Any type of trust management service is vulnerable to a variety of attacks [31]. These attacks have the potential to either enhance or destroy the reputation of a specific entity [32,33]. In order to build an accurate and secure trust model system, we focus on implementing various metrics to prevent these attacks, which enables us to create a stable, reliable, and accurate trust model framework. Table 2 shows a comparison between our proposed TMS and those discussed in related works.

Conclusions
Authorization concerns regarding access to cloud computing storage are a significant challenge for users, particularly for big data stored in the cloud, which is often highly sensitive. This paper presented a trust model that provides dependable solutions for preventing on/off and collusion attacks, which are major security issues faced by cloud computing users. To adequately handle these concerns, access control models should be integrated with trust models for decentralized systems through the proposed trust algorithm, which can identify on/off and collusion attacks and ensure the highest confidentiality for cloud service users.