Article

Adaptive Trust-Based Access Control with Honey Objects and Behavior Analysis

by
Amal S. Alamro
and
Fawaz A. Alsulaiman
*
Department of Computer Science, College of Computer and Information Sciences, King Saud University, Riyadh 11432, Saudi Arabia
*
Author to whom correspondence should be addressed.
Appl. Sci. 2026, 16(1), 242; https://doi.org/10.3390/app16010242
Submission received: 10 November 2025 / Revised: 21 December 2025 / Accepted: 22 December 2025 / Published: 25 December 2025
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract

In recent years, the number of interconnected computers and resources has increased drastically. To ensure the privacy of these resources, a secure access control mechanism must be implemented. Traditional access control models lack adequate emergency handling. Threshold-based collaborative access control (T-CAC) addresses the issue of handling emergencies without overriding the access control model by shifting trust from individuals to groups, thereby enforcing collaboration among different actors. Given the risks associated with improper and uncontrolled delegation of authority, along with the need to enforce the zero-trust principle of continuous verification, this study proposes a secure and adaptable model, Adaptive Trust-Based Access Control with Honey Objects and Behavior Analysis (ATACHOBA). It enables user delegation based on both trust and behavior analyses. In the proposed model, access decisions are determined by trust values and recommendations provided by the machine learning algorithm. ATACHOBA imposes penalties for any abnormal or malicious activity. Moreover, it utilizes honey objects and honey requests to ensure appropriate user behavior.

1. Introduction

In recent years, the number of interconnected computers and resources has increased drastically. Many offline, paper-based systems are undergoing digital transformation, enabling authorized individuals to access large volumes of data more easily. However, the security objectives of confidentiality and privacy must also be maintained. To address this issue, many access control algorithms have been proposed [1,2,3,4,5], tackling various requirements and providing different levels of security. Most access control mechanisms are designed to handle only typical situations and lack provisions for emergency cases. For instance, in a hospital environment, an access control policy may specify that doctors are granted access only to the medical records of patients they have previously treated. However, if a patient arrives in the emergency room in critical condition, any attending doctor should have access to the patient’s medical record. Delays in accessing such vital information could jeopardize the patient’s health. Many users address such situations by overriding the access control system to gain access to unauthorized data, which is considered undesirable.
Role-based access control (RBAC) is an access control model that supports the principle of least privilege, which means assigning only the permissions and operations required to perform a specific job. However, not all members with the same role need the same privileges. Moreover, RBAC is unable to handle emergency cases effectively and relies solely on overriding the access control system. Certain access control algorithms offer more effective solutions for managing emergencies. Threshold-based collaborative access control (T-CAC) [6] allows users to collaborate and delegate access to resources.
Delegation occurs when a user contributes a portion of their permitted weight to another user, enabling that user to access a specific resource. Delegations should be limited and managed in a controlled manner. Users must carefully review delegation requests and verify the requester’s identity before approving them. Delegation decisions should be based on trust to mitigate risks to the system’s confidentiality, privacy, and integrity. Access via delegation should be granted exclusively to trustworthy users. The enforcement of the zero-trust principle of continuous verification [7,8,9] should be prioritized. To meet present-day requirements, an access control model should possess the following characteristics:
  • Restricts access permissions based on the role to ensure the principle of least privilege.
  • Controls user abuse of delegation requests.
  • Predicts malicious activity.
  • Provides a solution for unforeseen situations, such as emergencies.
  • Continuously verifies and evaluates the trustworthiness of the actors.
Therefore, in this study, we propose an Adaptive Trust-Based Access Control model with Honey Objects and Behavior Analysis (ATACHOBA). ATACHOBA introduces a two-level verification process in which a user is granted access only after both a dynamic trust-based decision and a behavior-based recommendation concur. Moreover, ATACHOBA imposes penalties on users for any abnormal or malicious activity to mitigate potential harm. The proposed model includes honey objects and honey requests as traps to detect malicious or irresponsible behavior, such as indiscriminately accepting all delegation requests without thoroughly examining each one.

2. Related Work

The optimal access control model has three key characteristics [10]: It is inescapable, meaning that policies cannot be breached by bypassing the access control model. It is invisible, meaning user and administrative interactions are transparent to the model. Finally, the model’s design and construction must be feasible and within a sufficient budget. The literature on access control models can be divided into the following categories: basic, improvements to RBAC, trust-based, behavior-based, collaboration-based, emergency, and global.

2.1. Basic Access Control Models

Mandatory Access Control (MAC) is based on the idea of using security labels for both subjects (clearance) and objects (sensitivity) to protect resources [4,11]. The decision to grant or deny an access request is based on a comparison of the user’s clearance level with the resource’s sensitivity level. The system centrally defines and enforces access policies, ensuring that users cannot modify them [12]. MAC suffers from several drawbacks, including inflexibility, limited scalability, and high maintenance overhead, because it relies on a centralized architecture that disallows dynamic changes [13].
On the other hand, Discretionary Access Control (DAC) is a decentralized model that grants subjects full discretion to determine access rights to their own objects. Consequently, the overall security level depends entirely on the decisions of the object owners [13]. Furthermore, DAC has some weaknesses, such as its vulnerability to Trojan Horse attacks, in which malicious programs steal a legitimate user’s credentials. In addition, it lacks the ability to explicitly deny access to specific users, a feature known as negative authorization [13].
Attribute-based Access Control (ABAC) approaches an access decision by comparing subject, object, and environmental context attributes against the policy. ABAC’s main advantage over previous basic access control models is its context-awareness [14,15]. ABAC is a flexible and effective model for handling a large number of users in a dynamic environment [16,17].
Role-based access control (RBAC) is based on the role of the subject, not the individual. Each role adheres to the least privilege principle, meaning it is assigned only the permissions and operations required for its job [18]. In RBAC, user assignments to roles and role permissions are performed statically. This requirement reduces the model’s flexibility and makes it unsuitable for dynamic environments [19,20]. Traditional access control models form the foundation of many recent models and are effective at handling routine access requests. However, they lack mechanisms to address emergency situations that may require user collaboration for legitimate privilege escalation. Furthermore, these models lack user behavior analysis and do not comply with zero-trust principles. These limitations motivate our choice of a base model that supports user collaboration during emergencies while integrating trust evaluation and honey object mechanisms to deter malicious actors and negligent users who approve requests without proper verification, thereby reinforcing zero-trust principles.

2.2. Improvements in the RBAC Model

Madani et al. [21] proposed an access control model for cloud collaboration applications called the collaborative task role-based access control (CTRBAC). In a collaboration application, a group of users shares resources to work on a corporate task. The proposed model allows a user to access shared resources within the same or different tenants. This access is conditionally bound to a collaborative session. Moreover, they use a trust relation between tenants, including trust–role and trust–share relations. In a trust–role relation, a user can share some permissions with a trusted user. However, in a trust–share relation, a user can choose precisely which object in the session, and which actions on that object, to share with other trusted users. Almheiri et al. [22] proposed tuning large language models (LLMs) to provide fine-grained access to privileged information for authorized users. The trained LLM considers the user’s role and the privileges associated with it, based on role-based access control (RBAC). Recently, various studies [23,24] have extended the use of LLMs to assist users in expressing and formulating access control policies in natural language, thereby capturing human intent more effectively during policy creation. The CTRBAC model supports collaboration among users by enabling the sharing of resources within a session or as a prerequisite for completing a task. Trust is utilized to delegate roles to different user groups in a collaborative session to grant access to shared objects. The model implements a secure collaborative environment that facilitates object exchange and the sharing of access rights. However, it does not account for potential abuse in authority delegation, nor does it incorporate user behavior analysis or continuous verification mechanisms. In addition, recent studies have explored the integration of LLMs to enhance the usability and adaptability of access control models.
This feature could be explored in a future extension to this study.

2.3. Trust-Based Access Control Models

Nazerian et al. [1] proposed an access control model, namely emergency role-based access control (E-RBAC). In emergencies, they propose following the Break the Glass (BTG) policy, allowing users to override the access control model to obtain the required access. To limit users’ actions and ensure accountability, every action will be documented in such cases. Many constraints limit users’ activity in emergencies, such as the separation of duties (SOD). The E-RBAC model defines three system statuses: normal, emergency (predicted situation), and exception (unpredicted situation). Users in E-RBAC have many attributes used to calculate their trust level, including user experience (Aue), training hours (Aut), and skills (Aus).
Daoud et al. [25] introduced an access control model based on trust evaluation and activity monitoring to ensure a high level of security. When a user requests access to data, the model calculates the requester’s trust level and compares it against a predetermined threshold. Based on the comparison, the model decides whether to grant or deny access rights. It then uses an equation to calculate users’ trust levels. The trust equation considers parameters such as the user role, the requester’s device, the application protocol, sensor features, and medical data features.
Behera et al. [26] introduced a trust-based access control model for the cloud environment. This model evaluates both user and resource trust values before making a decision on the access request. Only users with trust values above a threshold obtain access to the requested resources. The model classifies users as trusted or untrusted based on their behavior. Trust is computed from user behavior parameters, including Bogus Request Rate (BRR), Resource Affected Rate (RAR), and Unauthorized Operation Rate (UAR).
L. Chunge et al. [27] proposed a trust-based access control model for cloud computing that evaluates the trustworthiness of users and devices to make informed access control decisions. The model uses trust metrics such as reputation and past behavior, and considers the context of access requests to refine the trust evaluation. To implement the proposed model, the authors developed a prototype system that includes a trust management module, a decision-making module, and a policy enforcement module. The implementation indicates that the proposed model provides a novel approach to addressing security and privacy challenges in cloud computing. Overall, the study suggests that trust-based access control models can be a valuable tool for improving the security and privacy of cloud computing environments.
A. Singh et al. [28] proposed a trust-based access control model for securing electronic healthcare systems that evaluates user and device trustworthiness based on metrics such as past interactions, successful transactions, communication duration, and data access time. The model considers contextual factors, such as time and location, when making access control decisions. The model is implemented using role-based access control (RBAC) augmented with a trust evaluation module. The authors compared the model with existing access control models and demonstrated its effectiveness in preventing unauthorized access attempts and enhancing security. This study suggests that the proposed model provides an efficient approach to securing electronic healthcare systems by enabling healthcare providers to make informed access-control decisions based on user trustworthiness.
Trust-based models rely on trust calculations for access control, considering various relevant features and factors that mitigate the risk of granting access to untrustworthy users. There is no collaboration or only limited collaboration (as in the case of E-RBAC) among users seeking access. To address this shortcoming and further ensure continuous validation and adherence to zero-trust principles, our proposed model leverages honey objects and honey requests to detect malicious intent, in addition to user collaboration.

2.4. Behavior-Based Access Control Models

M. H. Yarmand et al. [3] introduced an access control model that dynamically analyzes users’ behavior and determines their access rights accordingly in a distributed environment. The model also compares user behavior with the expected behavior. Users’ behavior is built by observing their specific patterns from a sequence of action records. In an emergency, users can access certain resources regardless of their behavior analysis results. The emergency level is determined by time, location, role, and resource.
A. Adler et al. [29] proposed a behavior-based access control (BBAC) model that uses machine learning techniques to analyze users’ behavior dynamically and take an access decision correspondingly. The proposed model uses a combination of clustering and classification techniques. Applying clustering as a pre-processing step enhances scalability and reduces the model’s false positives. The model uses K-means clustering algorithm and support vector machine (SVM) classifiers.
T. Mujawar et al. [4] introduced a cloud access control model that is based on behavior-based trust computations. Access to cloud resources is granted exclusively to trustworthy users. This system also employs machine learning to classify users and assist in access decisions. The proposed model computes users’ trust based on their behavior. The user’s recommendations from other users and the cloud service provider are then collected. Both behavior-based trust values and recommendations are inputs to the K-means clustering. The K-means clustering categorizes users into three clusters based on their trust levels: low, medium, and high. Additionally, the model monitors user activity and updates their trust level accordingly.
M. Afshar et al. [30] proposed a machine-learning-based approach for attribute-based access control (ABAC) in computer systems that leverages user behavior data. The traditional ABAC model is insufficient for modern systems that require more dynamic, adaptive access control based on user behavior. The proposed model consists of a behavior data collector, a behavior analyzer, and an access decision maker. The behavior data collector collects user behavior data, which the behavior analyzer analyzes to extract behavioral attributes. These attributes are used to train a machine learning model, which the access decision maker then uses to make access control decisions. The results show that the proposed model achieves higher accuracy in access-control decisions and is more effective at detecting anomalous behavior.
Kang et al. [31] addressed access control in a microsegmentation cloud computing environment, where users’ behavior tends to be more random and diverse. They proposed a dynamic user trust evaluation model based on zero-trust principles that observes user behavior. It considers the current sequence of user requests, along with their history, to form a sliding window that dynamically assigns weights to trust attributes. Moreover, time decay, as well as the importance and type of the requested resource, are taken into consideration. Similarly, Wang et al. [32] proposed a behavior-based access control model for cloud computing that dynamically updates the trust level using a Deep Q-Network. The aforementioned behavior-based access control models leverage various concepts to capture normal user behavior, including request frequency, location, time, and other relevant features. However, their trust calculations remain limited to individual user behavior, without including an additional protection layer based on collaboration with other users. Furthermore, these models do not employ honey objects or honey requests to support zero-trust principles and detect malicious intent.
In contrast, our proposed model leverages behavioral analysis, user collaboration, and an adaptive trust mechanism, thereby addressing these limitations and enhancing the resilience of the access control model.

2.5. Collaboration-Based Access Control Models

Alsulaiman et al. [6] introduced a collaboration-based access control model. They addressed the issue of privilege abuse and ensured the principle of least privilege. To obtain permission, T-CAC enables cooperation between different users. This collaboration is based on the intuitive concept that user cooperation reduces the likelihood of privilege abuse and builds trust by distributing authority to groups of users rather than a single individual. Based on users’ roles, T-CAC specifies the required weights for granting permissions. Furthermore, each permission is associated with a collaborative policy that specifies the roles, the possible weight assigned, and the thresholds. The threshold determines the maximum weight that a specific role can delegate. Therefore, the collaborative policy determines the required cooperation among multiple users in different positions. T-CAC provides many features, such as handling unforeseen situations, supporting collaboration across different roles and organizations, upholding the principle of least privilege, and maintaining the privacy, confidentiality, and integrity of protected resources.
Belguith et al. [33] introduced a novel emergency access control model that leverages multiple technologies to ensure a high level of security. Their model is based on QR codes, Shamir’s Secret Sharing Scheme, and attribute-based encryption. The goal of using Shamir’s distribution scheme is to provide the flexibility needed in an emergency situation. The previous scheme requires a specified number of users (i.e., k of n, where k ≤ n) who must collaborate in order to recover the access key. Thus, the authors establish a distribution of responsibility among users and prevent unauthorized users from exposing the access key.
The concept of user collaboration in access control is essential, as it adds an additional layer of protection and ensures collective agreement in critical and emergency situations. Our model employs T-CAC to facilitate collaboration among users. Moreover, additional features not supported by the original framework are considered, namely, behavior analysis, adaptive trust, and the use of honey requests and honey objects to enable continuous verification and help maintain zero-trust principles.

2.6. Emergency and General Access Control Models

Khan et al. [34] proposed a flexible and secure access control model. Their model provides the primary function of an access control model, the ability to cover emergency situations, and delegating access control privileges. Most existing access control models presuppose that an authentication step has already been implemented within the system. However, this model includes an authentication step. eTRON is a secure architecture used to implement authentication and the delegation of access privileges. This model treats the emergency as a context attribute. Moreover, they incorporate context verification before responding to emergency situations.
K. Kawagoe et al. [5] introduced an emergency access control model that combines role-based access control (RBAC) and team-based access control (TMAC). The model introduces a new component, a situation, that represents the contextual information of both users and objects, enabling dynamic permission assignment. Managing emergencies is complex and requires a flexible access control model with a short response time. During emergencies, situations evolve rapidly, and the proposed access control model adapts flexibly to these changes. In such cases, the model grants users the necessary permissions but limits their duration. Similarly, Djilali et al. [35] introduced a model that combines RBAC and TMAC for IoT applications. However, the proposed model assigns a team administrator who acts during emergencies to manage team members and objects, even though the administrator lacks the privilege to view those objects, thereby preventing conflicts of duty and ensuring proper separation of duties.
Salehi et al. [36] addressed cross-domain access by introducing a dynamic access control policy (CADP) based on ABAC and an attribute-based group signature (ABGS). It facilitates cross-domain access once a group membership is verified and the required attributes are present. This process is achieved by using ABGS, which combines a group signature with an attribute-based signature. As a result, user privacy is preserved during the cross-domain access.
Handling emergencies is an important aspect of any access control model, as ignoring such scenarios may lead to unauthorized privilege escalation, whether for legitimate or illegitimate purposes. To address this challenge, our proposed model employs T-CAC, which handles emergencies in accordance with business requirements. In addition, our proposed model extends T-CAC by employing adaptive trust and behavior analysis. The extension, in addition to the base model, provides multiple layers of protection and upholds zero-trust principles.

3. Adaptive Trust-Based Access Control Model with Honey Objects and Behavior Analysis (ATACHOBA)

This section presents a scenario illustrating the necessity of our model. A description of our model’s basic idea and elements is introduced. Additionally, this section examines the system architecture and the trust calculation.

3.1. Scenario

Consider a hospital system comprising administrators, physicians, and nurses. In routine practice, each physician has access to their patients’ medical records to deliver appropriate care. Nurses require access to partial patient records to help physicians collect the patient’s illness history and background. To gain full access, a nurse requires the approval of three physicians. However, patients’ privacy may be compromised if three physicians consistently approve requests on behalf of others without thorough investigation, thereby undermining the collaborative policy. To address this issue, we propose considering trust, with each user having a specific level based on their behavior and how they handle requests. If untrusted users’ contributions to requests are fully considered, many unauthorized users would have access to patient information, resulting in serious harm. Moreover, if trusted users begin to act abnormally and abuse their trust, their trustworthiness should be reduced accordingly.
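The approval rule in this scenario can be sketched as a simple threshold check, assuming each physician contributes an equal weight of 1 and full-record access requires a threshold of 3; the function name and values are illustrative, not part of the model’s specification.

```python
def request_granted(approval_weights, threshold=3):
    """Grant full-record access only when the combined weight of
    collaborator approvals reaches the permission threshold."""
    return sum(approval_weights) >= threshold

# A nurse backed by three physicians (weight 1 each) meets the threshold;
# two approvals alone do not.
print(request_granted([1, 1, 1]))  # → True
print(request_granted([1, 1]))     # → False
```

In the full model, each collaborator’s contributable weight would itself depend on their trust value, rather than being a fixed 1 as in this sketch.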

3.2. Model Basic Idea

The proposed model is an access control model, namely the Adaptive Trust-Based Access Control Model with Honey Objects and Behavior Analysis, based on the T-CAC model with significant enhancements. The model is based on users’ trustworthiness to make access decisions. The model considers users’ dynamic attributes, such as the number of unauthorized operations, roles, experience, and skills. It also considers the dynamic environmental context, such as emergencies and unpredictable situations. The model follows RBAC’s basic idea of performing operations based on role responsibilities. However, in unforeseen situations such as an emergency, the model allows users to collaborate to grant access to resources that are not accessible under a user’s own rights. By following this concept, the model achieves a fair distribution of responsibility and minimizes abnormal activities. Users’ ability to delegate is determined by their trust value. Users can only contribute up to a weight limit determined by their trust value. Users’ collaboration is allowed only for permissions with a collaboration policy. The collaboration policy specifies the permission and the permitted degree of collaboration. It includes the role, the maximum weight per role member, and the maximum weight per user. The sensitivity level of a document affects the required weight and threshold.
To ensure a higher level of security and implement two-level verification, ATACHOBA uses machine learning to predict request decisions based on users’ behavior. The trust factors (ability, sustainability, relationship, and experience), the trust-based access decision, subject information, and object information are all inputs to the machine learning algorithm. The output will serve as a trigger to either support the trust-based decision or raise an alert against it. This further revision is based on users’ behavior analysis. The guidance would be one of the following: highly recommended, recommended, or not recommended. The behavior-based decision and the request will then be forwarded to the systems and security teams for further investigation. Consequently, a review of users with a high probability of exhibiting abnormal behavior would be conducted to verify their legitimacy, prevent potential attacks, and notify them of any negligence.
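The two-level verification described above can be sketched as follows. The behavior classifier here is a fixed stand-in rule rather than the model’s actual machine learning component, and the guidance thresholds are purely illustrative assumptions.

```python
def behavior_recommendation(factors):
    """Stand-in for the ML trigger: maps behavior-related trust factors
    to one of the three guidance levels described in the text."""
    score = sum(factors.values()) / len(factors)  # illustrative aggregation
    if score >= 0.8:
        return "highly recommended"
    if score >= 0.5:
        return "recommended"
    return "not recommended"

def access_decision(trust_value, threshold, factors):
    """Grant access only when the trust-based decision and the
    behavior-based recommendation concur."""
    trust_ok = trust_value >= threshold
    guidance = behavior_recommendation(factors)
    granted = trust_ok and guidance != "not recommended"
    return granted, guidance

factors = {"ability": 0.9, "sustainability": 0.8,
           "relationship": 0.7, "experience": 0.8}
granted, guidance = access_decision(0.7, 0.6, factors)
```

Note that either level alone can block access: a high trust value with a "not recommended" guidance is denied, as is a low trust value with favorable guidance.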

3.3. Model Elements

There are five labels for resource sensitivity: public, restricted, confidential, secret, and top secret. Each permission has a threshold determined by the resource’s sensitivity label. Moreover, if a resource is damaged by abnormal activity, its label will affect the penalty value. For instance, the penalty for irresponsible actions involving a confidential document will be significantly higher than that for a restricted document.
A honey object, or decoy object, is a trap-based mechanism designed to detect malicious activity from outside or inside attackers. The basic concept of honey objects is to confound attackers by making it difficult to distinguish between fake and real objects. If an attacker attempts to access a honey object, the system detects the action and restricts the user’s access to safeguard the system’s confidentiality. Honey objects have been deployed in many applications to improve security, such as honey files, honey tokens, honey encryption, and honey objects in access control models [37]. Indistinguishability and secrecy are two essential characteristics of honey objects. Indistinguishability means that the honey objects should be similar to real objects and indistinguishable from them. To achieve this property, mock objects should be constructed from the probability distribution of real objects. However, if the real objects are complex to construct, it will be hard to find honey objects. To maintain secrecy, the information indicating which objects are honey objects among all available objects should be kept secret. Only honey checkers and legitimate users know such information [38]. Honey objects can be implemented using decoy-document techniques [39], where a cryptographic tag generated via a keyed-hash message authentication code (HMAC) [40] is stored as a hidden attribute of the honey objects. Honey objects can be utilized in various ways within access control models. For example, RBAC models may include honey permissions. Such permissions include access to and the ability to perform operations beyond the role’s responsibilities. Legitimate users assigned to a role will never utilize those permissions. Therefore, if a request is made for honey permissions, it indicates that an attacker is attempting to access specific resources.
This serves as an example of deploying honey objects in access control models using honey files. Legitimate users are not expected to request honey files. If the system detects such a request, the system administrator will terminate the requester’s access and impose a penalty.
The proposed model employs honey objects and honey requests to detect malicious or irresponsible activities, such as indiscriminately accepting all delegation requests without proper examination. Therefore, the proposed system generates fake patient records and requests. Each acceptance of a honey request or access to a honey object negatively impacts the trust value. As the number of accepted or accessed honey objects increases for a specific user, the penalty escalates. At a certain threshold, the user may be alerted and suspended, and a notification will be sent to the system administrator for further investigation.
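The HMAC-based tagging of decoy documents mentioned above can be illustrated with a short sketch. The key, object identifiers, and function names are all made up for the example; the tag would be stored as a hidden attribute of the object, and only the honey checker holds the key.

```python
import hashlib
import hmac

# Assumption: the key is known only to the honey checker (secrecy property).
HONEY_KEY = b"key-known-only-to-the-honey-checker"

def honey_tag(object_id: str) -> str:
    """Compute the keyed-hash tag stored as a hidden attribute of a decoy."""
    return hmac.new(HONEY_KEY, object_id.encode(), hashlib.sha256).hexdigest()

def is_honey_object(object_id: str, stored_tag: str) -> bool:
    """Honey checker: verify whether a requested object is a decoy."""
    # Constant-time comparison avoids leaking tag information.
    return hmac.compare_digest(honey_tag(object_id), stored_tag)

# Creating a decoy patient record and later detecting access to it:
tag = honey_tag("patient-record-0042")
assert is_honey_object("patient-record-0042", tag)      # decoy detected
assert not is_honey_object("patient-record-0001", tag)  # not this decoy
```

On detection, the system would then apply the escalating penalty described above rather than simply denying the request.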

3.4. System Architecture

Figure 1 depicts the proposed architecture of the ATACHOBA model. A requester seeks user collaboration to obtain permission p. Each collaborator contributes with a specific weight and issues a contribute certificate (CC). The weights are based on the requester’s trust level and the relationship between the trustor and trustee. After that, the model will calculate the requester’s dynamic trust and subtract the penalty value. Public-key encryption is utilized to create digital signatures for CC. The issuer signs the CC using their private key. The CC’s legitimacy and integrity are verified using this digital signature technique. The purpose of a digital signature is to prevent tampering and impersonation in digital communications. Moreover, to prevent replay attacks, the contribute certificate includes a unique identifier, a timestamp, and a validity interval. When the requester builds enough trust, equal to or greater than the permission threshold, the requester sends the request to the permission enforcement point (PEP). The PEP intercepts the request and checks the authorization. Next, the CCs will be passed to the permission decision point (PDP). The PDP double-checks the requester’s information and request and evaluates policy, considering environmental variables. The PDP takes users (U), roles (R), collaborative policies (CPs), and permissions with thresholds (PTs) as input. After that, the PDP’s decision is passed to the machine learning trigger for guidance. The recommendation and the decision will be passed to the PEP for enforcement. All users’ activity will be logged for behavior analysis and penalty decisions. All entities of ATACHOBA are authenticated using digital certificates, and communication channels are protected using secure transport such as TLS to maintain confidentiality and integrity.
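The replay-protection fields of a contribute certificate can be sketched as follows. Signature creation and verification are abstracted away (in the model, the issuer signs the CC with their private key); all field names and values here are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class ContributeCertificate:
    cc_id: str        # unique identifier (defends against replay)
    issued_at: float  # issue timestamp, in seconds
    validity: float   # validity interval, in seconds
    weight: int       # weight contributed by the collaborator

def cc_is_fresh(cc, now, seen_ids):
    """Reject a CC that has already been presented or has expired.
    Assumes the CC's signature was already verified upstream."""
    if cc.cc_id in seen_ids:
        return False  # replayed certificate
    if not (cc.issued_at <= now <= cc.issued_at + cc.validity):
        return False  # outside the validity interval
    seen_ids.add(cc.cc_id)
    return True
```

A second presentation of the same certificate, or one presented after its validity interval, is rejected even if the signature itself is valid.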

3.5. Trust Calculation

The definition of trust between two users, as provided by Mayer et al. [41], is “Trust is the willingness of a party to be vulnerable to the actions of another party based on the expectation that the other will perform a particular action important to the trustor, irrespective of the ability to monitor or control that other party.” In our proposed model, users are categorized into trustors, trustees, and third parties. The trustee must introduce themselves and submit a request to the trustor. The trustor is authorized to recommend access to the data. If no prior relationship exists between the trustor and trustee, a recommendation from a third party familiar with both will be considered.
Numerous security factors influence the trust value between the trustor (P) and trustee (Q), which can be incorporated into trust calculations. According to Mayer et al. [41], internal and external factors will impact users’ trust. Trust can be defined in terms of its tendency, ability, sustainability, and relationships between the trustor and the trustee. Therefore, for the proposed model, the considered factors are ability (A), sustainability (S), relationship (R), and experience (E). The values of all factors lie between zero and one. Most related works follow a common approach to trust calculation: the trust value is computed by summing the product of each security factor and its corresponding weight. The weight values should satisfy
\[ 0 < w_1, w_2, w_3, w_4 \le 1 \quad \text{and} \quad w_1 + w_2 + w_3 + w_4 = 1 \]
Each organization can define weight values to prioritize security factors according to its specific requirements. This design allows organizations to customize trust weighting to their specific security policies and risk profiles, consequently enhancing the flexibility and adaptability of the proposed model across different deployment environments. An alternative approach to setting weight values involves machine learning techniques. S. Ma et al. [42] employ fuzzy clustering to determine the weights of factors based on their impact on prior experiences, or use regression to predict the most effective weights to minimize potential damage to the system.
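As an illustration, the weight constraint above can be checked programmatically before an organization's configuration is accepted; this is a minimal sketch (the function name and the floating-point tolerance are our own choices), assuming the four weights arrive as a tuple:

```python
def validate_weights(w):
    """Check the constraint 0 < w_i <= 1 and w_1 + w_2 + w_3 + w_4 = 1
    for the four trust-factor weights (ability, sustainability,
    relationship, experience)."""
    if len(w) != 4:
        return False
    if not all(0 < wi <= 1 for wi in w):
        return False
    return abs(sum(w) - 1.0) < 1e-9  # tolerance for floating-point sums


# Example: a hypothetical organization that prioritizes experience (w4).
weights = (0.2, 0.2, 0.2, 0.4)
```

A configuration failing this check would be rejected before any trust value is computed.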
The trust value should be dynamic and updated based on user behavior observations and corresponding penalty values. We propose to calculate the direct trust (DT) based on the experience between the trustor (P) and trustee (Q) as follows:
\[ DT_{\mathrm{current}}(P,Q) = (w_1 \times A_P) + (w_2 \times S_P) + (w_3 \times R_{PQ}) + (w_4 \times E_P) \]
The trust level is a value between 0 and 1 (0 indicating untrusted and 1 indicating fully trusted). Moreover, based on users’ actions, the model recalculates trust value after the t-th transaction, considering current and previous direct trust as follows:
\[ DT_t(P,Q) = (1 - \alpha) \times DT_{\mathrm{current}}(P,Q) + \alpha \times DT_{t-1}(P,Q) \]
where α is the influence rate of the previous direct trust. On the other hand, if the trustee and trustor lack prior experience with each other, the trustor will consult colleagues to gather their recommendations about the trustee (Q). These recommendations may vary among colleagues. The greater the trust a trustor places in a colleague, the more significant the impact of their recommendation. The rationale for extending trust beyond directly trusted entities is inspired by human trust reasoning, in which individuals often rely on trusted references to identify reliable or knowledgeable entities when directly trusted contacts lack the required expertise. The proposed model adopts this principle by considering indirect trust derived from trusted entities, resulting in a broader scope of collaboration and enabling informed, weighted trust aggregation. P and Q’s common colleagues are denoted by the letters x and y. The trustor calculates the indirect trust (IT) by multiplying each recommendation by the trust value of its provider as follows:
\[ IT_t(P,Q) = \theta \times \max_{x = 1, \ldots, n} \left( DT_t(P,x) \times DT_t(x,Q) \right) + (1 - \theta) \times \frac{\sum_{y=1}^{n} DT_t(P,y) \times DT_t(y,Q)}{n} \]
where n is the number of common colleagues, and θ is the weight assigned to the most trusted colleague’s recommendation. If a user engages in abnormal or malicious activities, such as delegating without proper examination, the model will impose a penalty. The penalty value C reduces the user’s trust level and prevents them from causing harm to the system. The penalty is calculated as the summation of the number of unauthorized operations (UA), each multiplied by its significance (SU), divided by the number of all operations (G) performed by the trustee as follows:
\[ C(P) = \frac{\sum_{i=1}^{k} \left( UA_{P_i} \times SU_{P_i} \right)}{G_P} \]
The number of dummy requests, such as accepting or requesting honey objects, along with the sensitivity of the compromised document, will increase the penalty. The final formula for calculating the dynamic trust value (DynamicT), which combines direct trust and indirect trust and deducts the user’s penalty, is as follows:
\[ DynamicT(P,Q) = \left( DT_t(P,Q) \times \beta + IT_t(P,Q) \times (1 - \beta) \right) - \frac{C(P) + C(Q)}{2} \]
where β is the reflection rate of the direct trust and 0 ≤ β ≤ 1. The level of trust determines the actions a user is permitted or restricted from performing.
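The trust formulas above (direct trust, the smoothing update, indirect trust, the penalty, and dynamic trust) can be combined into a short Python sketch. All function names and the example values are ours; this is an illustration of the arithmetic, not the authors' code:

```python
def direct_trust_current(w, A, S, R, E):
    """DT_current(P,Q): weighted sum of ability, sustainability,
    relationship, and experience (factors and weights lie in (0, 1])."""
    w1, w2, w3, w4 = w
    return w1 * A + w2 * S + w3 * R + w4 * E


def direct_trust_update(dt_current, dt_previous, alpha):
    """DT_t(P,Q): blend current and previous direct trust, where alpha
    is the influence rate of the previous direct trust."""
    return (1 - alpha) * dt_current + alpha * dt_previous


def indirect_trust(dt_p_to_colleague, dt_colleague_to_q, theta):
    """IT_t(P,Q): combine the most trusted colleague's recommendation
    (weight theta) with the average over all n common colleagues."""
    products = [a * b for a, b in zip(dt_p_to_colleague, dt_colleague_to_q)]
    return theta * max(products) + (1 - theta) * sum(products) / len(products)


def penalty(unauthorized_counts, significances, total_ops):
    """C(P): significance-weighted unauthorized operations divided by
    the total number of operations performed by the user."""
    return sum(ua * su for ua, su in zip(unauthorized_counts, significances)) / total_ops


def dynamic_trust(dt_t, it_t, beta, c_p, c_q):
    """DynamicT(P,Q): beta-weighted mix of direct and indirect trust,
    minus the average penalty of both parties."""
    return (dt_t * beta + it_t * (1 - beta)) - (c_p + c_q) / 2
```

For instance, with equal weights (0.25 each), factors A = 0.8, S = 0.6, R = 0.4, E = 0.8 give a current direct trust of 0.65, which is then smoothed against the previous value and combined with indirect trust and penalties.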

4. Behavior Analysis

Behavior can be defined as the special patterns in a sequence of recorded attributes for each user. To evaluate user trustworthiness, a behavior-based decision-making approach should be adopted. Machine learning techniques are employed in behavior analysis to predict users’ trust levels; classification is one such technique that can be utilized for behavior analysis. Behavior analysis, along with continuous monitoring and updating of user trust levels, yields a dynamic model that responds to changes in user behavior. A combination of a rule-based model and a machine learning technique leads to an accurate and reasonable access decision [3,29,43].
A classification algorithm that classifies the user’s trust level based on their attributes and actions would create a machine learning model for behavior analysis. The decision tree algorithm is chosen to build this model. The process involves two phases: generating the decision tree and applying it to test data. During the tree generation phase, the root node attribute is selected based on the Attribute Selection Measure (ASM). This process is repeated iteratively until further division into sub-nodes is no longer possible. First, the decision tree algorithm processes the user’s characteristics and behavior as input. Second, it evaluates the root attribute and compares it with the corresponding user value to determine the appropriate sub-tree. Then, using a series of if-else conditions, it sequentially examines all attributes. Finally, the algorithm reaches a terminal node and outputs the user’s trust level.
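The if-else traversal described above can be illustrated with a hand-written decision path. The attributes and thresholds below are hypothetical stand-ins for the learned tree, chosen only to show the root-to-leaf control flow:

```python
def classify_trust(user):
    """Walk a (hypothetical) decision path from root attribute to a leaf,
    returning one of ATACHOBA's four trust levels. The root split on
    honey-object accesses and all thresholds are illustrative, not the
    values learned during training."""
    if user["honey_accesses"] > 0:          # root attribute test
        return "not trusted" if user["honey_accesses"] > 2 else "low trust"
    if user["negative_actions"] == 0 and user["experience"] >= 0.7:
        return "fully trusted"
    return "high trust" if user["negative_actions"] <= 1 else "low trust"
```

A trained decision tree performs the same kind of sequential attribute comparisons, but with splits selected by the Attribute Selection Measure rather than by hand.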
The goal of developing the behavior-based model is to predict users’ trust levels from their past activities. This prediction aims to enhance system protection by mitigating potential risks. Eight negative behaviors were selected for the simulation, as shown in Table 1. Some of these behaviors are general and could impact any system: virus traces, over-limit unsuccessful login, over-bandwidth usage, and long logged-in time [44]. The remaining four behaviors are healthcare system behaviors, including wrong dose, corrupt public file, corrupt secret file, and access to honey objects.
Users’ trust levels were monitored over time (measured in ticks within the simulator). Each negative behavior reduced the trust level by a corresponding penalty value. As the damage to the system escalates, the penalty value will increase proportionally. In predictive modeling, feature selection reduces the number of input variables. This reduction is advantageous as it lowers the computational cost of modeling and, in certain cases, enhances the model’s performance [45]. The behavior analysis relies on features that capture the user’s history of eight possible negative actions and their counters, as well as ability, sustainability, experience, and major. The aforementioned features would serve as the basis for assessing the user’s trustworthiness in handling permission requests and in providing input to machine learning algorithms. Therefore, the machine learning model classifies each user into one of the following trust levels: fully trusted, high trust, low trust, and not trusted. The prediction can change over time from one outcome to another (e.g., from low trust to high trust) as more positive actions are exhibited, and vice versa. Low-trust and untrusted users will have minimal permissions to delegate to others. However, they can perform their tasks and duties and receive permissions if highly trusted users approve their requests. Our aim is to improve the model’s reliability and prevent the abuse of privilege. The ATACHOBA model has both static and dynamic modules. Static configurations are the organization’s collaboration policies. On the other hand, dynamic events are represented by continuous user behaviors.

5. Dataset

To ensure that the ATACHOBA model is suitable for deployment in real-world systems, it should ideally be tested with actual data. However, obtaining such data is challenging, particularly in critical private systems such as healthcare, due to confidentiality and privacy concerns [46,47]. Therefore, synthetic data is generated through simulation as an alternative. The simulation represents tasks and users within a healthcare system. The dataset aims to mimic the healthcare environment by adhering to assumed probabilities reflective of everyday hospital operations. The pseudorandomly generated simulation data serves as the dataset for the machine learning algorithm. Building a behavior-based analysis involves training the model with an appropriate dataset. The challenge lies in examining how the classifier operates on features generated by a simulation tool. The data in our process is split into 70% for training and 30% for testing. Various user behaviors within the system can be modeled using simulation. These behaviors can be analyzed to derive valuable insights (e.g., user behavior can be used to predict future actions).
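The 70/30 split mentioned above can be sketched with the standard library; the seed, the function name, and the shuffling strategy are our own choices:

```python
import random


def train_test_split(rows, train_frac=0.7, seed=42):
    """Shuffle the simulated dataset reproducibly and split it into
    training and test partitions (70%/30% by default, as in the paper)."""
    rnd = random.Random(seed)
    shuffled = list(rows)
    rnd.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]
```

In practice, a stratified split (preserving the proportion of each trust level) would be preferable for imbalanced classes such as the small not-trusted group.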
During the simulation process, 5000 users were assigned to five distinct roles within the healthcare system: 1000 front desk staff, 1000 doctors, 500 ER nurses, 1500 nurses, and 1000 administrators. The dataset was captured after 50 simulation ticks, yielding the following distribution of users across trust levels: 1200 fully trusted, 1950 high trust, 1678 low trust, and 172 not trusted.

6. Experiment

To ensure the ATACHOBA model is practical and applicable in real-world systems, it is advisable to validate it in an operational environment. For this study, a healthcare environment was selected. However, the ATACHOBA model is flexible and can be adopted in several organizations. Each organization should provide its collaboration policies and the weights for the security factors in the trust formula, based on its properties.
Healthcare management systems include critical patients’ information. Any information disclosure or misuse of the data without proper authorization may cause serious harm. Furthermore, to gain patients’ trust and uphold a good reputation, the system must maintain a high level of security to safeguard patients’ information. To achieve this, healthcare organizations should implement a robust access control model to secure their systems. Our model, ATACHOBA, safeguards patients’ privacy by granting access to sensitive information exclusively to trusted users. Furthermore, the healthcare system must ensure availability and rapid response times. To address this, ATACHOBA prioritizes and processes emergency cases without delay. In an emergency, ATACHOBA balances security and flexibility by allowing delegation exclusively to trusted parties. A series of simulations is conducted to validate the model, considering all potential scenarios.

6.1. Initial Settings

ATACHOBA model users are primarily system users who incorporate the ATACHOBA model as their access control model. Consider a healthcare system comprising numerous front-desk staff, doctors, nurses, ER nurses, and administrators. The resources used in the scenario include appointments, doctors’ and nurses’ shift schedules, medical reports, partial patient information, and all patient information, which were labeled as public, restricted, confidential, secret, and top-secret, respectively. The formula of direct trust includes the following factors: ability (A), sustainability (S), relationship (R), and experience (E). These factors are described as follows:
  • Ability: A random number is assigned between zero and one, where a value of one means the person can perform the actions, and zero otherwise.
  • Sustainability: A random number between zero and one is generated that represents sustained behavior, such as effective working hours and the number of common shifts.
  • Relationship: A random number between zero and one is generated to represent the level of interaction and relationship between colleagues, and the number of common relationships.
  • Experience: A random number between zero and one is assigned to represent years of experience and level of expertise.
All these factors are generated randomly by the simulator within a pre-specified reasonable range.
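The simulator's factor generation can be sketched as below. The (0, 1] range follows the text; the rounding precision, the lower bound of 0.01, and the seeding are our own choices:

```python
import random


def generate_user_factors(seed=None):
    """Pseudorandomly draw the four trust factors (ability, sustainability,
    relationship, experience), each within (0, 1] as the model requires."""
    rnd = random.Random(seed)
    return {name: round(rnd.uniform(0.01, 1.0), 2)
            for name in ("ability", "sustainability", "relationship", "experience")}
```

Each simulated user would receive one such factor dictionary at creation time, which then feeds the direct-trust formula.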

6.2. Healthcare System Functions

Based on the software requirements specification for healthcare management systems [48], the following functions were selected for our scenario. In a normal situation, the model follows RBAC’s basic idea, which allows users to perform operations based on their role responsibilities. Table 2 shows a mapping of system users and their tasks. Certain constraints are imposed on users’ responsibilities: doctors and nurses are granted access only to the patients assigned to them, while front desk staff have access solely to patients’ basic information and cannot access their medical histories.

6.3. Healthcare System Collaboration Policy

ATACHOBA ensures effective responsibility distribution and reduces abnormal activities by assigning dynamic trust values to users and enabling their contributions. The trust value determines their ability to contribute and delegate. Table 3 shows a mapping of trust levels and the allowed degree of collaboration. Users’ collaboration is allowed only for permissions with a collaboration policy. It includes the role, the maximum user weight, the maximum role member weight, and permissions. The healthcare system scenario consists of many collaboration policies; here are some examples:
  • CP (Administrator, 10, 20, Assign doctor);
  • CP (Nurse, 15, 60, Discharge patient);
  • CP (Nurse, 15, 40, Evaluate ER patient’s case);
  • CP (ER nurse, 20, 40, Treat primary wounds);
  • CP (Doctor, 20, 30, Update checkup info).
Moreover, the following list shows some of the permissions with a threshold for the healthcare system application:
  • P (Assign doctor, 20);
  • P (Update checkup info, 30);
  • P (Evaluate ER patient’s case, 40);
  • P (Treat primary wounds, 40);
  • P (Discharge patient, 60).
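The interplay between the collaboration policies and the permission thresholds above can be illustrated as follows. The cap-then-sum logic is our reading of the CP fields (maximum user weight, maximum role-member weight) and is a sketch, not the authors' enforcement code:

```python
def role_contribution(weights, max_user_weight, max_role_weight):
    """Cap each contributor at the CP's maximum user weight, then cap the
    role's combined contribution at the CP's maximum role-member weight."""
    capped = [min(w, max_user_weight) for w in weights]
    return min(sum(capped), max_role_weight)


def permission_granted(total_weight, threshold):
    """Grant the permission once the accumulated weight reaches its threshold."""
    return total_weight >= threshold
```

For example, under CP (Nurse, 15, 60, Discharge patient) and P (Discharge patient, 60), four nurses each contributing the maximum weight of 15 would together reach the threshold of 60.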

6.4. Healthcare System Unauthorized Operations

In a healthcare system, unauthorized operations are prohibited based on the user’s role. Performing such operations will result in a penalty being assigned to the user. The penalty value reduces the user’s trust level, thereby preventing further harm to the system. An unauthorized operation is any action that could harm the system or disclose sensitive information. In the simulation, a set of unauthorized operations was selected, including the following:
  • A doctor attempts to access a patient’s information that is not assigned to them and does so without an emergency.
  • A nurse attempts to access all patient information without a doctor’s delegation (categorized as confidential information).
  • A doctor attempts to update a patient’s drug information that is not assigned to them.
Moreover, if a doctor indiscriminately accepts all delegation requests without thoroughly examining each one, patients’ privacy may be violated. This may result in the delegation of sensitive operations to unlicensed users. To address this issue, ATACHOBA utilizes honey objects and honey requests to deter the irresponsible practice of accepting requests without proper inspection. Each acceptance of a honey request or access to a honey object will negatively impact the trust value. In addition, if a resource is harmed by an abnormal or unauthorized operation, its label will affect the penalty value. For example, the penalty for unauthorized access to “confidential” patient information will be significantly higher than for accessing “public” appointment files. The resource label determines the significance of the unauthorized operation (SU_i).
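Label-driven significance and escalating honey penalties can be sketched together as follows. The numeric significance values and the linear escalation rule are hypothetical, chosen only to show the shape of the mechanism:

```python
# Hypothetical significance per resource label (higher = more sensitive),
# following the paper's public .. top-secret ordering.
SIGNIFICANCE = {
    "public": 0.1,
    "restricted": 0.3,
    "confidential": 0.6,
    "secret": 0.8,
    "top-secret": 1.0,
}


def honey_penalty(access_count, base=0.05):
    """Escalating penalty: the i-th honey-object access costs i * base,
    so repeated offenses are punished progressively harder."""
    return sum(base * i for i in range(1, access_count + 1))
```

With these illustrative values, a third honey-object access alone costs 0.15, three times the first one, which realizes the escalation described in the text.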

7. Implementation

There are three phases in system development: the simulator, the web service, and the portal. Initially, NetLogo v6.2.0 was utilized as the simulation environment [49]. The entire simulation output was then used to populate the dataset, which was subsequently fed into a machine learning algorithm. Following this, a web service was developed to retrieve the recommendations. Finally, to test our model, a web application was developed.

7.1. Components Integration

The ATACHOBA system incorporates two integrations: first, the integration between the simulator, which generates the dataset as a CSV file, and the machine learning classifier; second, the integration between the web service and the classifier processes. This integration has been achieved using the RapidMiner Server (a custom RESTful web service, v9.1) [50]. Figure 2 illustrates the integration of the ATACHOBA system components.

7.2. Simulation Tool

In line with recommendations from many systems that incorporate simulation, such as [51,52,53], ATACHOBA utilizes the NetLogo simulator for experimentation and testing [49]. Using the simulator, the interactions between healthcare system users and the system are simulated. Eight negative behaviors were selected for the simulation, some of which are general behaviors that could affect any system: virus traces, over-limit unsuccessful login, over-bandwidth usage, and long logged-in time [44]. On the other hand, certain healthcare system behaviors, such as wrong dose, corrupt public file, corrupt confidential file, and access to honey objects, are also considered. The users’ trust level has been monitored over time (represented by ticks in the simulator). Depending on their trust level, users’ colors change over time to green, blue, yellow, and red for fully trusted, high trust, low trust, and not trusted, respectively.

7.3. Machine Learning

To enhance a system’s intelligence and capabilities, incorporating machine learning solutions is highly recommended, particularly in healthcare applications [54]. RapidMiner [50] is a data science tool utilized to develop an ATACHOBA machine learning model and web service. More specifically, RapidMiner is used to develop the machine learning model for behavior analysis and trust evaluation. Building a RapidMiner process requires properly adding and integrating components to achieve the desired results. The model is then deployed as a web service to enable its use within an application. For this purpose, RapidMiner offers an additional tool called RapidMiner Server. The web service is designed to accept the input parameters used during training.

8. Experimental Results

The experiments demonstrate that the ATACHOBA model is highly responsive, particularly in emergency situations. The time required to calculate and update all users’ trust values is considered in the system evaluation. The performance of the trust update function depends on the trained machine learning model. Additionally, the function’s response time is directly proportional to the number of users. Load testing shows that response time increases gradually as the number of users increases. However, an effective way to implement the trust update function is by utilizing a CRON job that runs periodically in the background. This approach ensures no delays in access requests, particularly during emergencies.
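The background refresh strategy can be sketched with a periodic worker thread, an illustrative stand-in for a cron job; the interval, function names, and stop mechanism are our own:

```python
import threading


def schedule_periodic(interval_s, fn):
    """Run fn every interval_s seconds in a daemon thread, mimicking a
    cron job that refreshes all users' trust values off the request path.
    Returns an Event; call set() on it to stop the loop."""
    stop = threading.Event()

    def loop():
        # Event.wait doubles as an interruptible sleep between runs.
        while not stop.wait(interval_s):
            fn()

    threading.Thread(target=loop, daemon=True).start()
    return stop
```

For example, `schedule_periodic(3600, update_all_trust_values)` (where `update_all_trust_values` is a hypothetical batch routine) would refresh trust hourly without adding latency to individual access requests.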
The safety property requires that no permissions be granted to unauthorized users. An access control model is considered safe when resources are accessible only to authorized users [55]. Therefore, the ATACHOBA model enhances the safety of the protected system by granting permissions only to highly trusted users, combined with the use of the machine learning algorithm’s recommendations as an additional measure. Moreover, ATACHOBA imposes penalties on users for any abnormal or malicious activity to mitigate potential harm.

8.1. Machine Learning Results

A well-populated user dataset generated during the simulation process, along with the decision tree machine learning algorithm [56], is used to train the model. This trained model evaluates user parameters and previously logged actions to determine the user’s trust level. Based on the assessed trust level, future access requests may be either approved or denied. Figure 3 presents the decision tree of the trained model.
Our trained model achieved 97.20% accuracy when the decision tree algorithm was applied. The data used in the process was split into 70% for training and 30% for testing, and this accuracy was calculated on the 30% testing sample of the generated dataset. Table 4 shows the predicted vs. actual results. For example, 565 users were predicted as ‘high trust’ and were indeed ‘high trust’, while 15 were actually ‘low trust’ but incorrectly predicted as ‘high trust’. The metrics, including precision and recall, are measured by evaluating the entire dataset. The confusion matrix is a useful tool for calculating accuracy, precision, and recall, and it extends naturally to multi-class classifiers. Precision is the ratio of true positives to the sum of true positives and false positives; it reflects how many false positives were included among the positive predictions. The precision percentages for the class labels fully trusted, high trust, low trust, and not trusted in our trained model are 95.73%, 97.25%, 98.58%, and 94.23%, respectively. Recall, on the other hand, accounts for the false negatives incurred during prediction. In our trained model, the recall percentages for fully trusted, high trust, low trust, and not trusted are 99.72%, 96.58%, 96.42%, and 94.23%, respectively.
The multi-class problem must be transformed into several binomial tasks, one for each class, to calculate the F-score. In our model, the weighted average F-score is 96.579%.
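Per-class precision and recall can be computed directly from a confusion matrix. Below is a minimal sketch (rows are actual classes, columns are predicted classes; the helper name is our own):

```python
def per_class_metrics(confusion, labels):
    """Per-class precision and recall from a square confusion matrix
    whose rows are actual classes and columns are predicted classes."""
    metrics = {}
    for i, label in enumerate(labels):
        tp = confusion[i][i]
        predicted_total = sum(row[i] for row in confusion)   # column sum
        actual_total = sum(confusion[i])                     # row sum
        metrics[label] = {
            "precision": tp / predicted_total if predicted_total else 0.0,
            "recall": tp / actual_total if actual_total else 0.0,
        }
    return metrics
```

The weighted average F-score then follows by computing 2PR/(P+R) per class and weighting each class by its support.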
To better evaluate the generalization and robustness of the experiment, we complemented the 70% training and 30% test split with 10-fold cross-validation. We also included the random forest classifier to extend the comparative evaluation. Table 5 and Table 6 present the classification results for the decision tree and random forest classifiers. The results indicate that the decision tree classifier outperformed the random forest classifier based on the performance measures.
It is worth mentioning that several methods can enhance classification accuracy, such as expanding the range of trust thresholds and adjusting the penalty values for negative behavior.

8.2. Collaboration Policy Testing

The test inputs for policy evaluation consist of access requests, whereas the test outputs are access decisions. Test input execution occurs while the PDP evaluates the request. The actual access decisions should be compared to the expected decisions to determine the model’s accuracy [57]. In this experiment, policy testing has covered all the collaboration policies of the ATACHOBA model. Table 7 provides a snapshot comparison of the actual and expected access decisions for the requests used to perform this test.

8.3. Simulation Results

During simulation tool testing, specific variables were thoroughly validated. Figure 4 illustrates two test cases in which the number of populated users increases, and the system monitors user behavior in both scenarios. Additionally, the distribution of users across the five roles adheres to the specified probabilities. Each color represents a different role. Users’ colors are black, purple, green, red, and blue for the administrator, front desk staff, nurse, ER nurse, and doctor, respectively.
The visualization of simulation outcomes offers several benefits, including improved understanding of the system and ensuring it performs as intended. During the simulation, statistical analysis was conducted through diagrams and calculated results. The following observations highlight the simulation results:
  • Users’ trust levels, categorized as good or bad, might shift over time based on their positive or negative actions. Therefore, a good user at a certain point in time might become a bad one, as shown in Figure 5.
  • The statistics in Figure 6 depict the percentages of users and their trust levels after 50 ticks across the entire population.
  • Figure 7 illustrates the number of users at each trust level within a total population of 5000 users. The simulation accurately replicated the real-world distribution of users across trust levels. For instance, the number of fully trustworthy users is the smallest, followed by high-trust users, with a noticeable gap between them.
  • The model also computed the average number of negative actions per user class, as shown in Figure 8. Furthermore, fully trustworthy users have an average of 0.5825 (per 50 ticks), compared to the other classes, which are 0.63 and above. This indicates that negative actions that impact trust are fewer at the fully trustworthy trust level than at other trust levels.
  • One of the key ATACHOBA contributions is the addition of honey objects to deter the irresponsible practice of accepting requests without proper examination. Honey objects mitigate the risk of harming the system, track malicious behavior, and identify untrustworthy users. Each access to a honey object negatively impacts the trust value. The simulation indicates that 5000 users requested a total of 370 honey objects per 50 ticks.
  • Figure 9 depicts the growth of action counts over time. All action counts increase as the simulation progresses. The simulation process assigns higher probabilities to certain actions, which explains why the counts for some actions increase more rapidly than others.
  • To highlight the effectiveness of the proposed model, ATACHOBA is evaluated against RBAC, ABAC, E-RBAC, and T-CAC using the False Positive Rate (FPR) and True Positive Rate (TPR). The results show that ATACHOBA achieves an FPR of 1% while maintaining a TPR of 96.7%. Classical RBAC does not support emergency situations and does not consider trust. Consequently, legitimate emergency requests are denied, resulting in FPR and TPR of 0%. For ABAC, performance varies depending on whether emergency context and trust attributes are taken into consideration. Low FPR and low TPR are expected if emergency situations are not supported; however, if ABAC is designed to handle emergencies and to incorporate trust, its TPR may improve, though its FPR may rise as well. E-RBAC is a trust-based model designed to address emergencies and critical situations. While this approach improves TPR, the absence of continuous verification, such as the use of honey objects and honey requests, increases false positives and results in medium to high FPR. T-CAC emphasizes user collaboration, grounded in business requirements. However, because trust calculation is not considered, malicious users may exploit collaborative contributions, resulting in high TPR and high FPR.

9. Security and Operational Consideration

Regarding security and operational complexity, ATACHOBA introduces multiple layers of protection to ensure continuous verification. This would indeed increase operational complexity; however, the overhead can remain minimal for several reasons. In the administration of an organization’s IT infrastructure, indicators of malicious or anomalous behavior, such as traces of malware, excessive failed login attempts, or abnormal bandwidth consumption, are already monitored by existing systems. Security Information and Event Management (SIEM) solutions collect and aggregate these events, along with other audit logs, to ensure comprehensive protection of organizational networks and assets. ATACHOBA can be integrated with SIEM solutions to enable real-time behavior analysis. Since the adaptive trust and behavior analysis mechanisms in ATACHOBA operate autonomously without requiring human intervention, the resulting operational complexity is minimal. The generation of honey objects can also be automated using synthetic data generators that leverage natural language processing techniques. Therefore, the additional overhead imposed on the security team is limited. From the user perspective, honey objects can be concealed using decoy-document techniques [39], where a cryptographic tag generated via a keyed-hash message authentication code (HMAC) [40] is stored as a hidden attribute of the honey objects. An additional layer can be employed to control the visibility of an indicator that specifies whether an object is a honey object or a normal one. Consequently, this enables legitimate users to distinguish and avoid honey objects with minimal effort. Furthermore, the frequency of honey requests can be dynamically adjusted: high-trust users may be subjected to fewer challenges, whereas low-trust users may experience more frequent honey requests. This adaptive mechanism ensures that user productivity is not negatively affected while maintaining minimal operational overhead. At the same time, it preserves multiple layers of protection to support continuous verification.
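The decoy-tagging step can be sketched with Python's standard hmac module; the function names and the example key handling are illustrative, and a production deployment would manage the key through a proper secrets store:

```python
import hashlib
import hmac


def honey_tag(object_id: str, key: bytes) -> str:
    """HMAC-SHA256 tag stored as a hidden attribute of a honey object."""
    return hmac.new(key, object_id.encode(), hashlib.sha256).hexdigest()


def is_honey(object_id: str, tag: str, key: bytes) -> bool:
    """Constant-time verification that lets trusted components recognize
    honey objects without revealing them to observers lacking the key."""
    return hmac.compare_digest(honey_tag(object_id, key), tag)
```

Because the tag is keyed, an attacker who enumerates records cannot tell decoys from real data, while components holding the key can filter honey objects out of legitimate users' views.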

10. Conclusions and Future Work

In conclusion, an adaptive trust-based access control model with honey objects and behavior analysis has been developed, enabling organizations to maintain the confidentiality and integrity of their transactions and data. ATACHOBA provides a two-level verification process in which a user is granted access only after both the dynamic trust-based decision and the behavior-based recommendation concur.
Moreover, ATACHOBA imposes penalties on users for any abnormal or malicious activities to mitigate potential harm. As a proof of concept, the healthcare system has been implemented as a web application to simulate a healthcare organization’s user behavior through practical scenarios. Finally, its feasibility was demonstrated via performance analysis and policy testing.
The experimental results of ATACHOBA are promising but are currently limited to simulation-based evaluation. The future direction involves applying the same model in real-life scenarios. ATACHOBA could be utilized in organizations handling confidential information, where a greater number of users, diverse roles, and more complex behaviors are present. Implementing the system in real-world scenarios would provide valuable feedback from actual users, which is crucial for its improvement. Moreover, a comprehensive comparison with baseline access control models and recent approaches would further enrich this study. The usability impact and user feedback regarding honey requests and honey objects will be explored in future work. The efficiency of the trained model can be further enhanced by refining the machine learning algorithm, including improvements in pre-processing, feature selection, and the exploration of alternative classification algorithms. Furthermore, studying the value and implications of the behavior-based model across a range of threshold-based systems would enhance the relevance of our research and facilitate further improvements in access decision-making while accounting for the context of the access control model. Some studies show that decision-making regarding the development and implementation of security models is influenced by societal and personal relationships. Therefore, it would be worthwhile to examine the model’s social implications, as they could yield valuable insights for refining the decision-making criteria. Future work will also explore integrating large language models (LLMs) with ATACHOBA to improve the proposed model’s usability and adaptability.

Author Contributions

Conceptualization, F.A.A.; methodology, A.S.A. and F.A.A.; software, A.S.A.; validation, A.S.A. and F.A.A.; formal analysis, A.S.A. and F.A.A.; investigation, A.S.A. and F.A.A.; resources, A.S.A. and F.A.A.; data curation, A.S.A.; writing—original draft preparation, A.S.A. and F.A.A.; writing—review and editing, A.S.A. and F.A.A.; visualization, A.S.A.; supervision, F.A.A.; project administration, F.A.A.; funding acquisition, F.A.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by King Saud University, Riyadh, Saudi Arabia, through the Ongoing Research Funding Program (ORFFT-2025-033-2).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original data presented in the study are openly available on Zenodo at https://doi.org/10.5281/zenodo.18045244.

Acknowledgments

The authors would like to thank the Ongoing Research Funding Program (ORFFT-2025-033-2), King Saud University, Riyadh, Saudi Arabia, for financial support.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Figure 1. Overall architecture of ATACHOBA model, where a requester seeks user collaboration to obtain permission p.
Figure 2. Integration of ATACHOBA system components.
Figure 3. Portion of ATACHOBA decision tree.
Figure 4. Population simulation demonstrates two test cases in which the number of users increases, enabling the system to observe user behavior in both scenarios. In this simulation, “box” represents a healthcare action, while “person” denotes a healthcare user. Furthermore, the distribution of users across five roles adheres to the specified probabilities. Each color corresponds to a specific role: black, purple, green, red, and blue represent administrator, front-desk staff, nurse, ER nurse, and doctor, respectively.
Figure 5. Trust levels of two users over time, where the green line represents User 1, with a trust value of 0.80 or higher, who is considered a good user at the moment. However, if User 1’s negative actions continue, the system will consider User 1 a bad user. The red line represents User 2, a bad user with a trust value less than 0.80.
Figure 6. Percentages of users at each trust level after 50 ticks.
Figure 7. User count at each trust level.
Figure 8. Negative action count.
Figure 9. The growth of negative action counts over time. Traces of viruses are represented by the violet line; over-limit unsuccessful login attempts by the lime line; over-bandwidth usage by the orange line; long logged-in time by the yellow line; wrong dose by the magenta line; corrupt public profile by the cyan line; corrupt secret profile by the brown line; and accessing honey objects by the red line.
Table 1. User negative behaviors.
Behavior | Description
Traces of Viruses | The user’s PC contains traces of viruses that were previously detected.
Over-Limit Unsuccessful Login | Users who exceed the allowed number of failed login attempts.
Over Bandwidth Use | The user’s PC consumes more bandwidth than anticipated.
Long Logged-In Time | Users who remain logged in for extended periods.
Wrong Dose | Caregivers who make errors when giving prescriptions.
Corrupt Public File | Users who damage a file classified as public.
Corrupt Secret File | Users who damage a file classified as secret.
Access Honey Object | Users who attempt to access unauthorized resources, such as requesting a honey object.
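The penalty mechanism can be sketched in a few lines. The per-behavior penalty weights below are purely illustrative assumptions (the exact deductions ATACHOBA applies are not specified in this section); the sketch only shows how observed negative behaviors from Table 1 could lower a user's trust value.

```python
# Hypothetical penalty weights for the negative behaviors in Table 1.
# These values are illustrative; the actual deductions used by ATACHOBA
# are not given here.
PENALTIES = {
    "traces_of_viruses": 0.10,
    "over_limit_unsuccessful_login": 0.05,
    "over_bandwidth_use": 0.05,
    "long_logged_in_time": 0.02,
    "wrong_dose": 0.15,
    "corrupt_public_file": 0.10,
    "corrupt_secret_file": 0.20,
    "access_honey_object": 0.25,
}

def apply_penalties(trust: float, behaviors: list[str]) -> float:
    """Deduct a penalty for each observed negative behavior and clamp
    the resulting trust value to the [0, 1] range."""
    for behavior in behaviors:
        trust -= PENALTIES[behavior]
    return max(0.0, min(1.0, trust))

# A fully trusted user who accesses a honey object and then corrupts a
# secret file drops below the 0.80 "good user" threshold of Figure 5.
print(round(apply_penalties(1.0, ["access_honey_object", "corrupt_secret_file"]), 2))  # → 0.55
```

With hypothetical weights like these, a single honey-object access is the most expensive action, reflecting that touching a decoy is strong evidence of misuse.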
Table 2. System users’ tasks.
User | Task
Front-desk staff | Update patient.
Administrator | Assign doctor; assign nurse.
Nurse | Nursing diagnosis; update checkup info; review patient’s partial information.
ER nurse | Determine medication; treat primary wounds; evaluate the ER patient’s case.
Doctor | Update drug info; discharge patient; review all patients’ information.
Table 3. Trust levels and their corresponding contribution weight percentages.
Trust Level | Trust Value | Permitted Contribution Weight
0 | (0, 0.25] | 0% (not trusted)
1 | (0.25, 0.50] | 50% (low trust)
2 | (0.50, 0.75] | 75% (high trust)
3 | (0.75, 1] | 100% (fully trusted)
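Reading the boundaries in Table 3 as inclusive on the right, the mapping from a trust value to its level and permitted contribution weight can be expressed directly as a sketch:

```python
def trust_level(trust: float) -> tuple[int, int]:
    """Map a trust value in (0, 1] to its trust level and permitted
    contribution weight in percent, per Table 3."""
    if trust <= 0.25:
        return 0, 0      # not trusted
    if trust <= 0.50:
        return 1, 50     # low trust
    if trust <= 0.75:
        return 2, 75     # high trust
    return 3, 100        # fully trusted

print(trust_level(0.60))  # → (2, 75)
print(trust_level(0.80))  # → (3, 100)
```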
Table 4. Confusion matrix showing predicted vs. actual results of classifying users into trust levels, based on a 70% training and 30% testing split.
| True “High Trust” | True “Fully Trustworthy” | True “Low Trust” | True “Not Trusted” | Class Precision
Pred. “high trust” | 565 | 1 | 15 | 0 | 97.25%
Pred. “fully trustworthy” | 16 | 359 | 0 | 0 | 95.73%
Pred. “low trust” | 4 | 0 | 485 | 3 | 98.58%
Pred. “not trusted” | 0 | 0 | 3 | 49 | 94.23%
Class recall | 96.58% | 99.72% | 96.42% | 94.23%
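The class precision and recall values in Table 4 can be re-derived from the raw counts. The small sketch below does this, with rows as predicted classes and columns as true classes, in the order high trust, fully trustworthy, low trust, not trusted:

```python
# Confusion matrix from Table 4: rows are predicted classes, columns are
# true classes (high trust, fully trustworthy, low trust, not trusted).
matrix = [
    [565,   1,  15,  0],
    [ 16, 359,   0,  0],
    [  4,   0, 485,  3],
    [  0,   0,   3, 49],
]

def precision_pct(m, i):
    """Correct predictions of class i over everything predicted as class i."""
    return round(100 * m[i][i] / sum(m[i]), 2)

def recall_pct(m, i):
    """Correct predictions of class i over everything that is truly class i."""
    return round(100 * m[i][i] / sum(row[i] for row in m), 2)

print(precision_pct(matrix, 0))  # → 97.25
print(recall_pct(matrix, 1))     # → 99.72
```

These computed values match the per-class precision and recall percentages reported in the table.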
Table 5. Classification results based on 70% training and 30% testing split.
Classifier | Accuracy | Precision | Recall | F1-Score
Decision Tree | 97.20% | 96.96% | 96.11% | 96.52%
Random Forest | 94.73% | 94.58% | 88.85% | 91.12%
Table 6. Classification results based on 10-fold cross-validation.
Classifier | Accuracy | Precision | Recall | F1-Score
Decision Tree | 97.00% | 96.94% | 95.93% | 96.41%
Random Forest | 94.73% | 94.74% | 89.14% | 91.40%
Table 7. Policy testing cases.
Access Request | Expected Response | Actual Response
A fully trusted nurse requests to “Review all patient info” | Rejected | Rejected
Two highly trusted nurses request to “Review all patient info” | Rejected | Rejected
Three fully trustworthy, one highly trusted, and one low-trusted nurse request to “Review all patient info” | Accepted | Accepted
A fully trusted doctor requests to “Update drug info” | Rejected | Rejected
Two fully trusted doctors request to “Update drug info” | Accepted | Accepted
Two fully trusted and one highly trusted nurse request to “Evaluate ER patient case” | Accepted | Accepted
A fully trusted ER nurse requests to “Assign Doctor” | Rejected | Rejected
Two highly trusted ER nurses request to “Assign Doctor” | Accepted | Accepted
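A minimal sketch of the threshold-based collaborative decision behind these test cases, assuming the contribution weights of Table 3 and per-task thresholds chosen here purely for illustration (200 for “Update drug info”, 150 for “Assign Doctor”; the actual policy thresholds are not given in this excerpt):

```python
# Contribution weights from Table 3. The per-task thresholds below are
# hypothetical values chosen only to reproduce outcomes like those in
# Table 7; the real policy thresholds are not stated here.
WEIGHT = {"fully": 100, "high": 75, "low": 50}
THRESHOLD = {"update_drug_info": 200, "assign_doctor": 150}

def collaborative_decision(task: str, collaborators: list[str]) -> str:
    """Accept the request when the summed contribution weights of the
    collaborating users reach the task's threshold (T-CAC style)."""
    total = sum(WEIGHT[level] for level in collaborators)
    return "Accepted" if total >= THRESHOLD[task] else "Rejected"

print(collaborative_decision("update_drug_info", ["fully"]))           # → Rejected
print(collaborative_decision("update_drug_info", ["fully", "fully"]))  # → Accepted
print(collaborative_decision("assign_doctor", ["high", "high"]))       # → Accepted
```

The sketch captures the key property of the model: no single user, however trusted, can perform a sensitive task whose threshold exceeds one full contribution, so collaboration is enforced.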