Article

An Effective Naming Heterogeneity Resolution for XACML Policy Evaluation in a Distributed Environment

1 Faculty of Computer Science and Information Technology, Universiti Putra Malaysia, Serdang 43400, Malaysia
2 School of Theoretical & Applied Science, Ramapo College of New Jersey, Mahwah, NJ 07430, USA
* Author to whom correspondence should be addressed.
Symmetry 2021, 13(12), 2394; https://doi.org/10.3390/sym13122394
Submission received: 16 November 2021 / Revised: 1 December 2021 / Accepted: 7 December 2021 / Published: 12 December 2021
(This article belongs to the Section Computer)

Abstract

Policy evaluation is a process to determine whether a request submitted by a user satisfies the access control policies defined by an organization. Naming heterogeneity between the attribute values of a request and a policy is common due to syntactic and terminological variations, particularly among the organizations of a distributed environment. Existing policy evaluation engines employ a simple string equal matching function in evaluating the similarity between the attribute values of a request and a policy, which is inaccurate, since only exact matches are considered similar. This work proposes several matching functions, not limited to the string equal matching function, that aim to resolve various types of naming heterogeneity. Our proposed solution is also capable of supporting symmetrical architecture applications, in which the organization can negotiate with the users for the release of their resources and properties that raise privacy concerns. The effectiveness of the proposed matching functions is evaluated on real XACML policies designed for the university, conference management, and health care domains. The results show that the proposed solution successfully achieves higher percentages of Recall and F-measure compared with the standard Sun's XACML implementation; with our improvement, these measures gained up to 70% and 57%, respectively.

1. Introduction

Policy evaluation is a process to determine whether a request submitted by a user satisfies the access control policies defined by an organization. A practical distributed policy evaluation framework should be able to support autonomy in policy specification, as well as interoperability among parties and policy portability [1,2,3,4,5]. Naming heterogeneity arises from the use of different combinations of characters to represent the same term (syntactic variations), including typographical errors and similar terms belonging to different grammatical categories, as well as from the use of different terms which have the same meaning (terminological variations) [6,7].
Existing policy evaluation engines [8,9,10,11] employ a simple string equal matching function during policy evaluation. However, they are deemed inaccurate since they do not explore naming heterogeneity and rely on the assumption that different terms represent different concepts. It would be unrealistic to assume that different organizations from different security domains would share the same vocabulary to represent their policies.
Several researchers have used ontologies for the specification of policies or have added semantic knowledge-based functions for semantic interoperability [12,13,14,15,16,17,18,19]. However, ontology construction is a labor-intensive, error-prone, and time-consuming task because, in general, it involves human input during the policy design stage to manually perform the ontology concept mapping, under the assumption that the security officer can be trusted to perform an accurate mapping. Moreover, the ontology needs to be reformed once a new party joins the collaboration. Developing an effective matching function has therefore been one of the main tasks in policy evaluation.
Several matching functions are proposed in this work to resolve the issue of naming heterogeneity between the attribute values of a request and a policy during policy evaluation. The proposed solution is domain-independent as it does not rely on any specific rules of a particular domain, hence a predefined knowledge of the domain is not required. Tokenization and concatenation are applied to the attribute values in order to remove unnecessary delimiters, which are considered as noise, before the proposed matching functions are executed. N-gram and WordNet are adopted as well in the proposed solution. N-gram is effective in matching terms with minor syntactic differences [13]; while WordNet could identify the equivalence and inheritance relationships between the attribute values of a request and a policy.
This work is based on the discretionary access control (DAC) model. The eXtensible Access Control Markup Language (XACML) is used to specify the policies since it is the OASIS standard language; the standard defines a declarative access control policy language implemented in XML that is able to express policies in terms of rules over different kinds of attributes. Overall, the main contributions of this work are briefly described as follows:
  • We have proposed a naming heterogeneity resolution model with the main aim to resolve naming heterogeneity, which may arise due to syntactic variations and terminological variations during policy evaluation.
  • Several matching functions have been proposed. Each matching function has been designed to cater to a certain type of variation (syntactic and/or terminological) by analyzing the terms that appear in the attribute values of a request and a policy. N-gram and WordNet are utilized to provide the syntactic similarities and the semantic relationships (synonym, hypernym, and hyponym) between terms, respectively.
  • The experimental results of the proposed solution are presented to prove its capability of identifying and resolving naming heterogeneity due to syntactic and terminological variations during policy evaluation.
The rest of the paper is organized as follows. Section 2 reviews the methods of policy evaluation proposed by previous studies. Section 3 introduces the necessary definitions and notations used throughout the paper, while Section 4 presents the proposed matching functions that aim to resolve naming heterogeneity, namely: Synonym Equal, Hyponym, Syntactical Synonym Equal, Syntactical Hyponym, Syntactical Equal, Hyponym Common Word, and Abbreviation Equal. An illustrative example based on the university domain is also given. Section 5 evaluates the performance of the proposed matching functions and compares it with the performance of a previous notable work. The last section concludes this work and sheds light on some directions for future research.

2. Related Works

Numerous studies have proposed methods for integrating policies of collaborating parties into a global policy schema, which may support complex authorization specifications and requirements of the collaborating parties [19,20,21,22,23,24,25,26]. However, policy integration methods among various collaborating parties could become very complex due to domain heterogeneity and different vocabulary utilized by organizations in specifying their policies.
Several works have affirmed that collaborative partners may need to perform policy similarity analysis by comparing their access control policies in order to determine which requests will be permitted among the policies [27,28,29,30]. Nevertheless, these works required the collaborating parties to expose their individual and independent policies, which may be misused by adversaries with the intention of revealing sensitive information contained in those policies and may lead to unintended breaches of privacy. Due to the difficulty of integrating schemas from different organizations into a global schema, solutions for dynamic policy evaluation that fit large-scale distributed systems are currently receiving particular attention [8,9,10,11].
Sun’s XACML implementation [11] is a policy evaluation mechanism that is specifically designed to provide full support for determining the applicability of policies and evaluating requests against policies in XACML. The major problems are that XACML is unable to handle the semantics associated with the elements and is unable to properly detect policy conflicts among complex policies [5].
In order to achieve an efficient XACML policy evaluation that is able to deal with a large volume of requests, several works have focused on the performance of processing requests by improving the Sun’s XACML implementation [8,9,10]. These works, which mainly focus on evaluation time, have adopted a simple string equal matching function to match the string values. However, the simple string-based method is unable to solve naming heterogeneity in a distributed environment since we cannot expect that policies belonging to different organizations are based on the same vocabulary. Nevertheless, there are also works such as [2,31,32,33,34,35,36,37,38,39,40] that made attempts to improve the policy decision point (PDP) evaluation performance with regard to evaluation time by grouping/clustering the whole rule set into several subsets; hence, resolving the issue of semantic interoperability is not part of their solutions. Meanwhile, our work focuses on the effectiveness of a policy evaluation engine in which accuracy is the main measurement used.
A number of works have supported semantic interoperability to resolve naming heterogeneity [4,7,12,13,14,15,17,18]. These works combined policy rules with ontologies in order to improve query answering support for inferring domain knowledge. However, the ontology-based knowledge management in these works is a labor-intensive, error-prone, and time-consuming task because it needs intensive human involvement during the access control policy design stage to manually map the ontology. The human perception error that might occur while performing the mapping, especially for policies of larger sizes, further hinders the full acceptance of such a solution.
All possible violations that might exist between a request and a policy are identified based on the subject, object, action, and condition attributes of the request and the policy. However, existing works are still lacking in terms of providing solutions to resolve naming heterogeneity, and it is yet to be validated whether the results returned by the evaluation engines are accurate. According to the work in [41], string-based and language-based techniques and linguistic resources can be used to analyze strings while reducing human involvement. In [42], the authors measure syntactical similarity by using N-gram to compute the number of common N-grams (i.e., sequences of N characters) between terms, while terminological analysis is performed by identifying equality between concepts using the WordNet lexical database.
This work is an extension to our previous work [42], in which WordNet is further utilized to identify equivalence and inheritance relationships between the attribute values of a request and a policy. Table 1 presents a summary of the existing naming heterogeneity methods in a distributed system as described in this section.

3. Preliminaries

In this section, we present the necessary definitions and introduce the notations that are used throughout this paper. First, we give the general definitions related to policy evaluation that have been defined either formally or informally in the literature [6,7,15,43,44,45], expressed using the notations of this paper (i.e., Definition 1 and Definition 2). This is then followed by specific definitions that are related to our work. Motivating examples are then put forward to further clarify the problem addressed in this paper.

3.1. Definitions and Notations

The definitions of policy and request are as follows:
Definition 1.
An access control policy, $Pol$, is a tuple of the form $Pol(Effect, Target, Condition)$.
Definition 2.
A request, $Req$, is presented in the form $Req(Subject, Resource, Action, Condition)$.
A Target is basically a set of conditions on a subject, a resource, and an action that must be met for a policy to apply to a given request. Subject, Resource, and Action are the components of a request and a policy, while Condition is an optional attribute used to further constrain the scope of a request or a policy. Subject identifies an individual user or a user role that can potentially invoke an action in the system. Resource can be any object for the subject to access (e.g., data or computer resources such as Web servers or database servers). Action represents any operation (e.g., delete or write a file) that can be applied to the resource. Finally, Condition is a Boolean expression that involves the environment context of the evaluation. Examples of a typical environment context are time (e.g., 2 p.m. ≤ time ≤ 5 p.m.) and spatial (e.g., location = Faculty Floor). The effect is the intended consequence of a satisfied policy (either Permit or Deny).
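For concreteness, the two definitions can be mirrored by simple data holders. The following Java sketch is purely illustrative (it is not the implementation used in this work), and the type and field names are our own.

import java.util.Map;

// Illustrative data holders mirroring Definition 1 and Definition 2 (not the
// implementation used in this work). The condition is kept as an optional
// attribute-value map; the effect is either Permit or Deny.
enum Effect { PERMIT, DENY }

record Target(String subject, String resource, String action) { }

record Policy(Effect effect, Target target, Map<String, String> condition) { }

record Request(String subject, String resource, String action,
               Map<String, String> condition) { }

Under this reading, a policy applies to a request when each component of the request corresponds to the matching component of the policy's target, as formalized in Definitions 3 and 4 below.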
We use the symbol ∨ to denote logical disjunction and ∧ to denote logical conjunction when multiple terms are joined into a single attribute of an access control policy. Our work covers the domain elementary expressions, which are classified into three categories [7], as follows:
Category 1.
One variable equality constraints, $x = c$, where $x$ is a variable and $c$ is a constant.
Category 2.
One variable inequality constraints, $x \diamond c$, where $x$ is a variable, $c$ is a constant, and $\diamond \in \{<, \leq, \geq, >\}$.
Category 3.
Compound Boolean expression constraints. This category combines Categories 1 and 2 using the logical operators ∧ or ∨.
The domain of the terms that appear in the above constraints belongs to the string data type (e.g., Email = gs23442@upm.edu.my) and the date/time data type (e.g., Time = 12:30). A policy is said to be applicable to a request if the terms of the subject, resource, action, and condition of the request correspond to the terms of the subject, resource, action, and condition of the policy, respectively. The definition of an applicable policy can be formally stated as follows:
Definition 3.
A policy, $Pol_i$, is said to be applicable to a request, $Req_j$, if and only if the subject of the request, $Subject_{Req_j}$, corresponds to the subject of the policy, $Subject_{Pol_i}$; the resource of the request, $Resource_{Req_j}$, corresponds to the resource of the policy, $Resource_{Pol_i}$; the action of the request, $Action_{Req_j}$, corresponds to the action of the policy, $Action_{Pol_i}$; and the condition of the request, $Condition_{Req_j}$, corresponds to the condition of the policy, $Condition_{Pol_i}$.
Definition 4.
A term of $Req_j$, $av_{Req_j}$, is said to correspond to a term of $Pol_i$, $av_{Pol_i}$, if and only if:
$[(av_{Req_j} = av_{Pol_i}) \vee (av_{Req_j} \equiv av_{Pol_i})] \vee SC(av_{Req_j}, av_{Pol_i}) \geq \tau$,
where ≡ is an equivalence symbol, SC is a similarity score, and τ is a similarity threshold. Here, term refers to the explicit value of an attribute of a request and a policy as specified by the user and the administrator, respectively.
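Expressed as a predicate, this definition can be sketched in Java as follows; the equivalence test and similarity score are assumed to be supplied by the WordNet-based and N-gram-based functions of Section 4, and all names here are illustrative rather than part of the published implementation.

import java.util.function.BiFunction;
import java.util.function.BiPredicate;

// Sketch of Definition 4: a request term corresponds to a policy term if the
// two terms are equal, semantically equivalent, or syntactically similar
// enough with respect to the threshold tau.
public final class Correspondence {

    public static boolean corresponds(String avReq, String avPol,
                                      BiPredicate<String, String> equivalent, // assumed WordNet-based test
                                      BiFunction<String, String, Double> sc,  // assumed N-gram similarity score
                                      double tau) {
        return avReq.equals(avPol)
                || equivalent.test(avReq, avPol)
                || sc.apply(avReq, avPol) >= tau;
    }
}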
Definition 5.
The semantic relationship between a term of $Req_j$, $av_{Req_j}$, and a term of $Pol_i$, $av_{Pol_i}$, can be one of the following: synonym, hyponym, or hypernym. Synonym is a relation that exists between $av_{Req_j}$ and $av_{Pol_i}$ when they have the same meaning. Hyponym is a relation between $av_{Req_j}$ and $av_{Pol_i}$ in which one of the terms has a more specific meaning than the other, which is the general or superordinate term. The opposite relationship of hyponym is hypernym. Other relationships such as antonym, homonym, and polysemy are not considered in this work, since an antonym is a term opposite in meaning to another, a homonym is a term that has the same spelling or pronunciation as another but a different meaning and origin, and polysemy is the coexistence of many possible meanings for a term.

3.2. Illustrative Example

This section presents an illustrative example based on the university domain. It attempts to highlight the following: (i) the various forms of terms used in a request as well as in a policy and (ii) the different types of naming heterogeneity that occur during policy evaluation. These variants of terms and this heterogeneity further hinder the process of matching and evaluating the similarities between the attribute values of a request and a policy during policy evaluation. Table 2 presents five explicit access control policies, based on Definition 1, while Table 3 presents four requests, based on Definition 2.
Altogether, there are 20 comparisons (5 × 4) in this illustrative example, but only the comparisons in which the policies are applicable to the requests are shown in Table 4, Table 5, Table 6, Table 7, Table 8, Table 9 and Table 10. From this example, it is found that different forms of terms are used in a request and a policy, as further elaborated below:
  • A compound noun is a noun that is made of two or more words. For instance, referring to Table 3, the term TeachingCourse in the resource attribute of $Req_1$ is a compound noun.
  • An abbreviation is a shortened or contracted form of a word or phrase. For instance, referring to Table 8, the term Prof, which is part of the term AssociateProf in the subject attribute of $Req_3$, is a shortened form of Professor.
  • An acronym is a word formed as an abbreviation from the initial letters of a phrase or a word [43]. For instance, referring to Table 2, RA in the subject attribute of $Pol_1$ is formed from the initial letters of ResearchAssistant in the subject attribute of $Req_2$ (Table 3).
  • A word may appear at the beginning of another word which is in the form of a compound noun. For instance, referring to Table 9, Assign in the action attribute of $Pol_4$ occurs at the beginning of AssignGrade in the action attribute of $Req_4$.
  • A word may appear at the end of another word which is in the form of a compound noun. For instance, referring to Table 6, Grades in the resource attribute of $Pol_1$ occurs at the end of ExternalGrades in the resource attribute of $Req_2$.
  • A word may contain delimiter characters (i.e., dash, underscore, capital letters, etc.). For example, referring to Table 2, the term Faculty_Member in the subject attribute of $Pol_5$ contains “_” as a delimiter.
The different forms of terms cause naming heterogeneity between a request and a policy. Based on the illustrative example, two types of naming heterogeneity among terms that need to be addressed during policy evaluation are identified: syntactic and terminological. It is crucial to be able to recognize the form of the terms so as to apply the appropriate matching functions in resolving the naming heterogeneity. String-based techniques (i.e., N-gram, prefix, and suffix), language-based techniques (i.e., tokenization), and linguistic resources [41] (i.e., WordNet) are suitable techniques for resolving naming heterogeneity automatically because of their ability to reduce human involvement.

4. Naming Heterogeneity Resolution

Figure 1 shows the general process flow of our proposed naming heterogeneity resolution model, which aims to resolve naming heterogeneity that might occur during policy evaluation. It is possible that the policies of the resource organization do not directly match a request, since the terms used in the subject, resource, action, and condition are different. This is because each organization manages its own vocabulary of policies in order to serve its own authority’s principal concerns. Hence, naming heterogeneity is one of the heterogeneity issues that should be addressed in policy evaluation, since the policies belonging to different organizations are not based on the same vocabulary.
A user may send a request to access the resources of an organization. The conflict resolution algorithm in this work compares the terms of a request, Req, against the terms of a policy, Pol. WordNet is applied as an external thesaurus with the purpose of identifying synonym, hypernym, or hyponym relationships between terms. A term is considered vague in meaning if null is returned from WordNet, and non-vague if a gloss is returned instead. A term may be vague because it contains delimiter characters; thus, a preprocessing step is needed to remove the unnecessary delimiter characters (i.e., underscore, dash, etc.), as they are considered noise. The preprocessing step is performed by tokenizing a term of a request, $av_{Req}$, and a term of a policy, $av_{Pol}$, into fragments of words if they contain tokens separated by delimiter characters. The tokens of $av_{Req}$ are stored in an array, $array_{av_{Req}}$, whereas the tokens of $av_{Pol}$ are stored in an array, $array_{av_{Pol}}$.
There are two fundamental string concatenation operators used to concatenate multiple tokens into a meaningful term: space concatenation and abuttal concatenation, as elaborated below:
  • Space concatenation is performed on the tokens of $array_{av_{Req}}$ and $array_{av_{Pol}}$. The tokens of $array_{av_{Req}}$ are concatenated with an intervening space and stored in $term_{Req}^{space}$. $term_{Req}^{space}$ is then checked against WordNet to determine whether it is a non-vague term. The same process applies to $array_{av_{Pol}}$. For example, the tokens of the $av_{Req}$ Faculty_Member, {Faculty, Member}, are concatenated with an intervening space to form a new meaningful term, Faculty Member, which is a non-vague term, as a gloss is returned from WordNet.
  • In contrast, if the new term is a vague term, abuttal concatenation is performed by concatenating the tokens into a single term without an intervening space. Take the $av_{Pol}$ UndergradClass_rc as an example. UndergradClass_rc is tokenized into {Undergrad, Class, rc} and further concatenated with an intervening space to form a new term, $av_{Pol}^{new}$, Undergrad Class rc. However, Undergrad Class rc is apparently a vague term; thus, Undergrad, Class, and rc are concatenated into a single term without an intervening space, UndergradClassrc. In this case, N-gram is applied during the matching process.
Algorithm 1 presents the preprocessing steps before running the matching functions in order to remove the unnecessary delimiters.
Algorithm 1: Preprocessing Steps Algorithm
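A minimal Java sketch of these preprocessing steps is given below, under the assumption that a helper isNonVague(term) returns true when WordNet returns a gloss for the term; the helper, the tokenization regex, and the class name are illustrative and not taken from the published implementation.

import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

// Sketch of the preprocessing described above (cf. Algorithm 1): tokenize a
// term on delimiter characters, then try space concatenation and fall back to
// abuttal concatenation if the result is still vague according to WordNet.
public class Preprocessor {

    // Assumed to be backed by a WordNet lookup: true when a gloss is returned.
    private final Predicate<String> isNonVague;

    public Preprocessor(Predicate<String> isNonVague) {
        this.isNonVague = isNonVague;
    }

    // Split on underscores, dashes, whitespace, and camel-case boundaries.
    public List<String> tokenize(String term) {
        return Arrays.stream(term.split("[_\\-\\s]+|(?<=[a-z])(?=[A-Z])"))
                     .filter(t -> !t.isEmpty())
                     .collect(Collectors.toList());
    }

    // Returns the new form of the term that is passed to the matching functions.
    public String preprocess(String term) {
        List<String> tokens = tokenize(term);
        String spaceConcatenated = String.join(" ", tokens);   // e.g., "Faculty Member"
        if (isNonVague.test(spaceConcatenated)) {
            return spaceConcatenated;
        }
        return String.join("", tokens);                        // e.g., "UndergradClassrc"
    }
}

Under this sketch, Faculty_Member becomes Faculty Member (a non-vague term), while UndergradClass_rc falls back to UndergradClassrc, mirroring the two examples above.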
The outputs of Algorithm 1 are the new forms of both $av_{Req}$ and $av_{Pol}$, denoted by $av_{Req}^{new}$ and $av_{Pol}^{new}$, respectively, which then become the input to the matching functions. A matching function returns a set of tuples of the form ⟨$av_{Req}$, $av_{Pol}$, Result⟩, where $av_{Req}$ ($av_{Pol}$) is the term of a request (policy, respectively) in its initial form, and Result is Matched if $av_{Req}^{new}$ matches $av_{Pol}^{new}$ and Not Matched otherwise. We have devised eight matching functions, each catering to a different form of a term, namely: String Equal, Synonym Equal, Hyponym, Syntactical Synonym Equal, Syntactical Hyponym, Syntactical Equal, Hyponym Common Word, and Abbreviation Equal. For String Equal, Synonym Equal, Hyponym, and Hyponym Common Word, the similarity score is equal to 1 when the matching function returns a match and 0 otherwise. The following sections present how the proposed matching functions work on string values.

4.1. String Equal

This function aims to find the similarity between two terms by analyzing their length and characters. $av_{Req}^{new}$ and $av_{Pol}^{new}$ are considered matched if they are of the same length and all the characters of the two terms match exactly. For example, the $av_{Req}^{new}$ Student matches exactly with the $av_{Pol}^{new}$ Student; the two terms are clearly a string equal match. Algorithm 2 presents the String Equal function algorithm.
Algorithm 2: String Equal Function Algorithm

4.2. Synonym Equal

This function attempts to resolve the terminological variation between two non-vague terms by analyzing the synonym relationship, based on WordNet. The proposed function uses WordNet as a dictionary to identify the synonyms of $av_{Pol}^{new}$. All synonyms of $av_{Pol}^{new}$ are retrieved from WordNet and stored in an array, $array_{Pol_{new}}^{syn}$. If $av_{Req}^{new}$ matches one of the synonyms in $array_{Pol_{new}}^{syn}$, then $av_{Req}^{new}$ is matched with $av_{Pol}^{new}$. For example, the $av_{Req}^{new}$ Undergraduate matches exactly with one of the synonyms of the $av_{Pol}^{new}$ Undergrad. Algorithm 3 presents the Synonym Equal function algorithm.
Algorithm 3: Synonym Equal Function Algorithm
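A compact sketch of this check is shown below. The synonym lookup is abstracted behind a synonymsOf function that is assumed to be backed by WordNet (for instance through a WordNet access library); the helper and class names are illustrative, and the case-insensitive comparison is our own simplification.

import java.util.Locale;
import java.util.Set;
import java.util.function.Function;

// Sketch of the Synonym Equal check described above (cf. Algorithm 3):
// match avReq_new against the WordNet synonyms of avPol_new.
public class SynonymEqual {

    // Assumed helper: returns the WordNet synonyms (synset members) of a term.
    private final Function<String, Set<String>> synonymsOf;

    public SynonymEqual(Function<String, Set<String>> synonymsOf) {
        this.synonymsOf = synonymsOf;
    }

    public boolean matches(String avReqNew, String avPolNew) {
        Set<String> arrayPolNewSyn = synonymsOf.apply(avPolNew.toLowerCase(Locale.ROOT));
        return arrayPolNewSyn.stream().anyMatch(s -> s.equalsIgnoreCase(avReqNew));
    }
}

The Hyponym function of Section 4.3 is structurally identical, except that the hyponyms of $av_{Pol}^{new}$ are retrieved from WordNet instead of its synonyms.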

4.3. Hyponym

This function aims to resolve the terminological variation between two non-vague terms by analyzing the hyponym relationship, based on WordNet. The proposed function uses WordNet as a dictionary to identify the hyponyms. All hyponyms of $av_{Pol}^{new}$ are retrieved from WordNet and stored in an array, $array_{Pol_{new}}^{hyp}$. If $av_{Req}^{new}$ matches one of the hyponyms in $array_{Pol_{new}}^{hyp}$, then $av_{Req}^{new}$ is matched with $av_{Pol}^{new}$. For example, the $av_{Req}^{new}$ Undergraduate matches exactly with one of the hyponyms of the $av_{Pol}^{new}$ Student. Algorithm 4 presents the Hyponym function algorithm.
Algorithm 4: Hyponym Function Algorithm

4.4. Syntactical Synonym Equal

This function attempts to resolve the syntactic and terminological variations between a vague term and a non-vague term by analyzing the synonym relationship. All synonyms of the non-vague term are retrieved from WordNet and stored in an array, $array_{syn}^{ter}$. The N-gram similarity measure is applied to calculate the similarity score, SC, between the vague term and each synonym of the non-vague term. If SC exceeds the similarity threshold, τ, then both terms are considered matched. The values of SC and τ are between 0 and 1. For example, consider the vague term of $av_{Req}^{new}$, UndergraduateStudent, and the non-vague term of $av_{Pol}^{new}$, Undergrad. If N-gram with trigrams (N = 3) is applied to the strings UndergraduateStudent and Undergrad, the SC between both strings is 0.38, so they would only be considered matched if τ were set to less than or equal to 0.38. Therefore, all synonyms of Undergrad are retrieved and stored in an array, $array_{syn}^{ter}$ = {undergraduate}. N-gram with trigrams (N = 3) is then applied to the string UndergraduateStudent and each synonym of Undergrad. The SC between UndergraduateStudent and one of the synonyms of Undergrad, i.e., undergraduate, is found to be 0.53, which is greater than the default value of τ, 0.5. Thus, $av_{Req}^{new}$ is matched with $av_{Pol}^{new}$. Algorithm 5 presents the Syntactical Synonym Equal function algorithm.
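The similarity score SC used by the syntactical functions can be computed with a character N-gram measure such as the trigram-based sketch below. The excerpt does not spell out the exact N-gram formula, so this sketch (a Dice coefficient over trigram sets) is an assumption and may not reproduce the exact scores quoted above (0.38, 0.53).

import java.util.HashSet;
import java.util.Locale;
import java.util.Set;

// Sketch of a trigram-based similarity score SC in [0, 1]. The exact N-gram
// variant used in the paper is not specified here; this version computes the
// Dice coefficient over the sets of character trigrams of the two strings.
public final class TrigramSimilarity {

    private static Set<String> trigrams(String s) {
        String t = s.toLowerCase(Locale.ROOT).replaceAll("\\s+", "");
        Set<String> grams = new HashSet<>();
        for (int i = 0; i + 3 <= t.length(); i++) {
            grams.add(t.substring(i, i + 3));
        }
        return grams;
    }

    public static double score(String a, String b) {
        Set<String> ga = trigrams(a);
        Set<String> gb = trigrams(b);
        if (ga.isEmpty() || gb.isEmpty()) {
            return a.equalsIgnoreCase(b) ? 1.0 : 0.0;   // degenerate case for very short strings
        }
        Set<String> common = new HashSet<>(ga);
        common.retainAll(gb);
        return 2.0 * common.size() / (ga.size() + gb.size());
    }
}

A candidate pair is then considered matched whenever score(...) ≥ τ, with τ defaulting to 0.5 as in the examples.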

4.5. Syntactical Hyponym

This function aims to resolve the syntactic and terminological variations between a vague term and a non-vague term by analyzing the hyponym relationship. All hyponyms of the non-vague term are retrieved from WordNet and stored in an array, $array_{new}^{hypm}$. The N-gram similarity measure is applied to calculate the similarity score, SC, between the vague term and each hyponym of the non-vague term. If SC exceeds the similarity threshold, τ, both terms are considered matched. The values of SC and τ are between 0 and 1. For example, consider the vague term of $av_{Req}^{new}$, AssociateProf, and the non-vague term of $av_{Pol}^{new}$, Faculty Member. The hyponym relation is transitive [46]. All hyponyms of Faculty Member are retrieved and stored in an array, $array_{new}^{hypm}$ = {professor, prof, associate professor}. N-gram with trigrams (N = 3) is applied to the string AssociateProf and each hyponym of Faculty Member. The SC between AssociateProf and one of the hyponyms of Faculty Member, associate professor, is found to be 0.5, which is equal to the default value of τ. Thus, $av_{Req}^{new}$ is matched with $av_{Pol}^{new}$. Algorithm 6 presents the Syntactical Hyponym function algorithm.
Algorithm 5: Syntactical Synonym Equal Function Algorithm
Algorithm 6: Syntactical Hyponym Function Algorithm
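Taken together, the Syntactical Synonym Equal and Syntactical Hyponym functions can be sketched as a single routine that receives the WordNet candidates of the non-vague term (its synonyms for Algorithm 5, its hyponyms for Algorithm 6) and accepts the pair if any candidate reaches the threshold. The sketch reuses the illustrative trigram score introduced in Section 4.4 and is not the published implementation.

import java.util.Set;
import java.util.function.BiFunction;

// Sketch covering the two functions above (cf. Algorithms 5 and 6): the vague
// term is compared, via the similarity score SC, against every WordNet
// candidate of the non-vague term (synonyms or hyponyms).
public class SyntacticalRelationMatch {

    private final BiFunction<String, String, Double> sc; // e.g., TrigramSimilarity::score
    private final double tau;                            // similarity threshold, default 0.5

    public SyntacticalRelationMatch(BiFunction<String, String, Double> sc, double tau) {
        this.sc = sc;
        this.tau = tau;
    }

    public boolean matches(String vagueTerm, Set<String> wordNetCandidates) {
        return wordNetCandidates.stream().anyMatch(c -> sc.apply(vagueTerm, c) >= tau);
    }
}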

4.6. Syntactical Equal

This function aims to resolve the syntactic variation between two vague terms. N-gram is applied to calculate the similarity score, SC, between $av_{Req}^{new}$ and $av_{Pol}^{new}$. If the SC between $av_{Req}^{new}$ and $av_{Pol}^{new}$ exceeds the similarity threshold, τ, both terms are considered matched. The values of SC and τ are between 0 and 1. For example, consider the vague terms of $av_{Req}^{new}$, UndergradClass, and $av_{Pol}^{new}$, UndergradClassrc. N-gram with trigrams (N = 3) is applied to the strings UndergradClass and UndergradClassrc. The SC between both strings is 0.7, which is greater than the default value of τ, 0.5. Thus, UndergradClass is matched with UndergradClassrc. Algorithm 7 presents the Syntactical Equal function algorithm.
Algorithm 7: Syntactical Equal Function Algorithm

4.7. Hyponym Common Word

This function attempts to resolve the terminological variation between two terms by analyzing the hyponym relationship. It checks whether a word appears at the beginning or at the end of another word which is in the form of a compound noun. A common word between $av_{Req}^{new}$ and $av_{Pol}^{new}$ can imply a semantic relation between them. If there is a common substring, its position provides evidence for the existence of a hyponymy [44]. Therefore, the same concept is applied to this function by assuming that there is a hyponym relationship between the two terms if both terms share a common word. If $av_{Pol}^{new}$ is a common word of $av_{Req}^{new}$, then $av_{Pol}^{new}$ is more general than $av_{Req}^{new}$; in other words, $av_{Req}^{new}$ is more specific than $av_{Pol}^{new}$. The lengths of $av_{Req}^{new}$ and $av_{Pol}^{new}$ should be greater than three before the function is performed, to avoid invalid hits returned by this function (e.g., TeachingCourse is not relevant to the term Tea) [47]. If the length of one term is shorter than the other, it is called the short term; otherwise, it is called the long term. Each token of the long term is stored in an array, $array_{longterm}$. If the short term matches one of the elements in $array_{longterm}$, then $av_{Req}^{new}$ is matched with $av_{Pol}^{new}$. For example, consider the $av_{Req}^{new}$ Graduate Student as a long term and the $av_{Pol}^{new}$ Student as a short term. Each token of Graduate Student is stored in $array_{longterm}$, {Graduate, Student}. The $av_{Pol}^{new}$ Student is found to match one of the elements of $array_{longterm}$, Student. Thus, $av_{Req}^{new}$ is matched with $av_{Pol}^{new}$. Algorithm 8 presents the Hyponym Common Word function algorithm.
Algorithm 8: Hyponym Common Word Function Algorithm
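The common-word check can be sketched as follows; the three-character length guard and the tokenization follow the description above, while the class and method names, and the camel-case splitting, are our own illustrative choices.

import java.util.Arrays;
import java.util.List;

// Sketch of the common-word check described above (cf. Algorithm 8): assume a
// hyponym relationship when the shorter term matches one of the word tokens
// of the longer compound term.
public final class HyponymCommonWord {

    public static boolean matches(String avReqNew, String avPolNew) {
        // Guard against spurious hits such as "Tea" against "TeachingCourse".
        if (avReqNew.length() <= 3 || avPolNew.length() <= 3) {
            return false;
        }
        String shortTerm = avReqNew.length() <= avPolNew.length() ? avReqNew : avPolNew;
        String longTerm  = avReqNew.length() <= avPolNew.length() ? avPolNew : avReqNew;

        // Split the long term into word tokens (space or camel-case boundaries).
        List<String> arrayLongTerm =
                Arrays.asList(longTerm.split("\\s+|(?<=[a-z])(?=[A-Z])"));
        return arrayLongTerm.stream().anyMatch(t -> t.equalsIgnoreCase(shortTerm));
    }
}

With this sketch, matches("Graduate Student", "Student") returns true, as in the example above.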

4.8. Abbreviation Equal

This function attempts to resolve the syntactic variation between two terms which may arise due to short forms. If the length of one term is shorter than the other, it is called the short form; otherwise, it is called the long form. Every character in the short form must match a character in the long form, and the matched characters in the long form must appear in the same order as the characters in the short form. An extraction process is performed on the first letter of each token in the long form, and the letters are concatenated into a new term, $term_{EValue}$. N-gram is then applied to calculate the similarity score, SC, between $term_{EValue}$ and the short form. If SC exceeds the similarity threshold, τ, then both terms are considered matched. The values of SC and τ are between 0 and 1. For example, consider the $av_{Req}^{new}$ TeachingAssistant as a long form and the $av_{Pol}^{new}$ TA as a short form. Each token of TeachingAssistant is stored in an array, $array_{longterm}$, {Teaching, Assistant}. Then, the initial letter of each token of TeachingAssistant is extracted and concatenated into a single new term, $term_{EValue}$, TA. The SC between $term_{EValue}$, TA, and the short form, TA, is 1.0, which is greater than the default value of τ, 0.5. Thus, TeachingAssistant is matched with TA. Algorithm 9 presents the Abbreviation Equal function algorithm.
Algorithm 9: Abbreviation Equal Function Algorithm
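The acronym check can be sketched as below. It extracts the initial letters of the long form's tokens and compares the result with the short form; because trigram similarity is not meaningful for two-letter strings such as TA, this sketch falls back to an exact, case-insensitive comparison for very short acronyms and otherwise reuses the illustrative TrigramSimilarity score from Section 4.4. All names are illustrative.

import java.util.Arrays;
import java.util.stream.Collectors;

// Sketch of the acronym check described above (cf. Algorithm 9): build the
// initial-letter acronym of the long form (termEValue) and compare it with
// the short form.
public final class AbbreviationEqual {

    public static boolean matches(String longForm, String shortForm, double tau) {
        // Tokenize the long form (space or camel-case boundaries) and take the initials.
        String termEValue = Arrays.stream(longForm.split("\\s+|(?<=[a-z])(?=[A-Z])"))
                                  .filter(t -> !t.isEmpty())
                                  .map(t -> t.substring(0, 1))
                                  .collect(Collectors.joining());
        // Trigram similarity degenerates for acronyms shorter than three characters,
        // so fall back to an exact, case-insensitive comparison in that case.
        if (termEValue.length() < 3 || shortForm.length() < 3) {
            return termEValue.equalsIgnoreCase(shortForm);
        }
        return TrigramSimilarity.score(termEValue, shortForm) >= tau;
    }
}

With this sketch, matches("TeachingAssistant", "TA", 0.5) returns true, mirroring the example above.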

5. Results and Discussion

Several experiments were conducted to measure the accuracy of the proposed matching functions in resolving naming heterogeneity that occurs between the attribute values of a request and a policy. These experiments aimed to show the strengths and weaknesses of the proposed solution in resolving naming heterogeneity between a request and a policy with respect to the subject, resource, action, and condition.
The proposed matching functions were implemented in Java (version 1.6.0_10), with the policies specified in the XACML policy language. The match task was first conducted manually by three professional human experts who were familiar with either database management or English linguistics. Sun’s XACML implementation [11] was chosen for comparison since several related works, such as [8,9,10,48], have selected Sun’s XACML implementation for their results comparison, with the strong justification that Sun’s XACML implementation is open source. However, these works focused on the efficiency of their engines in processing requests, whereas this work focuses on the accuracy of resolving naming heterogeneity between a request and a policy.
In this work, six sets of XACML policies (http://sourceforge.net/projects/xacmlpdp/ (accessed on 29 March 2017)) were taken from [9] that have been designed for a university and a conference management domain, namely: CodeA, CodeB, CodeC, CodeD, Continue-a, and Continue-b. Continue-a and Continue-b are designed for a conference management while CodeA, CodeB, CodeC, and CodeD are designed for a real-world web application supporting the university domain. Another two sets of policies that have been analyzed were taken from [49]. Those policies are based on the RBAC model and are designed for a university (http://www3.cs.stonybrook.edu/stoller/ccs2007/university-policy.txt (accessed on 29 March 2017)) and a health care (http://www3.cs.stonybrook.edu/stoller/ccs2007/healthcare.txt (accessed on 29 March 2017)) institution. The RBAC policies are presented in the syntax and structure of XACML with positive effects since negative authorizations are not supported in the RBAC model [3]. These sets of policies were modified by adding additional condition context since initially these policies do not contain condition context. This modification was necessary for the purpose of evaluating the accuracy of the proposed matching functions. A request-generation technique [50] was used to generate 10,000 requests at random since most of the real-world systems use less than 10,000 policies [16]. Eight sets of the modified XACML policy datasets mentioned above were used as the source to generate the random requests. Since there is a distinct lack of real request datasets in a distributed environment, the domain ontologies related to university (http://swat.cse.lehigh.edu/onto/univ-bench.owl (accessed on 29 March 2017)), conference management (http://data.semanticweb.org/ns/swc/swc2009-05-09.html (accessed on 29 March 2017)), and health care institution (https://loinc.org/discussion-documents/document-ontology/loinc-document-ontology-axisvalues?force_toc:int=1 (accessed on 29 March 2017)) domains were selected as the source to generate the random requests.
In order to measure the accuracy of the matching results in each experiment, Precision (P), Recall (R), and F-Measure (F), originating from the information retrieval field, were used [43]. Each experiment was conducted five times. The match results of P, R, and F in matching the attribute values of a request and a policy by the proposed solution and the Sun’s XACML implementation were compared to the real match results obtained by the human experts. The results were analyzed at various values of similarity thresholds. The match results of P, R, and F are presented in Table 11, Table 12, Table 13, Table 14, Table 15, Table 16, Table 17 and Table 18.
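For reference, these measures are computed in the usual information-retrieval way from the numbers of true positives (TP), false positives (FP), and false negatives (FN) observed in each experiment:

$P = \frac{TP}{TP + FP}, \qquad R = \frac{TP}{TP + FN}, \qquad F = \frac{2 \cdot P \cdot R}{P + R}.$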
The Improvement column in these tables presents the range of improvement achieved by the proposed solution as compared with the Sun’s XACML implementation with regard to P, R, and F. The range of improvement is presented by [Lowest, Highest], where Lowest = the lowest value of P (R or F) achieved by the proposed solution minus the value of P (R or F) achieved by the Sun’s XACML implementation and Highest = the highest value of P (R or F) achieved by the proposed solution minus the value of P (R or F) achieved by the Sun’s XACML implementation.
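In other words, for each measure $M \in \{P, R, F\}$, the reported range is

$[\,\min M_{proposed} - M_{Sun},\ \max M_{proposed} - M_{Sun}\,],$

where $M_{proposed}$ denotes the value achieved by the proposed solution (the minimum and maximum presumably being taken over the similarity thresholds evaluated) and $M_{Sun}$ the value achieved by the Sun's XACML implementation.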
As shown in these tables, the Sun’s XACML implementation is able to attain perfect P in matching the attribute values of subject, resource, action, and condition of a request and a policy. This indicates that the Sun’s XACML implementation never retrieved a false positive; thus, zero false positives were produced, and all the match results returned by the Sun’s XACML implementation were true positives, which are the same results as those produced by the human experts. Nevertheless, the Sun’s XACML implementation, which adopts the simple string equal matching function, only returns 0 if the strings are different and 1 if they are exactly the same; thus, it does not take the naming heterogeneity issues into consideration. Therefore, the number of false negatives of the Sun’s XACML implementation is higher than that of the proposed solution. As a consequence, the R achieved by the Sun’s XACML implementation is lower than the R achieved by the proposed solution, because most of the matches between the attribute values of subject, resource, action, and condition of a request and a policy were not returned by the Sun’s XACML implementation.
It is observed that, in the proposed solution, the result of P increases and the result of R decreases when the similarity threshold is set to a higher value. The proposed solution returned the same results of P, R, and F for different similarity thresholds for the Continue-a and Continue-b policies, since both policies use the same attribute values; the only difference between Continue-a and Continue-b is the number of policies. It was also observed that two cases prevented the proposed solution from achieving perfect precision in matching the attribute values of a request and a policy, as described below.
  • The proposed solution could not produce accurate match results when the terms being matched share similar characters but are actually not matched. For example, the terms ExternalGrades and InternalGrades that appear in the Continue-a policy are considered matched by the proposed solution. Applying N-gram with trigrams (N = 3) to the strings ExternalGrades and InternalGrades gives a similarity score of 0.6, which satisfies the similarity threshold when it is set to 0.6 or lower. However, ExternalGrades and InternalGrades are considered not matched by the human experts. Referring to Table 16 and Table 17, the proposed solution achieved a low value of P in matching the resource attribute of a request and a policy when the similarity threshold is 0.2. This is because the terms in the resource attribute of the Continue-a and Continue-b policies share some characters, making N-gram produce a similarity score higher than the similarity threshold of 0.2 but lower than 0.4. In another example, Paperreviewcontent_rc and PaperreviewinfosubmissionStatus_rc are considered matched by the proposed solution, since the similarity score of these terms is 0.27, which is higher than the similarity threshold of 0.2. However, Paperreviewcontent_rc and PaperreviewinfosubmissionStatus_rc are considered not matched by the human experts. Thus, the proposed solution produced a false positive in this case.
  • The proposed solution failed to match terms that have a semantic relationship but do not share similar characters, even though these terms are in fact matched; for example, Research Assistant in a request and Faculty Member in a policy. In this case, the proposed solution produced a false negative. According to the human experts, Research Assistant is a hyponym of Faculty Member.
From the experiments, it is obvious that the higher the similarity threshold, the higher the percentage of P in matching the attribute values of a request and a policy. Furthermore, the proposed solution achieved higher percentages of R and F compared with the Sun’s XACML implementation. This is due to the fact that N-gram and WordNet are utilized in the proposed matching functions.
For most of the datasets, the proposed solution obtained negative improvement based on Lowest calculation with respect to P in matching the resource, action, and condition attribute of a request and a policy. This is because the Sun’s XACML implementation supports simple string equal matching function and thus produced zero false positive, while the number of false positives produced by the proposed solution exceeds the number of true positives. The proposed solution achieved the highest negative improvement in Continue-a and Continue-b datasets. This is because among the datasets, Continue-a and Continue-b datasets contained the largest number of attribute values which have similarities in terms of characters but are actually not matched; thereby, the number of false positives that is produced by the proposed solution is the highest in the Continue-a and Continue-b datasets. Considering naming heterogeneity, the proposed solution resulted in lower improvement value in P but higher improvement value in R and F compared with the Sun’s XACML implementation. The reason is that the Sun’s XACML implementation is restrictive to simple string equal matching, which does not consider functions that can resolve naming heterogeneity in matching the attribute value of a request and a policy.
In addition, the proposed solution resulted in no improvement based on the Highest calculation with respect to P for all sets of policies. This is because the proposed solution achieved 100% of P in matching the attribute values of a request and a policy when the similarity threshold is set to a higher value. The Sun’s XACML implementation also achieved 100% of P for all sets of policies since the Sun’s XACML implementation never gained false positives. However, since the proposed solution is able to resolve naming heterogeneity, the number of false negatives of the proposed solution is lower than the Sun’s XACML implementation. Thus, the R and F obtained by the proposed solution for most of the policies are higher than the R and F obtained by the Sun’s XACML implementation. Therefore, we can conclude that the proposed solution is better compared with the Sun’s XACML implementation.
Figure 2 and Figure 3 present the improvement based on the Lowest calculation and Highest calculation, respectively, in retrieving the application policies achieved by the proposed solution, as compared with the Sun’s XACML implementation. Based on Figure 2, we observed that there was no improvement made in terms of P by the proposed solution and, for most of the datasets, the proposed solution obtained negative improvement. The Sun’s XACML implementation never retrieved false positives in retrieving the applicable policies since the Sun’s XACML implementation supported simple string equal matching function that does not consider naming heterogeneity; all the matches returned by the Sun’s XACML implementation in matching the attribute values of the requests and the policies are true positives, which are the same matched results as those produced by the human experts.
However, the number of false negatives of the Sun’s XACML implementation in retrieving the applicable policies is higher than that of the proposed solution. The reason is that the proposed solution is able to resolve the naming heterogeneity; thus, the proposed solution could reduce the number of false negatives in retrieving the applicable policies. This caused the R and F achieved by the proposed solution to be higher than the R and F achieved by the Sun’s XACML implementation. Thus, the proposed solution still outperforms the Sun’s XACML implementation in terms of R and F for all sets of policies.
Based on Figure 3, we observed that the proposed solution obtained no improvement in terms of P for all sets of policies. This is because the proposed solution with similarity threshold set to a higher value and the Sun’s XACML implementation, both achieved 100% of P in retrieving the applicable policies. However, since the proposed solution is able to resolve the naming heterogeneity, the number of false negatives of the proposed solution in retrieving the applicable policies is lower than the Sun’s XACML implementation. This made the R and F obtained by the proposed solution higher than the R and F obtained by the Sun’s XACML implementation. Therefore, the proposed solution still performs better than the Sun’s XACML implementation in terms of R and F for all sets of policies.

6. Conclusions

This research addresses the significant need for resolving naming heterogeneity in XACML policy evaluation. The proposed matching functions are shown to be capable of resolving naming heterogeneity between the attribute values of a request and a policy during policy evaluation by considering both syntactic and terminological variations. The proposed solution achieved higher percentages of P and F when the similarity threshold was set to a higher value. It also achieved a higher percentage of R compared with the Sun’s XACML implementation. Various experiments have been carried out, and the results confirm that our proposed solution significantly outperformed the Sun’s XACML implementation for the sets of XACML policies considered in this work. Most importantly, the results show that the improvement made by our solution in terms of Recall and F-measure, compared with the Sun’s XACML implementation, reached up to 70% and 57%, respectively.
The proposed solution can be further enhanced by considering other factors which could affect authorization decisions, such as obligations, in which certain actions should be launched once certain conditions are satisfied. In addition, the issue of providing a secure mapping of XACML policies and rules for relations with encrypted data using symmetric keys needs to be investigated. An XML encryption algorithm that uses symmetric or asymmetric keys to achieve element-level encryption before identifying policies and user rules for the encrypted XML elements is required. Last but not least, the proposed solution can be further enhanced by investigating the spatial context of a request and a policy, organized based on the logical data model used by geographic information systems (GIS).

Author Contributions

Conceptualization, T.P.K., H.I., F.S. and N.I.U.; methodology, T.P.K. and H.I.; validation, T.P.K., H.I. and A.A.A.; formal analysis, T.P.K., H.I., F.S. and N.I.U.; investigation, T.P.K., H.I., F.S., N.I.U. and A.A.A.; resources, T.P.K. and H.I.; writing—original draft preparation, T.P.K. and H.I.; writing—review and editing, H.I., F.S., N.I.U. and A.A.A.; visualization, T.P.K. and H.I.; supervision, H.I., F.S., N.I.U. and A.A.A.; project administration, H.I.; funding acquisition, H.I. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by Universiti Putra Malaysia under the Journal Publishing Initiative Year 2020 scheme (9053006).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank Universiti Putra Malaysia for funding this research work. All opinions, findings, conclusions and recommendations in this paper are those of the authors and do not necessarily reflect the views of the funding agencies. We thank the anonymous reviewers for their comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Tejada, S.; Knoblock, C.A.; Minton, S. Learning object identification rules for information integration. Inf. Syst. 2001, 26, 607–633. [Google Scholar] [CrossRef]
  2. Thilakanathan, D.; Chen, S.; Nepal, S.; Calvo, R. SafeProtect: Controlled Data Sharing With User-Defined Policies in Cloud-Based Collaborative Environment. IEEE Trans. Emerg. Top. Comput. 2015, 4, 301–315. [Google Scholar] [CrossRef]
  3. Toosi, A.N.; Calheiros, R.N.; Buyya, R. Interconnected Cloud Computing Environments: Challenges, Taxonomy, and Survey. J. ACM Comput. Surv. 2014, 7, 1–47. [Google Scholar] [CrossRef]
  4. Trivellato, D.; Zannone, N.; Glaundrup, M.; Skowronek, J.; Etalle, S. A Semantic Security Framework for Systems of Systems. Int. J. Coop. Inf. Syst. 2013, 22, 1–35. [Google Scholar] [CrossRef] [Green Version]
  5. Trivellato, D.; Spiessens, F.; Zannone, N.; Etalle, S. POLIPO: Policies & Ontologies for Interoperability, Portability, and Autonomy. In Proceedings of the 10th IEEE International Conference on Policies for Distributed Systems and Networks (POLICY), London, UK, 20–22 July 2009; pp. 110–113. [Google Scholar]
  6. Castano, S.; Ferrara, A.; Montanelli, S.; Racca, G. Semantic Information Interoperability in Open Networked Systems. In Proceedings of the International Conference on Semantics of a Networked World (LCSNW), Paris, France, 17–19 June 2004; pp. 215–230. [Google Scholar]
  7. Drozdowicz, M.; Ganzha, M.; Paprzycki, M. Semantically Enriched Data Access Policies in eHealth. J. Med. Syst. 2016, 40, 238. [Google Scholar] [CrossRef] [Green Version]
  8. Ammar, N.; Malik, Z.; Bertino, E.; Rezgui, A. XACML Policy Evaluation with Dynamic Context Handling. J. IEEE Trans. Knowl. Data Eng. 2015, 27, 2575–2588. [Google Scholar] [CrossRef]
  9. Liu, A.X.; Chen, F.; Hwang, J.; Xie, T. Designing Fast and Scalable XACML Policy Evaluation Engines. IEEE Trans. Comput. 2010, 60, 1802–1817. [Google Scholar] [CrossRef] [Green Version]
  10. Ngo, C.; Demchenko, Y.; de Laat, C. Decision Diagrams for XACML Policy Evaluation and Management. J. Comput. Secur. 2015, 49, 1–16. [Google Scholar] [CrossRef]
  11. Proctor, S. Sun’s XACML Implementation. 2004. Available online: http://sunxacml.sourceforge.net (accessed on 29 March 2017).
  12. Ciuciu, I.; Zhao, G.; Chadwick, D.W.; Reul, Q.; Meersman, R.; Vasquez, C.; Hibbert, M.; Winfield, S.; Kirkham, T. Ontology based Interoperation for Securely Shared Services: Security Concept Matching for Authorization Policy Interoperability. In Proceedings of the 4th IFIP International Conference on New Technologies, Mobility and Security (NTMS), Paris, France, 7–10 February 2011; pp. 1–5. [Google Scholar]
  13. Ferrini, R.; Bertino, E. Supporting RBAC with XACML+OWL. In Proceedings of the 14th ACM Symposium on Access Control Models and Technologies (SACMAT), Stresa, Italy, 3–5 June 2009; pp. 145–154. [Google Scholar]
  14. Hu, L.; Ying, S.; Jia, X.; Zhao, K. Towards an Approach of Semantic Access Control for Cloud Computing. In Proceedings of the 1st IEEE International Conference on Cloud Computing, Beijing, China, 1–4 December 2009; pp. 145–156. [Google Scholar]
  15. Husain, M.F.; Al-Khateeb, T.; Alam, M.; Khan, L. Ontology based policy interoperability in geo-spatial domain. Comput. Stand. Interfaces 2011, 33, 214–219. [Google Scholar] [CrossRef]
  16. Mohan, A.; Blough, D.M.; Kurc, T.; Post, A.; Saltz, J. Detection of Conflicts and Inconsistencies in Taxonomy based Authorization Policies. In Proceedings of the 2011 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Atlanta, GA, USA, 12–15 November 2011; pp. 590–594. [Google Scholar]
  17. Priebe, T.; Dobmeier, W.; Schläger, C.; Kamprath, N. Supporting Attribute based Access Control in Authorization and Authentication Infrastructures with Ontologies. J. Softw. 2007, 2, 27–38. [Google Scholar] [CrossRef]
  18. Takabi, H. A Semantic based Policy Management Framework for Cloud Computing Environments. Ph.D. Thesis, University of Pittsburgh, Pittsburgh, PA, USA, 12 July 2013. [Google Scholar]
  19. Zhao, H. Security Policy Definition and Enforcement in Distributed Systems. Ph.D. Thesis, Columbia University, New York, NY, USA, 12 September 2012. [Google Scholar]
  20. Dia, O.A.; Farkas, C. A Practical Framework for Policy Composition and Conflict Resolution. Int. J. Secur. Softw. Eng. 2012, 3, 1–26. [Google Scholar] [CrossRef]
  21. Duan, L.; Zhang, Y.; Chen, S.; Zhao, S.; Wang, S.; Liu, D.; Liu, R.P.; Cheng, B.; Chen, J. Automated Policy Combination for Secure Data Sharing in Cross-Organizational Collaborations. IEEE Access 2016, 4, 3454–3468. [Google Scholar] [CrossRef]
  22. Haguouche, S.; Jarir, Z. Generic Access Control Model and Semantic Mapping Between Heterogeneous Policies. Int. J. Technol. Diffus. 2018, 9, 52–65. [Google Scholar] [CrossRef]
  23. Ioannidis, S. Security Policy Consistency and Distributed Evaluation in Heterogeneous Environments. Ph.D. Thesis, University of Pennsylvania, Philadelphia, PA, USA, 2005. [Google Scholar]
  24. Mazzoleni, P.; Crispo, B.; Sivasubramanian, S.; Bertino, E. XACML Policy Integration Algorithms. ACM Trans. Inf. Syst. Secur. 2008, 11, 1–29. [Google Scholar] [CrossRef]
  25. Rao, P.; Lin, D.; Bertino, E.; Li, N.; Lobo, J. Fine-grained integration of access control policies. Comput. Secur. 2011, 30, 91–107. [Google Scholar] [CrossRef]
  26. Shafiq, B.; Joshi, J.B.D.; Bertino, E.; Ghafoor, A. Secure interoperation in a multidomain environment employing RBAC policies. IEEE Trans. Knowl. Data Eng. 2005, 17, 1557–1577. [Google Scholar] [CrossRef]
  27. Ferrini, R. EXAMS: An Analysis Tool for Multidomain Policy Sets. Ph.D. Thesis, University of Bologna, Bologna, Italy, 20 March 2009. [Google Scholar]
  28. Kalam, A.A.E.; Deswarte, Y.; Baina, A.; Kaaniche, M. Access Control for Collaborative Systems: A Web Services based Approach. In Proceedings of the IEEE International Conference on Web Services (ICWS), Salt Lake City, UT, USA, 9–13 July 2007; pp. 1064–1071. [Google Scholar]
  29. Lin, D.; Rao, P.; Ferrini, R.; Bertino, E.; Lobo, J. A Similarity Measure for Comparing XACML Policies. IEEE Trans. Knowl. Data Eng. 2012, 25, 1946–1959. [Google Scholar] [CrossRef]
  30. Lin, D.; Rao, P.; Bertino, E.; Lobo, J. An Approach to Evaluate Policy Similarity. In Proceedings of the 12th ACM symposium on Access Control Models and Technologies (SACMAT), Sophia Antipolis, France, 20–22 June 2007; pp. 1–10. [Google Scholar]
  31. Ahmadi, S.; Nassiri, M.; Rezvani, M. XACBench: A XACML policy benchmark. Soft Comput. 2020, 24, 16081–16096. [Google Scholar] [CrossRef]
  32. Deng, F.; Zhang, L.; Zhang, C.; Ban, H.; Wan, C.; Shi, M.; Chen, C.; Zhang, E. Establishment of rule dictionary for efficient XACML policy management. Knowl. Based Syst. 2019, 175, 26–35. [Google Scholar] [CrossRef]
  33. Deng, F.; Wang, S.; Zhang, L.; Wei, X.; Yu, J. Establishment of attribute bitmaps for efficient XACML policy evaluation. Knowl. Based Syst. 2018, 143, 93–101. [Google Scholar] [CrossRef]
  34. Díaz-López, D.; Dolera-Tormo, G.; Gómez-Mármol, F.; Martínez-Pérez, G. Managing XACML Systems in Distributed Environments through Meta-Policies. Comput. Secur. 2015, 48, 92–115. [Google Scholar] [CrossRef]
  35. Li, Y.; Deng, F. A Graph and Clustering-Based Framework for Efficient XACML Policy Evaluation. Int. J. Coop. Inf. Syst. 2020, 29, 1–17. [Google Scholar] [CrossRef]
  36. Marfia, F.; Neri, M.A.; Pellegrini, F.; Colombetti, M. Using OWL Reasoning for Evaluating XACML Policies. In Proceedings of the International Conference on E-Business and Telecommunications, Colmar, France, 20–22 July 2015; pp. 343–363. [Google Scholar]
  37. Mourad, A.; Tout, H.; Talhi, C.; Otrok, H.; Yahyaoui, H. From model-driven specification to design-level set-based analysis of XACML policies. Comput. Electr. Eng. 2016, 52, 65–79. [Google Scholar] [CrossRef]
  38. Mourad, A.; Jebbaoui, H. SBA-XACML: Set-based Approach Providing Efficient Policy Decision Process for Accessing Web Services. Expert Syst. Appl. 2014, 42, 165–178. [Google Scholar] [CrossRef]
  39. Skandhakumar, N.; Reid, J.; Salim, F.; Dawson, E. A policy model for access control using building information models. Int. J. Crit. Infrastruct. Prot. 2018, 23, 1–10. [Google Scholar] [CrossRef]
  40. Turkmen, F.; Hartog, J.D.; Ranise, S.; Zannone, N. Formal analysis of XACML policies using SMT. Comput. Secur. 2017, 66, 185–203. [Google Scholar] [CrossRef]
  41. Shvaiko, P.; Euzenat, J. A Survey of Schema-based Matching Approaches. J. Data Semant. IV 2005, 3730, 146–171. [Google Scholar]
  42. Kuang, T.P.; Ibrahim, H.; Sidi, F.; Udzir, N.I. Heterogeneity XACML Policy Evaluation Engine. In Proceedings of the Malaysian National Conference of Databases (MaNCoD), Selangor, Malaysia, 17 September 2014; pp. 230–238. [Google Scholar]
  43. Do, H.-H.; Melnik, S.; Rahm, E. Comparison of Schema Matching Evaluations. In Proceedings of the Web, Web-Services, and Database Systems, Erfurt, Germany, 7–10 October 2003; pp. 221–237. [Google Scholar]
  44. Liu, L.; Zhang, S.; Diao, L.; Cao, C. An Iterative Method of Extracting Chinese ISA Relations for Ontology Learning. J. Comput. 2010, 5, 870–877. [Google Scholar] [CrossRef]
  45. Mohan, A.; Blough, D.M. An Attribute-based Authorization Policy Framework with Dynamic Conflict Resolution. In Proceedings of the 9th Symposium on Identity and Trust on the Internet (IDTRUST), Gaithersburg, MD, USA, 13–15 April 2010; pp. 37–50. [Google Scholar]
  46. Miller, G.A. WordNet: A Lexical Database for English. Commun. ACM 1995, 38, 39–41. [Google Scholar] [CrossRef]
  47. Sabou, M.; Lopez, V.; Motta, E. Ontology Selection for the Real Semantic Web: How to Cover the Queen’s Birthday Dinner? In Proceedings of the 15th International Conference on Managing Knowledge in a World of Networks (EKAW’06), Poděbrady, Czech Republic, 2–6 October 2006; pp. 96–111. [Google Scholar]
  48. Deng, F.; Zhang, L.Y. Elimination of Policy Conflict to Improve the PDP Evaluation Performance. J. Netw. Comput. Appl. 2017, 80, 45–57. [Google Scholar] [CrossRef]
  49. Stoller, S.D.; Yang, P.; Ramakrishnan, C.R.; Gofman, M.I. Efficient Policy Analysis for Administrative Role Based Access Control. In Proceedings of the 14th ACM Conference on Computer and Communications Security (CCS), Alexandria, VA, USA, 31 October–2 November 2007; pp. 445–455. [Google Scholar]
  50. Martin, E.; Xie, T.; Yu, T. Defining and Measuring Policy Coverage in Testing Access Control Policies. In Proceedings of the International Conference on Information and Communications Security (ICICS), Raleigh, NC, USA, 4–7 December 2006; pp. 139–158. [Google Scholar]
Figure 1. The naming heterogeneity resolution model.
Figure 2. The lowest improvement in retrieving the applicable policies achieved by the proposed solution as compared with the Sun's XACML implementation.
Figure 3. The highest improvement in retrieving the applicable policies achieved by the proposed solution as compared with the Sun's XACML implementation.
Table 1. The summary of the existing naming heterogeneity methods in a distributed system (matching methods reported for the Subject, Resource, Action, and Condition attributes).
| Ref. | Subject | Resource | Action | Condition |
| [23] | String Equal | Graph Matching | String Equal | String Equal |
| [26] | Vector similarity, clustering, ontology graph matching | Vector similarity, clustering, ontology graph matching | String Equal | - |
| [24] | Equal, not equal, intersect, subset, superset | Equal, not equal, intersect, subset, superset | Equal, not equal, intersect, subset, superset | Equal, not equal, intersect, subset, superset |
| [25] | Addition, subtraction, negation, domain projection | Addition, subtraction, negation, domain projection | Addition, subtraction, negation, domain projection | Addition, subtraction, negation, domain projection |
| [41] | Addition, subtraction, intersection, precedence, negation, domain projection | Addition, subtraction, intersection, precedence, negation, domain projection | Addition, subtraction, intersection, precedence, negation, domain projection | Addition, subtraction, intersection, precedence, negation, domain projection |
| [20] | Intersection, scoping restriction, set difference | Intersection, scoping restriction, set difference | Intersection, scoping restriction, set difference | Intersection, scoping restriction, set difference |
| [21] | String Equal | String Equal | String Equal | Algebraic |
| [22] | Ontology graph matching | Ontology graph matching | Ontology graph matching | Ontology graph matching |
| [30] | Ontology graph matching | Ontology graph matching | Ontology graph matching | Ontology graph matching |
| [29] | Domain specific thesauri, WordNet, ontology graph matching | Domain specific thesauri, WordNet, ontology graph matching | Domain specific thesauri, WordNet, ontology graph matching | Domain specific thesauri, WordNet, ontology graph matching |
| [27] | Domain specific thesauri, WordNet, ontology graph matching | Domain specific thesauri, WordNet, ontology graph matching | Domain specific thesauri, WordNet, ontology graph matching | Domain specific thesauri, WordNet, ontology graph matching |
| [11] | String Equal | String Equal | String Equal | String Equal |
| [18] | Jena and Pellet reasoner | Jena and Pellet reasoner | Jena and Pellet reasoner | Jena and Pellet reasoner |
| [8] | String Equal | String Equal | String Equal | String Equal |
| [9] | String Equal | String Equal | String Equal | String Equal |
| [10] | String Equal | String Equal | String Equal | String Equal |
| [12] | JaroWinkler, TF-IDF, WordNet, user dictionary, ontology graph matching | JaroWinkler, TF-IDF, WordNet, user dictionary, ontology graph matching | JaroWinkler, TF-IDF, WordNet, user dictionary, ontology graph matching | - |
| [14] | Ontology graph matching | Ontology graph matching | Ontology graph matching | - |
| [13] | Pellet reasoner | Pellet reasoner | Pellet reasoner | Pellet reasoner |
| [15] | Ontology graph matching | Ontology graph matching | Ontology graph matching | - |
| [17] | Jena and Pellet reasoner | Jena and Pellet reasoner | Jena and Pellet reasoner | Jena and Pellet reasoner |
| [4] | Jena reasoner | Jena reasoner | Jena reasoner | Jena reasoner |
| [7] | Ontology graph matching | Ontology graph matching | Ontology graph matching | Ontology graph matching |
| [42] | WordNet, N-gram | WordNet, N-gram | WordNet, N-gram | WordNet, N-gram |
Table 2. The XACML policies applied in the University.
| Policy No. | Effect | Subject | Resource | Action | Condition |
| Pol1 | Permit | RA | Grades | Assign ∨ View | (Location = Association) ∧ (Time ≥ 12 p.m. ∧ Time ≤ 2 p.m.) ∧ (Email = upm.edu.my) |
| Pol2 | Deny | Student | Course | Assign ∨ View | (Location = Department) ∧ (Time ≥ 12 p.m. ∧ Time ≤ 1 p.m.) ∧ (Email = upm.edu.my) |
| Pol3 | Permit | Undergrad | Course | View | (Location = Department) ∧ (Time ≥ 12 p.m. ∧ Time ≤ 1 p.m.) ∧ (Email = upm.edu.my) |
| Pol4 | Permit | AssociateProfessor | Grades | SubmitGradeChange ∨ SubmitGrade ∨ Assign ∨ View | (Location = GraduateSchool) ∧ (Time ≥ 12 p.m. ∧ Time ≤ 1 p.m.) ∧ (Email = upm.edu.my) |
| Pol5 | Deny | Faculty_Member | Grades | Assign ∨ View | (Location = School) ∧ (Time ≥ 12 p.m. ∧ Time ≤ 1 p.m.) |
Table 3. The requests for policy evaluation.
| Request No. | Subject | Resource | Action | Condition |
| Req1 | Undergraduate Student | Teaching Course | View | (Location = University Department) ∧ (Time = 12:30 p.m.) ∧ (Email = gs23442@upm.edu.my) |
| Req2 | ResearchAssistant | ExternalGrades | Assign | (Location = Institute) ∧ (Time = 1:30 p.m.) ∧ (Email = gs23442@upm.edu.my) |
| Req3 | AssociateProf | Grades | Assign | (Location = GraduateSchool) ∧ (Time = 12:30 p.m.) |
| Req4 | Faculty_Member | Grades | AssignGrade | (Location = School) ∧ (Time = 12:30 p.m.) |
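To make the matching problem concrete, the sketch below encodes Pol3 from Table 2 and Req1 from Table 3 as plain Python dictionaries (a hypothetical in-memory form, not the engine's actual XACML encoding; the Condition attributes are omitted for brevity) and evaluates them with the exact string-equal comparison used by the Sun's XACML implementation. Because "Undergraduate Student" is not character-for-character equal to "Undergrad", the applicable policy is missed, which is the kind of false negative the proposed matching functions aim to avoid.

```python
# Hypothetical in-memory form of Pol3 (Table 2) and Req1 (Table 3);
# the real engine consumes XACML XML, this dictionary form is only illustrative.
pol3 = {
    "Effect": "Permit",
    "Subject": ["Undergrad"],
    "Resource": ["Course"],
    "Action": ["View"],
}

req1 = {
    "Subject": "Undergraduate Student",
    "Resource": "Teaching Course",
    "Action": "View",
}

def string_equal_match(request: dict, policy: dict) -> bool:
    """Baseline evaluation: every request attribute value must be an
    exact (string-equal) member of the corresponding policy values."""
    return all(request[attr] in policy[attr]
               for attr in ("Subject", "Resource", "Action"))

# "Undergraduate Student" != "Undergrad" and "Teaching Course" != "Course",
# so the baseline misses this applicable policy (a false negative).
print(string_equal_match(req1, pol3))  # -> False
```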
Table 4. The mapping results between Req1 and Pol2.
| Req1 | Pol2 | Result | Type of Variations |
| Subject = Undergraduate Student | Subject = Student | Match | Terminological |
| Resource = Teaching Course | Resource = Course | Match | Terminological |
| Action = View | Action = Assign | Not Match | - |
| | Action = View | Match | Syntactic |
| Location = University Department | Location = Department | Match | Terminological |
| Time = 12:30 p.m. | Time ≥ 12 p.m. ∧ Time ≤ 1 p.m. | Match | - |
| Email = gs23442@upm.edu.my | Email = upm.edu.my | Match | Terminological |
Table 5. The mapping results between Req1 and Pol3.
| Req1 | Pol3 | Result | Type of Variations |
| Subject = Undergraduate Student | Subject = Undergrad | Match | Syntactic |
| Resource = Teaching Course | Resource = Course | Match | Terminological |
| Action = View | Action = View | Match | Syntactic |
| Location = University Department | Location = Department | Match | Terminological |
| Time = 12:30 p.m. | Time ≥ 12 p.m. ∧ Time ≤ 1 p.m. | Match | - |
| Email = gs23442@upm.edu.my | Email = upm.edu.my | Match | Terminological |
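Pairs such as Undergraduate Student/Undergrad in Table 5 are labelled as syntactic variations. The following minimal sketch shows one way a subset of such variations (truncations and small spelling differences) could be scored, using a character-bigram Dice similarity compared against a similarity threshold τ; the function names and the bigram measure are illustrative assumptions rather than the paper's exact matching function, and abbreviation pairs such as RA/ResearchAssistant (Table 6) would need an additional rule that is not shown here.

```python
def bigrams(value: str) -> set:
    """Character bigrams of a case- and space-normalized attribute value."""
    v = "".join(value.lower().split())
    return {v[i:i + 2] for i in range(len(v) - 1)}

def syntactic_similarity(a: str, b: str) -> float:
    """Dice coefficient over character bigrams (1.0 for identical strings)."""
    ba, bb = bigrams(a), bigrams(b)
    if not ba or not bb:
        return 1.0 if a.lower() == b.lower() else 0.0
    return 2 * len(ba & bb) / (len(ba) + len(bb))

def syntactic_match(a: str, b: str, tau: float) -> bool:
    """Treat the pair as a syntactic match when the score reaches threshold tau."""
    return syntactic_similarity(a, b) >= tau

print(round(syntactic_similarity("Undergraduate Student", "Undergrad"), 2))  # -> 0.62
print(syntactic_match("Undergraduate Student", "Undergrad", 0.6))            # -> True
print(round(syntactic_similarity("View", "Assign"), 2))                      # -> 0.0
```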
Table 6. The mapping results between Req2 and Pol1.
| Req2 | Pol1 | Result | Type of Variations |
| Subject = ResearchAssistant | Subject = RA | Match | Syntactic |
| Resource = ExternalGrades | Resource = Grades | Match | Syntactic |
| Action = Assign | Action = Assign | Match | Syntactic |
| | Action = View | Not Match | - |
| Location = Institute | Location = Association | Match | Terminological |
| Time = 1:30 p.m. | Time ≥ 12 p.m. ∧ Time ≤ 2 p.m. | Match | - |
| Email = gs23442@upm.edu.my | Email = upm.edu.my | Match | Terminological |
Table 7. The mapping results between Req3 and Pol4.
| Req3 | Pol4 | Result | Type of Variations |
| Subject = AssociateProf | Subject = AssociateProfessor | Match | Syntactic |
| Resource = Grades | Resource = Grades | Match | Syntactic |
| Action = Assign | Action = Assign | Match | Syntactic |
| | Action = View | Not Match | - |
| | Action = SubmitGrade | Not Match | - |
| | Action = SubmitGradeChange | Not Match | - |
| Location = GraduateSchool | Location = GraduateSchool | Match | Syntactic |
| Time = 12:30 p.m. | Time ≥ 12 p.m. ∧ Time ≤ 1 p.m. | Match | - |
Table 8. The mapping results between Req3 and Pol5.
| Req3 | Pol5 | Result | Type of Variations |
| Subject = AssociateProf | Subject = Faculty_Member | Match | Terminological |
| Resource = Grades | Resource = Grades | Match | Syntactic |
| Action = Assign | Action = Assign | Match | Syntactic |
| | Action = View | Not Match | - |
| Location = GraduateSchool | Location = School | Match | Terminological |
| Time = 12:30 p.m. | Time ≥ 12 p.m. ∧ Time ≤ 1 p.m. | Match | - |
Table 9. The mapping results between Req4 and Pol4.
| Req4 | Pol4 | Result | Type of Variations |
| Subject = Faculty_Member | Subject = AssociateProfessor | Match | Terminological |
| Resource = Grades | Resource = Grades | Match | Syntactic |
| Action = AssignGrade | Action = Assign | Match | Terminological |
| | Action = View | Not Match | - |
| | Action = SubmitGrade | Not Match | - |
| | Action = SubmitGradeChange | Not Match | - |
| Location = School | Location = GraduateSchool | Match | Terminological |
| Time = 12:30 p.m. | Time ≥ 12 p.m. ∧ Time ≤ 1 p.m. | Match | - |
Table 10. The mapping results between Req4 and Pol5.
| Req4 | Pol5 | Result | Type of Variations |
| Subject = Faculty_Member | Subject = Faculty_Member | Match | Syntactic |
| Resource = Grades | Resource = Grades | Match | Syntactic |
| Action = AssignGrade | Action = Assign | Match | Terminological |
| | Action = View | Not Match | - |
| Location = School | Location = School | Match | Syntactic |
| Time = 12:30 p.m. | Time ≥ 12 p.m. ∧ Time ≤ 1 p.m. | Match | - |
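Terminological variations such as Institute/Association in Table 6 cannot be detected by character-level comparison and call for a lexical resource. The sketch below is an illustrative stand-in rather than the exact knowledge-based function of the proposed solution: it assumes NLTK's WordNet interface is available (after running nltk.download('wordnet')) and treats two attribute values as terminologically related when their best WordNet path similarity reaches the threshold τ; compound values such as AssignGrade would first have to be tokenized.

```python
# Requires: pip install nltk; then once: import nltk; nltk.download('wordnet')
from itertools import product
from nltk.corpus import wordnet as wn

def terminological_similarity(a: str, b: str) -> float:
    """Best WordNet path similarity over all sense pairs of the two terms."""
    synsets_a = wn.synsets(a.lower())
    synsets_b = wn.synsets(b.lower())
    scores = [
        s1.path_similarity(s2) or 0.0          # None (incomparable senses) -> 0.0
        for s1, s2 in product(synsets_a, synsets_b)
    ]
    return max(scores, default=0.0)

def terminological_match(a: str, b: str, tau: float) -> bool:
    """Treat the pair as terminologically related when the score reaches tau."""
    return terminological_similarity(a, b) >= tau

# Example pair from Table 6; the score depends on the installed WordNet version.
print(terminological_match("institute", "association", 0.2))
```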
Table 11. Precision (P), Recall (R), and F-measure (F) of the proposed solution with different similarity thresholds and the Sun's XACML implementation for the CodeA policy.
| Evaluation Metric | Attributes | Proposed Solution (%), τ = 0.2 | τ = 0.4 | τ = 0.6 | τ = 0.8 | τ = 1.0 | Sun's XACML Implementation (%) | Improvement [Lowest, Highest] |
| Precision (P) | Subject | 100 | 100 | 100 | 100 | 100 | 100 | [0, 0] |
| | Resource | 56.57 * | 56.57 * | 56.57 * | 100 | 100 | 100 | [−43.43, 0] |
| | Action | 100 | 100 | 100 | 100 | 100 | 100 | [0, 0] |
| | Condition | 86.67 * | 93.55 * | 100 | 100 | 100 | 100 | [−13.33, 0] |
| Recall (R) | Subject | 52.22 | 42.22 | 42.22 | 42.22 | 42.22 | 5.56 | [36.66, 46.66] |
| | Resource | 100 | 100 | 100 | 100 | 100 | 100 | [0, 0] |
| | Action | 88.33 | 56.67 | 43.33 | 43.33 | 43.33 | 20.00 | [23.33, 68.33] |
| | Condition | 33.12 | 19.05 | 18.47 | 18.47 | 18.47 | 1.27 | [17.20, 31.85] |
| F-measure (F) | Subject | 68.61 | 59.38 | 59.38 | 59.38 | 59.38 | 10.53 | [48.85, 58.08] |
| | Resource | 72.26 * | 72.26 * | 72.26 * | 100 | 100 | 100 | [−27.74, 0] |
| | Action | 93.81 | 72.34 | 60.47 | 60.47 | 60.47 | 33.33 | [27.14, 60.48] |
| | Condition | 47.93 | 31.65 | 31.18 | 31.18 | 31.18 | 2.52 | [28.66, 45.41] |
* The proposed solution match results are worse than the Sun's XACML implementation match results.
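As a worked check of how the figures in Tables 11–18 fit together, the snippet below recomputes two entries of Table 11 for the Subject attribute of the CodeA policy: the F-measure at τ = 0.2 from the standard definition F = 2PR/(P + R), which reproduces the reported value, and the improvement interval as the differences between the smallest and largest Recall of the proposed solution across the five thresholds and the Recall of the Sun's XACML implementation.

```python
# Precision and Recall (%) for the Subject attribute of the CodeA policy (Table 11).
precision_at_02 = 100.00
recall_proposed = {0.2: 52.22, 0.4: 42.22, 0.6: 42.22, 0.8: 42.22, 1.0: 42.22}
recall_sun = 5.56

def f_measure(p: float, r: float) -> float:
    """Harmonic mean of Precision and Recall, rounded to two decimals."""
    return round(2 * p * r / (p + r), 2)

def improvement_interval(proposed: dict, baseline: float) -> tuple:
    """[Lowest, Highest] improvement of the proposed solution over the baseline."""
    deltas = [round(v - baseline, 2) for v in proposed.values()]
    return min(deltas), max(deltas)

print(f_measure(precision_at_02, recall_proposed[0.2]))   # -> 68.61
print(improvement_interval(recall_proposed, recall_sun))  # -> (36.66, 46.66)
```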
Table 12. Precision (P), Recall (R), and F-measure (F) of the proposed solution with different similarity thresholds and the Sun's XACML implementation for the CodeB policy.
| Evaluation Metric | Attributes | Proposed Solution (%), τ = 0.2 | τ = 0.4 | τ = 0.6 | τ = 0.8 | τ = 1.0 | Sun's XACML Implementation (%) | Improvement [Lowest, Highest] |
| Precision (P) | Subject | 100 | 100 | 100 | 100 | 100 | 100 | [0, 0] |
| | Resource | 53.94 * | 53.94 * | 53.94 * | 100 | 100 | 100 | [−46.06, 0] |
| | Action | 100 | 100 | 100 | 100 | 100 | 100 | [0, 0] |
| | Condition | 97.53 * | 100 | 100 | 100 | 100 | 100 | [−2.47, 0] |
| Recall (R) | Subject | 53.92 | 45.10 | 45.10 | 45.10 | 45.10 | 12.70 | [32.40, 41.22] |
| | Resource | 100 | 100 | 100 | 100 | 100 | 100 | [0, 0] |
| | Action | 93.20 | 56.31 | 40.78 | 40.78 | 40.78 | 13.59 | [27.19, 79.61] |
| | Condition | 31.98 | 25.51 | 25.51 | 25.51 | 25.51 | 5.26 | [20.25, 26.72] |
| F-measure (F) | Subject | 70.06 | 62.16 | 62.16 | 62.16 | 62.16 | 22.61 | [39.55, 47.45] |
| | Resource | 70.08 * | 70.08 * | 70.08 * | 100 | 100 | 100 | [−29.92, 0] |
| | Action | 96.48 | 72.05 | 57.93 | 57.93 | 57.93 | 23.93 | [34.00, 72.55] |
| | Condition | 48.17 | 40.65 | 40.65 | 40.65 | 40.65 | 10.00 | [30.65, 38.17] |
* The proposed solution match results are worse than the Sun's XACML implementation match results.
Table 13. Precision (P), Recall (R), and F-measure (F) of the proposed solution with different similarity thresholds and the Sun's XACML implementation for the CodeC policy.
| Evaluation Metric | Attributes | Proposed Solution (%), τ = 0.2 | τ = 0.4 | τ = 0.6 | τ = 0.8 | τ = 1.0 | Sun's XACML Implementation (%) | Improvement [Lowest, Highest] |
| Precision (P) | Subject | 100 | 100 | 100 | 100 | 100 | 100 | [0, 0] |
| | Resource | 50.00 * | 50.00 * | 50.00 | 100 | 100 | 100 | [−50.00, 0] |
| | Action | 100 | 100 | 100 | 100 | 100 | 100 | [0, 0] |
| | Condition | 92.36 * | 98.04 * | 100 | 100 | 100 | 100 | [−7.64, 0] |
| Recall (R) | Subject | 53.92 | 45.10 | 45.10 | 45.10 | 45.10 | 12.75 | [32.35, 41.17] |
| | Resource | 100 | 100 | 100 | 100 | 100 | 100 | [0, 0] |
| | Action | 95.21 | 56.16 | 39.73 | 39.73 | 39.73 | 10.96 | [28.77, 84.25] |
| | Condition | 60.92 | 46.01 | 42.02 | 42.02 | 42.02 | 11.34 | [30.68, 49.58] |
| F-measure (F) | Subject | 70.06 | 62.16 | 62.16 | 62.16 | 62.16 | 22.61 | [39.55, 47.45] |
| | Resource | 66.67 * | 66.67 * | 66.67 * | 100 | 100 | 100 | [−33.33, 0] |
| | Action | 97.54 | 71.93 | 56.86 | 56.86 | 56.86 | 19.75 | [37.11, 77.79] |
| | Condition | 73.42 | 62.63 | 59.17 | 59.17 | 59.17 | 20.38 | [38.79, 53.04] |
* The proposed solution match results are worse than the Sun's XACML implementation match results.
Table 14. Precision (P), Recall (R), and F-measure (F) of the proposed solution with different similarity thresholds and the Sun's XACML implementation for the CodeD policy.
| Evaluation Metric | Attributes | Proposed Solution (%), τ = 0.2 | τ = 0.4 | τ = 0.6 | τ = 0.8 | τ = 1.0 | Sun's XACML Implementation (%) | Improvement [Lowest, Highest] |
| Precision (P) | Subject | 100 | 100 | 100 | 100 | 100 | 100 | [0, 0] |
| | Resource | 53.94 * | 53.94 * | 53.94 * | 100 | 100 | 100 | [−46.06, 0] |
| | Action | 100 | 100 | 100 | 100 | 100 | 100 | [0, 0] |
| | Condition | 93.17 * | 98.36 * | 100 | 100 | 100 | 100 | [−6.83, 0] |
| Recall (R) | Subject | 38.51 | 31.76 | 31.76 | 31.76 | 31.76 | 9.46 | [22.30, 29.05] |
| | Resource | 100 | 100 | 100 | 100 | 100 | 100 | [0, 0] |
| | Action | 91.41 | 56.44 | 41.72 | 41.72 | 41.72 | 15.95 | [25.77, 75.46] |
| | Condition | 70.22 | 47.08 | 44.12 | 44.12 | 44.12 | 13.60 | [30.52, 56.62] |
| F-measure (F) | Subject | 55.61 | 48.21 | 48.21 | 48.21 | 48.21 | 17.28 | [30.93, 38.33] |
| | Resource | 70.08 * | 70.08 * | 70.08 * | 100 | 100 | 100 | [−29.92, 0] |
| | Action | 95.51 | 72.16 | 58.87 | 58.87 | 58.87 | 27.51 | [31.36, 68.00] |
| | Condition | 80.08 | 63.68 | 61.22 | 61.22 | 61.22 | 23.95 | [37.27, 56.13] |
* The proposed solution match results are worse than the Sun's XACML implementation match results.
Table 15. Precision (P), Recall (R), and F-measure (F) of the proposed solution with different similarity thresholds and the Sun's XACML implementation for the UniversityStoller policy.
| Evaluation Metric | Attributes | Proposed Solution (%), τ = 0.2 | τ = 0.4 | τ = 0.6 | τ = 0.8 | τ = 1.0 | Sun's XACML Implementation (%) | Improvement [Lowest, Highest] |
| Precision (P) | Subject | 100 | 100 | 100 | 100 | 100 | 100 | [0, 0] |
| | Resource | 65.70 * | 91.67 * | 100 | 100 | 100 | 100 | [−34.30, 0] |
| | Action | 61.07 * | 100 | 100 | 100 | 100 | 100 | [−38.93, 0] |
| | Condition | 89.69 * | 92.85 * | 98.98 * | 100 | 100 | 100 | [−10.31, 0] |
| Recall (R) | Subject | 50.52 | 35.01 | 34.98 | 34.98 | 34.98 | 17.01 | [17.97, 33.51] |
| | Resource | 43.86 | 36.11 | 33.25 | 33.25 | 33.25 | 31.33 | [1.92, 12.53] |
| | Action | 61.66 | 59.45 | 59.45 | 59.45 | 59.45 | 40.55 | [18.90, 21.11] |
| | Condition | 47.98 | 39.04 | 38.02 | 37.04 | 37.04 | 9.13 | [27.91, 38.85] |
| F-measure (F) | Subject | 67.12 | 51.86 | 51.83 | 51.83 | 51.83 | 29.07 | [22.76, 38.05] |
| | Resource | 52.60 | 51.81 | 49.91 | 49.91 | 49.91 | 47.71 | [2.20, 4.89] |
| | Action | 61.36 | 74.57 | 74.57 | 74.57 | 74.57 | 57.70 | [3.66, 16.87] |
| | Condition | 62.52 | 54.97 | 54.93 | 54.06 | 54.06 | 16.73 | [37.33, 45.79] |
* The proposed solution match results are worse than the Sun's XACML implementation match results.
Table 16. Precision (P), Recall (R), and F-measure (F) of the proposed solution with different similarity thresholds and the Sun's XACML implementation for the Continue-a policy.
| Evaluation Metric | Attributes | Proposed Solution (%), τ = 0.2 | τ = 0.4 | τ = 0.6 | τ = 0.8 | τ = 1.0 | Sun's XACML Implementation (%) | Improvement [Lowest, Highest] |
| Precision (P) | Subject | 100 | 100 | 100 | 100 | 100 | 100 | [0, 0] |
| | Resource | 13.33 * | 62.07 * | 82.35 * | 100 | 100 | 100 | [−86.67, 0] |
| | Action | 100 | 100 | 100 | 100 | 100 | 100 | [0, 0] |
| | Condition | 100 | 100 | 100 | 100 | 100 | 100 | [0, 0] |
| Recall (R) | Subject | 66.12 | 66.12 | 66.12 | 66.12 | 66.12 | 50.00 | [16.12, 16.12] |
| | Resource | 100 | 100 | 77.78 | 77.78 | 77.78 | 47.22 | [30.56, 52.78] |
| | Action | 74.07 | 74.07 | 74.07 | 74.07 | 74.07 | 74.07 | [0, 0] |
| | Condition | 100 | 100 | 100 | 100 | 100 | 100 | [0, 0] |
| F-measure (F) | Subject | 79.60 | 79.60 | 79.60 | 79.60 | 79.60 | 99.00 | [19.40, 19.40] |
| | Resource | 23.53 | 76.60 | 80.00 | 97.15 | 97.15 | 64.15 | [−40.62, 33.00] |
| | Action | 85.10 | 85.10 | 85.10 | 85.10 | 85.10 | 85.10 | [0, 0] |
| | Condition | 100 | 100 | 100 | 100 | 100 | 100 | [0, 0] |
* The proposed solution match results are worse than the Sun's XACML implementation match results.
Table 17. Precision (P), Recall (R), and F-measure (F) of the proposed solution with different similarity thresholds and the Sun's XACML implementation for the Continue-b policy.
| Evaluation Metric | Attributes | Proposed Solution (%), τ = 0.2 | τ = 0.4 | τ = 0.6 | τ = 0.8 | τ = 1.0 | Sun's XACML Implementation (%) | Improvement [Lowest, Highest] |
| Precision (P) | Subject | 100 | 100 | 100 | 100 | 100 | 100 | [0, 0] |
| | Resource | 13.33 * | 62.07 * | 82.35 * | 100 | 100 | 100 | [−86.67, 0] |
| | Action | 100 | 100 | 100 | 100 | 100 | 100 | [0, 0] |
| | Condition | 100 | 100 | 100 | 100 | 100 | 100 | [0, 0] |
| Recall (R) | Subject | 66.12 | 66.12 | 66.12 | 66.12 | 66.12 | 50.00 | [16.12, 16.12] |
| | Resource | 100 | 100 | 77.78 | 77.78 | 77.78 | 47.22 | [30.56, 52.78] |
| | Action | 74.07 | 74.07 | 74.07 | 74.07 | 74.07 | 74.07 | [0, 0] |
| | Condition | 100 | 100 | 100 | 100 | 100 | 100 | [0, 0] |
| F-measure (F) | Subject | 79.60 | 79.60 | 79.60 | 79.60 | 79.60 | 99.00 | [19.40, 19.40] |
| | Resource | 23.53 | 76.60 | 80.00 | 97.15 | 97.15 | 64.15 | [−40.62, 33.00] |
| | Action | 85.10 | 85.10 | 85.10 | 85.10 | 85.10 | 85.10 | [0, 0] |
| | Condition | 100 | 100 | 100 | 100 | 100 | 100 | [0, 0] |
* The proposed solution match results are worse than the Sun's XACML implementation match results.
Table 18. Precision (P), Recall (R), and F-measure (F) of the proposed solution with different similarity thresholds and the Sun's XACML implementation for the HealthCare policy.
| Evaluation Metric | Attributes | Proposed Solution (%), τ = 0.2 | τ = 0.4 | τ = 0.6 | τ = 0.8 | τ = 1.0 | Sun's XACML Implementation (%) | Improvement [Lowest, Highest] |
| Precision (P) | Subject | 100 | 100 | 100 | 100 | 100 | 100 | [0, 0] |
| | Resource | 68.20 * | 69.53 * | 100 | 100 | 100 | 100 | [−31.80, 0] |
| | Action | 60.71 * | 100 | 100 | 100 | 100 | 100 | [−39.29, 0] |
| | Condition | 81.04 * | 97.35 * | 100 | 100 | 100 | 100 | [−18.96, 0] |
| Recall (R) | Subject | 88.37 | 86.82 | 82.95 | 82.95 | 82.95 | 45.51 | [37.44, 42.86] |
| | Resource | 26.56 | 17.25 | 15.45 | 15.45 | 15.45 | 13.76 | [1.69, 12.80] |
| | Action | 80.68 | 80.68 | 80.68 | 80.68 | 80.68 | 80.68 | [0, 0] |
| | Condition | 45.13 | 42.69 | 42.69 | 42.69 | 42.69 | 23.20 | [19.49, 21.93] |
| F-measure (F) | Subject | 93.83 | 92.95 | 90.68 | 90.68 | 90.68 | 63.49 | [27.19, 30.34] |
| | Resource | 38.44 | 27.53 | 26.76 | 26.76 | 26.76 | 24.19 | [2.57, 14.25] |
| | Action | 69.28 * | 89.31 | 89.31 | 89.31 | 89.31 | 89.31 | [−20.03, 0] |
| | Condition | 57.97 | 59.35 | 59.84 | 59.84 | 59.84 | 37.66 | [20.31, 22.18] |
* The proposed solution match results are worse than the Sun's XACML implementation match results.