Enhancing Utility in Anonymized Data against the Adversary’s Background Knowledge

Abstract: Recent studies have shown that data are among the most valuable resources for making government policies and business decisions in different organizations. In privacy preservation, the challenging task is to keep an individual's data protected and private while ensuring that the modified data retain sufficient accuracy for answering data mining queries. However, it is difficult to guarantee that re-identification of a record is impossible, because the adversary may have background knowledge from different sources. The k-anonymity model is prone to attribute disclosure, while the t-closeness model does not prevent identity disclosure. Moreover, neither model considers background knowledge attacks. This paper proposes an anonymization algorithm, the utility-based hierarchical algorithm (UHRA), for producing k-anonymous t-closed data that can prevent background knowledge attacks. The proposed framework satisfies the privacy requirements using a hierarchical approach. Finally, to enhance the utility of the anonymized data, records are moved between anonymized groups as long as the requirements of the privacy model are not violated. Our experiments indicate that the proposed algorithm outperforms its counterparts in terms of data utility and privacy.


Introduction
In recent years, Big Data and Internet of Things (IoT) technologies have received serious attention because of advances in information and communications. Individual data can provide valuable insights for business decisions to recipients such as Apple, Facebook, Twitter, LinkedIn, and some government departments. However, sensitive information, such as personal details about an individual's health, bank accounts, insurance, etc., may be at risk of disclosure if the recipient performs any analysis on the collected data. Therefore, organizations have a responsibility to develop privacy techniques that protect individual identities while disseminating data. Privacy-preserving data publishing (PPDP) is a technique used for disseminating user logs in search engines, graph data such as social networks, high-dimensional data such as DNA strings, and patient data at various stages of their diseases while preserving privacy [1][2][3][4][5][6].
In privacy-preserving data publishing (PPDP), data anonymization is one of the approaches used to protect users' sensitive information [2]. Anonymization refers to producing a modified version of the original microdata that does not display confidential personal information or any relations between individuals and records in the data. The simplest approach involves removing identity attributes, such as the names and national IDs of individuals [2]. However, this technique has been proven insecure [7]. Samarati [7] demonstrated that it is possible to identify 87% of US residents using a combination of their date of birth, five-digit zip code, and gender. The challenge of privacy preservation during data publishing is to release a modified form of the data that reveals neither identities nor sensitive information (i.e., the data are no longer personal). It is worth noting that some reduction in the accuracy of data analysis on anonymized data is inevitable; this reduction constitutes information loss.
Anonymization frameworks typically consist of an anonymization algorithm and a privacy model. Privacy models can be classified into two categories: syntactic and semantic. Syntactic models divide the data into several groups, called quasi-identifier (QI) groups, such that the records in each QI group share the same QI attribute values. (Assuming that data are stored in a relational data model, each record has three types of attributes: identifiers, QIs, and sensitive attributes (SA). For instance, in a patient dataset, the name is an identifier attribute; age, gender, and geographic location are QI attributes; and disease is an SA.) The initial syntactic model is k-anonymity [7], where each record must be indistinguishable from at least k-1 other records. While this model prevents identity disclosure, it remains vulnerable to attribute disclosure [7]. Several refinements of k-anonymity, such as t-closeness [8] and l-diversity [9], have been proposed to address attribute disclosure. In semantic models, noise is added to attribute values to protect individuals' privacy [10].
Each anonymization framework gives protection against a special adversary model. The adversary model includes assumptions about its knowledge. It is often assumed that (1) the adversary's victims are in the data and (2) the victims' QI attribute values are known [1][2][3][4][5][6][7][8][9]. Background knowledge (BK) is an additional fact that can help the adversary make accurate inferences on the victim's SA, named a BK attack. This work assumes the adversary's correlational background knowledge (CBK) about the correlations between SA values and QI values. Suppose a hospital publishes patient data anonymously to analyze the progression of diseases (A hospital may publish data on patients for various purposes. For example, it publishes data to analyze the effectiveness of a specific drug in stopping or slowing the disease's progression). CBK in medicine and healthcare can show the impact of age and gender (or individuals' geographical location) on the prevalence of various diseases. Examples of BK in a medical domain are "the prevalence of Bronchitis and Alzheimer's diseases among women 65 years old and older is higher than the prevalence among men in the same age group", "breast cancer in men is rare", etc. [11].
The work in this paper proposes a syntactic anonymization framework that enhances the data utility and prevents attribute disclosure, identity disclosure, and BK attack. When a QI group is released, the adversary can calculate the likelihood of potential connections between the individual and their sensitive information. In the absence of any background knowledge, in cases where there are multiple distinct sensitive values within the QI group, the probability of linking a record respondent to a specific sensitive value is equivalent. Modeling the adversary's BK is an open challenge in PPDP [6,12]. The proposed privacy model aims to minimize certainties in distinguishing record owners and their sensitive values in a QI group. We attempt to produce QI groups where patient records have similar BK distributions to prevent BK attacks. Therefore, adversaries will have reduced confidence in their ability to identify any specific association between a victim and their sensitive values. Ensuring similar BK distributions helps prevent CBK attacks, but it does not provide protection against attribute and identity disclosure. To meet the privacy goals of preventing attribute and identity disclosure, it is crucial to also ensure k-anonymity and t-closeness in each QI group. Therefore, our proposed framework produces QI groups that satisfy three privacy requirements: (1) similar BK distributions, (2) k-anonymity, and (3) t-closeness.
The proposed algorithm uses value generalization. A hierarchical procedure is implemented to fulfill the requirements of the proposed privacy model. Firstly, it uses agglomerative clustering to create clusters with the constraint that the difference of BK distributions between any pair of records within a cluster does not exceed a predefined threshold. Next, the anonymization algorithm partitioned each cluster into QI groups satisfying both t-closeness and k-anonymity requirements. Finally, records within each QI group are reordered to enhance the similarity of QI values, thereby improving the utility of the anonymized data.
We performed comprehensive experiments using two datasets, namely the adult dataset [13] and the BKseq dataset [6] (Accessed January 2023), to assess the efficacy of our anonymization platform. The evaluation assessed the proposed algorithm against a range of privacy and information loss measures. The results confirm that our algorithm creates anonymous data with high data utility. Moreover, we compared the proposed algorithm with two state-of-the-art anonymization algorithms: the k-anonymity-primacy algorithm [12] and the BK-based algorithm [6]. The experimental results reveal that the proposed algorithm outperforms the state-of-the-art anonymization algorithms because of low information loss and privacy loss.
We make the following main contributions:
(1) We propose an anonymization framework that simultaneously satisfies t-closeness and k-anonymity. The framework also protects against adversaries with knowledge of correlations between attribute values.
(2) We propose a generalization-based anonymization algorithm that satisfies the privacy models and improves data utility. To this end, records are moved between the QI groups to maximize the usefulness of the resulting anonymous data.
(3) We conduct comprehensive experiments on various aspects of the algorithms, including runtime, privacy loss, data utility, and QI group size.
The rest of this paper is organized as follows. Section 2 reviews work related to privacy models, anonymization algorithms, and the adversary's BK. In Section 3, we define the problem, including the adversary's BK. Section 4 presents our anonymization framework, containing an anonymization algorithm and a privacy model. In Section 5, we present the empirical evaluation, while the conclusion and future work are discussed in Section 6.

Related Work
The anonymization techniques are categorized below with respect to the adversary's background knowledge.

Anonymization without Adversary's Background Knowledge
The k-anonymity model [7] and its refinements (t-closeness [8] and l-diversity [9]) are privacy models that divide the records into several QI groups. In k-anonymity, the probability of identifying a record in a QI group is at most 1/k. In t-closeness, the distance between the distribution of sensitive values in each QI group and their distribution in the whole dataset is no more than a threshold t. It should be noted that the refinements (such as l-diversity and t-closeness) cannot replace the k-anonymity approach. Several studies have suggested combining privacy mechanisms to protect against both attribute and identity attacks [6,12,14]. Cao and Karras [15] identified drawbacks of t-closeness, and their β-likeness model has attracted broad attention for preventing attribute disclosure [5,12,16-18].
Many different anonymization algorithms have been proposed for satisfying syntactic privacy models. The algorithms use anonymization operations such as generalization [12,19-24], anatomy [16], and microaggregation [14,17,25]. Generalization refers to replacing attribute values in the microdata with a range of more general values. For example, assume that Table 1 shows patients' data in a hospital. In Table 1, the microdata include one record for each patient, called a record respondent. Each record in the dataset contains an identifier attribute (name), three quasi-identifier attributes (gender, age, and zip code), and a sensitive attribute (disease). Table 2 shows the corresponding generalized version of the data. An anonymization algorithm that uses generalization deletes identifiers and divides the data into QI groups. The SA is published unchanged so that the data can be analyzed. Table 2 satisfies the 3-anonymity model because if the adversary knows the QI values of a victim, without any additional knowledge, the adversary can find the victim's record and disease with a maximum probability of 1/3.
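As an illustration of the k-anonymity condition described above, the following minimal Python sketch (with a hypothetical generalized table, not the paper's Table 2) checks whether every combination of generalized QI values appears at least k times:

```python
from collections import defaultdict

def is_k_anonymous(records, qi_cols, k):
    """Check that every combination of QI values is shared by >= k records."""
    groups = defaultdict(int)
    for rec in records:
        key = tuple(rec[c] for c in qi_cols)
        groups[key] += 1
    return all(count >= k for count in groups.values())

# Toy generalized table: gender, an age range, and a masked zip code as QIs,
# disease as the SA (published unchanged).
table = [
    {"gender": "*", "age": "[20-30]", "zip": "130**", "disease": "Flu"},
    {"gender": "*", "age": "[20-30]", "zip": "130**", "disease": "Asthma"},
    {"gender": "*", "age": "[20-30]", "zip": "130**", "disease": "Cancer"},
    {"gender": "*", "age": "[40-50]", "zip": "148**", "disease": "Flu"},
    {"gender": "*", "age": "[40-50]", "zip": "148**", "disease": "Flu"},
    {"gender": "*", "age": "[40-50]", "zip": "148**", "disease": "Bronchitis"},
]

print(is_k_anonymous(table, ["gender", "age", "zip"], 3))  # True
print(is_k_anonymous(table, ["gender", "age", "zip"], 4))  # False
```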
A generalization operation is suitable for categorical data but cannot process numerical data with high accuracy. The anatomy operator produces more than one table and increases the difficulty of data processing and analysis. Microaggregation [17] is not suitable for categorical values, and the synthetic records it releases no longer correspond to the actual values in the original dataset, which can render the released data meaningless. In contrast, generalized data remain semantically compatible with the original raw data. However, due to the elimination of some values, the distribution of data anonymized using microaggregation may differ from that of the original data [12]. This work uses generalization because of these advantages over microaggregation and anatomy. It is worth noting that each of these operations nevertheless has its place in research.
A work close to ours is the one in [6]. That algorithm creates a list of records ordered by their Hilbert index; with this technique, records with similar QI values are, with high probability, close in the list. Every k records satisfying t-closeness are released as a QI group; if the condition is not met, more records are added to the QI group. The main drawback of this algorithm is high information loss. Another work close to ours is the one in [12]. That algorithm satisfies both β-likeness [15] and k-anonymity, using a hierarchical procedure to meet the privacy models in which k-anonymity has priority over β-likeness. The experimental results showed that its data utility is low.

Anonymization with Adversary's Background Knowledge
The type of adversary knowledge is one of the open challenges: none of the above models preserves privacy against an adversary with BK [6,26,27]. The correlation between attribute values is a significant form of BK that complicates privacy protection [6,26,28,29]. This work mitigates the risk posed by the correlation between sensitive and quasi-identifier (QI) attributes. Methods for modeling BK are divided into logic-based [27,30] and probability-based tools [6,18,26,31]. The first category proposes a language for describing a particular level of knowledge and a set of techniques for limiting disclosure attacks. The second category models BK as a probability distribution that links sensitive attribute values to individual respondents in the dataset. Another type of privacy model is differential privacy. These models have also attracted attention; because they add noise, they are categorized as semantic privacy models [32,33]. However, studies have shown that such models are ineffective when the adversary possesses some BK [34]. Therefore, the syntactic approach followed in this research (k-anonymity and its refinements) offers better utility and privacy in this setting than other privacy measures. In addition, the proposed work accounts for possible attacks via the adversary's BK.

Problem Definition
Assume a data publisher plans to release the table T = {r_1, r_2, ..., r_n}, where each record r_i belongs to an individual v_i and includes an identifier ID, QI attributes A_1, A_2, ..., A_D, and one SA, A_{D+1}. Here, [A_j] denotes the domain of A_j for 1 ≤ j ≤ D + 1; r_i(A_j) is the value of attribute A_j in record r_i, and r_i(QI) denotes the values of all QI attributes of record r_i. This work assumes that: (1) the adversary is aware of the victims' presence in the data and knows their QI values; (2) the adversary has BK regarding the correlation between SA values and QI values, as described in Section 3.1.
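The table and record notation above can be sketched in code as follows (the attribute names and values are hypothetical, chosen only to mirror the running patient-data example):

```python
# A minimal sketch of a record r_i in table T from the problem definition.
QI_ATTRS = ["gender", "age", "zipcode"]   # A_1, ..., A_D
SA_ATTR = "disease"                        # A_{D+1}

record = {"id": "Alice", "gender": "F", "age": 34, "zipcode": 13053, "disease": "Flu"}

def qi_values(r):
    """r(QI): the tuple of QI attribute values of record r."""
    return tuple(r[a] for a in QI_ATTRS)

print(qi_values(record))    # ('F', 34, 13053)
print(record[SA_ATTR])      # 'Flu'
```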

Background Knowledge
In this work, we adopt the concept of background knowledge (BK) to represent the prior probability of accurately inferring sensitive attribute values based on quasi-identifier (QI) attributes for a given record respondent. The adversary's BK is defined as a function PD_sv that maps each record respondent to a probability distribution over the SA domain: for a record respondent v with QI values q, PD_sv is modeled as (p_1, p_2, ..., p_M), where p_i is the probability of assigning v the sensitive value s_i given q. Therefore, each record in table T has a BK distribution. Note that BK is distinct from the distribution of sensitive values, which indicates the frequency of their occurrence in dataset T. Practical algorithms exist for obtaining BK from available external data; this study used the incremental method [6] to extract the BK distributions. The mathematical symbols used in this study are listed in Appendix A.
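As a concrete sketch of comparing two respondents' BK distributions, the snippet below computes the Jensen-Shannon divergence (the similarity measure used later by the clustering step); the two distributions are hypothetical examples over three sensitive values:

```python
import math

def jsd(p, q):
    """Jensen-Shannon divergence (base 2, so the result lies in [0, 1])."""
    def kl(a, b):
        return sum(x * math.log2(x / y) for x, y in zip(a, b) if x > 0)
    m = [(x + y) / 2 for x, y in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical BK distributions PD_sv for two record respondents.
pd_v1 = (0.70, 0.20, 0.10)
pd_v2 = (0.10, 0.60, 0.30)
print(jsd(pd_v1, pd_v2))   # a value in [0, 1]; 0 means identical BK distributions
```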

Our Anonymization Framework
A syntactic privacy model aims to minimize the certainty in identifying each record's actual respondent in a given QI group. To this end, the anonymization framework reduces the adversary's confidence in singling out one association among all possible associations. Based on the knowledge definition in Section 3.1, our anonymization algorithm creates QI groups in which the record respondents' BK distributions are the same.

Lemma 1. Assume that Σ_Q is the set of BK distributions of the record owners in QI group q. Equal probability distributions in q minimize the certainty in identifying the true record owners in q.
Lemma 1 was proven by Amiri et al. [12]. Since it is challenging to create QI groups with identical BK distributions, we instead enforce a constraint that the divergence between the BK distributions within a QI group stays below a threshold J. Because this condition alone does not prevent attribute and identity disclosure, this work also employs t-closeness [8] and k-anonymity [7]. Our privacy requirements are therefore: (1) the difference between BK distributions in every QI group is at most the threshold J; (2) the k-anonymity model; (3) the t-closeness model.

This work proposes the utility-based hierarchical algorithm (UHRA), which meets the privacy requirements hierarchically. As shown in Algorithm 1, the proposed algorithm includes two steps: (1) BK-based clustering (line 2 of Algorithm 1), which partitions the table into clusters {N_1, ..., N_|N|} such that within each cluster the difference in BK distributions is at most J; and (2) t-k-Utility (line 5 of Algorithm 1), which partitions each cluster N_i into QI groups that fulfill t-closeness and k-anonymity. In the first step, clusters of records are created using hierarchical agglomerative clustering. Jensen-Shannon divergence (JSD) [35] is used to measure the similarity between probability distributions, and the threshold J serves as the cutoff below which the distribution difference between a node and its descendants is considered small enough. This step uses the complete linkage method to guarantee that the distance between any two records in a cluster is no greater than J. Its output is a set of clusters {N_1, ..., N_|N|}; for each cluster N_i with at least k records, the t-k-Utility algorithm is executed. Clusters with fewer than k records are not disseminated.
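The BK-based clustering step can be sketched as follows. This is a minimal illustration (not the authors' implementation) using SciPy's complete-linkage clustering with a JSD metric and the cutoff J; the BK distributions are hypothetical:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Hypothetical BK distributions (one row per record, over 3 sensitive values).
bk = np.array([
    [0.70, 0.20, 0.10],
    [0.68, 0.22, 0.10],
    [0.10, 0.60, 0.30],
    [0.12, 0.58, 0.30],
])

def jsd(p, q):
    """Jensen-Shannon divergence (base 2) between two BK distributions."""
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return np.sum(a[mask] * np.log2(a[mask] / b[mask]))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

J = 0.2                                       # threshold on BK-distribution difference
dists = pdist(bk, metric=jsd)                 # pairwise JSD between records
Z = linkage(dists, method='complete')         # complete linkage, as in step 1 of UHRA
clusters = fcluster(Z, t=J, criterion='distance')
print(clusters)   # the first two and the last two records fall into separate clusters
```

With complete linkage, cutting the dendrogram at J guarantees that any two records in the same cluster differ by at most J in JSD, matching the constraint stated above.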
The t-k-Utility algorithm partitions the cluster N i into groups where k-anonymity and t-closeness are fulfilled. To enhance utility, records are moved between groups without violating the privacy requirements. The following lemma describes a guarantee condition in which the t-k-Utility produces k-anonymous data satisfying t-closeness.

Lemma 2.
Assume N_i is a record cluster (N_i ∈ N, N_i ⊆ T, and |N_i| ≥ k), and let EMD(N_i, T) (Earth Mover's Distance [36]) denote the difference between the distribution of sensitive values in N_i and that in T. The t-k-Utility algorithm outputs k-anonymous t-close data if EMD(N_i, T) ≤ t.
Proof. In the worst case, t-k-Utility produces a single QI group consisting of the complete microdata in N_i (|N_i| ≥ k). When EMD(N_i, T) ≤ t, N_i satisfies t-closeness and k-anonymity. Hence, t-k-Utility outputs a t-close k-anonymous dataset.
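To illustrate the condition of Lemma 2, the sketch below computes EMD for a categorical SA; with equal ground distances between categories, EMD reduces to the total variation distance. The dataset is a toy example:

```python
from collections import Counter

def sa_distribution(records, sa, domain):
    """Empirical distribution of the sensitive attribute over a fixed domain."""
    counts = Counter(r[sa] for r in records)
    n = len(records)
    return [counts[v] / n for v in domain]

def emd_equal_distance(p, q):
    """EMD for a categorical SA with equal ground distances (= total variation)."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

domain = ["Flu", "Asthma", "Cancer"]
T = [{"disease": d} for d in ["Flu"] * 4 + ["Asthma"] * 3 + ["Cancer"] * 3]
cluster = [{"disease": d} for d in ["Flu", "Flu", "Asthma", "Cancer"]]

p_T = sa_distribution(T, "disease", domain)        # [0.4, 0.3, 0.3]
p_N = sa_distribution(cluster, "disease", domain)  # [0.5, 0.25, 0.25]
t = 0.15
print(emd_equal_distance(p_N, p_T) <= t)  # True: by Lemma 2 the cluster can be released
```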
Pseudo-code of t-k-Utility is shown in Algorithm 2. Let N_i be a set of records, P_T the distribution of SA values in T, and [A_{D+1}]_{N_i} the domain of SA values in N_i. Then t-k-Utility accepts N_i, P_T, [A_{D+1}]_{N_i}, k, and t as inputs and creates a set of QI groups. The proposed t-k-Utility consists of two steps: the creation of initial QI groups (lines 2 to 20 of Algorithm 2) and the refinement of QI groups (lines 22 to 28 of Algorithm 2). In the first step, records are partitioned into groups that satisfy the t-closeness and k-anonymity conditions. In the second step, records are moved between different groups to improve data utility. Section 4.1 outlines the process of creating the primary QI groups, while Section 4.2 details the refinement of the final QI groups.

Creation of the Initial QI Groups
First, the t-k-Utility algorithm generates a sorted list, π, of all records in N_i (line 3 of Algorithm 2). The t-k-Utility algorithm uses the nearest point next (NPN) technique [9] to create the list, sorting the records according to their QI values. (After evaluating various techniques for creating an ordered list of records based on QI values, including space-filling curves [6], we found that the experimental results were consistent for both the adult dataset and the BKseq dataset.)
The t-k-Utility algorithm follows a two-step process to ensure privacy and limit distortion caused by generalizing QI values. First, it creates primary QI groups by selecting the first k records as group q and checks if t-closeness is met. If not, it adds the next record to q and repeats the process until t-closeness is achieved (Lines 6 to 12 in Algorithm 2). Second, the algorithm attempts to add each record to an existing QI group, q', while maintaining t-closeness, and removes any records that cannot be placed in a group (Lines 13 to 21 in Algorithm 2). If t-closeness cannot be achieved, other solutions may be considered [6].
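The greedy group-creation loop described above might be sketched as follows. This is our reading of the step, with toy SA values in NPN order and a total-variation t-closeness check; it is not the authors' implementation:

```python
from collections import Counter

def build_initial_groups(ordered, k, t, closeness_ok):
    """Greedy sketch of the first step of t-k-Utility: grow a group along the
    NPN-ordered list until it has >= k records and is t-close, then start a
    new group; leftover records are handled afterwards (re-inserted or removed)."""
    groups, current = [], []
    for rec in ordered:
        current.append(rec)
        if len(current) >= k and closeness_ok(current, t):
            groups.append(current)
            current = []
    return groups, current  # 'current' holds records not yet placed in any group

def dist(vals, domain):
    c = Counter(vals)
    return [c[v] / len(vals) for v in domain]

domain = ["A", "B"]
data = ["A", "B", "A", "B", "A", "B", "A", "A"]   # toy SA values in NPN order
p_T = dist(data, domain)

def closeness_ok(group, t):
    """t-closeness via EMD with equal ground distances (total variation)."""
    p = dist(group, domain)
    return 0.5 * sum(abs(a - b) for a, b in zip(p, p_T)) <= t

groups, leftovers = build_initial_groups(data, k=2, t=0.2, closeness_ok=closeness_ok)
print(groups)      # three 2-record groups, each t-close
print(leftovers)   # ['A', 'A'] cannot form a t-close group on their own
```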

Algorithm 2:
The t-k-Utility algorithm.
2   Q_temp ← ∅          // a set of primary QI groups
3   π ← create an ordering of all |N_i| records using the nearest point next method
4   q ← ∅               // an empty QI group
5   for r ← π_1 to π_|N_i| do
6       q ← q ∪ {r}
7       if |q| ≥ k and t-closeness(q) then
8           Q_temp ← Q_temp ∪ {q}

Refinement of the QI Groups
In the second step, the QI groups are iteratively refined by calculating the distance between records and QI groups. If the privacy model is not violated, each record is re-assigned to the closest QI group (the group with the closest QI values). Refinement continues until no further change occurs in the groups, and each final group is disseminated as a QI group. A record must not be removed from (or added to) a QI group if doing so would violate k-anonymity or t-closeness. The refinement is achieved by defining the distance d(r, q_i) between the record r and the QI group q_i (line 25 of Algorithm 2) as

d(r, q_i) = ∞, if q_i \ {r} violates k-anonymity or t-closeness;
d(r, q_i) = ∞, if q_i ∪ {r} violates t-closeness;
d(r, q_i) = the Euclidean distance between the QI values of r and q_i, otherwise, (1)

where |q_i \ {r}| denotes the number of records in q_i \ {r}. The first and second cases of Equation (1) prevent removing and adding a record when this would violate k-anonymity or t-closeness; the third case calculates the Euclidean distance between r and q_i according to their QI values.
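A minimal sketch of this reassignment distance follows; it reflects our reading of the refinement step, and all names, the stubbed closeness check, and the use of the group centroid for the Euclidean case are illustrative assumptions:

```python
import math

def d(r, q_i, current_group, k, t, closeness_ok, qi_vec):
    """Reassignment distance: infinite when moving r would break k-anonymity
    or t-closeness on either side, otherwise the Euclidean distance from
    r's QI values to the centroid of the target group q_i."""
    remaining = [x for x in current_group if x is not r]
    if len(remaining) < k or not closeness_ok(remaining, t):
        return math.inf                    # removing r violates the privacy model
    if not closeness_ok(q_i + [r], t):
        return math.inf                    # inserting r violates t-closeness
    centroid = [sum(col) / len(q_i) for col in zip(*(qi_vec(x) for x in q_i))]
    return math.dist(qi_vec(r), centroid)

# Toy usage: records are (age, weight) pairs; the closeness check is a stub.
always_close = lambda group, t: True
r = (32.0, 72.0)
own_group = [r, (30.0, 70.0), (33.0, 71.0)]
target = [(30.0, 70.0), (34.0, 74.0)]
print(d(r, target, own_group, k=2, t=0.5,
        closeness_ok=always_close, qi_vec=lambda x: x))   # centroid is (32, 72) -> 0.0
```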

Efficiency of the Proposed Anonymization Algorithm
The proposed anonymization algorithm includes two steps: BK-based clustering and t-k-Utility. BK-based clustering applies agglomerative clustering and outputs a set of clusters N = {N_1, ..., N_|N|}. The analysis shows that agglomerative clustering is the most time-consuming component; its worst-case time complexity is O(n^2 log n), where n is the number of records in the original microdata. t-k-Utility is then applied to each cluster N_i ∈ N. In its first step, the NPN method has O(|N_i|^2) time complexity, and the worst-case running time of its second step is also O(|N_i|^2). Hence, the time complexity of the proposed algorithm is O(n^2 log n + Σ_{N_i ∈ N} |N_i|^2). In the worst case, BK-based clustering creates a single cluster of size n, so the worst-case time complexity of the proposed algorithm is O(n^2 log n + n^2) = O(n^2 log n).

Experimental Result
In the following section, we will conduct an empirical evaluation of the proposed algorithm using two different datasets and various analysis measures. Section 5.1 explains the experiments' configuration and datasets, and Section 5.2 presents the evaluation criteria. Finally, Section 5.3 shows the evaluation results for the proposed algorithm.

Evaluation Datasets and Configuration
To assess the efficacy of our proposed algorithms, we conducted several experiments on a machine equipped with a 2.5 GHz Intel Core i7 processor and 8 GB of RAM. For constructing the hierarchical dendrogram, we used the linkage function provided by MATLAB (version R2022a), with the distance between two clusters measured using the complete linkage method. We conducted experiments with various values of the privacy parameters (J, k, and t) to evaluate the effectiveness of the proposed algorithms; the parameter values were selected based on the policies of the data domain. To compare our algorithm with other similar studies [6,12], we took the values of J, k, and t in the ranges of 0.2-0.8, 3-20, and 0.5-0.8, respectively. Each experiment was executed ten times, and the averaged results are reported for smooth graph display. Experiments were run on two datasets consisting of categorical and numerical attributes, as follows: (1) The adult dataset [13] was published by the US Census Bureau in 1994 and is widely used in anonymization experiments. It contains 45,222 records with 8 categorical attributes, 6 numerical attributes, and 1 attribute indicating income. To compare our algorithm with the similar studies [6,12], the three attributes age, gender, and education were used as QI attributes, while income was used as the SA.
(2) The BKseq dataset [6] was generated based on domain knowledge extracted from medical literature. It includes a history of 24 tables, each table including 4000 records. Each record in the dataset comprises three quasi-identifier (QI) attributes: gender, age, and weight, as well as a sensitive attribute that reflects the result of medical exams. The sensitive attribute includes 19 distinct values representing various stages of the disease.

Evaluation Measure
We applied the global certainty penalty (GCP) to evaluate the impact of anonymization on data utility. (Different data utility measures have been suggested for anonymization algorithms; GCP is a well-known measure and does not have the limitations that some other measures do [12].) GCP builds on the normalized certainty penalty (NCP) measure. If table T includes D numerical QI attributes A_1, A_2, ..., A_D, the generalized version of a record r = (x_1, ..., x_D) is r' = ([y_1, z_1], ..., [y_D, z_D]) such that y_i ≤ x_i ≤ z_i for all 1 ≤ i ≤ D (x_i, y_i, and z_i are numbers). The NCP on A_i is calculated in Equation (2):

NCP_{A_i}(r') = (z_i - y_i) / |A_i|, (2)

where |A_i| represents the number of unique values that attribute A_i takes in table T. Based on this, NCP(r') is computed as a weighted sum of the NCP values across all QI attributes, as given by Equation (3):

NCP(r') = Σ_{i=1}^{D} w_i · NCP_{A_i}(r'), with Σ_{i=1}^{D} w_i = 1. (3)
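A minimal sketch of Equations (2) and (3), together with GCP computed as the sum of per-record penalties over the table (uniform weights assumed; all values are hypothetical):

```python
def ncp_numeric(lo, hi, attr_size):
    """NCP of one generalized numeric value [lo, hi]; attr_size is |A_i|,
    the number of unique values of A_i in T (Equation (2))."""
    return (hi - lo) / attr_size

def gcp(generalized, attr_sizes, weights=None):
    """GCP: the sum over all records of the weighted per-attribute NCPs
    (Equations (2) and (3)); 0 means no information loss."""
    d_attrs = len(attr_sizes)
    w = weights or [1.0 / d_attrs] * d_attrs   # uniform weights by default
    total = 0.0
    for rec in generalized:                    # rec is a list of (lo, hi) intervals
        total += sum(w[i] * ncp_numeric(lo, hi, attr_sizes[i])
                     for i, (lo, hi) in enumerate(rec))
    return total

# Toy anonymized table with two numeric QI attributes, each with 40 unique values.
gen = [
    [(25, 30), (60, 70)],
    [(25, 30), (60, 70)],
    [(40, 55), (70, 90)],
]
print(gcp(gen, attr_sizes=[40, 40]))   # 0.8125
```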
GCP is calculated as the sum of the NCP over all records in T. The GCP value is greater than or equal to zero, where zero indicates no information loss and maximum data usefulness; therefore, lower GCP values are desirable. We also evaluated our algorithms with respect to record linkage (RL) [19], which measures privacy loss and is used for disclosure risk analysis. RL is calculated from the number of correct linkages between the original data and the anonymized data, as given by Equation (4):

RL = (1/n) Σ_{r'} P_RL(r'), (4)
where n is the size of T and P_RL(r') is the linkage probability of the anonymized record r'. The linkage probability of r' is calculated via Equation (5):

P_RL(r') = 1/|E_i|, if r ∈ E_i; 0, otherwise, (5)

where E_i is the QI group closest to r' and r is the original record corresponding to r'. If r ∈ E_i, P_RL(r') is computed as the inverse of the size of E_i; otherwise, the probability is zero. A lower RL means higher privacy for the record respondents.
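A minimal sketch of Equations (4) and (5) under our reading (the records, groups, and the closest-group mapping are toy placeholders):

```python
def linkage_prob(orig_record, groups, closest_idx):
    """P_RL of Equation (5): 1/|E_i| when the original record appears in its
    closest QI group E_i, and 0 otherwise."""
    E = groups[closest_idx]
    return 1.0 / len(E) if orig_record in E else 0.0

def record_linkage(original, groups, closest):
    """RL of Equation (4): average linkage probability over the n original records."""
    return sum(linkage_prob(r, groups, closest(r)) for r in original) / len(original)

# Toy example: five original records and two anonymized QI groups.
groups = [["r1", "r2", "r3"], ["r4", "r5"]]
closest = {"r1": 0, "r2": 0, "r3": 0, "r4": 1, "r5": 1}.get
original = ["r1", "r2", "r3", "r4", "r5"]
print(record_linkage(original, groups, closest))   # (3*(1/3) + 2*(1/2)) / 5 = 0.4
```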

Effectiveness of the Proposed Algorithm
In this section, the behavior of the proposed anonymization algorithm is analyzed from three aspects: the R-U confidentiality map, the size of QI groups, and the runtime.
The R-U confidentiality map is applied as a graphical description of GCP and RL. The performance of the proposed anonymization algorithm UHRA was compared with two other approaches: the BK-based algorithm [6] and k-anonymity primacy [12], shown as BKA-JS and k-primacy in the following tables and figures, respectively. BKA-JS and k-primacy are generalization-based algorithms. We ran their public versions to compare the results. We examine the impact of different privacy parameters: t, k, and J.

R-U Confidentiality Map
We investigated how varying the privacy parameters affects the RL and GCP produced by the three algorithms: UHRA, BKA-JS, and k-primacy. First, t and J were set to 0.5 and 0.8, respectively. The evaluation results of the three algorithms on the adult and BKseq datasets are shown in Figure 1, with k varying from 3 to 20.
BKA-JS creates a sorted list of records based on the Hilbert index of the QI values. The algorithm selects the first k records in the list and checks whether the JSD value is below the threshold J and t-closeness is satisfied for the threshold t. k-primacy, in turn, generates groups in which the difference between BK distributions does not exceed a threshold J. All three algorithms try to maximize the homogeneity of QI values within QI groups. As seen in Figure 1a, the BKA-JS algorithm incurs higher information and privacy losses. The experimental results show that UHRA outperforms BKA-JS and k-primacy due to lower information loss. This is because UHRA inserts records into existing QI groups until t-closeness is met and then refines the initial QI groups to satisfy the privacy model with homogeneous QI values. Our proposed algorithm also achieves the best results with regard to privacy loss. The results of evaluating the algorithms on the BKseq dataset, shown in Figure 1b, confirm these observations.

In the next set of experiments, we analyzed the behavior of our algorithm with k and J fixed: the t values vary in the range 0.5-0.8, while J and k were set to 0.8 and 5, respectively. As seen in Figure 3, the experimental results on both datasets affirm the previous findings. Therefore, it can be concluded that the UHRA algorithm yields less information loss and better privacy in its output. At each level of RL, both BKA-JS and k-primacy result in higher information loss. Although both k-primacy and BKA-JS try to group records with similar QI values to enhance data utility, UHRA outperforms them with regard to privacy and information loss. The refinement of the QI groups in UHRA results in lower information loss and higher data utility.

Actual QI Group Size
This section examines how close to k the sizes of the QI groups created by the algorithms on the adult dataset are. To minimize information loss and enhance data utility, QI groups with sizes close to k are preferable.

First, we assume that J and t are constant. The actual sizes of the QI groups are shown in Table 3 over different values of k; each row presents the minimum and the average QI group size for a given k. The minimum QI group size determines the level of k-anonymity. As seen in Table 3, increasing k increases the average size of the QI groups. In addition, our algorithm's average QI group size is smaller than those of BKA-JS and k-primacy. Moreover, the difference between the average and the minimum QI group size is larger for BKA-JS than for the other algorithms. We conclude that our proposed algorithm creates smaller QI groups than BKA-JS and k-primacy.

We then investigate the size of the QI groups generated by the algorithms in Table 4, while holding the parameters k and t constant. Each row in the table reports the minimum and average QI group size for various J values. The findings show that increasing J leads to larger QI groups on average. Specifically, we note that the average size of the QI groups produced by BKA-JS exceeds that of UHRA.

Finally, we show how close to k the sizes of the QI groups produced by the algorithms are over different values of t, with k and J set to 5 and 0.8. The actual sizes of the QI groups are presented in Table 5; each row shows the minimum and the average QI group size as t varies from 0.5 to 0.8. The experimental results confirm the previous conclusions: the average size of the QI groups increases across the different t values.

The Speed and Scalability
In this section, we analyze the runtime of the proposed algorithm on the BKseq dataset. To this end, k and t are set to 5 and 0.5, while J is taken in the range 0.1-0.7. The runtimes of the algorithms for different J values are presented in Figure 4, where the vertical axis is on a logarithmic scale. Our experiments reveal that the runtime of our algorithm remains constant for J values greater than or equal to 0.3. This result stems from the close proximity of the background knowledge (BK) distributions among the records in the BKseq dataset, which causes the first step of UHRA to generate a single cluster for J values greater than or equal to 0.3. Notably, our algorithm runs faster for J values less than 0.3: for these values, UHRA's initial step generates multiple small clusters, which results in fewer record movements between QI groups to satisfy the privacy constraints. We also see that the runtime of UHRA is higher than that of BKA-JS, while it is similar to that of k-primacy.

We then selected values of the privacy parameters that expose the worst-case execution time. We set J = 0.8, so that BK-based clustering creates clusters of maximum size and t-k-Utility moves more records among QI groups to meet the privacy requirements. The execution times of the algorithms on the BKseq dataset are displayed in Figure 5 as k varies from 3 to 20. Our algorithm and k-primacy have O(n^2 log n) time complexity for creating the clusters in BK-based clustering. We see that the runtime decreases as k increases. BKA-JS is more efficient than our proposed algorithm due to its lower runtime, while UHRA and k-primacy have similar runtimes.

Conclusions
This study proposed an algorithm for data publishing and dissemination in a setting where the adversary has BK that can be used to re-identify individuals in the anonymized data. The proposed privacy requirements prevent BK attacks as well as identity and attribute disclosures. A hierarchical algorithm, UHRA, consisting of two steps, was proposed to satisfy the privacy requirements. First, UHRA uses agglomerative clustering to prevent background knowledge attacks that exploit correlations between sensitive and QI attributes. Then, t-closeness and k-anonymity are applied to the QI groups, and records are moved between QI groups to increase data utility as long as the privacy requirements are not violated. The evaluation results confirm higher data utility compared to other algorithms: the proposed algorithm incurs lower privacy loss and information loss. An increase in the k value increases the size of the QI groups, and the information loss rises with it. When the J and t values decrease, the number of unpublished records increases.
For future studies, we suggest that the proposed algorithm be evaluated in re-publishing scenarios. In data re-publishing, an adversary can use the combination of information from different releases over time. Data correlation can lead to a privacy breach.