Enhancement of an Optimized Key for Database Sanitization to Ensure the Security and Privacy of an Autism Dataset

Abstract: Interrupting, altering, or stealing autism-related sensitive data is a lucrative and growing business for cyber attackers. Enhancing the security and privacy of autism data while adhering to the symmetric encryption concept is a critical challenge in the field of information security. To identify autism accurately and to protect its data, security and privacy are pivotal concerns when transmitting information over the Internet. Consequently, researchers utilize software or hardware disk encryption, data backup, the Data Encryption Standard (DES), TripleDES, the Advanced Encryption Standard (AES), Rivest Cipher 4 (RC4), and others. Moreover, several studies employ k-anonymity and query-based methods to address security concerns, but these require a significant amount of time and computational resources. Here, we propose a sanitization approach for autism data security and privacy. During this sanitization process, sensitive data are concealed, which prevents the leakage of sensitive information. An optimal key was generated based on our improved meta-heuristic algorithmic framework, called the Enhanced Combined PSO-GWO (Particle Swarm Optimization-Grey Wolf Optimization) framework. Finally, we compared our simulation results with those of traditional algorithms and achieved better performance. This finding shows that data security and privacy in autism can be improved by enhancing the optimal key used in the data sanitization process, preventing unauthorized access to and misuse of data.


Introduction
The Diagnostic and Statistical Manual of Mental Disorders, 5th ed. (DSM-5) [1,2] defined autism spectrum disorder (ASD) as persistent deficits in two areas of development, namely social communication and restricted and repetitive behaviors. Children with ASD have a distinct set of deficits but with different levels of severity. Because of this, the DSM-5 has divided ASD into three levels of severity based on the support required by children with ASD in their daily lives. These severity levels range from level one to level three and are known as requiring support, requiring substantial support, and requiring very substantial support, respectively. Children with ASD demonstrate poor social communication skills, as they have deficits in verbal communication, non-verbal communication, and social-emotional reciprocity. Deficits in verbal communication cause children with ASD to exhibit difficulties in understanding spoken language and the use of an inappropriate tone of voice during conversation. Stolen healthcare credentials can be sold for over a thousand USD [13,18]. Personal information stated in healthcare records can be used for opening bank accounts, securing loans, or getting a passport [19]. Deficits in social communication and behavior, as stated in medical information, mean that buyers of such information can easily act as a person with ASD. They can also use this disability issue to escape from inconvenient situations, and it is hard for authorities to detect them. Due to these circumstances, healthcare data security is a crucial issue.
Various frameworks or models use different techniques, methods, or algorithms to address accuracy, data security, and privacy issues, such as cost-effective and model-driven application-level frameworks for e-health data transmission using different encryption and decryption algorithms, namely, DES, 3DES or TripleDES, AES, Blowfish, IDEA, and RC4. Some researchers utilize various meta-heuristic algorithms such as the artificial bee colony (ABC) [5], particle swarm optimization (PSO) [20], the crow search algorithm (CSA) [21], glowworm swarm optimization (GSO) [22], the grey wolf optimizer (GWO) [23], and others. To address the security and privacy problems, some of these investigations use k-anonymity and query-based methods, but such approaches need a large amount of time and computing resources. In addition, some of these traditional meta-heuristic algorithms also suffer from lower solving precision, slower convergence, and weaker local search ability. Moreover, we identified certain critical issues in the existing studies [5,6,24-26] which we addressed, thus forming the focus of our research contributions. Such critical issues, put in question form, include, but are not limited to, the following:

• For how long will the key value be updated during the key generation stage?
• The key length will be allocated based on which value?
• How are the values of the parameters defined?
• What is the key range value?
In addressing the above issues, we applied data sanitization for autism data security for better accuracy, security, and privacy. Data sanitization is a process that disguises sensitive information in order to facilitate database testing and development [27]. This can be done by overwriting it with similar types of false data while looking realistic. It is essential to protect vulnerable information, and there is an ethical obligation to do that in many countries. There are various data sanitization techniques, such as encryption/decryption, gibberish generation, number variance, shuffling records, substitution, masking data, and NULL'ing Out. We applied an optimal key that is utilized in the data sanitization technique.
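As a simple illustration of the substitution and NULL'ing-out techniques listed above, the following Python sketch overwrites sensitive fields with realistic-looking false data. The field names and replacement pool are hypothetical and serve only as an example; they are not part of the proposed optimal-key method.

```python
import random

# Hypothetical replacement pool for the substitution technique.
FAKE_NAMES = ["Alex Tan", "Sam Lee", "Jo Lim"]

def substitute_record(record):
    """Sanitize one record: substitute realistic false data, NULL out a score."""
    masked = dict(record)  # leave the original record untouched
    masked["name"] = random.choice(FAKE_NAMES)                     # substitution
    masked["phone"] = "01" + "".join(random.choice("0123456789")   # gibberish
                                     for _ in range(8))            # generation
    masked["diagnosis_score"] = None                               # NULL'ing out
    return masked

record = {"name": "Real Patient", "phone": "0123456789", "diagnosis_score": 87}
print(substitute_record(record))
```

The sanitized record keeps the same schema, so database testing and development can proceed without exposing the real values.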
However, the objectives of this study can be summarized as follows:
• First, to propose a data sanitization process.
• Secondly, to enhance an optimal key by considering the above issues, which is used in the data sanitization procedure for the security and privacy of ASD datasets.
• Finally, to compare the accuracy achieved by our optimal key with the accuracy of other existing security and privacy frameworks.
The paper is structured as follows: Section 2 analyzes the relevant works on the application of various encryption and decryption algorithms and techniques. In Section 3, we describe the methodology. Sections 4 and 5 demonstrate the experiments, results, and discussions, respectively, for ASD datasets, including possible solutions. Finally, we conclude this work in Section 6, along with the future direction.

Related Works
This section reviews and analyzes relevant works on sensitive data security and privacy, as well as concerns that must be addressed, and summarizes the features and challenges of various security and privacy models in Table 1.

Security and Privacy in Processing Medical Data
Mewada S et al. [5] used an artificial bee colony-based (ABC-based) model to create a privacy model for hiding sensitive information in medical data. The ABC-based model creates an optimal key for anonymizing sensitive information, and the same key was used for restoring the information. They also considered four threats, namely the known cipher attack (KCA), known-plaintext attack (KPA), chosen cipher attack (CCA), and chosen-plaintext attack (CPA), to validate the performance of their suggested approach.
Based on adaptive awareness probability with a meta-heuristic algorithm, the crow search algorithm [6] improved the data preservation method for medical data. The suggested framework uses data sanitization to mask sensitive rules. In comparison with other existing techniques, the suggested system was observed to offer rigorous and efficient results for the security of autism data.
To select and classify autism spectrum disorder (ASD), Rahman MM et al. [7] reviewed state-of-the-art articles. After reviewing the works, they emphasized data security and privacy in order to identify autistic features perfectly and quickly.
Data security and privacy are also significant concerns for the cloud computing environment, because this environment provides access to various data, files, and applications. Due to its advantages, the cloud is widely exploited in the healthcare sector. For example, in work [22], Alphonsa MMA et al. developed a secure model named GMGW to sanitize sensitive information of heart disease data based on the cloud system. In the same way, the authors in [24,25,28] developed security and privacy models individually by applying different algorithms to the cloud computing system for data security and compared the performance of their models with the conventional algorithms for more improvement.
Abidi MH et al. [26] established a secured data transmission model as Whale with New Crosspoint-based Update (WNU) in supply chain management along with blockchain technology. They also evaluated their model concerning four research issues, namely false rule generation (FR), the information preservation (IP) rate, the hiding failure (HF) rate, and the degree of modification (DM). Ochôa IS et al. [29] also applied blockchain technology to protect users' personal data by using three blockchains to confirm security, trust, and privacy in their architecture. They utilized sidechains for scalability and adaptability of their system.
Shailaja GK et al. [30] applied an optimal key in their proposed model for the privacypreserving data mining (PPDM) technique using an opposition intensity-based cuckoo search algorithm. They also assessed their model with FR, IP, HF, and DM.
In the works of [31-34], the authors built security and privacy models independently by applying various algorithms to cloud computing systems. They also compared the performance of their models with conventional approaches for enhancement.
Liu Y et al. [35] introduced a new reversible data hiding strategy based on the region of interest (ROI) in encrypted medical images. A data owner first divides an original diagnostic image into the region of interest (ROI) and the region of non-interest (RONI). The encryption key is then used to anonymize the images. The least significant bits (LSB) of the encrypted ROI and the electronic patient record (EPR) are concatenated by a data hider, and the concatenated data are embedded into the encrypted image by the LSB substitution technique. With the data-hiding key, the receiver retrieves the embedded data from the encrypted medical image. If the recipient possesses only the encryption key, directly decrypting the encrypted medical image yields an image similar to the original. If the recipient holds both keys (the data-hiding key and the encryption key), the embedded data can be retrieved without any mistake, and consequently the ROI can be recovered without any flaws.

In a study by Zhang Y et al. [36], a Privacy-Aware Smart Health (PASH) access control system was established. The key ingredient of their system was a large-universe ciphertext-policy attribute-based encryption (CP-ABE) scheme with a partially hidden access policy. In the access policy of the encrypted s-health records (SHRs), the attribute values were hidden and only the attribute names were exposed; indeed, attribute values hold much more sensitive information than generic attribute names. Specifically, PASH conducted an effective SHR decryption test that involves a limited number of bilinear pairings. Moreover, the attribute universe could be infinitely large, and the public parameters were small and constant in size. From their analysis, they claimed that PASH was fully secure in the standard model.
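The LSB substitution technique referenced in [35] can be illustrated with a short Python sketch on toy grayscale pixel values. This is a minimal illustration of the general technique, not the authors' exact scheme; real schemes operate on encrypted image regions and use a data-hiding key.

```python
def embed_lsb(pixels, bits):
    """Embed a bit string into the least significant bits of pixel values."""
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | int(b)  # clear the LSB, then set it to the bit
    return out

def extract_lsb(pixels, n):
    """Read back the first n embedded bits."""
    return "".join(str(p & 1) for p in pixels[:n])

pixels = [200, 135, 18, 77, 90, 41, 250, 3]  # toy grayscale values
stego = embed_lsb(pixels, "1011")
print(extract_lsb(stego, 4))
```

Because only the least significant bit of each pixel changes, every pixel value differs from the original by at most one, which is why the decrypted image remains visually similar to the original.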
Sharma U et al. [37] recommended two parallelized methods called PGVIR and PHCR. These approaches were applied to the spark framework, which manipulates the data so that no sensitive data could be retrieved at the time of ensuring the utility of sanitized data. Taking the standard dataset through the experiment, they found that PGVIR was more scalable while PHCR ensured the dataset's quality. Sharma S et al. [38] suggested an approach that optimally reduced the side effect of the hiding process on non-sensitive data, provided a balance between knowledge and privacy, and successfully regulated the rapid increase in data volume.
Again, some recent studies by Lin Z et al. [39-41] emphasized secure data for multiple access given the availability of the Internet of Things (IoT). They utilized state-of-the-art technologies, such as unmanned aerial vehicles (UAVs), the beamforming (BF) approach, satellite-aerial integrated networks (SAIN), rate-splitting multiple access (RSMA), etc., for high data rates, lower latency, and reliable data exchange.
Although some research works [42-46] employed symmetry-adapted cutting-edge technologies for diagnosing different human disabilities and illnesses while maintaining accuracy and privacy without delays, there remains an imperative need for data security and privacy.

Features and Challenges of Privacy Preservation Models
It is noted from the comprehensive literature survey that a vast number of algorithms with advanced techniques have been developed for anonymization [47-50]. These algorithms can be described as single-objective, multi-objective, and restricted algorithms, and they aim to retain information or data that are sensitive.
However, none of these algorithms ensures the protection of knowledge as required for both usefulness and privacy, so an efficient anonymization model is needed to protect medical records. The latest trends have demonstrated that confidential knowledge or sensitive information is often protected by meta-heuristic algorithms, whose purpose is to produce an optimal key for the sanitization method. These algorithms have been shown to achieve better outcomes than conventional algorithms. Some studies used k-anonymity and query-based methods to fix privacy issues, but these techniques take a great deal of time as well as computational resources. Therefore, in this study, an attempt is made to establish an optimal key for protecting privacy using the PSO and GWO algorithms for the sanitization process.

Methodology and Architecture
The goal of this study was to come up with a potential solution or remedy to an issue. The problem addressed was that of yielding optimal keys using the characteristics of meta-heuristic algorithms. We compared many cutting-edge solutions to the problem in order to establish the ideal solution and, accordingly, identified a research gap regarding the formation of the optimal key in those state-of-the-art solutions. We pointed out some significant issues in the introductory section, wherein existing technologies have no definite resolution to the challenges in terms of security and privacy. Consequently, we addressed these critical issues by forming the optimal key in the proper way. To provide the solution to the problem, the following framework was utilized. Figure 1 presents the overall architecture of our proposed model, which ensures the security and privacy of autism data and maintains our expected performance.
In this framework, the dark orange arrows represent the sanitization process, which is the focus of this study, and the blue arrows denote the restoration process.
As a security and privacy concern, autism-related sensitive data protection was considered and implemented by means of a data sanitization technique. The major components of the overall architecture related to sanitization, which conceals the sensitive autism data, are described below.

Sanitization Process
The procedure of the sanitization technique is illustrated in Figure 2. Here, the sanitized database D′ is obtained with the sanitizing key generated from the processed database during the key generation process. The resulting key matrix K2 and D denote the pruned key matrix and the processed database, respectively, which are binarized to fulfil the XOR function. The processed data D are obtained from the original database by using machine learning algorithms, so that no blank, missing, anonymous, or false data exist. Following this binary XOR operation, the chance of obtaining '0' is high, and such zero values yield insignificant data elements. To avoid such zero values, a unit value (one) is added, where the + (plus) sign refers to binary summation. Then, a unit step input is summed accordingly, and D′ is obtained, as shown in Equation (1).
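The step described by Equation (1) can be sketched in Python as follows. The toy matrices D and K2 and the thresholding rule used for binarization are assumptions for illustration only; the paper's actual key matrix comes from the Enhanced Combined PSO-GWO key generation process.

```python
import numpy as np

# Toy sketch of Equation (1): binarize, XOR with the pruned key matrix, add one.
D  = np.array([[5, 0, 10], [10, 5, 0]])   # processed database (toy values)
K2 = np.array([[5, 5, 10], [0, 5, 10]])   # pruned key matrix (same shape, toy)

D_bin  = (D  > 0).astype(int)   # simple thresholding assumed as the binarization
K2_bin = (K2 > 0).astype(int)

# XOR can produce many zeros; adding the unit value avoids insignificant elements.
D_sanitized = np.bitwise_xor(D_bin, K2_bin) + 1
print(D_sanitized)
```

Every element of the result is at least one, matching the motivation given above for adding the unit value.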
Figure 3 demonstrates the key generation process for sanitization purposes. The optimal key is created with the help of the proposed Enhanced Combined PSO-GWO framework by setting the population of various keys indiscriminately.
It is followed by the sanitization process step, through which a sanitized database is obtained. Specifically, Figure 3 illustrates the key generation process for data sanitization and the restoration process. The proposed Enhanced Combined PSO-GWO algorithm is used at the key update step to obtain a better key and is performed within an iterative loop to reach a better solution. In the interim, the sanitized database is obtained through the sanitization process. Again, the processed database acquires an association rule and the objective functions C1, C2, and C3 are measured, respectively. Finally, the key value is updated continuously during this process until the termination measure is reached and the best-desired solution is generated. For this data sanitization process, a key is created optimally by the proposed Enhanced Combined PSO-GWO. The dimension of the chromosome is allotted depending on the value of L_CD. The value fixes the elements within ⌊0, max(D)⌋, where D refers to the processed initial database.

• Key Encoding
The usage of keys, K, for the sanitization procedure depends on the encoding of the proposed Enhanced Combined PSO-GWO algorithm. The optimization of the number of keys, ranging from key K1 to key KN, is controlled by the Enhanced Combined PSO-GWO algorithm, and as a result, the optimal key is obtained. The length of the key is assigned as L_CD in this case. Usually, the key length for sanitization is L_D; however, our key generation process needs L_CD, and the key transformation technique forms a key of length L_D using the Khatri-Rao product. A column-wise Kronecker product is known as the Khatri-Rao product [51].

• Key Transformation
Let us consider a database transaction, as presented in Table 2.

Table 2. Data transaction in the database.

Transactions    Data
The key K is converted by applying the Khatri-Rao product during the key transformation phase. This operation acts on two matrices of arbitrary size as a block matrix and is denoted by the operator ⊗. Initially, K is formed as K1 with matrix dimension [L_CD × Tmax]. For illustration, the recommended technique with K = [5, 0, 10] performs row-wise duplication and produces the key matrix K1 with dimension [L_CD × Tmax], as revealed in Equation (2), wherein the number of rows depends on L_CD and the number of columns is assigned depending on Tmax. Similarly, by applying the Khatri-Rao product K1 ⊗ K1, the key matrix K2 is achieved, whose dimension is [L_D × Tmax]; its size is trimmed to the initial database dimensions, as presented in Equation (3).
K1 enacts the key generation process depending on the Khatri-Rao approach and produces a matrix K2 [L_D × Tmax] of the same size as the initial database. Finally, the rule hiding method is applied to obtain the sanitized database D′ by concealing the sensitive data. In addition, binarization is performed on both the processed database and the key matrix. Consequently, the rule hiding operation is applied to the pruned, binarized key matrix, wherein the XOR function takes place with the binarized initial database of equivalent matrix size, the unit value is added, and the sanitized database is produced, as revealed in Equation (1), where K2 implies the pruned key matrix. Furthermore, both D prior to sanitization and D′ obtained from the sanitization process raise sensitive rules and association rules. In this way, Equation (1) is analyzed depending on the Khatri-Rao method, and the sanitized database D′ is reached.
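A minimal Python sketch of the column-wise Kronecker (Khatri-Rao) product used in the key transformation is given below; the toy key matrix K1 is hypothetical. SciPy also provides `scipy.linalg.khatri_rao` for the same operation.

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product: A (m, k) and B (n, k) give (m*n, k)."""
    assert A.shape[1] == B.shape[1], "operands must have the same column count"
    return np.vstack([np.kron(A[:, j], B[:, j]) for j in range(A.shape[1])]).T

# Toy K1: the row-wise duplication of the illustrative key K = [5, 0, 10].
K1 = np.array([[5, 0, 10],
               [5, 0, 10]])

K2 = khatri_rao(K1, K1)   # analogous to K1 ⊗ K1 in the text
print(K2.shape)
```

Each column of K2 is the Kronecker product of the corresponding columns of the operands, so a (2, 3) input yields a (4, 3) output; in the paper this grown matrix is then trimmed to the initial database dimensions.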

• Fitness Evaluation
The objective functions C1, C2, and C3 (the hiding failure rate C1, the information preservation rate C2, and the degree of modification C3) are assessed through Equations (4)-(6) after the sensitive rules and association rules of the original and sanitized databases have been generated. In Equation (4), fs and fm refer to the frequency of the sensitive itemset, where fs applies to the sanitized data and fm to the original data. Similarly, fns represents the non-sensitive itemset frequency with reference to the sanitized data, as shown in Equation (5). In Equation (6), the Euclidean distance is computed, where D is the original data and D′ is the sanitized data. Finally, the distance between individual itemsets of the sanitized and original data is represented by C4 in Equation (7). Moreover, f indicates the fitness function of the recommended method, whereas w1, w2, and w3 represent the weights of the respective cost functions C1, C2, and C3.
However, the functions C1, C2, and C3 are preferred to determine how efficiently the autism data are sanitized using the recommended Enhanced Combined PSO-GWO algorithm. For medical data, the objective function of the suggested technique is presented in Equation (8).
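A hedged sketch of the weighted objective in Equation (8) follows. The weight values and the exact cost definitions below are illustrative assumptions, not the paper's calibrated settings.

```python
def hiding_failure(fs_sanitized, fs_original):
    """C1: fraction of sensitive itemsets still discoverable after sanitization."""
    return fs_sanitized / fs_original if fs_original else 0.0

def information_preservation_loss(fns_sanitized, fns_original):
    """C2 (as a cost): loss of non-sensitive itemsets, lower is better."""
    return 1.0 - (fns_sanitized / fns_original if fns_original else 1.0)

def degree_of_modification(D, D_sanitized):
    """C3: Euclidean distance between original and sanitized data."""
    return sum((a - b) ** 2 for a, b in zip(D, D_sanitized)) ** 0.5

def fitness(c1, c2, c3, w1=0.4, w2=0.3, w3=0.3):
    """Weighted sum of the three costs, as in Equation (8); weights are assumed."""
    return w1 * c1 + w2 * c2 + w3 * c3

c1 = hiding_failure(2, 10)                         # 2 of 10 sensitive rules leak
c2 = information_preservation_loss(45, 50)         # 45 of 50 non-sensitive kept
c3 = degree_of_modification([1, 0, 1], [1, 1, 1])  # one element changed
print(fitness(c1, c2, c3))
```

The optimizer minimizes this scalar, trading off leakage (C1) against utility loss (C2) and distortion (C3).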

Both Traditional PSO and GWO Algorithms
In this section, we discuss the traditional PSO algorithm in Section 3.3.1 and the GWO algorithm in Section 3.3.2.

Traditional PSO Algorithm
In the PSO algorithm, there are three vectors: the x-vector, the p-vector, and the v-vector. The x-vector keeps track of the particle's present location in the search space, whereas the p-vector (pbest) identifies the position where the particle has discovered the best solution so far. The v-vector holds the particle's velocity, indicating where the particle will move in the following iteration. At the outset, the particles are shifted randomly in specified directions. A particle's orientation is adjusted gradually, so that it begins to move toward its own previous best location, after which it explores the surrounding area for better positions with respect to some fitness function, fit = Sm − S. Here, the location of the particle is given as → M ∈ Sm, and its velocity as → w. Initially, these two variables are picked at random and then updated repeatedly according to the two formulae shown in Equations (9) and (10). In this case, ω, a user-defined behavioral parameter, is the inertia weight, which regulates the amount of recurrence in the particle velocity. The particle's previous best position (pbest) is → q, and the swarm's previous best position (gbest) is → f; in this way, the particles implicitly interact with each other. These terms are weighted using the stochastic variables r1, r2 ∼ U(0, 1), while c1 and c2 are the acceleration constants. Regardless of fitness gains, the velocity is added to the particle's present position to propel it to the next place in the search space, as shown in Equation (10).
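The velocity and position updates described above (Equations (9) and (10)) can be sketched as follows; the parameter values and starting points are toy choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One PSO update: Equation (9) for velocity, Equation (10) for position."""
    r1, r2 = rng.random(x.shape), rng.random(x.shape)   # r1, r2 ~ U(0, 1)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v_new, v_new

x, v = np.zeros(3), np.zeros(3)
pbest = np.array([1.0, 1.0, 1.0])   # particle's best position so far
gbest = np.array([2.0, 2.0, 2.0])   # swarm's best position so far
x, v = pso_step(x, v, pbest, gbest)
print(x)
```

Since both attractors lie in the positive orthant, the particle moves toward them; in a full run this step is iterated while pbest and gbest are refreshed from the fitness function.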

Traditional GWO Algorithm
In the GWO algorithm, there are hierarchical search agents: level 1 (alpha), level 2 (beta), level 3 (delta), and level 4 (omega). When the grey wolves hunt their prey, the encircling behavior is expressed mathematically in Equations (11) and (12),
where u denotes the current iteration, and → H and → E are the coefficient vectors. Grey wolves possess a unique skill for detecting the position of prey and encircling it. These hunting actions are mathematically reproduced by utilizing the alpha, beta, and delta wolves' enhanced awareness of probable prey locations. The first three best solutions are retained, and the remaining agents update their positions with respect to them. The corresponding Equations (13)-(15) are provided below:
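The encircling and hunting updates of Equations (11)-(15) can be sketched in the standard GWO form below; the leader positions and the parameter a (which normally decays from 2 to 0 over the iterations) are toy values.

```python
import numpy as np

rng = np.random.default_rng(1)

def gwo_step(X, alpha, beta, delta, a):
    """One GWO position update driven by the alpha, beta, and delta wolves."""
    def move_toward(leader):
        A = 2 * a * rng.random(X.shape) - a   # coefficient vector (→H analogue)
        C = 2 * rng.random(X.shape)           # coefficient vector (→E analogue)
        D = np.abs(C * leader - X)            # encircling distance
        return leader - A * D                 # candidate position near the leader
    # Average the three candidates given by the three best solutions.
    return (move_toward(alpha) + move_toward(beta) + move_toward(delta)) / 3.0

X = np.array([5.0, -3.0])                 # current search agent (toy)
alpha = np.array([1.0, 1.0])              # best solution so far
beta  = np.array([1.2, 0.8])              # second best
delta = np.array([0.9, 1.1])              # third best
X_next = gwo_step(X, alpha, beta, delta, a=1.0)
print(X_next)
```

As a shrinks toward zero, |A| shrinks as well, so the agents converge on the region enclosed by the three leaders (exploitation) after an initial exploratory phase.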

The Proposed Enhanced Combined PSO-GWO Algorithm
Despite their good performance, traditional algorithms can be enhanced to address their limitations. The traditional PSO algorithm demonstrates a few weaknesses, such as lower performance over a wide range of fields. The GWO algorithm also has a few drawbacks: poorer local search capability, slower convergence, and lower solving precision. Consequently, further analysis is required to improve robustness and integration.
This study implements a new hybrid algorithm to solve those issues. The proposed Enhanced Combined PSO-GWO is elaborated as follows: the criteria of the PSO algorithm are incorporated into the GWO algorithm. In the suggested method, the mathematical model of encircling the prey is provided in Equations (11) and (12), while the mathematical model of the hunting method is shown in Equations (13)-(15). The updating of the location is the main reformation in the suggested model, and this update in our Enhanced Combined PSO-GWO model is shown in Equation (16), where → M refers to the velocity for the PSO location update, as demonstrated in Equations (9) and (10).
Again, c1 and c2 are constant acceleration coefficients in the traditional PSO algorithm, whereas in the suggested Enhanced Combined PSO-GWO model they vary over the values 0.1, 0.3, 0.5, 0.7, and 1. The optimal key selection based on PSO-GWO is presented in Algorithm 1.
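A hedged sketch of the hybrid location update follows. The exact way Equation (16) blends the GWO hunting estimate with the PSO velocity term is not reproduced here; the blending form, parameter values, and leader positions below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def hybrid_step(X, V, alpha, beta, delta, pbest, a, w=0.7, c1=0.5, c2=0.5):
    """One illustrative Enhanced Combined PSO-GWO style update (assumed form)."""
    def move_toward(leader):
        A = 2 * a * rng.random(X.shape) - a
        C = 2 * rng.random(X.shape)
        return leader - A * np.abs(C * leader - X)   # GWO encircling/hunting

    gwo_target = (move_toward(alpha) + move_toward(beta) + move_toward(delta)) / 3.0
    r1, r2 = rng.random(X.shape), rng.random(X.shape)
    # PSO-style velocity, with the GWO hunting estimate playing the gbest role.
    V_new = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gwo_target - X)
    return gwo_target + V_new, V_new

X, V = np.zeros(2), np.zeros(2)
alpha = np.array([1.0, 1.0])
beta  = np.array([0.9, 1.1])
delta = np.array([1.1, 0.9])
X, V = hybrid_step(X, V, alpha, beta, delta, pbest=np.array([0.5, 0.5]), a=1.0)
print(X)
```

The intent of the hybrid, as the text describes, is that the PSO velocity memory counteracts GWO's slower convergence while the wolf hierarchy steers the swarm away from poor local optima.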

Algorithm 1: Optimal Key Selection through Enhanced Combined PSO-GWO.
Mj is the grey wolf population, where j = 1, 2, ..., N. Here, Mα, Mβ, and Mδ denote the best, second-best, and third-best search agents, respectively. Moreover, e denotes the components, and H and E are coefficients. The goal of this algorithm is to output the best search agent, Mα.

Experiment and Analysis
This section explains the implementation of our proposed method and types of autism datasets with sources and the compared traditional algorithms in Section 4.1. We also show our proposed methods' simulation performances compared to those conventional algorithms against various attacks in Section 4.2.

Configuration for Experiment
The proposed method was developed using the Python programming language. The autism datasets were collected from the Faculty of Education, Universiti Kebangsaan Malaysia, and comprise data from autistic children of different age groups. These include the autism child dataset at 24 months with 26 attributes and 209 instances, the autism child dataset at 30 months with 29 attributes and 209 instances, the autism child dataset at 36 months with 31 attributes and 234 instances, and the autism child dataset at 48 months with 33 attributes and 302 instances. All datasets are autism diagnostic data with three scoring options: z = 0, v = 5, and x = 10. The cut-off values differ for each dataset, at 71, 95, 100, and 105, respectively. The performance of the proposed framework was compared with existing conventional algorithms such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), the Crow Search Algorithm (CSA), Differential Evolution (DE), and Adaptive Awareness Probability-based CSA (AAP-CSA).
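The datasets' scoring scheme can be illustrated as follows; the answer lists are hypothetical, and the direction of the cut-off comparison (total at or above the cut-off) is an assumption made only for this sketch.

```python
# Scoring options stated for the datasets: z = 0, v = 5, x = 10.
SCORES = {"z": 0, "v": 5, "x": 10}
# Age-specific cut-off values stated for the 24/30/36/48 month datasets.
CUTOFFS = {24: 71, 30: 95, 36: 100, 48: 105}

def total_score(answers):
    """Sum the item scores of one child's answers."""
    return sum(SCORES[a] for a in answers)

def meets_cutoff(answers, age_months):
    """Assumed comparison: the total meets or exceeds the age-specific cut-off."""
    return total_score(answers) >= CUTOFFS[age_months]

answers = ["x"] * 7 + ["v"] * 2   # hypothetical responses: 7*10 + 2*5 = 80
print(total_score(answers), meets_cutoff(answers, 24))
```

This is only a reading aid for the dataset description; the actual diagnostic interpretation of the cut-offs belongs to the source instruments.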

Results and Discussions
Among the different sorts of attacks, KCA and KPA were investigated first and compared with the other traditional algorithms, as revealed in Figure 4. From the simulation, the KCA attack over the proposed method is 0.44% superior to PSO and GA and 0.43% more beneficial than DE and CSA in Figure 4a. Again, the KPA attack on the proposed method is 0.36% and 0.01% improved over PSO and AAP-CSA, respectively, as well as 0.31% enhanced in comparison with the remaining GA, DE, and CSA algorithms in Figure 4b. The overall results are shown in Table 3. The CCA and CPA attacks were then analyzed for each dataset, as shown in Figures 5-8. Figure 5a shows that the proposed approach achieves an improvement of 0.37% and 0.33% compared to PSO and GA, respectively, and is 0.32% more beneficial than the DE and CSA algorithms, using the autism at 24 months dataset in terms of a CCA attack. For the CPA analysis, our proposed scheme is 0.34%, 0.32%, 0.31%, 0.30%, and 0.10% more effective than PSO, GA, DE, CSA, and AAP-CSA, respectively, as shown in Figure 5b. The total outcomes are summarized in Table 4. For the autism at 30 months dataset, our method, in terms of the CCA attack, is 0.03% better than AAP-CSA and 0.18% superior to all other typical algorithms, as shown in Figure 6a. Similarly, for the CPA attack, it is 0.20% superior to PSO, GA, DE, and CSA, as shown in Figure 6b. Table 5 displays the results discussed above. The CCA attack on the autism at 36 months dataset is 3.30% better than PSO, GA, and DE, 3.10% superior to CSA, and 2.80% better than AAP-CSA, as illustrated in Figure 7a. In addition, our method for the CPA analysis on the 36 months autism dataset is 1.10% more improved than PSO, 1% better than GA and DE, 0.80% superior to the CSA algorithm, and 0.20% better than AAP-CSA, as shown in Figure 7b. The performances are depicted in Table 6. In the case of the autism at 48 months dataset, the CCA simulation for our scheme is 0.26%, 0.23%, 0.22%, and 0.18% better than the PSO, GA, DE, and CSA algorithms, accordingly, as illustrated in Figure 8a.
In Figure 8b, the CPA attack on the autism at 48 months dataset is 0.40% better than PSO, 0.29% superior to GA, 0.28% higher than DE and CSA, and 0.10% better than AAP-CSA. The overall results are shown in Table 7. Thus, the simulation demonstrates that our proposed information security technique performed better than the existing conventional algorithms against these attacks, revealing that our sanitizing approach is more effective and efficient than other existing traditional algorithms.
Due to the fact that sensitive diagnostic data of autism are critical for determining whether an individual is autistic or not, protecting this type of data is critical, which has greater applicability in the healthcare sector. Evidence produced by this study showed that our proposed sanitizing approach protects these data better than existing algorithms against certain attacks. It is, however, suggested that our recommended approach can be widely applied to the healthcare sector for data security and privacy.

Conclusions
The security and privacy of the autism dataset through the sanitizing technique were investigated in this study. The emphasis of this method was to conceal the sensitive data of patients. Specifically, an optimal key was produced for concealing the sensitive data, selected by the proposed Enhanced Combined PSO-GWO framework, which resolved the problems mentioned in the Introduction. Furthermore, the results obtained by our recommended model were compared with existing traditional algorithms for justification. Our suggested technique was tested against different attacks and compared with existing traditional algorithms, and the expected outcomes were achieved, according to the experimental review. Our proposed technique, for the autism at 24 months dataset in terms of the CCA attack, is 0.37% and 0.33% better than the PSO and GA algorithms, respectively, and 0.32% better than DE and CSA individually. Additionally, the suggested approach, in the case of the CPA attack, shows 0.20% more improvement compared to the PSO, GA, DE, and CSA algorithms for the autism at 30 months dataset. For the autism at 36 months dataset, the simulation result of the proposed technique with CCA attacks is 3.30% more improved than PSO, GA, and DE, 3.10% better than CSA, and 2.80% superior to the AAP-CSA algorithm. Finally, in terms of the CPA attack on the autism at 48 months dataset, our technique is 0.40%, 0.29%, 0.28%, and 0.28% superior to PSO, GA, DE, and CSA, respectively, and 0.10% better than AAP-CSA. Therefore, the analyzed results reveal that our proposed enhanced technique is more effective and efficient compared with the present conventional algorithms.
Our future research will focus on improving a restoration technique in which the optimal key will be used in information security, specifically for the security and privacy of autism data.