Adaptive Autonomous Protocol for Secured Remote Healthcare Using Fully Homomorphic Encryption (AutoPro-RHC)

The outreach of healthcare services to remote areas with affected populations is a challenge. Fortunately, remote health monitoring (RHM) has improved hospital service quality and has proved its sustainable growth. However, the absence of security may breach the Health Insurance Portability and Accountability Act (HIPAA), which has an exclusive set of rules for the privacy of medical data. Therefore, the goal of this work is to design and implement the adaptive Autonomous Protocol (AutoPro) on the patient's remote healthcare (RHC) monitoring data for the hospital using fully homomorphic encryption (FHE). The aim is to perform adaptive autonomous FHE computations on recent RHM data to provide health status reporting while maintaining the confidentiality of every patient. The autonomous protocol works independently within the group of prime hospital servers, without dependency on a third-party system. The adaptiveness of the protocol modes is based on the patient's affected level: slight, medium, or severe. Related applications include glucose monitoring for diabetes, digital blood pressure for stroke, pulse oximetry for COVID-19, electrocardiogram (ECG) for cardiac arrest, etc. The design for this work consists of an autonomous protocol, hospital servers combining multiple prime/local hospitals, and an algorithm based on the fast fully homomorphic encryption over the torus (TFHE) library with a ring-variant of the Gentry, Sahai, and Waters (GSW) scheme. The concrete-ML model used within this work is trained using an open heart disease dataset from the UCI machine learning repository. Preprocessing is performed to recover the lost and incomplete data in the dataset. The concrete-ML model is evaluated both on a workstation and on a cloud server. Also, the FHE protocol is implemented on the AWS cloud network with performance details.
The advantages entail providing confidentiality for the patient's data/report while saving travel and waiting time for hospital services. The patient's data remain completely confidential, and the patient can receive emergency services immediately. The FHE results show that the highest accuracy is achieved by support vector classification (SVC) at 88% and linear regression (LR) at 86%, with areas under the curve (AUC) of 91% and 90%, respectively. Ultimately, the FHE-based protocol presents a novel system that is successfully demonstrated on the cloud network.


Introduction
The healthcare system is one of the high-priority factors for a country's progress. Recently, the outburst of many contagious chronic diseases has highly affected major economies all around the world. Therefore, the use of IoT remote healthcare devices, also known as smart healthcare, has been beneficial to the population of all age groups without a need to physically attend the hospital and visit emergency services [1].
However, the popularity of using various body-worn IoT devices has increased the risk of confidentiality and integrity loss, which is risky for the social well-being of the patient [1,2]. To overcome such challenges, fully homomorphic encryption (FHE) is introduced in the medical healthcare context to provide secured healthcare services for improved healthcare service quality and contribution towards society [3]. The patient, as the end-user, will be freed from such an FHE system installation, as it will be implemented on the middleware system/cloud. The purpose of FHE is to perform analytical functions over the encrypted data and provide solutions for the query request by the authorized user. The scope of this work is to ascertain the prediction of critical conditions or health issues within the remote patient equipped with healthcare sensors, to be evaluated by the doctors in the hospital. The motivation stems from the question "How can the affected remote patient's data confidentiality be preserved for HIPAA compliance?" [4]. A similar problem is discussed by Sendhil R. and Amuthon A.
for privacy preservation in healthcare data exchange, which is further attempted to be implemented on the cloud but suffers major drawbacks [4][5][6]. Therefore, we have designed an FHE protocol that is demonstrated to be working using cloud computation as well as on a local workstation system. The recent FHE research works in the medical field are compared with our proposed AutoPro healthcare protocol, as given in Table 1. The FHE algorithms used in the Table 1 comparison are different and include DCNN, K-means, Gorti's/Carmichael's scheme, simple FHE, and TFHE. The detailed objectives set for this work before the design and implementation are given below. The Health Insurance Portability and Accountability Act (HIPAA) is used to protect the patient's health data sensitivity and consent-based disclosure. The HIPAA privacy rule is implemented as the protected health information (PHI), and the security rule is its subset of information. Communication with the patient during information transmission consists of a subset of protected information, or electronic PHI (e-PHI).
The HIPAA security rule covers (a) the information, by ensuring confidentiality, integrity, and availability; (b) knowledge about securing the e-PHI data; (c) avoidance of impermissible use; and (d) workforce compliance co-operation. Therefore, AutoPro-RHC helps to serve by ensuring the security rule of HIPAA compliance. The AutoPro-RHC objective of designing a novel secure communication protocol using a federated system, which processes only FHE data, is aligned with HIPAA compliance.

1. Construction of a pre-processing algorithm for the open dataset.
Use of an open dataset shows the adaptability of the designed algorithm to international standards. Applying a pre-processing algorithm can overcome the missing data and incomplete data problems. The selection of necessary features is important to provide efficient results and reduce the overhead on computational costs. The FHE algorithm is designed to process the encrypted data and evaluate the necessary features as per the medical examiner's requirements. FHE is known for preserving the confidentiality and integrity of encrypted data, thus making it an ethical evaluator under HIPAA compliance and GDPR regulations [1]. Nevertheless, the designed system must be efficient on the target platform for the lightweight scheme to be utilized;
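A minimal sketch of such a pre-processing step (the exact routine is not specified here) might impute missing values and min-max normalize each feature into [0, 1], the range used later for the FHE training; the toy columns below are illustrative placeholders for UCI heart disease features.

```python
# Hedged sketch: impute missing values with the column mean, then
# min-max normalize each feature into [0, 1]. Pure Python; a real
# pipeline would more likely use pandas/scikit-learn.

def preprocess(rows):
    """rows: list of feature lists; None marks a missing entry."""
    cols = list(zip(*rows))
    cleaned = []
    for col in cols:
        known = [v for v in col if v is not None]
        mean = sum(known) / len(known)           # mean imputation
        filled = [mean if v is None else v for v in col]
        lo, hi = min(filled), max(filled)
        span = (hi - lo) or 1.0                  # guard constant columns
        cleaned.append([(v - lo) / span for v in filled])
    return [list(r) for r in zip(*cleaned)]

# Toy data: age, resting blood pressure, cholesterol (one value missing).
data = [[63, 145, 233], [41, None, 204], [56, 130, 236]]
out = preprocess(data)
```

After this step every feature lies in [0, 1] and the imputed blood pressure entry sits at the column mean (here 137.5, normalized to 0.5).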

2. Design of a novel secure communication protocol for autonomous servers.
The problem of coordination amongst different prime and local hospital servers is solved by the adaptive autonomous protocol. The AutoPro is designed to effectively coordinate between prime hospitals to execute FHE computations by utilizing the local hospitals' patient data from the group of authorized hospitals. The test set of patients is taken in encrypted form, and the results are communicated in return after approval by the respective hospital's medical examiner. The model is evaluated on both platforms, the workstation as well as cloud instances, as a multi-party communication protocol. Therefore, the private datasets and FHE models of the prime hospitals are confidential within the network.

3. Demonstration of the working of the AutoPro-RHC protocol in cloud computing.
The AutoPro eliminates the dependency on the third-party systems of an authentication server (AS) and a ticket-granting server (TGS). Also, the FHE system is designed to provide homomorphic machine learning-based evaluation with the patient's data end-to-end encrypted; no private key sharing is required within this protocol. Successively, the AutoPro job scheduler dynamically adapts to the severity of the patient's condition in the AWS cloud. The flowchart given provides the stepwise process performed by this protocol in the cloud. The protocol steps are also given exclusively for detailed analysis. Whereas, the two algorithms provide the federated system operations and utilize FHE functions for the remote healthcare protocol.

4. Evaluation of the performance of multiple datasets and different FHE-ML algorithms.
AutoPro-RHC experiments use three different open heart disease datasets (Cleveland, Hungary, and V.A.) as well as four FHE-ML algorithms: linear regression, support vector machines (SVM), eXtreme gradient boosting (XGB), and a decision tree. In the beginning, the three different datasets are used for training on three different prime servers. Successively, the patient's record is evaluated by the FHE-ML algorithm-based prime servers, and the results are later shared with the medical examiner for approval. Ultimately, the implementation of machine learning and FHE-ML on workstations and the cloud for comparison provides a complete evaluation analysis. Thus, to determine optimal performance, the AutoPro-RHC protocol is evaluated on different open datasets and demonstrated.
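How the per-server predictions are grouped into one report is not spelled out above; one plausible sketch is a majority vote across the three prime servers. The server names follow the three datasets, while the vote rule itself is an assumption.

```python
# Hedged sketch: combine 0/1 heart-disease predictions from the three
# prime servers (each trained on a different open dataset) into a single
# grouped result by majority vote. The vote rule is an assumption.

from collections import Counter

def group_results(predictions):
    """predictions: dict mapping prime server name -> 0/1 label."""
    votes = Counter(predictions.values())
    label, count = votes.most_common(1)[0]
    return {"label": label, "agreement": count / len(predictions)}

report = group_results({"Cleveland": 1, "Hungary": 1, "VA": 0})
```

Here `report["label"]` is the majority prediction and `report["agreement"]` records how strongly the prime servers concur, which a medical examiner could weigh during approval.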

Applications
Remote patients constitute a major population outside hospital proximity, each usually affected by a different category of disease. Patients affected with single or multiple diseases can be monitored regularly for checkups with the respective sensor devices. Therefore, as a mobile device with a wrist sensor is used in this work for FHE-based heart disease evaluation, similar devices can be utilized for the respective diseases:
1. Glucometer for Diabetes Monitoring: Diabetes, one of the chronic health conditions, is characterized by excessive sugar in the blood due to the human body's incapability to produce insulin.

Literature Survey
An IoT healthcare device used for monitoring patients' health is secured by somewhat homomorphic encryption, as presented by V. Subramaniyaswamy et al. [11]. A smartwatch captures the user's health information, which is encrypted with a dynamic key; the permuted data are then processed with block encryption as a homomorphic function before being stored on the cloud and later evaluated for performance. A scalable homomorphic encryption algorithm for cancer-type prediction is demonstrated by E. Sarkar et al. [12]. The genetic information dataset is used to predict cancer type from a novel logistic regression model using fast matrix multiplication for high-dimensional features. Inference on encrypted data with privacy preserving is presented by S. Sayyad et al. [13]. This work uses a simple neural network with the HELib algorithm on the MNIST and heart disease datasets for evaluation. Privacy-preserving CNN models with BFV homomorphic encryption are demonstrated by F. Wibawa et al. [14]. A secure multi-party protocol is used for deep learning model protection at each client/hospital, which collaborates by federated learning and evaluates by aggregating servers. A medical resource-constrained mobile device used for private decision tree-based disease detection is presented by S. Alex et al. [15]. An energy-efficient FHE-compatible Rivest scheme is proposed that works within the user edge device and in the cloud for the homomorphic operations. A secure two-party computation for cancer prediction using homomorphic encryption is demonstrated by Y. Son et al. [16]. A gated recurrent unit (GRU) model is used to secure and compute over the encrypted data for homomorphic encryption to predict end-to-end recurrence. Privacy preservation for precision medicine using machine learning is presented by W. Briguglio et al. [17]. A machine learning encryption framework is proposed to work between client-server models with genomics datasets. A multiple-feature pre-processing classifier is used with three different HE-compatible classifiers. A genetic algorithm (GA) for augmented ensemble learning for FHE is demonstrated by D. Dumbere et al. [18]. The designed model consists of configuration settings, the evaluation of the best configuration by GA, a training set for machine learning, a classifier pool of different CNN models, instance matching, and FHE evaluation for encrypted email spam filtering. Detection of COVID-19 by FHE in a federated system is presented by F. Wibawa et al. [19]. A secure multi-party computation protocol is used in a centralized federated system for protecting the aggregation of an encrypted CNN model weight matrix for the patient's personal medical data.
Sharing informatics for integrating biology and the bedside (I2B2) aggregate medical data, secured by FHE in the cloud, is demonstrated by J. Raisaro et al. [20]. The model consists of a public-key-encrypted I2B2 dataset stored on a cloud server in encrypted form and a shared key in the proxy server with an exclusive crypto engine, interacting with the client app. The database extract, transform, and load (ETL) concepts are used for interactions in this process. An FHE-secured query system for a medicine side effect search is presented by Y. Jiang et al. [21]. In the case of an un-trustable cloud server, a client-side server as middleware is added, which keeps the public keys, takes queries from the client terminal, encrypts them, and performs transactions on the cloud database server. The server helps to perform the medicine side effect search using FHE. Secure medical cloud computing using FHE is demonstrated by O. Kocabas et al. [22]. The patient's cardiac health medical data for long QT syndrome (LQTS) detection are stored on the cloud in encrypted form and are evaluated by a medical examiner with HElib for the purpose of remote health monitoring. Securing deep neural network models by privacy-preserving machine learning (PPML) with FHE is presented by JW Lee et al. [23]. A ResNet-20 model with RNS-CKKS in FHE is implemented with bootstrapping on the CIFAR-10 dataset, using approximate methods to evaluate the non-arithmetic functions used with ReLU and softmax. Securing deep convolutional networks (DCN) with FHE is demonstrated by S. Meftah et al. [24]. A new DCN model with low depth and batched neurons is designed to be utilized by FHE for better performance. Multi-party computation by machine learning for MNIST data analyzed with privacy preserving is presented by T. Li et al. [25]. A non-interactive system with a multi-layer perceptron model and security protocols is presented with secure multi-party schemes to reduce calculations. A multi-key HE (MKHE) system for detecting disease-causing genes is demonstrated by T. Zhou et al. [26]. MKHE is combined with an encrypted pathogenic gene location function for operating on two location circuits, namely threshold (TH)-intersection and top gene position, as fixed parameters (Top-q) to locate polygenic diseases. Federated learning-based PPML with FHE is presented by H. Fang et al. [27]. The encrypted gradients are passed by the multiple parties to be combined in the federated learning model with partial homomorphic encryption. Thus, a federated MLP is implemented to compute backpropagation with gradient exchange. Federated analytics for multiparty homomorphic encryption (FAMHE) in precision medicine is demonstrated by D. Froelicher et al. [28]. FAMHE secures distributed datasets while including Kaplan-Meier survival analysis and medical genome evaluation. FHE with full-domain functional bootstrapping is presented by K. Klucznaik et al. [29]. A Regev encryption is used to compute affine functions, which drastically reduces errors and performs FHE additions and scalar multiplications with high efficiency. Securing enterprises with a multi-tenancy environment is demonstrated by P. Dhiman et al. [30]. This work presents an enhanced homomorphic cryptosystem (EHC), which works with the BGV scheme for key and token generation and is implemented in an enterprise environment having a token, private, hybrid, and public-key environment. This FHE literature survey analysis is summarized in Figure 1 for the different homomorphic encryption types, AI algorithm integration, communication protocols used, key distribution strategies, and applications.

The Survey Limitations Are Given as Follows:
• The complete mathematical model is not represented in most of the recent work;
• The FHE algorithm and libraries used are not specified in the implementation section;
• The details of every FHE algorithm hyper-parameter tuning are not disclosed;
• A comparison of machine learning and concrete-ML experiments with different open datasets and algorithms used in the protocol is not provided.



Materials and Methods
Figure 2 presents the federated model designed for the implementation of secured remote monitoring (SRM). The federated model can be explained in four parts: cloud learning, prime institutions with private data training, child institutions for test set submission, and users consisting of the patients and medical examiners. At the start, the patient shares their encrypted remote sensor data with the AutoPro network. Next, in the AutoPro network at layer 1, the hospital's cloud portal with storage and computation is present, which is used to collect queries as prime and from local hospital servers. Successively, the queries are resolved with the support of the prime hospital servers, and the results are returned to the requesting hospital by the prime server. The prime hospital servers consist of their private servers with the FHE algorithm.
The FHE algorithm is trained on the private dataset of the respective national/international hospital at a regular interval, i.e., weekly/monthly, and then serves responses to queries. In layer 2, local hospitals can only submit queries to receive the results from the expert opinion formed collectively. Here, the dataset is not standardized due to the lack of policy and standardization procedures. Ultimately, from the output forwarded by the source prime/local hospital, the respective medical examiner can receive the patient's current health conditions and can share the approved medical report with them.
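The layer-1 routing rule sketched below is a hedged reading of this description: a query originating at a prime hospital is forwarded to every other prime server (the requester already evaluates its own copy), while a local hospital's query goes to all prime servers. Server names are illustrative.

```python
# Hedged sketch of layer-1 query routing in the federated model.
# PS1..PS3 stand in for the prime hospital servers; any other name
# is treated as a layer-2 local hospital.

PRIME_SERVERS = ["PS1", "PS2", "PS3"]

def route_query(requester, prime_servers=PRIME_SERVERS):
    if requester in prime_servers:
        # the requesting prime evaluates its own copy locally,
        # so only the *other* primes receive the forwarded query
        return [ps for ps in prime_servers if ps != requester]
    # local hospitals can only submit queries; all primes respond
    return list(prime_servers)

assert route_query("PS2") == ["PS1", "PS3"]
assert route_query("LocalClinic") == ["PS1", "PS2", "PS3"]
```

This mirrors steps 20-22 of Algorithm 1, where a prime server's own request is excluded from the fan-out because it is already processing it.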
Figure 3 presents the flowchart for the AutoPro-RHC process. At the start, the sensor device initiates the medical reporting of a remote patient at a given interval of time. If the data are corrupted, the process is terminated. Otherwise, the remote patient's data are encrypted and sent to the respective local/prime hospital where the remote patient is registered. If the remote patient is registered at a local hospital, then their encrypted data are immediately forwarded to the federated cloud server. Otherwise, the prime hospital first sends the remote patient's encrypted data to the cloud and starts FHE computation based on the training of the current prime hospital's data. If the cloud service queue is full due to many requests for compute-intensive FHE computations, then the process has to wait until a service queue slot becomes available. Next, the cloud will send a copy of the patient's encrypted data to all prime hospital servers to perform FHE computation and provide a prediction based on their respective data. Later, the results are returned by all the prime hospitals to the federated cloud server, which performs grouping of the results; the grouped results are then returned to the requesting patient. The patient will later consent and share the results with the hospital's medical examiner, who will approve and return the final report to the patient.
In Figure 4, the AutoPro job scheduler decides the scheduling of the patient's data evaluation based on the recent heart conditions. The mode value is selected by first obtaining the highest value from the TestSet data submission of the patient; then, based on age, the heart rate can be classified as severe, moderate, or slightly affected for modes 1, 2, and 3, respectively. Subsequently, in the processed version of the UCI dataset, the disease presence is already given in the feature column of the severity level for existing heart disease.
In the case of Mode = 1, critical care is reserved for the patients with severe heart conditions. Thus, severe heart conditions can increase the mode value to 1 and can schedule the results evaluation time to the hospital's alert/notice. Therefore, the scheduler used by the AutoPro network is for parallel processing by all of the prime hospital servers, as shown in Figure 4a. Successively, Mode = 2 with special care is reserved for the patients with mildly affected heart conditions. Thus, the scheduler used by the AutoPro network is the relay-based scheduler, as shown in Figure 4b, which forwards the patients' data as a TestSet with results to the successive hospital prime server. Similarly, Mode = 3 with general care is allotted for the patients with slightly affected heart conditions. The scheduler used by the AutoPro network is the buffer-based time synchronization, where a buffer with less than or equal to 10 patients' TestSet [10] is forwarded every 1~5 min to the successive server for processing, as shown in Figure 4c. Ultimately, all the results are gathered back at the initiating prime server.
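The mode selection and the three dispatch strategies can be sketched as follows. The severity-to-mode mapping keys and server names are illustrative assumptions; the parallel, relay, and buffered strategies follow Figures 4a-c.

```python
# Hedged sketch of the AutoPro job scheduler's three modes.

def select_mode(severity):
    """severity label -> mode: 1 = severe, 2 = moderate, 3 = slight."""
    return {"severe": 1, "moderate": 2, "slight": 3}[severity]

def dispatch(mode, test_set, servers):
    if mode == 1:
        # critical care: fan out to all prime servers in parallel (Fig. 4a)
        return {s: list(test_set) for s in servers}
    if mode == 2:
        # special care: relay the TestSet from server to server (Fig. 4b)
        return {"relay_order": list(servers), "payload": list(test_set)}
    # general care: buffer of at most 10 patients' TestSet, forwarded
    # every 1~5 min to the successive server (Fig. 4c)
    batches = [test_set[i:i + 10] for i in range(0, len(test_set), 10)]
    return {"buffered_batches": batches}

plan = dispatch(select_mode("slight"), list(range(23)), ["PS1", "PS2"])
```

For 23 slightly affected patients this yields three buffered batches (10, 10, and 3 records), while a severe case would instead fan out to every prime server at once.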

Protocol and Flowchart Design
The FHE protocol process can be given as communication between the remote patient, the federated network, and the medical examiner:
1. The remote patient initiates the process by sending encrypted sensor data at time T to the respective prime/local hospital with the user's public key PK1. The prerequisite is that the patient needs to be registered within the respective hospital for the treatment;
2. The prime/local hospital uploads the encrypted data to the federated cloud server for processing by Algorithm 1 in the federated cloud. The modes for severe, mild, and slightly infected cases are given as Mode 1, 2, and 3, respectively. In the case of slightly infected cases, a batch of jobs is transferred in the form of a relay between the servers as TestSet[n]:
If (TestSet(Mode) == 1)
ElseIf (TestSet(Mode) == 2)

3. The AutoPro then shares the patient's encrypted data copy for evaluation with the respective trained FHE server model given in Algorithm 2. The encrypted results are then grouped by the receiving prime server and communicated back to the patient;
4. The patient decrypts the results with his private key SK1 and can inspect the confidential results;
5. If the remote patient forgets/delays sharing the results with the medical examiner, then they receive a reminder from the hospital server. The hospital prime/local server stores the patient's history and progress at regular intervals to remind them of the test reports:
For(∀ PendingPatients)
    SendReminder("Your results evaluation is pending.")
if(HospitalReminder)
    Alert("Your results evaluation is pending.")

6. The patients who are keen to have the report evaluated by the medical examiner, with or without a reminder from the hospital, send the message with their public key for identification, then encrypt the results with the hospital public key PK2 and forward it;
7. The medical examiner then evaluates the report by decryption using his private key SK2, averages the results, and provides a further prescription on the current patient's status;
8. The evaluation is then communicated back to the patient by encryption with their public key PK1;
9. Therefore, the patient is kept informed securely about their health status, with the report decrypted by SK1.
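The key handling in the steps above can be traced with a runnable sketch. The `encrypt`/`decrypt` functions here are trivial stand-ins (not real public-key or FHE encryption); only the roles of the patient key pair (PK1/SK1) and the hospital key pair (PK2/SK2) are illustrated.

```python
# Hedged sketch: placeholder cipher to trace which key encrypts/decrypts
# each message in the AutoPro-RHC protocol steps. NOT real cryptography.

def encrypt(public_key, message):
    return (public_key, message)              # stand-in ciphertext

def decrypt(private_key, ciphertext):
    key, message = ciphertext
    # a ciphertext under PKn opens only with the matching SKn
    assert key == private_key.replace("SK", "PK"), "wrong key pair"
    return message

# Steps 1-4: sensor data and results stay under the patient's keys
ct = encrypt("PK1", "sensor readings at time T")
results = decrypt("SK1", ct)                  # only the patient can read

# Steps 6-7: patient re-encrypts results for the medical examiner
ct2 = encrypt("PK2", results)
report = decrypt("SK2", ct2)                  # examiner reads and approves
```

Note that no private key ever crosses the network in this trace, which matches the protocol's claim that no private key sharing is required.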

Algorithm Description
Algorithm 1 presents the process for the federated system. This algorithm is executed on the cloud server to provide services to the patients and medical examiners using AutoPro-RHC. In step 1, the input taken is FederatedQueue, which determines the size of the cloud server queue assigned to every local/prime hospital for job requests. In step 2, the PrimeServer specifies the group of prime server addresses {PS1, PS2, ..., PSN} that are needed to execute the FHE algorithm on their private datasets. In step 3, the LocalServer is the group of approved hospital servers that can make requests for the TestSet evaluation by the cloud server. In step 4, the AuthorizedServer is given as all the servers, local/prime, involved within this model. In step 5, TestSet[n] is the group of test requests sent by an authorized hospital's server. In step 6, the output given by this algorithm is the ResultSet[], which is returned on successful processing of the job request by the authorized servers. In step 7, all the variables used within this algorithm are initialized to NULL. In step 8, the global variable is declared as NULL, which can be used consistently across the code structure. In step 9, the event needs to be assigned a value that can be either 0 for weekly training or 1 for monthly training of the prime servers for the FHE evaluation with recent training data. In step 10, the if condition checks whether the current day is equal to the event day. In step 11, if the condition is true, then the signal variable is compared with the prime server group to determine whether all the prime servers are trained for this event. In step 12, if the previous condition is true, then the message is printed that the prime servers are updated with training. In step 13, if the prime systems are considered to be in the training stage, then they are displayed with the message given in step 14. In step 15, all the prime servers are instructed for recent training by the remote procedure call, which is counted as true (1) in step 16 for every successful signal value. In step 17, the TestSet[] is received from all the authorized servers requesting federated cloud service. In step 18, the condition checks whether the TestSet[] has requests pending, and then in step 19, whether the FederatedQueue size allotted to that particular authorized server is full or not is checked. In step 20, if the condition is true, then whether the TestSet[] belongs to the prime server's request is checked; then, except for that prime server, the request is forwarded to the other prime servers, as it is already processing it. Next, the results of the other prime servers' FHE computations are processed in step 21. Whereas, the results of the requesting prime server are added after availability in step 22. In step 23, the TestSet[] is forwarded to all the prime servers for FHE computation and stored in the ResultSet[] in step 24. In step 25, if the federated queue is full for that respective prime server, then it is displayed for the request stage in the wait queue of the cloud server in step 26. Ultimately, in step 27, the TestSet[] is returned to the respective authorized servers.
Algorithm 2 for RHM-FHE consists of the following steps: Step 1 is an input of RemoteDataset[] as the open dataset used for training the FHE algorithm with heart disease. In step 2, Labels[] are the shortlisted features that will be used by the FHE algorithm as an input. In step 3, the TestSet[] is the recently collected remote patient's health data for the purpose of health status evaluation. In step 4, the output given by this algorithm is candidateSet[], which contains the prediction results of the different FHE algorithms using a decision tree, logistic regression, SVM, and XGBoost. In step 5, the FHE algorithm time specifies the execution time required for the algorithm on the respective platform. In step 6, all the variables are initialized to NULL. In step 7, the remote dataset is divided into two parts, DTrain and DTest, by an 80-20 ratio with the selected labels, respectively. In steps 8-9, DTrain and DTest are normalized with the min-max normalization method for preprocessing of the data within the range [0, 1], respectively. In step 10, the FHE algorithm is trained with DProcessedTrain and DProcessedTest by using the fast fully homomorphic encryption over the torus (TFHE) library with a ring-variant of GSW [31,32]. In step 11, the DQueue stores the 10 patients' test data TestSet received from the cloud server. Next, in step 12, if the DQueue data received are consistent, then a message is printed that the received data are consistent and ready to be processed in step 13. In step 14, due to received corrupted data, the respective message is printed, and the algorithm is terminated by the exit function in step 15. In step 16, the for loop is initiated to loop until the size of the TestSet patient data received. In steps 17-20, candidateSet1, candidateSet2, candidateSet3, and candidateSet4 store the FHE health status prediction results of the concrete-ML FHE algorithms by decision tree, logistic regression, SVM, and XGBoost for the 10 patients' remote data, respectively. Therefore, in step 21, the prediction results received from the previous candidateSets for the different FHE algorithms are printed. In step 22, the time required for all four FHE algorithms is printed for analyzing the execution time. In step 23, all the candidate sets are combined to be stored in the candidate set array. Finally, in step 24, the algorithm returns the candidateSet[] to the calling cloud FHE function.
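The control flow of Algorithm 2 can be sketched as follows. The four placeholder "models" are simple threshold rules standing in for the concrete-ML decision tree, logistic regression, SVM, and XGBoost models, and the corruption check is a minimal assumption about what "consistent" means.

```python
# Hedged sketch of Algorithm 2 (RHM-FHE): 80/20 split, a DQueue of at
# most 10 patient records, and four candidate sets combined into one array.

def algorithm2(remote_dataset, test_set, models):
    split = int(len(remote_dataset) * 0.8)        # step 7: 80-20 split
    d_train, d_test = remote_dataset[:split], remote_dataset[split:]
    # (d_train/d_test would feed model training in the full algorithm)
    if not all(len(p) == len(remote_dataset[0]) for p in test_set):
        raise ValueError("The patient's test set is corrupted")  # steps 14-15
    dqueue = test_set[:10]                         # step 11: DQueue of 10
    # steps 16-20: one candidate set per model over the queued patients
    candidate_sets = [[m(p) for p in dqueue] for m in models]
    return candidate_sets                          # steps 23-24

# Placeholder rules standing in for DT / LR / SVM / XGB predictions.
models = [lambda p: int(sum(p) > 2), lambda p: int(p[0] > 0.5),
          lambda p: int(p[-1] > 0.5), lambda p: int(max(p) > 0.9)]
dataset = [[0.1, 0.2, 0.3]] * 10
cs = algorithm2(dataset, [[0.6, 0.7, 0.8], [0.2, 0.1, 0.0]], models)
```

The returned `cs` holds one list per model, each with one 0/1 prediction per queued patient, matching the candidateSet[] array returned to the calling cloud function.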
1. Input: RemoteDataset[], the open dataset of remote users for the training on heart disease.
2. Labels[], the selected dataset labels to be processed by the FHE algorithm.
3. TestSet[], a group of recent remote health patients' data sent by an authorized hospital's server.
4. Output: candidateSet[], the prediction given by the FHE algorithm for the patient's health status.
13. Print "The patient's test set is consistent and ready to be processed"
15. Print "The patient's test set is corrupted", exit()
16. For
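Steps 7-9 of Algorithm 2 (the 80:20 split and min-max normalization into [0, 1]) can be sketched in plain Python; the toy `dataset` rows below are illustrative stand-ins, not values from the UCI data.

```python
import random

def min_max_normalize(rows):
    """Scale every feature column into [0, 1], as in steps 8-9 of Algorithm 2."""
    cols = list(zip(*rows))
    mins = [min(c) for c in cols]
    maxs = [max(c) for c in cols]
    return [
        [(v - lo) / (hi - lo) if hi > lo else 0.0
         for v, lo, hi in zip(row, mins, maxs)]
        for row in rows
    ]

def train_test_split(rows, train_ratio=0.8, seed=0):
    """Shuffle and split the dataset by an 80:20 ratio (step 7)."""
    rows = rows[:]
    random.Random(seed).shuffle(rows)
    cut = int(len(rows) * train_ratio)
    return rows[:cut], rows[cut:]

# Toy stand-in rows (age, resting blood pressure, cholesterol).
dataset = [[63, 145, 233], [37, 130, 250], [41, 130, 204],
           [56, 120, 236], [57, 140, 192]]
train, test = train_test_split(dataset)
train_n = min_max_normalize(train)
```

The normalized training rows (DProcessedTrain) would then be handed to the FHE model training of step 10.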

Mathematical Model
The faster FHE (FFHE) improves the processing speed of the system model by using optimized FHE computations [32,33]. The bootstrapping key size is also reduced by using an approximation algorithm. The errors generated during the FHE computations give rise to the learning with errors (LWE) problem, whose ring variant is known as Ring-LWE. TLWE ciphertexts provide a unified representation of LWE ciphertexts, encoding polynomials over the torus. The security of TLWE depends on ideal lattice reduction or general lattice hardness.
For lattice-based homomorphic encryption schemes, both the LWE and Ring-LWE variants can be used in the construction. The torus contains the right number of LWE samples and can also be described with a continuous Gaussian distribution. The scale-invariant LWE (SILWE) is used to work on the real torus. TLWE samples are used as follows: (a) Search problem: given polynomially many random homogeneous TLWE samples, find the key S ∈ B_N[X]^k; (b) Decision problem: distinguish fresh random homogeneous TLWE samples from uniformly random samples drawn from the TLWE message space T_N[X] with binary coefficients.
TGSW can be defined as the generalized, scale-invariant version of the GSW FHE scheme. The authors Gentry, Sahai, and Waters proposed GSW with LWE-problem-based security. The TGSW scheme utilizes a gadget decomposition function, as an approximate decomposition, to improve the processing time with minimal memory usage and small noise. The input is given as a TLWE sample (a, b), and p is an integer polynomial.
Here, a unique representation is chosen for each a_i with a_{i,j} ∈ T, setting a_{i,j} to the nearest multiple of 1/2^t. Each a_{i,j} is then decomposed uniquely with digits a_{i,j,p}. Equation (3) is executed as a matrix with i = 1 to k + 1 and p = 1 to l. The output is returned in the form of e_{i,p}, a combination on the torus drawn from a concentrated distribution. LWE key-switching procedure: given an LWE sample of a message μ ∈ T, the key-switching procedure KS outputs a sample of the same μ with less noise occurrence and tolerates approximations. The input is the LWE sample a = (a_1, ..., a_n, b) ∈ LWE_S(μ) with the key-switching keys KS_{S→S'}, where S ∈ {0, 1}^n, S' ∈ {0, 1}^{n'}, and t ∈ N is a precision parameter. Each ā_i is the nearest multiple of 1/2^t to a_i, and a_{i,j} ∈ {0, 1} are the digits of its binary decomposition.
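The approximate decomposition idea (rounding a torus element to the nearest multiple of 1/2^t before taking its binary digits) can be illustrated with a minimal sketch; this is a toy model of the precision parameter t, not the TFHE library's actual gadget routine.

```python
def approx_decompose(a, t):
    """Round a torus element a in [0, 1) to the nearest multiple of 1/2^t
    (mod 1) and return its t binary digits a_1..a_t."""
    q = round(a * (1 << t)) % (1 << t)           # nearest multiple of 1/2^t
    return [(q >> (t - j)) & 1 for j in range(1, t + 1)]

def recompose(digits):
    """Reassemble sum(a_j * 2^-j), approximating the original torus element."""
    return sum(d / (1 << j) for j, d in enumerate(digits, start=1))

digits = approx_decompose(0.3, 8)
# The rounding error is bounded by the precision 1/2^t.
assert abs(recompose(digits) - 0.3) <= 1 / (1 << 8)
```

The bounded rounding error is what the procedure "tolerates approximations" refers to: higher t gives less approximation noise at the cost of more digits to process.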
Bootstrapping Procedure: the LWE sample LWE_S(μ) = (a, b) is an encryption of μ under the key S; bootstrapping constructs a fresh encryption with constant noise. The intermediate encryption scheme outputs KeySwitch_{K,S}(u). Here, the squashing technique utilized by the accumulator achieves an additional 2x speed-up.

Results
In this section, the implementation details of the FHE-based RHM are given. The system configuration required to implement the FHE operations on the workstation and the cloud instance is given in Table 2. Three independent datasets are used for FHE model training on the prime servers. The Table 3 dataset is referenced from the UCI Heart Disease Dataset (Cleveland, Hungary, and V.A.) [34]. The purpose of referring to the different open datasets is to show the adaptability of the FHE model to international standards.

Dataset Name: Heart Disease Dataset (Cleveland, Hungary, and V.A.-Long Beach)
Number of Instances: 303
Attributes: 14

Pre-Processing Techniques Used on the Dataset
The pre-processing techniques applied to the different datasets, as shown in Table 4, are necessary to achieve high model performance. Every dataset is pre-processed to handle missing data and maintain consistency. One-hot encoding deals with categorical data, transforming string categories into numbers for machine learning processing. Data imbalance is handled using class weights that adjust the cost function of the model's classification, penalizing the majority/minority classes accordingly.
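The balanced class-weight formula and one-hot encoding described above can be sketched as follows; the toy labels are illustrative only.

```python
from collections import Counter

def balanced_class_weights(labels):
    """Weight each class by n_samples / (n_classes * count(class)), so
    minority classes are penalized more heavily in the cost function."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {cls: n / (k * c) for cls, c in counts.items()}

def one_hot(values):
    """Transform string categories into 0/1 indicator columns."""
    cats = sorted(set(values))
    return [[1 if v == c else 0 for c in cats] for v in values]

# Imbalanced toy labels: 8 healthy (0) vs 2 diseased (1).
weights = balanced_class_weights([0] * 8 + [1] * 2)
# weights[0] == 10 / (2 * 8) == 0.625, weights[1] == 10 / (2 * 2) == 2.5
```

The minority class receives a weight four times larger here, so the classifier's cost function penalizes its misclassification proportionally more.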

Machine Learning Performance for Different Algorithms on the Dataset (Train:Test Ratio = 80:20)
The above machine learning-based evaluation of the three different UCI datasets is presented in Table 5. Every prime server dataset is evaluated using multiple machine learning algorithms, including linear regression, support vector machines (SVM), XGBoost, and decision tree. The datasets for prime servers 1, 2, and 3 are the Cleveland, Hungary, and V.A. heart disease open datasets, respectively. A total of 10 parameters are used as input to the algorithms, where the highest values in most of the metrics of precision, F1-score, and accuracy are recorded by the decision tree algorithm, with the minimum processing time overall. In the case of prime server 2, the SVM performs best on all metrics. The lowest recorded time across all prime servers belongs to the decision tree due to its low time complexity of O(nkd), where n is the number of training samples, k the number of features, and d the tree depth. Prime server 3 also performs better with SVM in most metrics, with less algorithm execution time in comparison to the other prime servers. In the machine learning-based evaluation on the cloud server, linear regression executes faster than the other prime server algorithm implementations. The SVM algorithm executed on the different prime servers has a similar execution time to that on the workstation, whereas the decision tree and XGBoost behave oppositely, requiring more time than the same process on the workstation server.

FHE Concrete-ML Performance
The FHE-based evaluation of Concrete-ML on the workstation is presented in Table 6. FHE computations take longer in comparison to other cryptographic algorithms. FHE combined with machine learning is realized using Concrete-ML operations. In Table 6, the datasets used for evaluation are the same as the prime server datasets of Table 4. Therefore, all the FHE prime servers are evaluated with the same machine-learning algorithms for comparison on this workstation. On prime servers 1, 2, and 3, the best scores are achieved by the decision tree, SVM, and SVM, respectively, matching the metric evaluation of the workstation. The major difference is noticed in the timing of the FHE computations on the workstation. Nevertheless, the FHE decision tree requires the lowest time in comparison to the other FHE-based machine learning algorithms. Implementing the Concrete-ML algorithm on the workstation, the highest accuracy is observed on prime server 2, followed by prime server 1, and the lowest on prime server 3. In terms of timing, prime server 3 requires the lowest execution time, followed by prime server 2, with the highest time on prime server 1. The confusion matrices for the Concrete-ML based machine learning evaluations are presented in detail in Appendix A, Figures A1-A4.

Details for the Parameters Tuned to Improve the Performance by the Concrete-ML Model
The support vector classification is based on libsvm, with the configuration parameters shown in Table 7. The number of samples is limited to thousands since the fit time scales quadratically. The one-vs-one scheme is used to handle multiclass support. The hyper-parameters determine the linear kernel type used for this algorithm, which is suitable for datasets with many features. Regularization prevents overfitting and minimizes the loss function by calibrating the machine learning model; its strength is inversely proportional to C, with a squared l2 penalty. The probability parameter enables probability estimates. For class_weight in balanced mode, weights inversely proportional to the class frequencies are computed as n_samples / (n_classes * np.bincount(y)). Max_iter is set to 1000 as the default, a hard limit on solver iterations. Finally, the data are shuffled for probability estimates by the pseudo-random number generator. XGBoost is derived from a gradient-boosting framework as an optimized distributed library that is portable, flexible, and efficient. It can run on billions of samples in a distributed environment, with the configuration parameters shown in Table 8. The random_state for the FHE experiments is kept null. N_estimators is the number of boosting rounds performed by XGBoost. The learning rate sets the learning speed, and a low error rate determines its proper selection. The booster parameter selects the model type to run in every iteration, where gbtree uses tree-based models. The max_depth of a tree governs how closely the model learns the sample data, controlling overfitting at higher depths. The seed is used for parameter tuning and for obtaining reproducible results. The decision tree is a supervised learning method focused on regression and classification, with the configuration parameters shown in Table 9. It learns simple decision rules for model prediction based on the feature data. The balanced class weight assigns equal weights to the output classes, with the same class proportion for the child samples. The max depth limits the size of the training sample nodes within the tree to avoid overfitting. Minimum sample split is the minimum number of samples required to split an internal node; minimum sample leaf is its counterpart for external (leaf) nodes: an internal node has further splits, while leaf nodes have no children. Minimum samples per leaf is the minimum number of samples needed at a leaf node; requiring training samples in both the left and right branches makes a split point valid, which has a smoothing effect in regression. Maximum features selects the number of features considered for the best split.
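The tuned parameters of Tables 7-9 can be gathered as plain dictionaries; the values not stated in the text (e.g. the XGBoost estimator count, learning rate, and the tree depths) are placeholder assumptions, and the dictionaries could be passed to the corresponding scikit-learn or Concrete-ML constructors.

```python
# Illustrative parameter dictionaries mirroring Tables 7-9; values marked
# "assumed" are not taken from the paper's configuration.
svc_params = {
    "kernel": "linear",          # suited to datasets with many features
    "C": 1.0,                    # regularization strength (assumed value)
    "probability": True,         # enable probability estimates
    "class_weight": "balanced",  # n_samples / (n_classes * np.bincount(y))
    "max_iter": 1000,            # solver hard limit on iterations
}
xgb_params = {
    "n_estimators": 100,         # boosting rounds (assumed count)
    "learning_rate": 0.1,        # assumed; a low error rate guides the choice
    "booster": "gbtree",         # tree-based model each iteration
    "max_depth": 6,              # assumed; controls overfitting at depth
    "random_state": None,        # kept null for the FHE experiments
}
tree_params = {
    "class_weight": "balanced",  # equal weighting across output classes
    "max_depth": 5,              # assumed; limits tree size
    "min_samples_split": 2,      # internal-node split threshold (assumed)
    "min_samples_leaf": 1,       # minimum samples at leaf nodes (assumed)
    "max_features": None,        # features considered for the best split
}
```

Keeping the configurations in one place like this makes it straightforward to sweep alternatives when tuning for the accuracy/time trade-off discussed below.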

Detailed Comparison of ML and Concrete-ML
The x-axis presents the record count for the tests performed by the ML algorithms on the workstation, with the y-axis as time, as shown in Figure 5. It can be observed that the decision tree algorithm requires the lowest time on the workstation for testing multiple records. The execution times of linear regression and SVM are average and similar to each other on the workstation, whereas XGBoost has the highest requirements on all of the prime servers. Therefore, the end user can select the appropriate algorithm based on accuracy- and time-based criteria.

Implementing ML algorithms on the AWS cloud is first presented in Figure 6. Every prime server has its respective open dataset, as given in Table 3. Among the ML algorithms on prime server 1, linear regression requires the highest time of 0.02 s and DT the lowest of 0.001 s.
The highest time of 0.01 s is required by XGB and the lowest of 0.001 s by DT on prime server 2; likewise, the highest of 0.01 s is required by XGB and the lowest of 0.001 s by DT on prime server 3. Overall, DT requires the lowest processing time on all the prime servers and LR the highest.
In Figure 7, implementing concrete-ML on the AWS cloud shows a similar behavioral pattern across the different prime servers. Prime server 1 performs with the lowest time, prime server 2 with average time, and prime server 3 with the highest time for all the algorithms. Therefore, it can be concluded that cloud ML has more algorithm-specific behavior, whereas concrete-ML on the cloud shows more pattern-based behavior across the group of algorithms.

Concrete-ML Graphs Including Time (s) for Independent Record Encryption and Decryption
The concrete-ML algorithm time required for encryption/decryption on the workstation and the AWS cloud with the V.A. dataset is presented in Table 10. The encryption/decryption time for the decision tree is the lowest on the workstation, followed by XGBoost, SVC, and linear regression, due to the output parameters generated. The cloud cryptography time follows the same sequence: the lowest time is required by the decision tree, followed by XGBoost, SVC, and linear regression. The processing time on the workstation is lower for the concrete-ML prediction due to the higher workstation configuration compared to the cloud, as given in Table 2.

Concrete-ML Time for Independent Record for Multiple Datasets on AWS Cloud
Implementing multiple open heart disease datasets with concrete-ML on the cloud shows high variability in processing across the different algorithms. The Cleveland dataset has the lowest prediction time with the decision tree algorithm, followed by XGB and SVC, with LR requiring the highest time for cryptographic operations, as shown in Table 11. Similarly, on the V.A. dataset, the lowest prediction time is again given by the DT, followed by SVC, XGB, and LR. The Hungary dataset, by contrast, has the lowest prediction time with XGB, followed by LR, DT, and SVC.

Analysis of the Final Protocol Time
The complete FHE protocol time can be given by the formula FHE Protocol Total Time = Time(Cloud Server Communication + Encryption/Decryption + Algorithm Process). While training the prime servers with different machine learning algorithms and different open datasets, the performance details are given in Table 12; the tables are summarized later in Figure 8.

The final FHE protocol is implemented on the AWS cloud network. First, the prime server is trained with the V.A. dataset and the test samples are taken from the Cleveland and Hungary datasets, as shown in Figure 8a,b: XGB requires the highest time for the FHE protocol, followed by SVC and LR, which require similar times, with the least time taken by DT. Similarly, when the cloud servers are trained with the Cleveland dataset (Figure 8c,d), the same pattern is again observed across the FHE protocols with the different algorithms: XGB has the highest time requirement, LR and SVC take similar times, and DT the lowest.
In the case of training the cloud prime servers with the Hungary dataset, as shown in Figure 8e,f, XGB again requires the highest time and DT the lowest, while the SVC needs more execution time than the LR. Overall, depending on the accuracy and time requirements, the user can choose the appropriate settings for the training dataset.
The above Table 13 shows the benchmark comparison details of the multiple FHE protocol evaluations. In an outsourced multi-party k-means clustering scheme [8], multiple distinct secured keys are utilized for the protocol. This scheme proposes minimum, comparison, secure squared Euclidean, and average operations in the protocol, with server times greater than or equal to five seconds. The multi-key homomorphic encryption (MKFHE) [26] uses the TFHE scheme with the CCS19 algorithm implemented in the cloud protocol. The MKFHE uses circuit optimization with three multi-party node protocols having preprocessing, intersection, set difference, and TH intersection, for a minimum time of 5.16 s. In the case of the privacy-preserving multi-layer perceptron (PFMLP) protocol [27], an improved Paillier federated protocol is used that has multiple hidden layers containing multiple units with embedded homomorphic operations. The system involves a key management center and a computing server with multiple clients, requiring at least 7.92 s to complete the protocol process. Federated learning has provided a key advantage in utilizing the network for healthcare applications [35-37]. Ultimately, it has been observed that even though the node counts involved in the benchmarked algorithms are quite similar in range to each other, the minimum processing time of 4.23 s with FHE and evaluation trained on multiple open heart disease datasets is achieved by our AutoPro-RHC protocol.

Detailed Discussion of the AutoPro-RHC Implementation
a. Device Availability: A hospital association with at least five to seven branches should form a contract with a wearable sensor device manufacturing company. Heart-affected patients staying in remote areas should be prioritized for device assignment and are expected to return the device after successful treatment. AutoPro-RHC will be pre-installed before the device allotment and initiated immediately after deployment. In the future, AutoPro-RHC can be upgraded for easy installation on mobile devices with attached sensors for ease of usage;
b. Data Consent and Usage: After a remote patient is diagnosed with heart disease, a data consent form should be submitted to accept the device agreement. Data can optionally be donated after de-identification for the hospital's FHE model training and storage. Contributing data makes a significant difference to the scientific/medical field for improving future treatment;
c. Frequency of Data Collection: Based on the patient's location and severity type, the data can be uploaded at specific intervals. Continuous data recording is not easy to process, so in such cases an average/peak value over an interval of 10/20/30 min can be shared. In the case of a multi-disease category, special attention can be given by providing a continuous monitoring device with emergency calling/tracking by consent;
d. Technical Device Issues: A vendor-based maintenance system should be in place to solve device malfunctions or damage. Hospital-based monitoring and calling should not bear this responsibility due to limited resources. An effective vendor strategy can handle such problems, which should be resolved with priority. Backup sensor parts or batteries can be provided to avoid last-minute rushing in an emergency situation;
e. Future Trends and Opportunities: A mobile device can be used to collect the sensor reading, encrypt it, and share it with AutoPro-RHC, which will be more convenient. Developing a mobile application for AutoPro-RHC will make it more portable, comfortable to carry, and chargeable within a regular routine. Even so, mobile device security must also be taken into consideration. Another opportunity lies in diagnosing multiple diseases with a single device, where a patient affected by heart disease, diabetes, hypertension, etc., can use one portable device and obtain weekly health reports.

Conclusions
Heart diseases are among the most life-threatening conditions affecting the human population. The FHE-based RHM provides a secure and effective model that serves remotely affected patients. The FHE model presented in this research implements the Concrete-ML library within the cloud network. The Concrete-ML predictions are trained using multiple open heart disease datasets. Preprocessing applied to the initial training data recovers missing and incomplete data, and class weights are assigned. The AWS cloud FHE protocol demonstrated in this work provides predictions from the different prime servers, which are grouped and reported to the patient and the registered hospital. Priority schedulers help identify the best time for patients based on their respective conditions. Successively, the results are evaluated by the medical examiner only after the patient provides consent for results analysis. The FHE results show that the highest accuracy is achieved by support vector classification (SVC) at 88% and linear regression (LR) at 86%, with AUCs of 91% and 90%, respectively. Ultimately, the best time achieved by the FHE protocol is around 4.23 s by the decision tree algorithm on the AWS cloud network. Future work will focus on utilizing smart wearable devices to analyze multiple vital signs for patients affected by multiple diseases based on trustworthy computing. A mobile device or smartwatch can be used without the need to carry other sensor devices; receiving complete healthcare analysis is a necessity for this era. Therefore, the use of Concrete-ML based XGBoost is recommended only when the workstation configuration is higher and the dataset performs significantly better in metrics compared to the other open datasets. Ultimately, Figure A4, presenting the evaluation of the Concrete-ML based decision tree algorithm with the respective dataset, shows the best time achieved compared to the other algorithms while having decent accuracy in comparison. Thus, systems with minimal configurations, or those requiring the best time with near-best evaluation metrics, can select decision tree algorithms in this context.
on the large data of body temperature, blood sugar, and blood pressure; where a pulse oximeter collects data via a wearable device, which uses trusted forwarding and carrier weight for route selection. Whereas, the data are processed using class similarity metrics and disease-prone weights for the final prediction;

3.
Scaling in AutoPro-RHC: The AutoPro-RHC protocol scaling in real-world healthcare is given in the Section 2 Methodology, which states that the patients' encrypted data are processed by the FHE algorithm on the prime server in steps 1 and 2, and the results suggesting the final output as a prediction value are returned to the patient in step 3. Therefore, as the patient possesses his private keys, he does not need to perform homomorphic decryption: only a normal public-key-based decryption, which takes a fraction of a second on a mobile device, is needed. The result values predicted by the three different prime servers are then shared with the medical examiners in step 6. Successively, the medical examiner averages the results to return the final outcome as a normal, moderate, or severe heart report to the patient in steps 7 and 8;
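The averaging of the three prime servers' predictions into a heart report (steps 6-8) can be sketched as follows; the 0.33/0.66 category thresholds are illustrative assumptions, not values from the protocol.

```python
def severity_report(predictions):
    """Average the prediction values from the three prime servers and map
    the mean score to a heart-report category (steps 6-8). The thresholds
    below are illustrative assumptions."""
    mean = sum(predictions) / len(predictions)
    if mean < 0.33:
        return "normal"
    if mean < 0.66:
        return "moderate"
    return "severe"

# Hypothetical decrypted prediction scores from the three prime servers.
assert severity_report([0.1, 0.2, 0.15]) == "normal"
assert severity_report([0.5, 0.4, 0.6]) == "moderate"
assert severity_report([0.9, 0.8, 0.95]) == "severe"
```

Averaging over independent prime servers smooths out a single server's outlier prediction before the report reaches the patient.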

I. Active Attacks:
a. Spoofing: Authentication of the malicious intruder (M) will be unsuccessful, as his registration in the hospital as a patient will be unavailable. Therefore, the intruder will be unable to access the prime server legitimately: M → HospitalServer(Enc_PK1(SensorsData[T]));
b. Fabrication: Routing the message through different servers will be useless, as the public key will be invalid at the different local hospitals. Therefore, the request message for the prime server will not be delivered successfully for AutoPro job scheduling in the successive step;
c. Modification: Even if the malicious user (M) modifies the data packet, the successive authentication by the local server will treat it as an invalid request. As the sensor's data will not be identified by the AutoPro job scheduler, they cannot be processed further;
d. Sinkhole: A specific modification in the test data can change the scheduling mode of the AutoPro job scheduler, but the message authentication and the response to the hospital needed for the request will be invalid. Thus, this attack type would be ineffective.

II. Passive Attacks:
a. Monitoring: The communication between different systems and the network performance can possibly be monitored. Passive monitoring would not reveal the encrypted patient's identity details, his encrypted sensor data values, or the evaluation for the respective patient, due to AutoPro job scheduling;
b. Traffic Analysis: The intruder can identify the network path and the multiple network entities involved in the protocol. Thus, it is possible to map the network communication, but the encrypted message will be useless unless the correct decryption method and its data are available;
c. Man-in-the-Middle: In this attack type, a random message is captured and the interceptor attempts to read it, but due to the encrypted data within the message it will be encoded text. Later, as the encrypted data are shared with different prime servers by the job scheduler, it will be challenging to track all the responsive packets for the final results.

III. Advanced Attacks:
a. Replay: The intruder captures a packet within the communication network and then replays it in a loop to multiple systems in the expectation of some useful outcome. Such a process will be evaluated as an invalid operation and discarded. Therefore, a replay attack without a valid public key, authentication, or scheduling will not be useful against AutoPro-RHC;
b. Blackhole: In the case of a network switch getting interrupted or crashing, the prime server restarts the last request. Similarly, if messages are delayed, AutoPro reinitiates the job scheduling request for the process and completes it with an evaluated response;
c. Location Disclosure: Considering the local hospital server, the network address can be stored, but it will be challenging to locate the AWS cloud prime servers. Thus, in the case of an organized attack on the network entities, the cloud prime server will be unknown, and the AutoPro job scheduler's exchange of messages between the prime servers will be completely confidential due to privacy and high cloud data traffic. Hence, location disclosure will be an incomplete attack;
d. Rushing: In this hybrid type of attack, where a message is replayed after being modified multiple times, huge fake message traffic can be created in the local hospital network. Replays of the same messages will be discarded by the cloud server, and modification will make the public key invalid. Therefore, rushing is partially applicable but would not succeed in disturbing AutoPro-RHC communications.

5.
The Limitations of AutoPro-RHC: (a) the necessity of high-configuration cloud systems to attain optimal processing time for a faster response; (b) the continuous availability of the remote device for effective monitoring.

Figure 1 .
Figure 1. Survey for the FHE literature analysis.


Figure 5. Graph comparison for the ML and concrete-ML algorithms.

Figure 6. Implementation of machine learning algorithms on the AWS platform.

Figure 7. Implementation of concrete-ML algorithms on the AWS platform.



FHE Protocol Total Time = Cloud Server Communication Time + Encryption/Decryption Time + Algorithm Processing Time, where Cloud Server Communication Time = Time(3 Prime Servers), and Algorithm Execution Time = Encryption/Decryption Time + Algorithm Processing Time.
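The timing decomposition above can be expressed directly in code; the per-component values below are illustrative placeholders, not measurements from this work:

```python
# Sketch of the FHE protocol timing decomposition.
# All numeric values are hypothetical placeholders, not measured results.

def protocol_total_time(prime_server_times, enc_dec_time, algorithm_time):
    """Total = cloud server communication + encryption/decryption + algorithm processing."""
    cloud_comm = sum(prime_server_times)            # Time(3 Prime Servers)
    algorithm_exec = enc_dec_time + algorithm_time  # Algorithm Execution Time
    return cloud_comm + algorithm_exec


# Example with hypothetical per-component seconds for the three prime servers:
total = protocol_total_time([0.12, 0.10, 0.15], enc_dec_time=0.8, algorithm_time=0.5)
# total = 0.37 + 1.3 = 1.67 seconds
```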

Figure 8. FHE protocol implementation time: (a,b) train by VA, (c,d) train by Cleveland, and (e,f) train by Hungary.

Figure A2. Concrete-ML-based support vector machine algorithm evaluation for (a) Cleveland, (b) Hungary, and (c) VA. Figure A1 above shows the evaluation of concrete-ML with the linear regression algorithm for the different open heart disease datasets: (a) Cleveland, (b) Hungary, and (c) VA. It can be noticed that the evaluation is quite moderate. In the case of Figure A2, the evaluation of concrete-ML with the SVM algorithm for the different datasets is better than the linear regression evaluation.
1. Glucometer for Diabetes: As blood sugar is converted into energy by insulin, its absence may lead to severe health problems such as vision loss and heart and kidney diseases. The glucometer is used to test the blood sugar level and report the reading to the remote hospital;
2. Blood Pressure Cuff: A prominent way to check a patient's health is through heart rate and blood flow during a blood pressure checkup. The artery motions are transmitted in real time by the blood pressure cuff, which indicates the possibility of hypertension, heart failure, kidney problems, and diabetes. The cuff is applied to the upper arm for pressure monitoring;
3. Pulse Oximeter for COVID-19: The pulse oximeter is a multi-purpose device that can measure low blood oxygen level (SpO2), lung functioning, and heart rate in bar-graph form. The chronic heart/lung conditions monitored include pneumonia, asthma, and COVID-19. The device attaches to the patient's finger as a non-invasive clip;
4. Electrocardiogram (ECG) + Stethoscope for Cardiac Conditions: Heart functions are captured by the ECG, whereas heart, lung, and bowel sounds are captured by the stethoscope. The cardiac conditions of interest include arrhythmias and coronary artery disease. The device is placed on the patient's chest to virtually monitor heart and lung sounds for cardiac assessment.
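The protocol's adaptive modes (slight, medium, severe) could be driven by such device readings. The rule table and thresholds below are hypothetical illustrations for this sketch, not clinical values or logic from the paper:

```python
# Sketch of mapping one remote-device reading to an AutoPro adaptive mode.
# Thresholds are hypothetical examples, not clinical guidance.
SEVERITY_RULES = {
    # device: list of (check, mode) pairs, evaluated in order of severity
    "spo2": [(lambda v: v < 90, "severe"), (lambda v: v < 94, "medium")],
    "glucose_mg_dl": [(lambda v: v > 250, "severe"), (lambda v: v > 180, "medium")],
    "systolic_bp": [(lambda v: v > 180, "severe"), (lambda v: v > 140, "medium")],
}


def protocol_mode(device: str, value: float) -> str:
    """Return the protocol mode implied by one reading (default: slight)."""
    for check, mode in SEVERITY_RULES.get(device, []):
        if check(value):
            return mode
    return "slight"


print(protocol_mode("spo2", 88))            # severe
print(protocol_mode("glucose_mg_dl", 200))  # medium
print(protocol_mode("systolic_bp", 120))    # slight
```

In the protocol, the selected mode would then determine how aggressively monitoring data are scheduled for encrypted evaluation on the prime servers.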

Table 4. Pre-processing techniques and parameter values.

Table 5. Machine learning-based evaluation for the hospital prime servers.

Table 6. FHE machine learning-based evaluation for the hospital prime servers.

Table 10. Concrete-ML encryption and decryption time.

Table 11. Concrete-ML time comparison for multiple datasets.

Table 12. Train by open dataset and test on sample.