Federated Learning-Based Lightweight Two-Factor Authentication Framework with Privacy Preservation for Mobile Sink in the Social IoMT

Abstract: The social Internet of Medical Things (S-IoMT) highly demands dependable and non-invasive device identification and authentication and makes data services more prevalent in a reliable learning system. In real time, healthcare systems consistently acquire, analyze, and transform operational intelligence into actionable forms through digitization to capture the sensitive information of the patient. Since the S-IoMT tries to distribute health-related services using IoT devices and wireless technologies, protecting the privacy of data and the security of devices is crucial in any eHealth system. To fulfill the design objectives of eHealth, smart sensing technologies use built-in features of social networking services. Despite being more convenient in its potential use, a significant concern is security, i.e., preventing potential threats and infringement. Thus, this paper presents a lightweight two-factor authentication framework (L2FAK) with privacy-preserving functionality, which uses a mobile sink for smart eHealth. Formal and informal analyses prove that the proposed L2FAK can resist cyberattacks such as session stealing, message modification, and denial of service, guaranteeing device protection and data integrity. The learning analysis verifies the features of the physical layer using federated learning layered authentication (FLLA) to learn the data characteristics by exploring the learning framework of neural networks. In the evaluation, the core scenario is implemented on the TensorFlow Federated framework to examine FLLA and other relevant mechanisms on two correlated datasets, namely, MNIST and FashionMNIST. The analytical results show that the proposed FLLA can analyze the protection of privacy features effectively to guarantee an accuracy of approximately 89.83% to 93.41%, better than other mechanisms.
Lastly, a real-time testbed demonstrates the significance of the proposed L2FAK in achieving better quality metrics, such as transmission efficiency and overhead ratio, than other state-of-the-art approaches.


Introduction
The recent advances in algorithms and hardware have led to massive computation and memory costs for the development of user authentication models using various multimodal AI. Commercial devices, including infotainment systems, adopt machine learning-based authentication features to unlock the system or to provide user-specific services, namely, recommendation, notification, and configuration adjustment. The features of the authentication protocol rely on a decision-making problem that accepts or rejects a set of testing inputs based on their similarity to the trained user inputs [1]. The similarity measurement is often assessed in an embedding space to predict the testing input with reference to learning models trained on local computing data. The authentication models utilize a variety of computing data to learn the security characteristics of the physical layer [2]. The raw inputs and embedding space raise issues of privacy sensitivity when analyzing the modeling characteristics of application systems and testing adversarial settings to protect data privacy against inference-time attacks [3].
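This accept/reject decision can be illustrated with a minimal sketch (not from the paper): a test embedding is compared against an enrolled template using cosine similarity and an illustrative threshold. All vectors and the threshold below are hypothetical values chosen for demonstration.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def authenticate(test_embedding, enrolled_embedding, threshold=0.9):
    """Accept the test input if it is close enough to the enrolled template."""
    return cosine_similarity(test_embedding, enrolled_embedding) >= threshold

# Enrolled user template and two probe inputs (illustrative values).
enrolled = np.array([0.9, 0.1, 0.4])
genuine  = np.array([0.85, 0.15, 0.42])   # similar to the template
impostor = np.array([-0.2, 0.9, 0.1])     # dissimilar

print(authenticate(genuine, enrolled))    # True: accepted
print(authenticate(impostor, enrolled))   # False: rejected
```

In deployed systems the threshold is tuned on validation data to trade false accepts against false rejects; the fixed 0.9 here is only for illustration.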
Most distributed services use telecommunication technologies and IoT devices to store and analyze the physiological or behavioral characteristics of digital applications. The physiological characteristics, including irises, palm prints, and fingerprints, together with the layered features of the computing devices extracted using spatial correlation (i.e., channel impulse response and state information), authenticate users and computing devices. The Health Insurance Portability and Accountability Act (HIPAA) was enacted in the United States to protect the sensitive information of the patient and provide the access rules to authorize system information [4]. The computing system demands service authentication and access control to authorize user credentials aligned with the records available on the server. The loss of data protection allows intruders to acquire sensitive information via unauthorized access, raising cybersecurity risks and granting access to confidential data via malicious IoT applications [5]. Moreover, inappropriate security practices cannot endorse a network policy without adequate controls to evaluate the shared security model across the physical and virtualized infrastructure.
As a result, security infrastructure integrates cloud computing, mobile communication, and artificial intelligence to create innovative IoT applications that successfully transfer different kinds of real-time data between computing devices to operate industrial processes at a larger scale [6]. The massive amount of computing data generated by IoT devices necessitates more efficient data collection and processing to conduct proper development processes and adopt technological paradigms such as transfer learning and mobile computing to manage edge intelligence controlled by industrial applications. Industrial applications converge with edge networks to meet a basic constraint of strong computation in order to evaluate a large set of real-time data. In other words, the modeling process uses an edge cache to boost the performance of IoT-based networks, which develop a trustworthy platform based on physical-layer data extracted from user behavior to design secure authentication [7]. Context-aware authentication leverages biometric or physical-layer features to protect massive private data and leverages machine learning to achieve reliable communication in a smart environment.
In the smart environment, a new computing paradigm, the so-called massive IoT, has evolved as a leveraging technology for the growth of digital transformation such as smart cities, automation, grids, and eHealth. These leveraging technologies revolutionize the significance of smart computing to offer real-time awareness of the application systems. Connected edge devices can be tightly coupled with unconnected smart objects to offer data sharing, device coordination, and resource utilization [8]. An IoT environment consists of distributed sensors and actuators that gather environmental information via dedicated wireless channels. It may even route sensitive information via trusted gateways to improve the performance of computing resources. According to a report by the International Data Corporation, the IoT is expected to connect 41 billion devices by 2025 [9]. These devices can generate massive amounts of sensitive data, totaling approximately 79.4 zettabytes, to utilize resources effectively. As a result, a two-fold development strategy is applied to achieve device integrity and security efficiency. Since each IoT device has limited computational capabilities, securing user credentials is still a challenging task in protecting transmitted data against threats such as unlawful eavesdropping, unauthorized access, and data tampering [10].
In real time, malicious attackers attempt to insert, delete, and modify the sensitive data of legitimate users. Therefore, a proper authentication technique is preferred to improve the security efficiency of IoT frameworks. Moon et al. [11] outlined the essential factors of an authentication mechanism to claim device integrity and application security. Saqib et al. [12] devised a secure mutual authentication framework to improve the security features of IoT environments. In addition, security features such as availability, integrity, and confidentiality are closely monitored to resist potential attacks with less computation and communication overhead. To improve system efficiency, researchers apply artificial intelligence in developing various distributed IoT applications. Most distributed IoT applications use metaheuristic approaches to optimize resource utilization. Heuristic algorithms employ a proven genetic process to design an intelligent framework that applies cryptographic algorithms to improve searching efficiency. Most IoT devices apply cryptographic algorithms to offer seamless connectivity when accessing cloud-based computing resources and communication services [13].
Cloud-based computing services employ machine and deep learning algorithms to extract hidden patterns in order to maintain a large amount of IoT data in smart healthcare [14]. Healthcare can discover hidden patterns using graph analytics to relate the connections between data points and organize the associated rules to manage learning databases based on predictive modeling. The modeling applies distributed learning algorithms via a centralized database to train on the computing data using context-aware rules to improve the decision-making process [15]. However, the centralized database faces various challenging issues such as increased latency, a single point of failure, and security deficiencies leveraged by a dynamic environment. In this environment, the learning database is centrally located to generate the rules based on distributed learning to train on the computing data stored in diverse locations. The machine and deep learning algorithms access the computing data across diverse locations to train the learning rules and increase the overall efficiency of the healthcare system using distributed ML [16]. Traditional machine and deep learning algorithms persist with the issue of device privacy. As a result, the algorithms cannot generalize the performance of modeling with a large amount of sensitive data to secure a deep-rooted infrastructure with advanced healthcare applications [17].
The performance modeling utilizes federated learning to train on the sensing data located across diverse devices such as wearable health monitors, security systems, and logistic tracking [18]. The IoT device uses federated learning to compute or learn from the generated source using scalable machine learning to improve prediction accuracy with guaranteed system latency. Personalized applications simplify the access control of the computing systems to protect user credentials using static authentication [19]. However, static authentication is susceptible to a key impersonation attack, which allows an adversary to impersonate a legitimate device to authenticate service access. Therefore, healthcare applications prefer distributed machine learning, so-called federated learning, to analyze the key features of the authentication protocol [20]. The application system is designed with the components of a digitized network such as control, communication, and sensing to manage the computing tasks with the social Internet of Medical Things (Social IoMT). The social IoMT distributes the application features of resource management to establish secure communication with medical devices.

Technological Advancements
A healthcare application uses a machine learning algorithm to offer a promising solution that relates the key features of the authentication protocol [21]. The features utilize a few significant tools of the learning algorithm to handle data extraction more reliably and train the extraction pattern to correlate the data points and perform cross-validation. The knowledge database constructs a protective mechanism to automate pattern discovery and prepare a decision or prediction case that leverages the physical-layer features to authenticate smart IoT devices. Moreover, smart IoT includes wireless channel characteristics such as channel state information and medium access control to analyze the physical-layer features and adapt context-aware authentication within a network based on the dynamic features to improve system security [22]. The deployment of context-aware applications introduces the edge computing paradigm as a promising solution to meet the requirements of real-time services. The application service processes the raw data locally with data mining or aggregation to distribute the model gradients. The centralized server utilizes a gradient descent algorithm to preserve the privacy of localized data over different transmission stages to achieve the functionality of distributed encryption [23].
The utilization of distributed communication technologies offers seamless integration across 5G networks and the IoT to discover business opportunities with edge computing [24]. Edge computing utilizes a core technology of the IoT to manage the essential parts of interconnected networks. The integration of heterogeneous IoT inherits the properties of next-generation networks to meet the requirements of a communication environment such as low latency, massive connectivity, and high data flow rates [25]. However, the coexistence of multi-access techniques cannot protect network access, as time-based authentication increases the degree of network failures due to inadequate distributed machine learning models. Therefore, the emerging paradigms employ key technologies of current IoT systems to present an effective decentralized system using distributed machine learning. The learning system associates with an edge-enabled IoT network to optimize the training model and aggregate the unique identifier of the network using blockchain technology to ensure decentralization and immutability [26]. Moreover, the edge-based IoT converges with intelligence modeling to utilize three basic elements of blockchain technology, including graph structure, tree, and chain.
Most application services, such as healthcare, transportation, and entertainment, rely on a cloud-centric machine learning model to leverage the usage of smart sensing within the physical environment [27]. The application service initiates a few predefined actions to transform the physical properties into measurable signals through different sensing units. In particular, network intelligence and its enabling technologies greatly expand machine perception tasks such as image processing, pattern recognition, and computer vision to deal with detection, recognition, and navigation [9]. However, advanced networking and digital processing technologies demand an expansion of decentralization to support the growth of the modern Internet and to promote data localization and end-device portability.
The key roles of the computing layers are as follows: Cloud [Data] Center has a powerful solution to offer an intelligent infrastructure that handles huge amounts of computing services, namely, sharing, accessing, and processing the data via a well-protected data center.
Learning [Data] Center applies a decentralized machine learning model across edge computing systems or devices to formulate a suitable optimization problem that infers the shared knowledge of the computing devices.
Mobile [Data] Center has a technological infrastructure to provide comprehensive delivery of data packets with better visualization of information via a dedicated mobile application [28].
Communication modules such as network control and storage can interconnect with device paradigms to improve the quality of network performance. As a result, IoT devices can invoke a cloud computing model to handle a massive amount of data. A three-tier architecture, including mobile, learning, and cloud, explores key features of decentralized ecosystems such as visualization, data analytics, and processing. Each ecosystem has its own backbone network to access the core features of the end IoT device and to support the interconnection of baseband units using a cloud server. The IoT device integrates an edge computing paradigm to address key challenges, such as network latency, processing costs, and load balancing. Layered mechanisms, such as cloud-to-fog and fog-to-cloud processes, handle the requests of IoT devices to support mobility [29]. Parameters such as communication protocols, network types, and services offered are configured with a mobile sink to leverage the scope of network convergence. Devices such as wireless routers and machine-to-machine gateways can act as fog computing nodes to store and process data locally via dedicated cloudlets. The capabilities of cloud computing deploy intermediate nodes, which may allow end computing devices to offload network resources such as bandwidth usage and response time over the cloudlets.
A centralized cloud entity monitors the activities of geo-distributed fog servers, and a decentralized network platform between end computing devices and cloud data offers better content delivery and data analytics using a learning center [30], as shown in Figure 1. In addition, the learning center coexists with a suitable security framework to examine the core features of the computing systems, which can be actively transmitted via a public network to achieve technical benefits such as privacy preservation and scalability [31].

Motivation
In real time, fog computing services evolve a scenario of IoT-Cloud architecture which decentralizes significant features, such as data processing, access control, application security, and network analytics, to guarantee data integrity and security [32]. The emerging computing service enables end-user authentication to secure communication over adversarial networking devices. The enabled network gains information access to train the user modeling to learn different characteristics of user-specific services. The IoT-Cloud application uses legacy infrastructures to analyze real-time data and operates intelligent gateways to handle privacy-sensitive information of sustainable architectures [33]. In practice, the users of sustainable architecture exploit direct network access to eliminate the constraint of identity management. As a result, the architecture uses fog and cloud computing as complementary approaches to operating the connected layer with edge networks to minimize the quality issues related to cybersecurity and transmission latency [34]. Depending on the availability of edge-IoT devices, the generated data are transmitted to the related edge server.
To expand the storage limit horizontally and satisfy the quality requirements, including delay and mobility, the network infrastructure prefers fog computing. However, fog computing cannot test any input data based on its similarity in order to learn the characteristics of the data. Moreover, the computing paradigm cannot locate the user data centrally to train any predictive modeling due to the privacy sensitivity of any statistical database [35]. The user applications demand privacy protection to test or train any adversarial model to determine the leakage of the embedding space and examine the security vulnerabilities using an authentication model. Thus, distributed machine learning, known as federated learning, is chosen to train the predictive modeling with the sensitive data of IoT devices. This model repeatedly communicates its weights and gradients between the dedicated server and the IoT devices to maximize the predictive correlation. In federated learning, the training models enable data sharing without user preference to access the aggregated models using input-output pairs. In most cases, the input-output pairs can obtain better user interaction to learn different data characteristics with mobile AI applications.
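The weight-exchange loop described above can be sketched with a minimal federated averaging (FedAvg-style) example on synthetic linear-regression data. This is an illustrative sketch, not the paper's FLLA scheme; the model, learning rate, and client data are all hypothetical, and only model weights (never raw data) leave each client.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient steps on its private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
        w -= lr * grad
    return w

def federated_averaging(global_w, clients, rounds=20):
    """Server broadcasts the model, clients train locally, and the server
    averages the returned weights, weighting by local sample count."""
    for _ in range(rounds):
        updates = [local_update(global_w, X, y) for X, y in clients]
        sizes = np.array([len(y) for _, y in clients], dtype=float)
        global_w = np.average(updates, axis=0, weights=sizes)
    return global_w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
# Each client holds its own private dataset; the server never sees it.
clients = []
for _ in range(3):
    X = rng.normal(size=(40, 2))
    clients.append((X, X @ true_w))

w = federated_averaging(np.zeros(2), clients)
print(np.round(w, 2))  # close to the true weights [2, -1]
```

In a real deployment the averaging step would run inside a framework such as TensorFlow Federated, and secure aggregation would hide individual client updates from the server as well.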
The application of next-generation networks interconnects with IoT devices to form intelligent networking and provide a seamless data connection to achieve better data transmission and storage [36]. Intelligent networking has potential features such as identity, data privacy, security, and connectivity to solve the problem of data collection in a distributed machine learning technology. This technology has an extensive observation to provide technical support to secure data sharing in distributed healthcare systems. Most computing systems utilize distributed machine learning to reduce the computation complexity of a centralized database and preserve the privacy of the data owner evaluated by aggregation strategies [37]. To offer better reliability with data sharing, the modeling strategy correlates the random binary output with embedding vectors. Data protection with user privacy makes the computing device train and upload the local model with its respective weights and gradients to the centralized server. In general, federated learning guarantees device privacy for the local data [38]. However, local training has a possibility of data leakage when the modeling parameters are uploaded to untrusted servers.
The untrusted servers utilize the modeling weights and gradients to recover the actual local data in order to observe its network structure. The initial parameter and its training labels may vary over time using adversarial techniques to disclose the private information of the social IoMT device [39]. Modern IoT demands a promising approach to examine the behavior of the computing devices based on the extraction of user profiling patterns to verify the modalities with smart medical devices. Advances in IoMT and social networks communicate with high-end computing devices to establish social links in order to process authentication requests [40]. The continuous interactions within an environment deal with supportive infrastructure to exploit the sensitive features of the information system such as identity and pattern. To overcome the security issues associated with the authentication protocol, the execution trade-off considers a robust optimization approach. The optimization approach consumes less computation and communication cost to meet the desired constraints of real-time applications and expedites the process of authentication to detect malicious behavior with minimum power consumption [41].
This strategy motivates researchers to design a robust lightweight authentication with unpredictable pseudonym updates which rely on hashing and XOR operations to offer high anonymity in the social IoMT [42]. The development of computing paradigms interconnects medical devices and healthcare providers to offer remote consultation and patient monitoring with minimum computation overload in healthcare systems. Of late, various authentication schemes have been designed using elliptic-curve cryptography (ECC) for cloud-centric eHealth systems. Jian et al. [43] designed a cloud-assisted authentication scheme using ECC to secure communication between the users and the cloud server. Yang et al. [44] utilized a secure hash function and elliptic-curve operator to design a robust authentication protocol between wearable devices and cloud servers to achieve proper mutual authentication with minimum computation cost. Izza et al. [45] devised a hybrid authentication protocol based on ECC and lightweight operations to encrypt the data features of wearable medical devices. This mechanism uses symmetric encryption to perform various computing tasks with minimum energy consumption in order to guarantee end-to-end delivery of packet transmission with reduced packet loss.
Most healthcare systems apply lightweight cryptography, including hash functions and XOR operations, to guarantee better transmission efficiency. Alzahrani et al. [46] designed a lightweight authentication scheme for a wearable body area network that uses a hash function and XOR operation to update the device identities locally. Chunka et al. [47] constructed a hash-based authentication scheme to operate the system parameters at the end of session establishment. Wei et al. [48] devised a two-factor authentication protocol with device anonymity for a cloud computing environment. Regrettably, this scheme cannot resist a few security vulnerabilities such as user and gateway masquerading. To address these security issues, this paper formulates a lightweight two-factor authentication framework (L2FAK) with the functionality of privacy preservation, which utilizes a mobile sink for smart eHealth.
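A hedged sketch of the kind of hash-and-XOR masking such lightweight schemes rely on (not the L2FAK protocol itself): the device hides its identity behind a pseudonym derived with a one-way hash and XOR, and the gateway uses the shared secret to recover and verify it. SHA-256 and every name below are illustrative choices, not the paper's parameters.

```python
import hashlib
import secrets

def h(*parts: bytes) -> bytes:
    """One-way hash used by both parties (SHA-256 here for illustration)."""
    d = hashlib.sha256()
    for p in parts:
        d.update(p)
    return d.digest()

def xor(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Shared long-term secret established at registration (illustrative).
K = secrets.token_bytes(32)
device_id = b"sensor-01"

# --- Device side: mask the identity and prove knowledge of K ---
nonce = secrets.token_bytes(32)
pid = xor(h(K, nonce), device_id.ljust(32, b"\x00"))  # unlinkable pseudonym
auth = h(K, nonce, device_id)                          # authenticator tag

# --- Gateway side: recover the identity and check the authenticator ---
recovered = xor(h(K, nonce), pid).rstrip(b"\x00")
ok = h(K, nonce, recovered) == auth
print(ok)  # True for a legitimate device
```

Because the pseudonym depends on a fresh nonce, two sessions from the same device are unlinkable to an eavesdropper, which is the anonymity property these hash/XOR schemes advertise; only hashing and XOR are used, keeping the cost suitable for constrained devices.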

Contribution
The associated technologies interconnect the computing devices with unique identities to transfer sensitive data without human intervention. The development of S-IoMT applications handles the data traffic using the communication channel to prepare a comparative study of different network-based countermeasures, including security requirements and authentication protocols. Healthcare systems operate wearable devices to collect and transmit sensitive data periodically. As a result, in healthcare, the remote monitoring system using the S-IoMT applies decentralized verification and authentication to achieve data security with secure transmission. To meet the essential requirements of the S-IoMT, including session key agreement and credible mutual authentication, this paper designs a lightweight two-factor authentication framework (L2FAK). For practical uses of the S-IoMT, the proposed L2FAK includes secure data storage and transmission when facing a privileged-insider attack. The study analysis showed that the existing lightweight authentication frameworks using the IoMT [49] do not have any specific strategy, such as a machine learning algorithm, to protect the system features; therefore, the public and private keys of the sensing units cannot be well preserved to ensure device security. The major contributions of the proposed L2FAK are as follows.

1. Use a two-factor strategy with privacy preservation and federated learning to block potential threats such as privileged-insider and denial-of-service attacks through an authentic-ware system [15] and to analyze the data features effectively without any centralized server access.
2. Apply a secure averaging function and Boolean and Numerical (BN) responses according to the source attributes of the data to update secret keys locally and to transfer the weighted parameters and their relevant gradients.
3. Design federated learning layered authentication (FLLA), which proactively manages the shared data in any social network, to analyze two different datasets under poisoning attacks. The extensive analysis utilizes the privacy features of FLLA to resist malicious behavior and guarantee better robustness and credibility.
4. Explore the layer attributes of a communication channel to extract the authentication features and enable the classification system to train the authentication process based on controlled parameters with a high-level physical layer [16].
5. Build a practical testbed using Raspberry Pi 3 and Arduino to examine quality metrics such as transmission efficiency and overhead ratio.
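The secure averaging mentioned in contribution 2 can be illustrated with a common pairwise-masking idea (an assumption on our part, not the paper's exact construction): each pair of clients shares a random mask that one adds and the other subtracts, so the masks cancel when the server sums the uploads and only the aggregate is revealed.

```python
import numpy as np

def pairwise_masks(n_clients, dim, seed=0):
    """Generate per-client masks where each pair (i, j) shares a mask m with
    client i adding it and client j subtracting it, so all masks cancel in
    the server-side sum."""
    rng = np.random.default_rng(seed)
    masks = [np.zeros(dim) for _ in range(n_clients)]
    for i in range(n_clients):
        for j in range(i + 1, n_clients):
            m = rng.normal(size=dim)
            masks[i] += m   # client i adds the pairwise mask
            masks[j] -= m   # client j subtracts the same mask
    return masks

updates = [np.array([1.0, 2.0]), np.array([3.0, 0.0]), np.array([2.0, 4.0])]
masks = pairwise_masks(len(updates), dim=2)

# Each client uploads only a masked update; its true value stays hidden.
uploads = [u + m for u, m in zip(updates, masks)]

# The server sums the masked uploads; the masks cancel, so the average
# equals the average of the true updates without revealing any of them.
avg = np.sum(uploads, axis=0) / len(uploads)
print(np.round(avg, 6))  # equals the true average [2, 2]
```

Production secure-aggregation protocols derive these masks from pairwise key agreement and handle client dropouts; this sketch shows only the cancellation property that makes the averaging private.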

Paper Organization
The remaining sections of the paper are organized as follows. Section 2 briefly describes the security efficiencies of existing authentication frameworks against threats, such as forgery, password guessing, user tracking, and perfect secrecy, to highlight the challenges of a resource-constrained IoT. Section 3 presents a smart healthcare system model that addresses two key challenges: computation and communication. Section 4 presents the phases of the proposed L2FAK and FLLA, and Section 5 discusses informal and formal security using RoR and learning analysis, offering computational analysis and reliable authentication. Section 6 presents the performance analysis using a real-time testbed, and Section 7 concludes this research work.

Related Works
This section discusses the issues of security frameworks, artificial intelligence, and federated learning to analyze a few crucial factors such as access rights, security, privacy, heterogeneous networks, etc. Of late, security frameworks have utilized an edge-adaptive federated learning approach for various peer-to-peer computing services. In the past decade, various system functionalities such as resource efficiency, perfect secrecy, anonymity, mutual authentication, nontraceability, revocability, and resiliency have been considered for the improvement of healthcare sectors. In the design of any secure authentication scheme, key properties such as session key agreement and mutual authentication are chiefly concerned with strengthening network performance. In recent studies, various security and privacy issues have been addressed [11-14] for significant solutions in terms of security and performance. One study revealed that the literature on cloud-centric architectures has not had enough work highlighting security and privacy issues.
The authors of [50] developed a provable lightweight authentication framework to preserve the privacy of healthcare systems. This framework uses a dedicated network architecture to explore the functional requirements of the edge computing paradigm in order to protect the privacy of the medical service provider. Kim et al. [51] constructed a lightweight authentication scheme with anonymity preservation to protect the healthcare system against replay attacks. The anonymity preservation uses biometric-based authentication to ensure the key freshness of the message requests while integrating the gateway with medical sensors for any clinical decision. Praveen et al. [52] utilized a bioacoustic signal to design a robust, secure, lightweight authentication scheme to meet the security requirements of IoMT applications, including integrity, authenticity, security, and privacy. This strategy applies the Chinese Remainder Technique (CRT) to generate a group key via a protective network to validate the performance of application systems in terms of computation and communication overhead.
Chen et al. [53] intended to improve the lightweight authentication framework which uses low-power wearable sensors to analyze the key requirements of medical systems. Moreover, the lightweight framework applies biometric authentication to verify the key freshness of the message requests via a dedicated gateway. Nair et al. [54] applied a federated learning framework to construct a lightweight authentication scheme with privacy preservation. This model adopts a strategy of big-data analytics to analyze the functionalities of a multi-tier system architecture with load reduction. Gupta et al. [55] designed context-aware data authentication and access control to resist quantum attacks. The comprehensive analysis proved that context-aware authentication meets the network requirements of IoMT networks such as anonymity, mutual authentication, and quantum security. Chatterjee et al. [56] employed ring signature-based authentication to validate the collaborative environment of the medical system. This scheme exploits quality assessment criteria to resist network attacks such as man-in-the-middle, denial-of-service, and privileged insider, in order to maintain data confidentiality and integrity.
Deebak and Al-Turjman [57] formulated a single sign-on mechanism using a Chebyshev chaotic map to analyze the computing services of a distributed network. This model uses a unary access control strategy to meet the service-level agreements of medical IoT systems. Dharminder et al. [58] developed an efficient authentication framework based on a Chebyshev chaotic map to protect management systems against security vulnerabilities such as key impersonation and the privileged-insider attack. This modeling framework uses key verifiers to examine the requirements of digital systems. Dsouza et al. [59] proposed a policy-based security framework to control the flow of data transmission across multiple application domains and provide a high level of security. This framework initializes attribute-based authentication to acquire essential criteria such as computing resources and services. The main objective is to execute computing services involving storing and processing sensitive information [60]. Shivraj et al. [61] designed two-factor authentication using elliptic-curve cryptography (ECC), which utilizes smaller key sizes, a reliable infrastructure, and a robust testbed to analyze the core features of smart cities. However, this authentication scheme is not well suited to examining the key elements of the three-tier fog computing architecture, namely, pre-processing, storage, and security.
Lu et al. [62] developed a lightweight, privacy-preserving scheme to perform data aggregation in a fog computing environment. In this scheme, three basic techniques (the Chinese remainder theorem, one-way hashing, and the Paillier cryptosystem) were applied to prevent data injection attacks at the edge of the network. Examination results demonstrated that this scheme can mitigate computation and communication costs to meet the standard constraints of a fog computing environment. Kumar and Gandhi [63] utilized datagram transport layer security to address vulnerability to denial-of-service (DoS) attacks. Their method uses the constrained application protocol to optimize deployment with a high number of computing devices. Ibrahim [64] designed a proper mutual authentication framework to explore the key functionalities of a master secret key. This scheme uses smart cards and intelligent devices to verify the identities of users who transmit sensitive data over public channels. Unfortunately, this scheme cannot achieve better device compatibility and user anonymity or lessen signal interference to block unauthorized entities.
Amor et al. [65] designed a reliable authentication framework using a public-key cryptosystem. It uses pseudonym-based cryptography to maintain user anonymity between computing nodes and fog servers. However, this scheme cannot offer a secret session key agreement to meet the general requirements of a fog computing system. Xu et al. [66], Lee et al. [67], and Yu et al. [68] analyzed two-factor and three-factor authentication for a multi-server architecture. Watters et al. [69] implemented short messaging services to analyze the key features of two-factor authentication. Test results revealed that the authentication scheme can only achieve about 76.5% accuracy in analyzing key features such as authentication and anonymity [70]. Amin et al. [71] proved that the security mechanisms of He et al. [72] and Wu et al. [73] directly contact sensors to collect or read medical data. Therefore, their schemes could not restrict offline guessing, user traceability, or a privileged-insider attack. Amin et al. presented two-factor authentication specifically designed for wireless medical sensor networks (WMSN) to address those security weaknesses. However, their scheme is still susceptible to offline password guessing. Kumari et al. [74] presented a novel lightweight authentication scheme that constructs a secure session key between real-time entities. Unfortunately, their scheme could not restrict offline password guessing and user traceability.
Farash et al. [75] designed a secure authentication protocol to prevent forgery and password guessing. Cryptanalysis proved that their scheme is still susceptible to user traceability. Wu et al. [76] presented a lightweight, two-factor authentication scheme that blocks threats such as privileged-insider attacks, user traceability, session key disclosure, and offline password guessing. Unfortunately, their scheme could not ensure perfect forward secrecy. Wazid et al. [77] designed a robust authentication scheme that applies a fuzzy extractor to manage biometric mechanisms. However, their scheme fails to prevent attacks such as password guessing, user traceability, breach of anonymity, etc. Above all, most of the existing authentication schemes still find it challenging to offer better security and privacy protection [78]. Gope et al. [79] developed an authentication framework using a one-time physical unclonable function that updates the challenge-response pair dynamically to prevent a machine learning attack. Jegadeesan et al. [80] devised a lightweight privacy preservation framework with anonymous authentication to resolve the issue of response errors. Jiang et al. [81] utilized a one-way hash function and an ideal physical unclonable function to minimize the operational cost between the medical devices and the server. Table 1 summarizes the challenging issues of existing authentication schemes.

System Model
This section provides a real-time scenario for an eHealth monitoring system offering a better quality of service and context awareness. It has a core deployment of fog and cloud network paradigms to address the challenging features of a large-scale system, such as security, scalability, heterogeneity, and programmability. The network paradigms use a distributed cloud computing model to handle data processing and to offload computing tasks to the cloud. The computing model builds an intelligent platform between the end devices and cloud data centers via authentic gateway access to examine quality metrics such as transmission efficiency and overhead ratio. The key components are as follows.
Sensing Units (IoT Devices)-Wearable sensing units collect the source medical data, such as blood pressure, heart rate, and glucose readings, to infer the conditional status of the patient. The application allows a medical expert to process the healthcare information of a patient via a dedicated gateway to offer better decision making.
Sink Node (Mobile Device)-A sink node can be any of various computing nodes, such as a smart device, a microcontroller, or a sensing component, that acquires and collects medical data. Most on-demand requests issued by end users share the healthcare information of the patient in a way that improves the lifetime of the sensing units.
Authentic Gateway Access-The gateway acts as a reverse proxy to restrict unauthorized access and prevent suspicious activities. Moreover, it can handle authentication requests to protect the critical and sensitive information of the patient.
Cloud Server-In this framework, the cloud server acts as a semi-trusted entity that characterizes the malicious behavior of the mobile device and behaves as honest-but-curious while delivering sufficient computing resources and data sharing between M_E/P_A and M_S. In other words, M_S cannot delete or modify the transferred data of M_E/P_A; however, M_S makes an effort to correlate the gathered data with M_E/P_A to infer the actual data content.
Smart medical sensors are commonly implanted in or around the patient's body to read physiological data that support healthcare monitoring in real time [44]. They are portable and small enough to provide device intercommunication. They are designed to be implanted in, or worn on, a patient's body to record vital signs such as breathing rate, heart rate, blood pressure, etc. Data communication is essential for elderly people or in an emergency situation, where sensitive data are processed wirelessly. It may be necessary to monitor or assess the medical situation or take immediate action to obtain proper treatment from doctors or medical experts. Figure 2 shows a model smart healthcare system with authentic gateway access. It has three real-time entities (M_E, M_D, and AG_Access) that handle sensitive information of patients via dedicated fog computing. Owing to the limited computation resources available for gathering medical data, it is preferable to use lightweight cryptographic operations, including the bitwise exclusive-OR operator and collision-resistant hash functions [45]. On the other hand, AG_Access has sufficient resources to provide a secure interface between M_D and M_E. These entities demand protected transmissions to achieve mutual authentication, data privacy, and anonymity. In addition, AG_Access must provide session unlinkability and nontraceability to strengthen security efficiency. Because of public network access, data transmissions are easily susceptible to severe security risks, such as replay attacks, eavesdropping, data modification, data interception, etc. Moreover, intruders or adversaries may try to launch malicious techniques such as forgery, session key disclosure, key impersonation, privileged-insider attacks, etc. It is worth noting that the overhead constraints on medical sensors can substantially weaken system efficiency.
In the system model, the patient's condition is monitored periodically using smart sensors to assess status, including blood pressure, pulse rate, pedometer readings, etc. Smart sensors infer the medical condition of the patient through an access point. Subsequently, the inferred information is sent to the cloud via a system gateway to verify the system attributes using federated learning. The system acts as a smart entity to register the legal M_E that collects sensitive patient data to observe their physical condition. As noted in [23], overall system costs may vary depending upon the number of transmission bits, b_l. Moreover, communication costs may directly influence the transmission distance between sensors and the target entity. Table 2 shows the notations used in the L2FAK protocol.

Threat Model
In accordance with the system model, a formal adversarial setting is considered to assess four different types of threats that may undermine the security efficiency of the proposed L2FAK.
Formal Security Definition: A formal security assumption is introduced under probabilistic polynomial time (PPT) to represent the behavior of malicious or revoked users. Such a malicious actor may forge requests or deceive the cloud server to cause a privacy leakage [82]. Moreover, the assumption defines security against a malicious user, where a PPT adversary AD_PPT is supposed to play the following game with a competitor C.
Setup: C initiates the non-identities of M_E/P_A using the proposed L2FAK. It is assumed that AD_PPT represents β as a non-identity of L2FAK to assess its behavior toward C. As a result, C instructs β to attack the non-identity of L2FAK and utilizes the subprogram of AD_PPT to work over L2FAK. In addition, β, as the competitor of L2FAK, trains AD_PPT and makes an effort to obtain the results of AD_PPT to drive an attack against the non-identity of L2FAK. Obtaining the inputs of the non-identity of L2FAK, C generates the parameters SK = {MS_i, ps_2, ps_3, ps_4, S_key, H(·)}, and β produces the parameters SK' = {MS_i, ps_2, ps_3, ps_4, S_key, x_1, pk_gw, H(·)} to obtain the data content. In addition, the system parameters are passed to the adversaries β and AD_PPT.
Queries: The adversaries can strategically issue data queries to C, which maintains query lists to explore the relationship between the data or address sequences; each list is initially empty.
(1) Hash Function H(·): The adversary queries the hash value of any identity i. C finds w_i and returns the value to the adversary. Using this query, the adversary can obtain the parameter of any secret key S_key.
(2) Key: In query execution, C processes the function list H(·)_List for the adversary. It is worth noting that H(·)_List utilizes the hash value of H(·) to obtain the user identities M_id_j. If S_key has not been queried before, then C will generate a legal message request M_i. Using this query, the adversary can obtain the parameter of any S_key to acquire the encrypted messages.
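To make the query phase concrete, the following is a minimal sketch of the list-keeping a competitor C performs when answering repeated H(·) queries consistently. The class name, the 32-byte digests, and the example identities are illustrative choices, not part of the scheme itself.

```python
import os

class RandomOracle:
    """Toy model of the competitor C answering H(.) queries: the list is
    initially empty, and each identity gets one consistent random answer."""
    def __init__(self):
        self.h_list = {}  # the H(.)-list maintained by C

    def query(self, identity: bytes) -> bytes:
        # Return the recorded value w_i if this identity was queried before;
        # otherwise draw a fresh random answer and record it in the list.
        if identity not in self.h_list:
            self.h_list[identity] = os.urandom(32)
        return self.h_list[identity]

oracle = RandomOracle()
w1 = oracle.query(b"M_id_1")
w2 = oracle.query(b"M_id_1")   # repeated query: same recorded answer w_i
```

In a security proof, this bookkeeping is what lets C embed a challenge into selected answers while remaining indistinguishable from a true random function to AD_PPT.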
End-Game: Lastly, the adversary and the competitor obtain the encrypted messages C*_η and C'_η, respectively. If C*_η = C'_η holds, then the adversary succeeds in its computation process to derive the encrypted messages.
Definition 1. The L2FAK protocol is secure against forging of the encrypted messages, ensuring privacy preservation, if any adversary AD_PPT playing the game with the competitor obtains only a negligible probability of success.
Four Types of Adversary Acts: To analyze the security efficiencies of the proposed L2FAK, the capabilities of an adversary AD_PPT are as follows.

1. AD_PPT may collude with multiple user entities to infer the secret key of M_E/P_A without any proper permission of M_S to gain server access.
2. M_S may be a semi-trusted cloud server that colludes with revoked user entities to maintain the encrypted data without the consent of AG_Access. Even if the outsourced data are known to the revoked users, the semi-trusted cloud still holds the key derivatives of M_E/P_A to secure the data transmission.
3. When any user of a group tries to access the shared data, it may operate its access types in a different form to protect the content of the data. Moreover, the revoked user cannot collude with the semi-trusted cloud to guess the information of interest.
4. The cloud server cannot determine the significance of the encrypted data content or explore its relationship with data and address sequences. In addition, the curious server can attempt to track the content of data based on the access time to determine its priority.

The Proposed L2FAK
This section systematically constructs the architectural processes of eHealth applications to fulfill design criteria such as confidentiality and integrity. To structure the protocol, the design is composed of four basic entities, namely, sensors, medical experts, a mobile sink, and an authentic gateway. The execution phases of the L2FAK protocol consider the following assumptions for the significant roles of the real-time entities:
1. AG_Access assumes the role of a trusted node to establish and manage the point of service via proper authorization requests.
2. M_key utilizes a hardware device to generate unique passcodes among real-time entities for single sign-on authentication.
3. S_key uses secret key distribution to verify device identities and ensure mutual authenticity via S_key = H(M_key ‖ ID_gw).
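The key derivation in assumption 3 can be sketched as follows; SHA-256 stands in for the scheme's unspecified hash H(·), and the byte values for M_key and ID_gw are illustrative only.

```python
import hashlib

def derive_secret_key(m_key: bytes, id_gw: bytes) -> bytes:
    """S_key = H(M_key || ID_gw): concatenate the hardware passcode with
    the gateway identity and hash once (SHA-256 as a stand-in for H)."""
    return hashlib.sha256(m_key + id_gw).digest()

# Illustrative inputs: a hardware-generated passcode and a gateway identity.
s_key = derive_secret_key(b"hardware-passcode", b"GW-01")
```

Because S_key is bound to both M_key and ID_gw, neither factor alone suffices to reproduce it, which is what makes the construction two-factor.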
The L2FAK scheme is composed of four execution phases (pre-deployment; initialization; registration; and login and authentication) covering medical-expert registration, login, authentication, and session-key updates.
Phase 1-Pre-deployment: In this phase, M_E/P_A negotiates with AG_Access to obtain S_key. To be associated with a legal system, each M_D utilizes S_key along with information about M_E/P_A. Furthermore, it is assumed that S_key cannot be accessed or obtained by A_dv.
Phase 2-Initialization: This phase carries out systematic operations over a secure channel, sending the registration request for medical device M_D (along with its identity, D_ID) to server S. After receiving the request, S generates a challenge, CH, to verify the next interaction with D_ID. As a result, S has a series of new challenges, CH_SYN = {ch_1, ch_2, ..., ch_n}, which requires proper re-synchronization with D_ID to send a functional argument {CH, CH_SYN} to M_D. Accordingly, M_D extracts functional outputs R_CH = PUF_ID(CH) and R_CH-SYN = PUF_ID(CH_SYN) using {CH, CH_SYN} to process functional parameters {R_CH, R_CH-SYN} with S.
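The challenge-response exchange above can be sketched in software. A real PUF derives its responses from physical device variation; here a keyed hash plays that role purely for illustration, and the challenge sizes are arbitrary choices.

```python
import hashlib
import os

class ToyPUF:
    """Software stand-in for PUF_ID: a secret sampled at construction models
    the device's intrinsic physical variation (a real PUF is hardware)."""
    def __init__(self):
        self.secret = os.urandom(16)

    def response(self, challenge: bytes) -> bytes:
        # Same device + same challenge -> same response, as the protocol needs.
        return hashlib.sha256(self.secret + challenge).digest()

puf = ToyPUF()
ch = os.urandom(8)                           # server challenge CH
ch_syn = [os.urandom(8) for _ in range(3)]   # re-synchronization set CH_SYN
r_ch = puf.response(ch)                      # R_CH = PUF_ID(CH)
r_ch_syn = [puf.response(c) for c in ch_syn]  # R_CH-SYN = PUF_ID(CH_SYN)
```

The server stores (CH, R_CH) pairs so that later it can issue CH and compare the device's fresh response against the enrolled R_CH without the secret ever leaving the device.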
To prove the legitimacy of computing device M_D, S generates first-factor authentication, including a short-term identity ST_ID = H(R_CH ‖ M_key) and a secret key, S_key. In addition, S finds a set of fake identities along with key pairs, (F_ID, K_P) = {(F_ID1, K_P1), (F_ID2, K_P2), ..., (F_IDn, K_Pn)}, to prepare a valid argument, {ST_ID, S_key, (F_ID, K_P)}, which sets up secure channel access with D_ID. Lastly, S stores the essential parameters, {(ST_ID, S_key), (CH, R_CH), (CH_SYN, R_CH-SYN), (F_ID, K_P)}, in its database, D_B, to verify the parameters of M_D, i.e., {(ST_ID, S_key), (F_ID, K_P)}.
Phase 3-Registration (M_E/P_A): M_E/P_A deal with AG_Access to register credentials safely before message requests are transmitted via secure channels. The execution steps are as follows.
Step 1: M_E/P_A select their own identities, M_id/PA_id, generate a pseudo-random number, ps_1, to compute a pseudo-identity, PID_i = H(M_id/PA_id ‖ H(ps_1)), and then transmit PID_i to AG_Access.
Step 2: After receiving the parameter PID_i, AG_Access determines whether PID_i is already registered in D_B. AG_Access generates a temporary pseudo-identity, TID_i, to compute TP_1 = H(TID_i ‖ x_1 ‖ ID_gw ‖ S_key) and assigns TP_1 to M_E/P_A. Subsequently, AG_Access stores the values {TID_i, TP_1} for M_D and allows M_E/P_A to access PID_i via D_B. Finally, AG_Access transmits data from medical device M_D to M_E/P_A.
Step 3: M_E/P_A set a strong password, p_wd, in order to compute TP_2 = H(PID_i ‖ TID_i ‖ p_wd) to protect the device credentials of M_E/P_A from A_dv. Later, M_E/P_A compute TP_3 = H(H(p_wd_i) ‖ M_id/PA_id) ⊕ H(ps_1), TP_4 = H(p_wd_i ‖ ps_1), and TP_5 = H(TP_1 ‖ S_key ‖ PID_i) to store the system parameters {TP_2, TP_3, TP_4, TID_i, S_key} in M_D.
Phase 3-Registration (Sensor): Assume that a medical sensor, M_S, wishes to register with AG_Access via dedicated device M_D to transmit messages via secure channels.
Step 1: AG_Access chooses an identity, MS_i, for medical sensor M_S and applies the private key pk_gw of AG_Access to protect the M_S identity.
Step 2: Additionally, AG_Access utilizes the values of MS_i and pk_gw to compute a pseudo-identity for M_S, i.e., P_MS_i = H(MS_i ‖ pk_gw).
Step 3: After obtaining P_MS_i, AG_Access sends the parameters {P_MS_i, MS_i, PID_i} to M_S and subsequently stores them in M_D via M_S to limit data access based on the device characteristics and to establish secure communications with an uncompromised M_S.
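The registration hashes above are plain compositions of H over concatenated fields. The sketch below shows the expert's pseudo-identity and the sensor's pseudo-identity side by side, with SHA-256 standing in for H(·), ‖ rendered as byte concatenation, and all input values purely illustrative.

```python
import hashlib

def H(*parts: bytes) -> bytes:
    """H(a || b || ...) with SHA-256 standing in for the scheme's hash."""
    return hashlib.sha256(b"".join(parts)).digest()

# Expert/patient side (Phase 3, Step 1): PID_i = H(M_id || H(ps_1)).
ps1 = b"ps1-random-nonce"
pid_i = H(b"M_id-alice", H(ps1))

# Gateway side (sensor registration): P_MS_i = H(MS_i || pk_gw).
ms_i = b"MS-007"
pk_gw = b"gateway-private-key"
p_ms_i = H(ms_i, pk_gw)
```

Note that PID_i hides the real identity behind the nonce ps_1, while P_MS_i binds the sensor identity to the gateway's private key, so neither pseudo-identity can be linked back without the corresponding secret.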
Phase 4-Login and Authentication: In this phase, the system can legally process message requests when M_E/P_A review and verify the credibility of patient information. To gain system access, M_E/P_A enter a legible S_key into M_D, which authenticates the communication with M_S via AG_Access, as shown in Figure 3. This phase operates the execution steps of the communication entities M_E/P_A and M_S via AG_Access to exhibit the significance of secure registration and validation with the legitimate mobile/medical device M_D. Initially, S handles the registration process with M_E/P_A to verify the message requests and manages the medical devices M_D with AG_Access to control the system parameters. To begin the registration process, M_D safely registers the secret key S_key with its trusted AG_Access to validate the legitimacy of the patient. AG_Access also generates a pseudo-random number, ps_3, to compute a legal message request M. Following this process, AG_Access supplies the computation parameters {E_1, E_2} not only to enroll the medical device M_D but also to compute their signatures to advertise a secure session key SK within the medical center. The center considers a learning network to generate a local chain whereby entities such as M_S and AG_Access can handle a local batch signature to create private and public slabs in order to validate the service management controlled by the inter-hospital networks. The learning network forms predictive information to validate the legitimate device M_D via AG_Access and to determine the appropriate security features, including the decryption key, which manages the network verification with M_S to extract the important parameters of M_D and to establish secure communication between M_E/P_A and M_S.
Step 1: M_Ei/PA_i enter the login credentials M_idi/PA_idi along with p_wdi via the preferred M_D to compute H(ps_1) = H(H(p_wdi) ‖ M_idi/PA_idi) ⊕ TP_3. Subsequently, M_Ei/PA_i find PID_i = H(M_idi/PA_idi ‖ H(ps_1)) from H(ps_1) to verify whether (H(p_wdi) ‖ M_idi/PA_idi) ?= TP_4 as stored in M_D. After successful verification, M_E/P_A is determined to be legitimate and can successfully log in.
Then, M_Ei/PA_i generate another pseudo-random number, ps_2, to compute E_1 = EC_TP5(ps_2 ‖ MS_i ‖ PID_i ‖ S_key) and E_2 = H(ps_2 ‖ MS_i ‖ PID_i ‖ TP_1). Lastly, the computation parameters {E_1, E_2, TID_i} are transmitted via M_D to AG_Access.
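The protocol leaves the symmetric cipher EC_TP5/D_TP5 unspecified. As a hedged illustration of keying a cipher with TP_5 and recovering the concatenated fields at the gateway, the sketch below uses a SHA-256-derived XOR keystream; this is a teaching toy, not the paper's cipher and not a secure AEAD construction.

```python
import hashlib

def xor_stream(key: bytes, data: bytes) -> bytes:
    """Toy stand-in for EC_TP5/D_TP5: XOR the payload with a SHA-256
    counter keystream. Applying it twice with the same key decrypts."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

tp5 = hashlib.sha256(b"TP1|S_key|PID_i").digest()   # key derived like TP_5
payload = b"ps2|MS_i|PID_i|S_key"                    # ps_2 || MS_i || PID_i || S_key
e1 = xor_stream(tp5, payload)                        # E_1 at the sender
recovered = xor_stream(tp5, e1)                      # D_TP5(E_1) at AG_Access
```

Only a holder of TP_5 (itself derived from TP_1, S_key, and PID_i) can invert E_1, which is what lets AG_Access alone recover the nonces in Step 2.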
Step 2: AG_Access initially uses TP_1 = H(TID_i ‖ x_1 ‖ pk_gw) to obtain (ps_2 ‖ MS_i ‖ PID_i ‖ S_key) = D_TP5(E_1) via functional decryption. Additionally, AG_Access utilizes D_B to obtain the pseudo-random identities M_idi/PA_idi using (ps_2 ‖ MS_i ‖ PID_i ‖ S_key) to verify the source values with E_2.
Additionally, AG_Access generates another pseudo-random number, ps_3, to compute E_3 = H(M ‖ MS_i) ⊕ (ps_2 ‖ ps_3 ‖ PID_i) and E_4 = H(M ‖ MS_i ‖ ps_2 ‖ ps_3 ‖ PID_i). Finally, AG_Access transmits the parameters {E_3, E_4} to M_S.
Step 3: To check the legitimacy of AG_Access and to obtain the ps_2, ps_3, and PID_i values, M_S computes (ps_2 ‖ ps_3 ‖ PID_i) = H(M ‖ MS_i) ⊕ E_3. Furthermore, M_S determines H(M ‖ MS_i ‖ ps_2 ‖ ps_3 ‖ PID_i) to verify the values with E_4.
After successful verification, M_S generates a pseudo-random number ps_4 to find SK = H(MS_i ‖ ps_2 ‖ ps_3 ‖ ps_4 ‖ S_key), and creates a random number r_l to compute W_L = EC_GW(r_l), E_5 = H(M ‖ ps_3 ‖ MS_i ‖ r_l) ⊕ ps_4, and E_6 = H(M ‖ ps_3 ‖ ps_4 ‖ SK). In the end, M_S sends the system parameters {E_5, E_6, W_L} to AG_Access.
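Once both sides hold the same nonces, the session key derivation SK = H(MS_i ‖ ps_2 ‖ ps_3 ‖ ps_4 ‖ S_key) is symmetric, so sensor and gateway arrive at the same SK independently. A minimal sketch (SHA-256 for H, illustrative byte values):

```python
import hashlib
import os

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

ms_i, s_key = b"MS-007", b"shared-secret"
# Nonces exchanged during login/authentication (fresh per session).
ps2, ps3, ps4 = os.urandom(16), os.urandom(16), os.urandom(16)

# M_S and AG_Access each derive SK from the same inputs.
sk_sensor = H(ms_i, ps2, ps3, ps4, s_key)
sk_gateway = H(ms_i, ps2, ps3, ps4, s_key)
```

Because ps_2, ps_3, and ps_4 are fresh per session, an eavesdropper who later compromises one session key gains nothing about earlier or later sessions, which is the intuition behind the forward-secrecy claim in the analysis.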
Step 4: AG_Access finds r_l = DC_AG(W_L) to compute the pseudo-random number ps_4 = E_5 ⊕ H(M ‖ ps_3 ‖ MS_i ‖ r_l). Accordingly, AG_Access evaluates SK = H(MS_i ‖ ps_2 ‖ ps_3 ‖ ps_4 ‖ S_key) to verify the values with E_6. Successful verification prompts AG_Access to generate a temporary identity, T_ID^New, and to compute TP_1^New.
Step 5: M_Ei/PA_i initially decrypt E_7 using TP_5 to find the target value (ps_4 ‖ ps_3 ‖ TP_1^New ‖ T_ID^New).
Phase 5-Learning Framework: In this phase, the application service uses the statistical features of the patients to discover a user behavior model. The designed model uses a connection module to extract the statistical vectors, including a timestamp, that compute the authentication levels of the interactive devices in real time. To meet the desired goals, the proposed framework uses four basic components. Device Communication monitors the computing data via a dedicated application to manage complex issues in heterogeneous environments; moreover, dynamic systems learn machine intelligence to discover data-driven decisions that optimize the network functionalities. Data Preparation transforms the stored data to build an accurate prediction model, which can explore a few essential tasks to uncover the relevant attributes of the application services, i.e., {ST_ID, S_key, (F_ID, K_P)}. Data Storage creates and maintains the application database D_B to verify the parameters {ST_ID, S_key, (F_ID, K_P)}, to leverage system performance, and to manage the file systems in parallel with low delivery latency.
Seamless Authentication simplifies the signing process to explore the trials of the password authentication scheme in order to examine the identities, social relationships, and access privileges.
In the lightweight device, the modeling parameters are preserved to compute the gradients effectively. In addition, the parameters of the healthcare systems compare learning algorithms with various layer attributes to characterize the significance of the computing devices. The layer attributes explore adaptive re-training to enhance the features of the detection model and to automate the utilization of neural networks using deep learning-based blind feature extraction. It is also worth noting that the proposed model operates the channel estimation matrix H(N × 256) using a convolution network to capture the essential properties of the computing layers. Specifically, the modeling system has no consistent values between the predicted values of the authentication models, and thus, the true predicted values are computed using the softmax loss function, i.e., S_L = −∑_{k=1}^{N} Y_k · log S_k, where k indexes the legitimate computing devices, Y_k represents the class label initially set to 1, and S_k denotes the kth value of the desired vector S. Hence, the softmax function can be rewritten as S_k = e^{a_k} / ∑_{j=1}^{N} e^{a_j}, where a_k is the predicted value of the kth computing device at the fully connected layer of the application system. The purpose of a knowledge-based system is to acquire the blind features that functionalize the neurons to learn the complex features iteratively. Each feature operates the mapping function of two adjacent layers, where H, W, and D define the height, width, and depth of the convolution matrix. In the resulting output matrix, bi_d is the model bias, (S_V, S_W) denotes the sampling vectors of the matrix, namely, vertical (V) and horizontal (W), and (P_V^−, P_V^+, P_W^−, P_W^+) represents the output padding values in the directions of V and W.
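The softmax and its loss S_L = −∑_k Y_k · log S_k can be sketched directly; the logits and one-hot label below are hypothetical per-device scores, not values from the paper.

```python
import math

def softmax(a):
    """S_k = exp(a_k) / sum_j exp(a_j) over the fully connected outputs."""
    m = max(a)                       # subtract the max for numerical stability
    e = [math.exp(x - m) for x in a]
    s = sum(e)
    return [x / s for x in e]

def softmax_loss(a, y):
    """S_L = -sum_k Y_k * log(S_k), with Y the one-hot device label."""
    s = softmax(a)
    return -sum(yk * math.log(sk) for yk, sk in zip(y, s))

scores = [2.0, 0.5, -1.0]   # hypothetical logits for three devices
label = [1, 0, 0]           # device 1 is the legitimate one
loss = softmax_loss(scores, label)
```

A well-trained authentication model drives the loss toward 0 for legitimate devices, which matches the convergence criterion stated for the forward/backward propagation below.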
Electronics 2023, 12, 1250
In order to quantize the mapping functions, the proposed neural network utilizes a 1-bit random quantization scheme. This scheme utilizes a probabilistic quantizer to guarantee better quantization, exploiting a mapping function to identify the level of quantization, i.e., q_l = ⌈log_2(S_V · S_W)⌉.
Here, ‖·‖_F is the Frobenius norm of the two independent solutions, simplified by the Euclidean norm of the matrix. Hence, the quantized matrix Ŵ_ij is defined through a random function ∂_ij that expresses the distribution, where sgn(x) is the function suited to quantizing the positive and negative values of the sampling vectors at the 1-bit level. The convolution layer directly feeds its output data to the pooling layer to minimize the computing attributes, considering a 2 × 2 × 128 configuration to operate two convolutions and two pooling functions. This functional operation uses a fully connected layer as the target, which designs its own softmax loss to optimize the computation used in the proposed L2FAK while training on the sensitive data, i.e., S_L = −∑_{k=1}^{N} Y_k · log S_k. The proposed federated scheme uses three computing phases, namely, training, authentication, and re-training, to learn the significance of blind features based on a convolutional neural network. The classified attributes perform both forward and backward propagation to converge the computed value close to 0, which determines the legitimacy of any computing device. Each device shares its relevant parameters to train the physical characteristics of a well-trained neural network and to determine the data values of the computing device, as shown in Algorithm 1, from which the device prediction is defined. It is worth noting that the fully connected layer determines whether the incoming messages of the computing devices are legitimate or not, in order to verify the performance of the proposed FLLA against other learning mechanisms.
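One common instantiation of such a probabilistic 1-bit quantizer maps each weight to one of two levels with a probability chosen so the quantized value is unbiased in expectation. The sketch below is an illustration of that idea under these assumptions, not a reconstruction of the paper's exact Ŵ_ij formula.

```python
import random

def stochastic_1bit(w, w_min, w_max):
    """Probabilistic 1-bit quantizer: round w to w_min or w_max with a
    probability that makes the quantized value unbiased in expectation."""
    p = (w - w_min) / (w_max - w_min)   # probability of rounding up to w_max
    return w_max if random.random() < p else w_min

random.seed(0)  # fixed seed so the sketch is reproducible
weights = [0.1, -0.3, 0.7, 0.0]         # hypothetical layer weights
lo, hi = min(weights), max(weights)
q = [stochastic_1bit(w, lo, hi) for w in weights]
```

Transmitting one bit per weight (plus the two levels) is what shrinks the model updates each federated client uploads, which is the point of quantizing the mapping functions before aggregation.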

System Evaluation
This section discusses the security properties of the proposed L2FAK with the support of informal, formal, computational, and learning analyses.

Informal Analysis
The analysis of the security properties is as follows. Mutual Authentication: To analyze mutual authenticity between M_Ei/PA_i and M_S, the communicating parties share a common session key to authenticate each other. In the L2FAK scheme, M_S authenticates M_Ei/PA_i using SK = H(MS_i ‖ ps_2 ‖ ps_3 ‖ ps_4 ‖ S_key) via M_D. In the system login and authentication phase, AG_Access authenticates M_Ei/PA_i using the calculations TP_1 = H(TID_i ‖ x_1 ‖ pk_gw) and (ps_2 ‖ MS_i ‖ PID_i ‖ S_key) = D_TP5(E_1) to check whether M_Ei/PA_i meets the conditional expression on (ps_2 ‖ MS_i ‖ PID_i ‖ S_key) to initiate data transmission. Though A_dv tries to intercept the login requests of M_Ei/PA_i and attempts to falsify the activities of AG_Access, the attacker cannot find the source parameters {ps_2, ps_3, PID_i} to calculate derivative factors such as {E_3, E_4}. Therefore, A_dv cannot transmit a legitimate message to AG_Access. Hence, the proposed L2FAK adheres to mutual authentication of M_Ei/PA_i.
Session-Key Agreement: In L2FAK, M_Ei/PA_i share a common secure session key via AG_Access. Upon launch of the login and authentication phase, M_Ei/PA_i can confidently exchange sensitive data by knowing the common session key. P_Ai data gathered by M_D are encrypted from the computation of SK = H(MS_i ‖ ps_2 ‖ ps_3 ‖ ps_4 ‖ S_key). Then, a secure session key is determined in order to validate W_L = EC_GW(r_l) using E_5 = H(M ‖ ps_3 ‖ MS_i ‖ r_l) ⊕ ps_4 and E_6 = H(M ‖ ps_3 ‖ ps_4 ‖ SK). Because the parameters ps_2, ps_3, ps_4, and S_key change periodically during execution, different sets of secure SK can be generated.
Resilience against Offline Password Guessing: Suppose A_dv extracts the system parameters {TP_2, TP_3, TP_4, TID_i, S_key} from S_D. In addition, M_Ei/PA_i make an effort to find a new secret key, SK^New, to compute E_8 = H(TP_1^New ‖ T_ID^New ‖ SK ‖ ps_3 ‖ ps_4 ‖ TP_1). However, a valid key SK^New cannot be computed because it is irretrievable from the M_D of M_Ei/PA_i. Thus, the proposed L2FAK can be resilient to offline password-guessing attacks.
Resilience against User Forgery: To forge communications of any legal entity, A_dv requires parameters such as {TP_2, TP_3, TP_4, TID_i, S_key}. To obtain them with little effort, A_dv tries to compute TP_3, TP_4, and TP_5, which consist of PID_i, TID_i, and p_wd. Since S_key is irretrievable, A_dv cannot infer or obtain a legal identity for AG_Access in order to derive a valid message request to authorize the session. Thus, the proposed L2FAK is resilient against the user forgery attack.
Resilience against Gateway Forgery: Assume A_dv generates PID_i using H(M_idi/PA_idi ‖ H(ps_1)). As a result, A_dv claims that a legal request may be successfully generated. However, the proposed L2FAK does not permit anyone to generate a valid request without the proper associations for ps_2, MS_i, and S_key. Importantly, S_key and M_key are very hard to derive because they are associated with high-level security features. Thus, the proposed L2FAK can be resilient against gateway forgery.
Resilience against Gateway User Tracking: As AG_Access randomly generates the private key pk_gw to establish secure sessions, M_Ei/PA_i cannot be tracked by adversaries. In addition, S_key is irretrievable; thus, a legal request cannot be generated to track user sessions.
Perfect Forward Secrecy: In most cases, A_dv tries to obtain system parameters such as PMS_i and MS_i. Moreover, A_dv may examine legal message requests such as PMS_i, MS_i, PID_i and TP_2, TP_3, TP_4, TID_i, S_key to generate a valid session key SK using H(MS_i ‖ ps_2 ‖ ps_3 ‖ ps_4 ‖ S_key). As a rule, M_D generates a fresh and secure SK by assigning a random number ps to each session. As a result, A_dv cannot obtain the legal values PMS_i, MS_i, and PID_i to recover a past secure session. Hence, the proposed L2FAK ensures perfect forward secrecy.
Under the procedures of the L2FAK scheme, M_Ei/PA_i can mutually endorse one another to access sensitive IoT-ECF data, and M_Ei can access patients' private information via AG_Access. As the session key is securely shared among the communicating entities, L2FAK achieves the properties of mutual authenticity and session-key agreement, and withstands user and gateway masquerades as well as privileged-insider and replay attacks, improving security efficiency.

Formal Analysis Using Random Oracle Model
This section presents a formal analysis of the login and authentication phase to show the security efficiency of the proposed L2FAK [83].
Theorem 1. A revoked user cannot learn the stored files of M_E/P_A even if it colludes with the cloud server. Moreover, after successful revocation, the user is not capable of learning the content of the data stored as blocks.
Proof. When any user quits communication with M_S under the proposed L2FAK, the parameters ps_2, ps_3, and ps_4 are utilized to decrypt the data files, and the computation r·ps_2 + r·ps_3 + r·ps_4 = r·SK is performed again. In the following instance, the revoked user may use the secret key S_key to participate and learn the data contents; subsequently, the revoked user colludes with M_S to decrypt the data file using an authentic key. The key verification is as follows:

e(τ·ps, ν·ps)^(H(K_i)·(P_pa + M_pa + PID_pa))
= e(τ·ps, ν·H(K_i)·(P_pa + M_pa + PID_pa)·ps)
= e(ν·ps, H(K_i)·P_pa·ps)^τ · e(ν·ps, H(K_i)·M_pa·ps)^τ · e(ν·ps, H(K_i)·PID_pa·ps)^τ
= ps_2^τ · ps_3^τ · ps_4^τ    (8)

However, after re-encryption with the random factor r, the encrypted-key parameters become (r·ps_2)^τ · (r·ps_3)^τ · (r·ps_4)^τ, which no longer match ps_2^τ · ps_3^τ · ps_4^τ; hence, the revoked user cannot perform any decryption to update the source file. As a consequence, the proxy M_S retains control over the access rights of revoked users, who therefore cannot collude with M_S to learn the actual data content.
Theorem 2. The semi-trusted cloud server cannot differentiate among its access operations to learn the data contents of interest.
Proof. When any piece of data is accessed several times, the curious cloud server determines that such data are more significant to monitor, or attempts to tamper with a forged secret key. Meanwhile, the curious cloud may collaborate with revoked users to learn the data contents and their access capabilities. Thus, the probability of accessing any item should be identical to that of the other data held by the cloud server. Each access operation includes one real and 2N pseudo-random requests issued to the cloud server while the user applies the access-control algorithm. In practice, the real request is known only to the authentic user who gains system access. It is worth noting that accesses are uniformly distributed over the cloud server to maintain data consistency. Even though a lazy-obfuscation algorithm is applied to access the data content, the relationships within the data, including address sequences, cannot be exploited by the cloud server to discover its actual form. The user performs a specific access operation on the data content so that it appears as new data to the cloud server. Hence, the semi-trusted cloud server cannot differentiate among its access operations to learn the data contents of interest.
Theorem 3. The accessed data pattern remains secure under the adversary AD_PPT. Assume the proposed L2FAK supports data untraceability to enhance the privacy of the data content. The adversary tries to access the proposed L2FAK over a wireless channel, initiating the session between M_E/P_A and M_S to arbitrate the activities of user U. Moreover, the adversary is assumed to determine a few computation parameters, M = H(ps_2 ‖ MS_i) and PMS_i = H(MS_i ‖ pk_gw), and executes the two queries {M_s, N_s} and {N_s, M_s} to breach the secure communication of the proposed L2FAK.

Computation Analysis
In this subsection, the performance of the proposed L2FAK is evaluated along with other existing schemes [42,44,49-51,53]. In the computation analysis, the system login and authentication phases were considered to examine the security features of the proposed L2FAK and the other schemes [42,44,49-51,53]. The authentication schemes employ the OpenSSL library between two computer terminals to analyze the computation costs. The user-side terminal had a Core i3-1035G1 CPU with 8 GB of RAM and a clock speed of 3.6 GHz, whereas the server-side terminal had a Core i5-1035G1 CPU with 16 GB of RAM and a 2.3 GHz clock speed. The user and server were connected over an H3C S1024R switch, providing the connected devices with 100 Mbps of bandwidth; the simulation was performed more than 100 times. Following the NIST recommendation [26], P-192 is preferred as the standard elliptic curve.
To regulate the message digest, SHA-256 cryptographic hashing was utilized. Table 3 shows the execution times of the cryptographic operations. Since L2FAK has a low computation overhead of 15.343 ms, the phase execution time of the proposed L2FAK can be further reduced to achieve better computation efficiency compared with the other authentication schemes [42,44,49-51,53], as shown in Table 4.
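The timing methodology can be illustrated with a minimal sketch; the paper's figures were obtained with OpenSSL on the terminals described above, so the snippet below (using Python's hashlib) only demonstrates how per-operation averages such as those in Table 3 are measured, not the reported numbers:

```python
import hashlib
import timeit

def time_op(fn, repeat: int = 100) -> float:
    """Average execution time of fn over `repeat` runs, in milliseconds."""
    return timeit.timeit(fn, number=repeat) / repeat * 1000.0

msg = b"x" * 32  # an illustrative 256-bit message, roughly one protocol field

# Time one SHA-256 digest; a full phase cost sums the timings of every
# primitive (hash, XOR, ECC point multiplication, ...) the phase invokes.
t_hash = time_op(lambda: hashlib.sha256(msg).digest())
print(f"SHA-256 over {len(msg)} bytes: {t_hash:.5f} ms per call")
```

Averaging over many repetitions, as the paper's 100+ simulation runs do, smooths out scheduler noise in the per-operation estimates.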

Learning Analysis
In the learning analysis, the MNIST and FashionMNIST datasets [84] are adopted to evaluate the proposed FLLA and other relevant layered mechanisms [17,18]. The evaluated mechanisms utilize a dedicated message-passing interface (MPI) to maintain optimal load factors in a distributed environment [85]. The environment uses Python to implement the source code and prefers a high-performance computing package, mpi4py, to access the computing platform. The platform is a 12th Gen Intel Core i7 with 16 GB of RAM, 14 cores, and a clock rate of 4.7 GHz. The system uses the MPI specification to build the source code of the proposed FLLA and the other layered mechanisms [17,18] and to provide a separate object interface. The object interfaces exploit a few significant key features of the configured prototype to facilitate the computation process. The prototype has one central server C_S, four computing services cp_s, and one coded data service D_s to test the behavior of targeted and untargeted attacks. Table 5 shows the detailed descriptions of the datasets. MNIST contains handwritten digits from 250 different people, 50% of whom are high-school students and 50% Census Bureau employees. The dataset has a similar proportion of digit data to test its relevance and is composed of 60,000 training and 10,000 testing images used to verify the performance of the proposed FLLA and the other mechanisms [17,18] in terms of accuracy and testing rate. Unless otherwise specified, this analysis chooses 200 primary data samples to meet the data-distribution constraint, represented as PD_0. FashionMNIST uses the same image size as MNIST to categorize selected images of clothing. To realize the scenario of the proposed FLLA and the other mechanisms [17,18] in practice, this experiment uses poisoning attacks. Such an attack considers the activities of a malicious client in the roles of targeted and untargeted attacks. The former arbitrarily changes the global model, whereas the latter exploits label-flipping to control the behavior of the malicious client, i.e., a label l becomes (N − l − 1), where l ∈ {0, 1, . . . , N − 1} and N represents the total number of labels.
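The label-flipping rule l → (N − l − 1) can be sketched directly; the helper name and the sample labels are illustrative:

```python
def flip_labels(labels, num_classes):
    """Label-flipping poisoning: class l becomes class (num_classes - 1 - l)."""
    return [num_classes - 1 - l for l in labels]

# On a 10-class dataset such as MNIST, digit 0 is relabeled 9, 1 -> 8, etc.
clean = [0, 1, 4, 9]
poisoned = flip_labels(clean, num_classes=10)
print(poisoned)  # [9, 8, 5, 0]
```

Note that applying the rule twice restores the original labels, so a single malicious client can toggle its poisoning on and off between rounds.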
Evaluation Metrics: Metrics such as test accuracy and error rate are used as indicators of the evaluation model trained on the datasets. The objective of the proposed FLLA is to enhance the inference rate, i.e., the detection accuracy, of the global model. In the data analysis, federated learning chooses a prominent model, FedSGD, as the baseline, in the presence of different malicious clients, to examine the results of the proposed FLLA and the other mechanisms [17,18].
Learning Settings: The evaluation considers a cross-silo setting and defines the number of computing cores n_c = 10 to exercise the clients during the training process. The selected model uses a three-layered neural network to load the two datasets through the Keras interface with TensorFlow as the backend. Table 6 lists the modeling parameters of the two datasets, MNIST and FashionMNIST, used to allocate the data inputs to the object interface. Each interface handles 6000 data sources with a distribution of malicious clients ranging from 20% to 40% to perform a few critical scenarios [86]. The batch size b_s is set to 128, and the loss function l_f is observed over 50 rounds for the proposed FLLA and the other mechanisms [17,18]. Note that the number of epochs e is set to 50 to train the learning models in order to obtain the optimal solution for every iteration.
Experimental Results: The results show the effectiveness of the proposed FLLA and the other mechanisms [17,18] under poisoning attacks. The classified outputs prove that the proposed FLLA achieves better robustness, accuracy, reliability, and privacy than the other mechanisms [17,18]. Table 7 shows the test error rate of the proposed and existing mechanisms versus the distribution of malicious clients in the global model [targeted and untargeted attacks] on MNIST and FashionMNIST. The training process involves 50 iterations on MNIST and FashionMNIST to signify the importance of the security features. Precisely, in the presence of malicious clients, the proposed FLLA retains a constant accuracy rate close to its own baseline and resists various types of targeted and untargeted attacks. Figures 4 and 5 show test accuracy versus epoch (#) on MNIST and FashionMNIST. The proposed FLLA and the other existing mechanisms [17,18] acquire the layered features to observe the transition states of the distribution matrix and to identify any abnormal condition determining the anomaly degree of the data points. The system cores, executed via the message-passing interface, rely on the layered features to analyze the abnormal behavior of any computing device, as shown in Table 7. When the layered features of the proposed FLLA and the other existing mechanisms [17,18] were applied to the behavior of the computing devices, we observed that the proposed FLLA maintains a more consistent accuracy rate than the other mechanisms [17,18]. A few hyperparameters, such as the learning rate α = 0.001 and the influence factor γ = 0.1, were applied to adapt the computing models, which choose a probabilistic quantizer to guarantee better quantization. Moreover, adaptive learning may momentarily accelerate the training process to investigate the modeling performance of the layered features. To examine the schemes in real time, the training process was repeated for ≈50 iterations. From Figures 4 and 5, it is evident that the proposed FLLA obtains a more reliable authentication procedure in extracting the system attributes of the physical layer than the other learning mechanisms [17,18], whereby the behavior of the FLLA system can be fully characterized to secure the authentication process. Importantly, key-based cryptosystems demand more computation time to establish a secure connection, whereas physical-layer security depends on the quantification of the system attributes to enable device authentication and adaptive training to fulfill the objectives of model-based authentication.
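The FedSGD baseline against which FLLA is compared, local gradient steps followed by server-side averaging, can be sketched as below; the synthetic gradients, the learning rate, and the round counts are placeholders, and the sketch deliberately omits the quantization and anomaly screening that FLLA adds on top:

```python
import random

def local_update(weights, grad, lr=0.001):
    """One FedSGD-style local step on a client: w <- w - lr * grad."""
    return [w - lr * g for w, g in zip(weights, grad)]

def fed_avg(updates):
    """Server-side averaging of the client model updates."""
    n = len(updates)
    return [sum(ws) / n for ws in zip(*updates)]

random.seed(0)
global_w = [0.0, 0.0, 0.0]  # a toy 3-parameter global model
for rnd in range(3):  # a few communication rounds
    client_updates = []
    for _ in range(10):  # n_c = 10 clients, as in the cross-silo setting
        grad = [random.gauss(0, 1) for _ in global_w]  # synthetic gradient
        client_updates.append(local_update(global_w, grad))
    global_w = fed_avg(client_updates)  # aggregate into the new global model
print(global_w)
```

A malicious client in this loop would simply submit a corrupted entry of `client_updates`; the layered screening described above is what keeps such an entry from steering the average.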

Performance Analysis
This section describes a real-time testbed that verifies the transmission efficiency of the proposed L2FAK compared with the other schemes [42,44,49-51,53]. To realize the efficiency factor, the chosen testbed used resource-constrained devices with a low code overhead, as shown in Table 8. A Raspberry Pi 3 Model B and an Arduino Mega 2560 were deployed as the authentic gateway access point and the edge computing device, respectively [87]. Note that the Arduino was equipped with an ATmega2560 with 8 KB of SRAM and 256 KB of flash memory to process data transmissions from about 700 identities. Importantly, the uninitialized power-up state of the on-chip SRAM was utilized to generate a unique device key, and the user registration phase was executed to gain system access. The registration and authentication phases of the proposed L2FAK and the other existing schemes [42,44,49-51,53] were implemented in Python 3.5, which generated the user identities to process the data flow on the Ubuntu platform. In addition, dedicated firmware was written in C to read the uninitialized memory between the heap and the stack when extracting the SRAM data to establish communication with the host terminal. To measure the transmission ratio, a real-time analysis was conducted with packet sizes of about 256 bits. The firmware executes the following steps.
Step 1: Load the firmware to read the available memory space that contains only the subroutine for SRAM data.
Step 2: Combine authentication and application subroutines to shift and store the generated ID in the microcontroller.
Step 3: Load the function D[ID_start] to D[ID_start + e − 1] that returns the locations of the stable bits. However, the locations of the stable bits may vary depending on the available hardware.
Step 4: Match the data pointers (stack and heap) to return the retrieval rate of data transmission (DT).
Step 5: Extract the user identities to compute the session key, storing the value in the microcontroller to authenticate device access.
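Steps 1-5 hinge on identifying SRAM cells whose power-up value is stable across reads; a hedged sketch of that selection (keeping only bit positions that are identical in every read) might look like this, with the bit patterns invented for illustration:

```python
def stable_bits(powerup_reads):
    """Keep only the bit positions that take the same value in every power-up read."""
    positions = [i for i, bits in enumerate(zip(*powerup_reads))
                 if len(set(bits)) == 1]
    key_bits = [powerup_reads[0][i] for i in positions]
    return positions, key_bits

# Three simulated power-up states of the same SRAM region; position 2 is noisy
# and is therefore excluded from the device key.
reads = [
    [1, 0, 1, 1, 0],
    [1, 0, 0, 1, 0],
    [1, 0, 1, 1, 0],
]
positions, key = stable_bits(reads)
print(positions, key)  # [0, 1, 3, 4] [1, 0, 1, 0]
```

This is why Step 3 warns that the stable-bit locations vary per device: the noisy cells differ from chip to chip, which is also what makes the derived key unique.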

Data Transmission Ratio
To test the data-flow process, the DT ratio considers the number of user identities, which are randomly generated upon successful execution of the power-up states that determine the retrieval rate of data transmission. In Figure 6, we observe that the proposed L2FAK has a better power-up state, achieving maximum authentication access (i.e., 0.8545) compared with the other schemes [42,44,49-51,53]. As the number of users increases, the collision probability may become high. From the analysis, the transmission delay soars when the number of packet transmissions increases in proportion to the number of user identities. However, the proposed L2FAK keeps the delay within the restriction limit, improving the transmission efficiency between the authentic gateway and the edge devices compared with the other schemes [42,44,49-51,53].
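The remark on collision probability can be made concrete with a standard birthday-bound estimate; the identity bit widths below are illustrative choices, not values taken from the testbed:

```python
import math

def collision_probability(n_ids: int, id_bits: int) -> float:
    """Birthday-bound estimate: P(collision) ~ 1 - exp(-n(n-1) / 2^(b+1))."""
    return 1.0 - math.exp(-n_ids * (n_ids - 1) / 2.0 / 2.0 ** id_bits)

# ~700 device identities, as in the testbed.
for bits in (16, 32, 64):
    p = collision_probability(700, bits)
    print(f"{bits}-bit IDs: P(collision) = {p:.6f}")
```

The estimate shows why short identities collide quickly as the user count grows, while widening the identity space drives the probability toward zero.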


Overhead Analysis Ratio
Overhead analysis (OA) covered the system authentication phase of the proposed L2FAK and the other schemes [42,44,49-51,53] to examine the core features of AG_Access and the edge computing device M_E. The flow connectivity is as follows: Step 1: The authentication phase prefers a dedicated AG_Access to generate valid tokens, e.g., for M_E. The real-time entities, including AG_Access and M_E, use a reliable authentication token to process authentication requests that integrate a legal message request, {H(.), C, N_i, S_key}, to generate the valid session key SK.
Step 2: M_E applies {N_i, S_key} to retrieve and convert the computational parameters using the SHA-2 algorithm. It executes the extraction subroutines of the user identities to construct a 256-bit stable identity using the addressed slots from SRAM. Computation parameters such as M_id and SHA-2(N_i) are processed to generate a valid XOR value for S_key.
Step 3: AG_Access processes the generated M_id retrieved from M_E(N_i) to decrypt the authentication request, i.e., M_id' = SHA-2(N_i) ⊕ S_key. The generated identity is then compared with the stored identity to process the authentication request.
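Steps 1-3 can be sketched end to end; the nonce, the identity string, and the SHA-256 instantiation are placeholders for the protocol's actual values:

```python
import hashlib

def sha2(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Device side (Step 2): mask the 256-bit identity M_id with SHA-2(N_i).
N_i = b"nonce-001"
M_id = sha2(b"device-identity")  # placeholder 256-bit stable identity
S_key = xor(sha2(N_i), M_id)     # transmitted request field

# Gateway side (Step 3): recover M_id' = SHA-2(N_i) XOR S_key and compare
# it against the stored identity before accepting the request.
M_id_recovered = xor(sha2(N_i), S_key)
assert M_id_recovered == M_id
```

The XOR-and-hash construction is what keeps the overhead low: the gateway needs only one hash and one XOR per request, consistent with the ≈3.65% overhead ratio reported below.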
In analyzing the overhead costs, key parameters such as key size and timing frame are preferred. The overall function has an overhead ratio of 3.65% in processing the system authentication phase. The overhead cost includes the hash algorithm and string processing to compute the memory requirements, using the symmetric key to store 256-bit values. Device security plays a crucial role in achieving the security level of the IoT architecture; thus, a proper configuration setup is made to examine the core features of low-cost application systems.
Figure 7 shows the overhead ratio versus the number of user identities. It is worth noting that the performance of the IoT devices considers the generated identities to authorize the legal authentication requests of M_E at regular intervals. The system analysis included the generation of about 700 IoT device identities in order to analyze the SRAM power-up states among different computing devices. The examination reveals that the proposed L2FAK incurs lower overhead costs, ≈89.45%, in determining genuine legal authentication requests, i.e., the identities of the IoT devices, than the other schemes [42,44,49-51,53].

Conclusions
In this paper, the L2FAK protocol has been presented using a mobile sink in the IoT-ECF paradigm for smart eHealth systems. Two factors are strategically exploited through an authentication-aware system to mitigate computation costs. The computation analysis proves that the proposed L2FAK incurs lower operational costs, enhancing the performance of a real-time system. The proposed L2FAK includes lightweight operations to improve the computational efficiency of the system authentication and key agreement phases. Through informal and formal analyses, the security efficiency of the proposed L2FAK was shown to strengthen the security level of the authentication phase. Moreover, the performance analysis shows that L2FAK achieves better transmission efficiency and a better overhead ratio than other schemes [23,24,44,47,48]. In addition, the applied layered authentication using federated learning, i.e., FLLA, utilizes the most appropriate system attributes of the proposed L2FAK to ensure device privacy and improve authentication accuracy in healthcare applications. The experiments were conducted using TensorFlow Federated to examine the proposed FLLA and other relevant mechanisms on two datasets, MNIST and FashionMNIST. The analytical results show that the proposed FLLA preserves the privacy features of authentication schemes considerably better than the other mechanisms while maintaining accuracy on standard datasets.

In the future, we will use reliable resource-constrained IoT devices, such as gateway devices and an advanced Raspberry Pi, to implement and evaluate several instances of a cloud server.In addition, we prefer to incorporate lightweight operators to analyze different traffic patterns, which may evolve into several test cases to examine the core features of fog instances and cloud servers to enhance system efficiencies, including computation, communication, and storage.

Figure 1 .
Figure 1. A Data Cloud-Centric Architecture of eHealth System Using Federated Learning Approach.


Figure 2 .
Figure 2. A Smart eHealth System Model with Authentic Gateway and Layered Authentication.


TP_1^New = H(TID^New ‖ x_1 ‖ pk_gw), E_7 = EC_TP5(ps_4 ‖ ps_3 ‖ TP_1^New ‖ ps_4 ‖ TP_1). Finally, AG_Access transmits the source parameters {E_7, E_8} to M_Ei/PA_i via M_D.

Algorithm 1
Federated Learning Layered Authentication (FLLA) Classifier.
Input: System attributes {SA_1, SA_2, ...} collected from the i-th computing devices; number of rounds T and epochs τ.
Output: Authentication results of the computing devices, i.e., OM of the neuron weights ω_i'j'.
∆FL−Q(SA_1, SA_2, ..., SA_N, T, τ)
Step 1: Initialize random computed values W with their corresponding weights.
Step 2: Obtain the classifier models, including the proposed one and the others, using the storage database D_B to verify the parameters ST_ID, S_key, (F_ID, K_P).
Step 3: For any new physical attribute, do:
Step 3.1: Send the quantized global model to train the local model on its relevant computation data.
Step 3.2: Obtain the quantized locally trained model, which applies weighted averaging to receive the quantized local model and to perform parameter quantization on S_V, S_W.
Step 3.3: Compute the probability level of better optimization q_l trained by the proposed FL via ∼10.
Step 4: If the computing device is classified as a legal or legitimate device, then:
Step 4.1: Allow device access via an appropriate data center.
Step 4.2: Modify the bit level of the training database D_B to find the corresponding prediction values of ST_ID, S_key, (F_ID, K_P).
Step 5: Else:
Step 5.1: Terminate the device connection.
Step 6: End If
Step 7: End For

Table 1 .
Security and privacy challenges in existing authentication schemes.

Table 2 .
Notations Used for the L2FAK.

Table 3 .
Execution Times of Cryptographic Operations.

Table 4 .
Assessment of Computational Efficiency. H_A represents the one-way hashing function; T_SY represents the symmetric cryptosystem function; T_EED represents the elliptic-curve encryption/decryption operation; T_MU represents one-point multiplication over ECC; T_EOR represents the exclusive-OR operation; and T_AD represents one-point addition over ECC.

Table 5 .
Detailed Description of Datasets.

Table 6 .
Modeling Parameters of the Datasets.

Table 7 .
Test Error Rate of the Proposed and other Existing Mechanisms versus Distribution of Malicious Clients in the Global Model [Targeted and Untargeted Attack] on MNIST and FashionMNIST.

Table 8 .
Hardware Configuration Details.