A Survey on Key Management and Authentication Approaches in Smart Metering Systems

Abstract: The implementation of the smart grid (SG) and cyber-physical systems (CPS) greatly enhances the safety, reliability, and efficiency of energy production and distribution. Smart grids rely on smart meters (SMs) to operate power grids (PGs) in a smart and reliable way. However, the proper operation of these systems requires protecting them against attack attempts and unauthorized entities. In this regard, key-management and authentication mechanisms can play a significant role. In this paper, we shed light on the importance of these mechanisms and clarify the main efforts presented in the literature. First, we address the main intelligent attacks affecting SGs. Second, the main terms of cryptography are addressed. Third, we summarize the commonly proposed key-management techniques with a suitable critique showing their pros and cons. Fourth, we introduce the effective paradigms of authentication in the state of the art. Fifth, the two common tools for verifying the security and integrity of protocols are presented. Sixth, the relevant research challenges for achieving trusted smart grids and protecting their SMs against attack manipulations and unauthorized entities are addressed with a future vision. Accordingly, this survey can facilitate the efforts exerted by interested researchers in this regard.


Introduction
Smart grids (SGs) are known as the new and upcoming systems that generate power in order to increase the flow of electronic communication. SGs can improve and alter the processing capacity using the current state of power grids (PGs) and smart cities [1]. They can also provide demand-response features for third parties and distributed intelligence [2], supply renewable energy [3][4][5][6][7], and upgrade power grid (PG) elements to improve response time [8]. Generally speaking, SGs have three main components: power distribution, advanced metering, and microgrid systems [9]. In this regard, smart metering systems facilitate data observation and transfer. Along with these contributions, SGs uphold the three main security principles of the confidentiality, integrity, and availability triad. Accordingly, SGs can enforce confidentiality by preventing unauthorized access to the data flow within the current state of PGs. For integrity, SGs can restrict the modification of the grid circuitry within the current-state power grid (PG). Lastly, SGs can ensure that the stored data remain easily accessible to authorized end users [10].
Generally speaking, the SG is characterized by a two-way flow of information and electricity, and as this technology continues to advance and become more prevalent in day-to-day activity, security is of the utmost importance [8,[11][12][13][14]. SG technology aims to make the PG much more efficient than ever before, with solutions to bolster resiliency, improve distribution, and benefit the consumer, but with it comes a number of security and privacy concerns [15][16][17][18]. In the advanced metering infrastructure (AMI), consumers have smart meters (SMs), which are intelligent, solid-state, programmable devices that can perform many functions [6,19,20]. These include reporting electricity production or consumption back to the operators, as well as providing information on outages, bidirectional metering, and billing at timely intervals [17,[21][22][23]. For many users, SMs are a major privacy concern because, unlike traditional meters whose readings are taken manually on a monthly basis, their readings are automated and transmitted via wireless links [24]. Indeed, the foundation of the SG's packet relaying is a two-way communication infrastructure, either wired or wireless. Wired networks, such as power line communication (PLC), are used to connect substations and control centers [25,26]. The risk here is that attackers may be able to intercept and access this information if it is not properly secured, so cryptographic means are utilized to encrypt these communications [16,[27][28][29][30].
Due to the constant activity of SMs, active tampering detection or prevention must also run on SMs to give precise notification when issues or vulnerabilities arise [31][32][33][34][35]. Moreover, smart metering and power optimization are among the main targets in SMs [36][37][38][39]. Different schemes have been proposed to thwart these issues. Many of them focus on providing data integrity and protecting confidentiality because of the sensitivity of AMI networks [40]. It is worth mentioning that the high dependence on the exchange of information between the networks leaves SGs potentially open to threats and vulnerabilities due to the lack of security when information is being transmitted [8]. Different security methodologies, such as machine learning (ML), have been effectively employed for privacy preservation in SGs. ML has played a significant role in different research directions and trends, including the security front and the preservation of human lives [41][42][43][44][45][46][47][48][49][50]. More particularly, ML has contributed to mitigating SM security issues. Moreover, the effective use of optimization techniques can contribute to prolonging the SM lifetime [51]. For the encryption of meter data, the authors in [52] suggested a localization-based key-management system. Data are encrypted by using a random key index, and the key is assigned to the meter's coordinates. A dependable third party controls and distributes the encryption keys. A technique based on received signal strength and the maximum-likelihood estimator is suggested for the localization of the meter. At the control center, the packets are decrypted by using a key that is mapped to the key index and the meter's coordinates.
Indeed, there must be a proper key-management system [53]. Traditional key-management schemes involve the generation, exchange, storage, update, and removal of keys and use either public key infrastructure (PKI) or symmetric key management for key establishment; for components in the SG to communicate securely, proper session keys must be utilized [54]. In the PKI system, the public key certificate is obtained from the trusted certificate authority's (CA) signature, which is associated with the device's identity and public key. The SM can then register its public key and ID with the CA. In PKI, the user can use the CA's public key to verify the signature and thus authenticate whether the certificate is legitimate. In symmetric key schemes, the secret keys are either stored in secure locations or created by a trusted third party, and this key is shared to perform encryption and decryption functions. In most consumer settings, SMs communicate with each other and with distributors over a home area network (HAN) to make decisions toward the grid and to report back to the operators, but these communications remain at risk of being exposed, so they must be encrypted [55][56][57]. Some of the biggest issues in key management include transporting keys in a secure manner (in a symmetric key configuration) as well as the excessive overhead involved, which makes many of these systems impractical. In addition, there is no single key-management infrastructure, and each scheme must be tailored to meet the network and security requirements of various systems [58].
Regarding authentication in SGs, SMs are an important entity because there is a need for reliability pertaining to data networking in SG security. For example, the memory systems within SMs are vulnerable to spoofing attacks and need to maintain power to run active tamper detection and prevention circuitry [8]. There are two solutions to this problem. First, hardware-oriented authentication for AMI has been suggested based on ring oscillator physically unclonable functions (ROPUFs). The ROPUFs can protect AMI systems from threats to data integrity and confidentiality by deriving keys from the configuration of the integrated circuit or field programmable gate array (FPGA) chip within the SM [8]. The other solution is to implement a three-factor authentication method that improves the efficiency of renewable energy-based SGs.
One of the relevant research challenges that arose with the hardware-oriented authentication scheme for AMI was the suggested implementation of asymmetric key cryptography. The Diffie-Hellman key exchange was supposed to act as a minor method for authenticating a message through SMs with a shared session key, but the problem stemmed from the top-level overhead required for certificate management [8]. Another relevant research challenge faced with the hardware-oriented authentication scheme for AMI was the suggested implementation of a novel key-management scheme (KMS) for the AMI system. This type of key management utilizes the key graph and executes key-management modes such as unicast, broadcast, and multicast. The problem with this key management is that it experiences vulnerabilities to spoofing and modification attacks because of invasive memory [8]. The last relevant research challenge was a recommendation to merge physical key generation (PKG) and physically unclonable functions (PUFs) through a wireless channel to secure the connection between end users and the original equipment manufacturer's servers. The main problem is that there is no finding explaining the real-time demands in the subsystems when it comes to computation and secure communication [8]. Moreover, it is vulnerable to man-in-the-middle attacks [59]. In an attempt to mitigate man-in-the-middle attacks, a trusted anchor was proposed to assist in establishing a key between SMs and service providers.
These schemes were tested to develop a method of key distribution for SGs so that SMs and service providers could authenticate one another through a session key and communicate securely. The problem is that this method is vulnerable to ephemeral secret leakage and privileged-insider attacks and gives weak confidentiality regarding end-user credentials in SMs [59]. Because SMs can contribute to several applications [60][61][62][63], sufficient mitigation of the attack manipulations impacting SMs, taking contemporary technology into consideration, can reduce the disastrous effects, whether at the level of human lives or at the level of infrastructure [64][65][66][67].
The major contributions of the paper are as follows.

• We highlight the significance of the common key-management and authentication approaches by outlining the primary initiatives discussed in the state of the art.

• We address the fundamental concepts of cryptography that are involved in SGs.

• We discuss the primary intelligent attacks affecting the operation and smooth functionality of the SGs.

• We provide an overview of the most frequently suggested key-management strategies together with a fair evaluation outlining their benefits and drawbacks.

• We introduce the most recent and efficient authentication models.

• We highlight the two common tools for confirming the security and integrity of protocols.

• In an effort to create reliable SGs and safeguard their SMs from attack manipulations and unauthorized entities, we address pertinent research challenges facing the main key-management and authentication methodologies, with a vision for future work.

• In light of these points, this paper can aid motivated researchers' work in this area.
The rest of the paper is organized as follows. Section 2 clarifies the motivation of the present study. Section 3 focuses on the main definitions of cryptography that are involved in SGs. Then, the common SG attacks are discussed in Section 4. Section 5 presents the common key-management techniques. Afterward, the authentication schemes are addressed in Section 6. The verification tools are then considered in Section 7. Finally, the paper is concluded with a vision for future work and open research challenges in Section 8.

Motivation
Cryptography has made significant contributions to the confidentiality, integrity, and authenticity of data in cyber-physical systems, such as control data and personally identifiable information. The integrity of any encryption system depends on the security of the cryptographic keys used to encrypt and decrypt data, so when a key becomes compromised, the entire system it was designed for is no longer secure. The key issue with SMs is that the recorded data are often highly detailed, including information on specific appliances and the times at which they were used. One major concern, if this information is disclosed, is that criminals could plan a burglary, given that they would know what types of appliances a consumer has based on their electric signatures, and they might learn the consumer's daily habits, including when no one will be home. Expanding on the last point, it is a violation of privacy if an outsider can observe a person's daily routine and habits, so it is crucial that this information be properly secured so that only authorized parties can view it.
In Ref. [68], the authors considered only the logical key hierarchy (LKH) mechanism for group security in communication systems, which is only one type of key-management mechanism. In Ref. [69], the authors addressed the importance of key-management systems only from the point of view of AMI for SGs. However, in creating a secure key-management system for cyber-physical systems, the schemes must be both resilient and lightweight for the current infrastructure in the field, because devices such as SMs have very limited resources that must be properly accounted for. It is also important to include common mitigation techniques for different types of attacks on symmetric and asymmetric key systems, such as brute force attacks or cryptanalysis. The key-generation process must be complex enough to avoid being guessed, and the encryption algorithm should not contain a fundamental flaw in the mathematical theory it is based upon, because, with the proper resources, an attacker may be able to decrypt messages without the key or even without knowing the algorithm. The primary criterion that must be considered is a system that is secure, fast, computationally light, and scalable to a large number of devices, in order to provide ample security while also being practical. These considerations motivate us to bring attention to the importance of SG and SM security.
Unlike other studies, this study considers the common key-management and authentication mechanisms, taking intelligent SG attacks into consideration. Moreover, it highlights the roles and functionality of the two commonly utilized protocol-verification tools in this regard. Finally, the study sheds light on relevant research problems facing the primary key-management and authentication approaches, with a vision for future work to build trustworthy SGs and protect their SMs from attack manipulations and unauthorized entities.

Important Terms and Definitions in Cryptography
For secure communication between the involved entities in SGs, it is important to define several key terms that contribute to developing efficient security solutions.

Encryption
Encryption is the process of converting information into cipher text with the use of keys to maintain the confidentiality of the information being encrypted. A good encryption algorithm must fulfill several criteria.

• Efficiency: The operations used in encryption and decryption algorithms must be easy to implement in hardware and software.
• Resistance to Statistical Analysis: Encryption algorithms must destroy any statistical structure in the plain-text data.
  - Diffusion: A change of a single bit in the plain-text string will cause a number of bits in the cipher-text string to change.
  - Confusion: A change of a single bit in the encryption key will cause a number of bits in the cipher-text string to change.
• Resistance to Brute Force Attacks: The algorithm must be able to prevent the attacker from computing and testing precomputed encryption keys.

• Resistance to Side Channel Attacks: These attacks exploit loopholes in the environment of the implementation. An example is a timing attack, in which the attacker analyzes the computing time of certain operations to obtain useful information about the encryption key.
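To make the diffusion criterion concrete, the following sketch (an illustrative analogy, not one of the surveyed schemes) measures the avalanche effect using Python's standard hashlib: SHA-256, like a well-designed cipher, flips roughly half of its output bits when a single input bit changes. The sample meter-reading strings are hypothetical.

```python
import hashlib

def bit_difference(a: bytes, b: bytes) -> int:
    """Count the number of differing bits between two equal-length byte strings."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

# Two messages that differ in exactly one bit ('h' = 0x68, 'i' = 0x69)
m1 = b"meter reading: 42.7 kWh"
m2 = b"meter reading: 42.7 kWi"

d1 = hashlib.sha256(m1).digest()
d2 = hashlib.sha256(m2).digest()

# Roughly half of the 256 output bits change -- the avalanche effect
print(bit_difference(d1, d2))
```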

Symmetrical Algorithm
In this key system, the same key is used for encryption as well as decryption. There are risks associated with the initial transfer of the key (such as it being intercepted), but once it has been successfully transferred, it can be used to safely encrypt and decrypt data.
• AES: The Advanced Encryption Standard is a symmetric-key block cipher, although its encryption and decryption procedures are not identical. AES divides the plain-text string into 128-bit blocks and can use encryption keys of three different lengths: 16 bytes (128 bits), 24 bytes (192 bits), or 32 bytes (256 bits). These three variants of AES share the same encryption and decryption structures and differ only in the number of rounds (10, 12, and 14, respectively), wherein each round uses a different round key. The algorithm takes the plain text and applies a variety of operations, such as substitute, shift, mix, and add round key, over multiple rounds, producing a cipher text able to resist differential cryptanalysis and linear cryptanalysis. Even with a 128-bit key, AES is resistant to brute force attacks, and no methods have been discovered that are efficient enough to be considered serious threats to AES.

Asymmetrical Algorithm
Asymmetric key encryption uses one key to encrypt and a different key to decrypt. The key pairs, known as the public and private keys, allow for risk-free key exchanges. Both systems share public keys but do not share private keys. The public key is used for encryption and generates cipher text; however, only the private key can decrypt the cipher text. This is not always the case, as the private key may also create cipher text depending on the scheme. If done this way, this allows for nonrepudiation, which entails that the sender cannot claim that they did not encode a message, and all recipients are aware that the message is genuine. This property is also known as a cryptographic signature.

• Diffie-Hellman: This is a public-key algorithm whose purpose is to allow two users to exchange a key in a secure manner; the key can then be used for subsequent symmetric encryption of messages. Its effectiveness depends on the difficulty of computing discrete logarithms, because it is difficult to solve for x in y = a^x mod p, x < p. The fundamental theory behind this is that given p, g, and g^a mod p, it is not feasible to compute the private key a.
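The exchange can be sketched with deliberately tiny, insecure parameters (p = 23 and g = 5 are toy values chosen for illustration only; real deployments use moduli of 2048 bits or more):

```python
# Toy Diffie-Hellman key exchange (insecure parameters, illustration only)
p, g = 23, 5          # public prime modulus and generator
a, b = 6, 15          # private keys chosen secretly by each party

A = pow(g, a, p)      # Alice publishes g^a mod p
B = pow(g, b, p)      # Bob publishes g^b mod p

# Each side raises the other's public value to its own private exponent,
# so both arrive at g^(a*b) mod p without ever transmitting it.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob
```

An eavesdropper sees p, g, A, and B, but recovering a or b from them is the discrete logarithm problem described above.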

Hashing
Hash functions take a specific set of data and turn it into a compact representation of that data. For example, when a 500-MB file is hashed, rather than simply displaying the information of the file, a piece of "reference data" is used to show that the original information exists. When that file is referenced, instead of accessing the full 500-MB file, the stand-in reference point is used. The implementation of hash models makes systems more secure by making it more difficult to access the true data. It also makes systems faster, because when things are referenced, they are not referenced by their whole true value but by a smaller representation of that data.
Hashing utilizes a methodology for data transformation between entities called a one-way function. The idea of a one-way function is the transformation of an input into an output without any way to reverse the transition. These hash models not only make communication faster but also increase the security of information transferred during communication. One-way hash models are utilized to build authentication functions and models that cannot be exploited by attacks such as man-in-the-middle attacks.
To sum up, hashing is a technique to ensure the integrity of data of any arbitrary length by converting it into a fixed-length string. To do this, a hashing function is used, which returns values called hash values or simply hashes. There are several properties required of a hash function.

• Given a message m, it should be easy to calculate the hash value h(m).

• The function should work only one way: h(m) is easily calculated from m, but it is difficult to recover m from h(m).

• It must be weakly collision resistant, meaning that an attacker, given m1, cannot produce another message m2 with h(m1) = h(m2).
• Additionally, it must fulfill strong collision resistance, in which it is not feasible to find any two distinct messages x and y whose hashes are identical: h(x) = h(y).
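The fixed-length and one-way properties above can be demonstrated with Python's standard hashlib (the sample meter-reading strings are hypothetical):

```python
import hashlib

messages = [b"reading: 42.7 kWh", b"reading: 42.8 kWh", b"x" * 1_000_000]
digests = [hashlib.sha256(m).hexdigest() for m in messages]

# Fixed output length (256 bits = 64 hex characters), regardless of input size
assert all(len(d) == 64 for d in digests)

# Deterministic: the same message always hashes to the same value
assert hashlib.sha256(b"reading: 42.7 kWh").hexdigest() == digests[0]

# A one-character change in the message yields a completely different digest
assert digests[0] != digests[1]
```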

Elliptic Curve Cryptography
Elliptic curve cryptography (ECC) is a technique utilized to encrypt data and make it more secure. It is key-focused and is closely linked with the Rivest-Shamir-Adleman (RSA) cryptographic algorithm. ECC is an attempted improvement on RSA, aiming to make authentication more viable and secure with fewer dedicated resources. This is an important aspect of security for devices that cannot handle a high level of resource overhead (such as phones, IoT devices, etc.). ECC is utilized by cryptographic functions in digital signatures and in pseudorandom number generators.

Elliptic Curve Discrete Log Problem
The original discrete log formula is 2^n mod a = b, and the idea is for this equation to be complex and difficult to solve. This difficulty provides security to the cryptographic functions that utilize it in their algorithms. However, as technology continues to improve, the problem becomes easier to solve, eroding its security. To counteract this, cryptographic methods that rely on it (like RSA) require a larger number of bits to maintain the same level of security. This, as said before, increases the overhead of the systems that utilize it. The elliptic curve discrete log problem alleviates this issue, as its underlying equation is far more complex and thus requires fewer bits to provide the same security. This may only be a temporary solution to technological advancements rendering cryptographic algorithms obsolete, as the speed at which processing power advances is not slowing down any time soon.
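The exhaustive-search cost that underpins this security can be sketched as follows. The parameters are toy values; real systems choose moduli large enough to make this loop infeasible:

```python
def brute_force_discrete_log(g: int, b: int, p: int) -> int:
    """Find n such that g^n mod p == b by exhaustive search.

    The loop performs O(p) multiplications in the worst case, which is
    exactly why practical systems pick p so large that this search
    cannot finish in any realistic amount of time.
    """
    value = 1
    for n in range(p):
        if value == b:
            return n
        value = (value * g) % p
    raise ValueError("no solution")

# 2^n mod 23 == 9 has the solution n = 5 (powers of 2: 2, 4, 8, 16, 32 = 9 mod 23)
print(brute_force_discrete_log(2, 9, 23))  # prints 5
```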

ECC Working Strategy
As its name suggests, ECC utilizes elliptic curves to create secure connections between pairs of keys in a public-key encryption function. Elliptic curves in mathematics operate over a finite field and are based on the function y^2 = x^3 + ax + b [70]. Elliptic curves have some characteristics that make them well suited to cryptography. One is horizontal symmetry, meaning that any point found on the curve can be flipped over the x-axis and remain on the curve. A second interesting characteristic is that any non-vertical line will intersect the curve at no more than three points. With that in mind, consider the situation in which two points on the curve are "dotted" together (e.g., A • B = F) to obtain a third, final point. In a cryptographic context, when given A and F, the algorithm must discover what B is, and this is difficult to accomplish, making it a functionally secure method [70].
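The point "dotting" (addition) operation described above can be sketched over a small textbook curve, y^2 = x^3 + 2x + 2 over GF(17), assumed here purely for illustration (practical curves use primes of roughly 256 bits):

```python
# Point addition on the toy curve y^2 = x^3 + 2x + 2 over GF(17)
P_MOD, A = 17, 2  # field prime and curve coefficient a

def inv_mod(x: int) -> int:
    """Modular inverse via Fermat's little theorem (valid since P_MOD is prime)."""
    return pow(x, P_MOD - 2, P_MOD)

def ec_add(P, Q):
    """Add two curve points; None represents the point at infinity (identity)."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None  # P + (-P) = point at infinity
    if P == Q:
        s = (3 * x1 * x1 + A) * inv_mod(2 * y1) % P_MOD  # tangent slope
    else:
        s = (y2 - y1) * inv_mod(x2 - x1) % P_MOD          # chord slope
    x3 = (s * s - x1 - x2) % P_MOD
    y3 = (s * (x1 - x3) - y1) % P_MOD
    return (x3, y3)

G = (5, 1)             # a point on the curve
print(ec_add(G, G))    # doubling G; prints (6, 3)
```

Repeating this addition k times yields the scalar multiple kG; recovering k from G and kG is the elliptic curve discrete log problem.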

ECC Desirability
As stated before, ECC looks to improve on the negatives of RSA that make it difficult to implement in constrained devices, such as the high demand for resources needed to make it viable and secure. For an RSA cryptographic function to be deemed secure, the bit length of the key should be 1024 bits. In comparison, an elliptic curve function achieves the same level of functional security with a key length of only 160 bits. That is a drastically reduced number and greatly reduces the resources required to perform encryption securely. This reduced bit requirement not only allows less computationally capable devices, like those linked with the Internet of things, to perform these operations with little difficulty, but it also increases the speed of the encryption transaction, because the algorithms are easier to handle with a smaller number of bits.

Bitwise Functions
Bitwise functions perform operations on binary strings, bit by bit. The result of a bitwise operation on a pair of bits is one of two values: a 1 means true, and a 0 means false. Bitwise functions are used in low-level programming, in the transmission of messages, and more. It is worth mentioning that the XOR bitwise function is the main one utilized for comparison. Moreover, Hamming code is among the common codes employed in this regard.

XOR Bitwise Functions
Exclusive or (XOR) is a bitwise function. When two bits are compared, XOR returns true (1) if exactly one of them has the value 1. If neither bit is 1, or if both bits are 1, the function returns false (0).

Hamming Code
Hamming code is a tool in telecommunications that handles miscommunication errors. In communication systems, Hamming code is important, as these systems are expected to be error free or close to it [71]. High accuracy is a strict requirement in these systems, and Hamming code makes that possible. As SMs deal with sensitive information, which may be dangerous if transmitted incorrectly, it is pertinent that every piece of data transmitted and received be verified to be as accurate as possible. Hamming code is an algorithm that detects errors in binary-coded messages by using "parity" bits. Parity bits are attached to the tail end of transmitted data and are used to verify the integrity of the information received. The way the parity bits are dispersed in the data reveals whether the data received match the expected relay sent out [72]. Hamming code detects these errors by comparing bit strings using XOR bitwise functions (equivalent to addition modulo 2). Hamming code does increase the overhead requirements of a system [73]. Consequently, this can be a problem for systems that have a small amount of onboard RAM.
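As an illustration of how parity bits and XOR checks interact, the following is a minimal Hamming(7,4) sketch: the three-bit syndrome computed from XOR checks directly names the position of a single flipped bit.

```python
def hamming74_encode(data):
    """Encode 4 data bits into a 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = data
    p1 = d1 ^ d2 ^ d4  # parity over codeword positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4  # parity over codeword positions 3, 6, 7
    p3 = d2 ^ d3 ^ d4  # parity over codeword positions 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(code):
    """Return (corrected codeword, error position); position 0 means no error."""
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]  # XOR check over positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]  # XOR check over positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]  # XOR check over positions 4, 5, 6, 7
    pos = s1 + 2 * s2 + 4 * s3      # the syndrome spells out the error position
    if pos:
        c[pos - 1] ^= 1             # flip the corrupted bit back
    return c, pos

word = hamming74_encode([1, 0, 1, 1])
corrupted = word[:]
corrupted[4] ^= 1                    # simulate a single-bit channel error
fixed, pos = hamming74_correct(corrupted)
assert fixed == word and pos == 5    # the error is located and repaired
```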

Merkle Trees
Merkle trees are data structures found in computer systems. They utilize hash functions to accomplish some of their functionality. Merkle trees increase the speed of information authentication, which is beneficial when dealing with large data sets that need to be accessed.

Merkle Trees Desirability
Merkle trees aim to reduce the strain required on systems to carry out authentication and provide secure communication [8]. A Merkle hash tree takes all data and organizes it similarly to a literal tree, with a trunk and branches stemming from it. Information on the branches originates from information at the trunk. For example, if there were three branches (hash values B1, B2, and B3), the hash value of the trunk would be computed over the combination of B1, B2, and B3. This method of information organization creates "redundancy" in data by reducing the number of steps and the amount of "new" data needed to access more information.
The implementation of asymmetric key cryptography in the hardware-oriented authentication scheme for AMI solved the issue regarding authentication but created another challenge: the resources necessary to carry out the asymmetric cryptography functions were too demanding for non-high-level systems. This led to the implementation of a new hashing technique that utilized the Merkle hash tree. This technology has been proven to work and to improve the security of authentication, as it has been utilized as an authentication method in secure network transactions such as cryptocurrency.
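The trunk-and-branch construction described above can be sketched with standard hashlib as follows (the sample meter readings are hypothetical); note how a change to any single leaf propagates upward and alters the root:

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Compute a Merkle root by hashing pairs of nodes level by level."""
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:           # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

readings = [b"meter1: 4.2", b"meter2: 3.9", b"meter3: 5.1", b"meter4: 4.7"]
root = merkle_root(readings)

# Tampering with any single leaf changes the root, so one 32-byte value
# suffices to authenticate the entire data set.
tampered = readings[:]
tampered[2] = b"meter3: 9.9"
assert merkle_root(tampered) != root
```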

Hardware-Oriented Security
As technology continues to advance, reliance on software security alone is becoming increasingly dangerous. Prior to recent advancements, systems were not developed with specific cyber specifications in mind, thus making hardware a very easy access point for exploiting systems. A hardware-focused security approach aims to decrease the risk and attack surface of specific hardware components of a system. The implementation of designs such as ROPUFs on a field programmable gate array (FPGA) allows for a more secure hardware status for manufactured devices.

Field Programmable Gate Arrays
FPGAs are hardware circuits that are manufactured with configuration in mind, allowing specific customizations to be made to them [74]. Consumers or organizations that obtain them after fabrication can customize them for their desired outcomes. They are useful because they come separate from the devices they will operate in, so it is unknown at the time of production what exact settings and configuration would be desirable to achieve the intended goal in the final system the gate array will reside in. This postproduction configuration allows device manufacturers to be more flexible. Gate arrays are utilized during communication between devices due to their ability to implement the algorithms that make the communication possible efficiently [74]. In this regard, ring oscillators play a significant role.

Ring Oscillators
Ring oscillators are used to create a frequency signal in systems. They are groups of logic gates in circuits made up of NOT gates, where the number of NOT gates must be odd. The output of the logic gates oscillates between values representing true and false. The initial NOT gate in the chain receives its input from the final NOT gate in the system. This process exploits a concept called "time delay," in which inputs arrive at the NOT gates at inconsistent, constantly varying rates. Consequently, although there is a limited set of values that the final output of the oscillator can take, the time delay makes the output difficult to predict. This increases security in a system when utilized.
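The requirement for an odd number of NOT gates can be checked with a small simulation. The sketch below uses a deliberately simplified model (each gate simply inverts the previous gate's output): an odd ring admits no stable assignment of gate outputs, so it is forced to oscillate, whereas an even ring can settle into a fixed state such as alternating 0, 1, 0, 1.

```python
from itertools import product

def step(state):
    """One update of the ring: each NOT gate inverts the previous gate's
    output, with the last gate feeding back into the first."""
    n = len(state)
    return tuple(1 - state[(i - 1) % n] for i in range(n))

def has_stable_state(n_gates):
    """Does any assignment of gate outputs remain unchanged by an update?"""
    return any(step(s) == s for s in product((0, 1), repeat=n_gates))

# An odd ring has no stable state, so it must oscillate;
# an even ring can settle and therefore does not oscillate reliably.
assert not has_stable_state(3)
assert not has_stable_state(5)
assert has_stable_state(4)
```

Intuitively, following the ring around, an odd number of inversions forces a gate's output to equal its own negation, which is impossible, so no fixed point exists.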

Common Attacks in SG
This section presents the most common attacks affecting SGs that have been efficiently handled by key management and authentication methodologies.

Replay Attack
The scheme proposed in [59] protects against replay attacks by including timestamps with every message sent during the login, authentication, and key agreement phases. Although the contents and variables of each message may differ, all include the variable Ti, which serves as the timestamp for that specific message [59].
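A minimal freshness check in this spirit can be sketched as follows (the 30-second window and the message identifiers are hypothetical, not taken from the scheme in [59]):

```python
import time

class ReplayFilter:
    """Toy timestamp-based replay filter: accept a message only if its
    timestamp is recent and the (sender, timestamp) pair is unseen."""

    def __init__(self, max_skew_seconds=30):
        self.max_skew = max_skew_seconds
        self.seen = set()

    def accept(self, sender_id, timestamp, now=None):
        now = time.time() if now is None else now
        if abs(now - timestamp) > self.max_skew:
            return False                      # stale timestamp: possible replay
        if (sender_id, timestamp) in self.seen:
            return False                      # exact duplicate: replay
        self.seen.add((sender_id, timestamp))
        return True

f = ReplayFilter()
t0 = 1_000_000.0
assert f.accept("sm-42", t0, now=t0 + 1)        # fresh message accepted
assert not f.accept("sm-42", t0, now=t0 + 2)    # replayed copy rejected
assert not f.accept("sm-42", t0, now=t0 + 120)  # stale timestamp rejected
```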

Man-in-the-Middle Attack
Man-in-the-middle attacks are preventable by this scheme because even if an adversary was able to generate a new timestamp T′i and a random nonce ru′i, they still do not know the secret credentials RIDi and the signature si. Without the secret credentials, it is impossible for the adversary to impersonate another user or modify the contents of a message [59].

Privileged-Insider Attack
Privileged-insider attacks can also be prevented with the proposed scheme. Suppose a privileged insider has access to the important registration information of a user. Without the user's biometric key σi, the adversary cannot verify a guessed password in order to authenticate themselves as the user. This scheme can prevent user impersonation attacks in a similar way [59].

Spoofing Attack
The main feature of a spoofing attack in SGs is to disrupt the network traffic measured by the distributed SMs. Moreover, this attack is versatile in its consequences, such as creating routing loops, extending or shortening the source route, and injecting errors into the transferred data. In the literature, among the main efforts confronting this attack, we can find significant proposals such as [75].

Invasive Attack
Invasive attacks are attacks on physical systems that permanently alter a chip's physical characteristics. The attack's goal is to record information kept in the memory spaces of meters. These attacks are also employed to defeat blown-fuse linkages, disable meters, and disconnect circuits. Interestingly, SMs are not the only targets of these attacks. In the state of the art, intelligent security models have been proposed in [76,77] for detection, isolation, and localization, together with anomaly-detection models, to mitigate the manipulations of this kind of attack.

Denial of Service (DoS) Attacks
DoS attacks usually work by flooding targeted SMs with requests until legitimate traffic can no longer be processed, denying service to other entities. Moreover, DoS attacks can disrupt data integrity and authentication between SMs in the SGs. Several efforts targeting the different DoS attacks have been proposed in the literature, such as [78].

Brute Force Attack
The brute-force attack targets the security and confidentiality of the measurements observed by SMs in SGs. Accordingly, catastrophic situations may occur due to the sensitivity of SMs and the vulnerability of the physical quantities measured in SGs. Many research efforts have been exerted to resolve such intelligent attacks, such as [78,79].

Offline and Online Attacks
First, in the offline attack, the attacker obtains the encrypted data or a password hash and can then experiment with key combinations without worrying about being discovered or interfered with. Secondly, in the online attack, the attacker must communicate with the target system they are seeking to access. In Refs. [78,79], the proposed models can mitigate both kinds of attacks.

Key-Management Methodologies
This section addresses the methodologies of key management for SGs. Many research efforts have been proposed in the literature in this regard. Here, we discuss the nine main techniques and algorithms as follows.

Diffie-Hellman
The Diffie-Hellman key exchange is a key building block of secure communication protocols. It is the component that enables two parties to establish a secure connection [80]. It makes it possible for two previously unacquainted users to establish a shared key they can then use to communicate. A secure environment for creating keys is important because, if a key is intercepted, communication between users becomes insecure: anyone who obtains the key without authority can read every message exchanged under it. The Diffie-Hellman key exchange establishes this secure environment by having both users indirectly derive the shared secret key. Figure 1 shows the general structure of Diffie-Hellman.
The exchange works as follows: two people looking to communicate each start with their own secret part of a key, say A and B. Neither knows the other's part, and both users agree on a common third value. As described in Ref. [80], each combines the part they know with the agreed-upon value, exchanges the resulting combination, and then once again applies their own secret part to what they receive. Both now hold the same shared key without ever exposing their original secrets. This works in practice because keys are represented by long, complex strings of bits rather than single alphanumeric letters, making it extremely difficult for a bad actor who intercepts an exchanged value during the exchange to recover the secret.
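The steps above can be shown concretely with the textbook modular-arithmetic form of Diffie-Hellman. The tiny numbers here are for illustration only; real deployments use 2048-bit groups or elliptic curves.

```python
# Toy Diffie-Hellman key exchange with deliberately small public parameters.
p, g = 23, 5            # publicly agreed prime modulus and generator

a = 6                   # Alice's secret part of the key
b = 15                  # Bob's secret part of the key

A = pow(g, a, p)        # Alice sends g^a mod p over the open channel
B = pow(g, b, p)        # Bob sends g^b mod p over the open channel

shared_alice = pow(B, a, p)   # Alice computes (g^b)^a mod p
shared_bob   = pow(A, b, p)   # Bob computes (g^a)^b mod p
print(shared_alice == shared_bob)  # True: same key, secrets never transmitted
```

An eavesdropper sees only p, g, A, and B; recovering a or b from them is the discrete-logarithm problem, which is infeasible at realistic key sizes.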

Scalable Key Management (SKM)
Figure 2 shows the general structure of the SKM scheme proposed by [81]. The system structure is as follows. A home area network (HAN) links smart appliances, distributed energy resources, the HAN gateway, and other control devices to the SM. A wide area network (WAN) enables utility companies and customers to communicate in both directions. Power-line communication systems, cellular networks, or IP-based networks can all be used to create the wide-area communication infrastructure, depending on the actual requirements [82]. Moreover, a database system called the meter data management system (MDMS) is employed to store, manage, and analyze metering data to improve customer services [81].

Logical Key Hierarchy
The LKH, which uses a key tree for each demand response (DR) project, addresses the scalability issue. Each LKH member maintains a copy of the secret keys of its leaf and of all the nodes along the path from its leaf to the root. Figure 3 illustrates the process of the LKH. The authors in [83] showed that their suggested LKH guarantees scalability for big SGs with dynamic demand-response projects. Additionally, a multigroup key-graph structure is suggested in this work to lower key-management storage and communication costs. With the suggested key-graph technique, a fresh set of keys can be shared by several DR projects.
Compared to the communication costs incurred when using a separate LKH tree, the joining or departure of a user in one DR project has no impact on the cost of rekeying operations. In this study, a two-level graph is used to model the multigroup key-graph structure. The lower level designates a user set that has subscribed to the same first DR project; the leaf nodes of the tree represent the users' keys at the lower level, and the tree's root represents the group key for the DR project. The upper-level graph shows the root-key combinations for users concurrently subscribed to various DR projects.
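The core LKH property — each member holds the keys on its leaf-to-root path, so a leave only refreshes O(log n) keys — can be sketched in a few lines. This is a generic illustration of the key-tree idea, not the multigroup graph of [83]; the array-indexed tree and key sizes are assumptions.

```python
import hashlib, os

# A tiny LKH instance: a complete binary tree over 4 members. Node i's
# children are 2i+1 and 2i+2; node 0 holds the group key, nodes 3..6
# are the members' leaf keys.
keys = {i: os.urandom(16) for i in range(7)}   # 3 internal keys + 4 leaves

def path_to_root(leaf):
    """Keys a member stores: its own leaf and every ancestor up to the root."""
    path = [leaf]
    while leaf > 0:
        leaf = (leaf - 1) // 2
        path.append(leaf)
    return path

member_keys = {leaf: path_to_root(leaf) for leaf in (3, 4, 5, 6)}
print(member_keys[3])   # [3, 1, 0]: leaf key, subgroup key, group key

def rekey_after_leave(leaf):
    """When a member leaves, only the keys on its path are refreshed --
    O(log n) updates instead of rekeying every member individually."""
    for node in path_to_root(leaf)[1:]:        # the departed leaf key is dropped
        keys[node] = os.urandom(16)
```

In a real deployment each refreshed key would be encrypted under the children's keys and multicast, which is where the transmission/storage trade-off mentioned in Table 1 arises.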

Information Centric Networking (ICN)
In order to guarantee secrecy, integrity, and authentication, the authors of [84] propose a key-management scheme for several SMs based on ICN in AMI systems. The scheme's goals are to provide security, manage network traffic, and facilitate mobility. Energy data in the AMI system must be kept secret because they can reveal personal information about daily routines and habits. Utilizing ICN ensures data integrity in addition to authenticity and confidentiality. Because it depends on the safety of the data itself, this differs from the protection offered by end-to-end transmission. Here, secure unicast, broadcast, and multicast message exchange is guaranteed. Figure 4 shows the architecture of the ICN.

Resilient End-to-End Message Protection (REMP)
Figure 5 illustrates the REMP paradigm. The authors in [85] addressed the challenges and limitations of conventional message-protection and key-management schemes. The approach they offer is known as REMP for short. REMP is introduced as an alternative that improves end-to-end security, "privacy, integrity, message source authentication, and key exposure resilience". It is a publish-subscribe group security scheme that reduces the heavy computational load of using public keys while preserving the scalability and extensibility of publish-subscribe group-communication key-management schemes. REMP possesses four characteristics that make it a drastic improvement compared to known message-protection schemes. The first characteristic is "one encryption key per message". The publisher encrypts each message it sends with a separate session key: there is one session key per message. This increases key security and privacy between multiple publishers within a group, and it avoids the need to update a shared publish key whenever a new publisher enters the group or an existing one leaves, which would leave the shared key exposed to malicious exploitation. Because each session key is unique per publisher and per message, publishers leaving or entering the group cannot access it, and attackers cannot exploit a shared session key to collect and replay ciphertexts.
The second characteristic is the "subscriber's state independent of the number of publishers". An advantage of REMP is that the subscriber can compute a one-time decryption key once a message is received, through the use of a long-term master key. This frees the subscriber from keeping the security state set by each publisher and avoids extreme overloading if the subscriber fails or restarts. This characteristic gives REMP scalability in multipublisher groups.
The third characteristic is "message source authentication extension", which addresses the problem of message source authentication. REMP exploits end-to-end authenticators and message brokers that multicast messages from the publishers in a group. Once a message is sent, the end-to-end authenticator carries the sender's (publisher's) identity and a ciphered message authentication code of the message. An example of this appears in a solution proposed by Badra et al. [86], who mention the use of REMP to improve end-to-end message confidentiality and integrity, as well as to prevent replay attacks. The scheme is split into two phases: the application phase and the handshake phase. In the application phase, every time a client application sends a request message to a trusted third party, the following are attached: the client's certificate URL, a random number, and the server address. When the trusted third party reviews the received message, it verifies the client signature before using the server's public key to authenticate the client to the server. With the server's public key, the trusted third party can generate a ticket for the authenticated client, which contains a message called the "response" consisting of encrypted fields. With this, the client can detect replay attacks by comparing and signing the client's random number through the trusted third party. The handshake phase verifies that the random numbers of the trusted third party and the client match in order to generate symmetric keys for encryption and message authentication. In [87], the proposed solution still has end-to-end security issues between the WAP terminal and the application server because the client and the trusted third parties do not hold WTLSCert and X.509 certificates. This limitation ties into the final characteristic of REMP.
The final characteristic of REMP is its symmetric-key-based approach for resource-constrained devices. In a cyber-physical system (CPS), communication devices talk to servers in an administrative domain. A preshared key (PSK) per device and symmetric ciphers such as AES or 3DES fit such systems. In REMP extensions, symmetric-key operations are predominant because they protect confidentiality and integrity efficiently. Combined with symmetric keys, REMP can use ECC to support secure multicast to a massive number of subscribers and secure data collection from a massive number of publishers while maintaining symmetric-key operations.
These four characteristics lay the foundation for REMP as an alternative key-management and message-protection scheme, whose system architecture and solution are discussed in a later section.
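The first two characteristics — one encryption key per message, and a subscriber state independent of the publishers — can be sketched as deriving each per-message key from a long-term master key and a fresh nonce. This is a simplified illustration, not REMP itself: the HMAC-based derivation and the XOR "cipher" are stand-ins chosen for brevity.

```python
import hashlib, hmac, os

MASTER_KEY = os.urandom(32)   # long-term master key shared with the subscriber

def publish(plaintext: bytes):
    """One fresh session key per message: derived from the master key and
    a per-message nonce, so no shared publish key must be updated when
    publishers join or leave the group."""
    nonce = os.urandom(16)
    session_key = hmac.new(MASTER_KEY, nonce, hashlib.sha256).digest()
    # XOR keystream stands in for a real cipher in this sketch.
    cipher = bytes(p ^ k for p, k in zip(plaintext, session_key))
    return nonce, cipher

def subscribe(nonce: bytes, cipher: bytes):
    """The subscriber recomputes the one-time decryption key from the
    master key and the received nonce; it keeps no per-publisher state."""
    session_key = hmac.new(MASTER_KEY, nonce, hashlib.sha256).digest()
    return bytes(c ^ k for c, k in zip(cipher, session_key))

nonce, ct = publish(b"meter reading: 4.2 kWh")
print(subscribe(nonce, ct))   # b'meter reading: 4.2 kWh'
```

A real design would use an authenticated cipher (e.g., AES-GCM) under the derived key and would bind the publisher identity into the derivation, mirroring REMP's source-authentication extension.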

NIKE and NIKE+
The authors in [87] contributed to this topic by proposing their own protocol for key establishment. Two versions of the protocol are presented in that paper: one called novel identity-based key establishment (NIKE) and the other called NIKE+. The protocol is based on ECC. It works by first having the grid owner take inputs to set as parameters, which are then forwarded to a trusted authority (TA). The TA then uses these parameters in combination with a master key to send the SM and the AHE their private keys. Once this is done, a session key can be established between an SM and an AHE. Figure 6 denotes the architecture of the NIKE process. The difference between the two proposed methods is that NIKE+ shifts more of the calculations onto the AHE compared to NIKE, because the resources on the SM are very limited. By moving the calculations to the AHE, which is less resource-restricted, the protocol can complete its calculations much faster. When comparing calculation times, NIKE took 4.91 s in total, while NIKE+ took only 2.46 s. NIKE+ also proved much faster than the protocols mentioned earlier: the SKM and SKM+ protocols proposed by Wan et al. [81] both took over 7 s, the protocol proposed by [88] took about 45 s, and the protocol proposed by [89] took 4.91 s. Compared to these, NIKE+ performed much better. NIKE and NIKE+ have also proven to be secure when checked for vulnerabilities with the AVISPA tool.
NIKE performs its key establishment through a three-step process: setup, installation, and key agreement. The scheme in [87] uses three components communicating with each other to ensure a secure environment for establishing keys: NIKE and NIKE+ both use an SM on the consumer side, an AHE at the SG operations center, and a TA that helps secure the initial connection between the SM and the AHE. Figure 7 illustrates the architecture of the NIKE+ process. The first step, setup, occurs on the TA and creates the preliminary parameters used for the secure key establishment. A value k is first received by the TA and is used to generate the parameters for the key exchange. With k, the TA chooses a prime number q used in the calculations; q consists of k bits, so carefully choosing k provides better security because it influences the other parameters. With q created, the TA then uses it to generate the values F_q, E/F_q, G_q, and P. Two hash functions, H_1 and H_2, a master key x, and a public key P_pub are also generated by the TA. These values are all sent to the SM and the AHE as parameters for the key establishment.
In the second step, installation, the SM and the AHE communicate with the TA. The AHE calculates R_AHE by choosing a random number r and computing rP. This value is sent to the TA, which responds with y_AHE, the value generated by hashing R_AHE together with the ID of the AHE it is trying to communicate with. Following that, the SM performs a similar procedure, and the TA responds with y_i, the value generated by hashing the SM's ID together with the y_AHE produced with the AHE.
In the third step, key agreement, the AHE and the SM communicate back and forth with each other. The SM begins by generating a random number used to calculate T_M. This value is sent, alongside the SM's identity ID_M and R_M, to the AHE. The AHE then uses the values given by the SM to generate k_AHE, which is used for session-key generation. The AHE responds by sending back its ID_AHE, the value M_1 generated by calculating the hash H_1(0, k_AHE,M), and T_AHE, calculated similarly to T_M. The SM authenticates the AHE by calculating the hash M_1 itself and comparing it with the one sent by the AHE. The SM then calculates M_2 and performs a similar process to authenticate itself to the AHE. Once the SM and the AHE have authenticated each other, both can create the key under which they will communicate by hashing and concatenating the IDs with the secrets they both hold. Since the SM has less computational power than the AHE, NIKE+ changes the computational load required of each side: shifting some of the computations from the SM to the AHE allows the key establishment to complete much faster.
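The mutual-authentication step above — both sides computing the hash tags M_1 and M_2 over shared key material and verifying each other's tag — can be sketched as follows. This is a greatly simplified stand-in: the real NIKE/NIKE+ uses ECC point arithmetic and TA-issued private keys, whereas here a single random value stands in for the TA-derived material, and all names (H, k, M_1, M_2) follow the text loosely.

```python
import hashlib, os

def H(*parts):
    """Hash stand-in for the scheme's H_1/H_2 functions."""
    return hashlib.sha256(b"|".join(parts)).hexdigest()

shared_secret = os.urandom(32)         # stands in for TA-provisioned material

ID_M, ID_AHE = b"SM-017", b"AHE-01"    # hypothetical identities
T_M, T_AHE = os.urandom(16), os.urandom(16)   # fresh per-run nonces

# Both sides can compute the same intermediate value from the exchange:
k = H(shared_secret, ID_M, ID_AHE, T_M, T_AHE)

M1 = H(b"0", k.encode())   # AHE -> SM: proves the AHE knows k
M2 = H(b"1", k.encode())   # SM -> AHE: proves the SM knows k

# The SM verifies M1 by recomputing it; then both derive the session key.
assert M1 == H(b"0", k.encode())
session_key = H(k.encode(), ID_M, ID_AHE)
print(len(session_key))    # 64 hex characters of session-key material
```

The point of the sketch is the structure: fresh nonces prevent replay, and the domain-separated tags (prefixes "0" and "1") let each side prove knowledge of k without revealing it.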

Anonymous Key Distribution
Figure 8 indicates the architecture of the anonymous key distribution process. The authors in [90] applied an identity-based signature and identity-based encryption to propose an anonymous key distribution scheme for SGs. This requires SMs and service providers to mutually authenticate with each other and then establish session keys between them to communicate securely. However, this scheme was insecure against ephemeral secret leakage attacks and failed to provide strong privacy credentials for SMs. It was also vulnerable to privileged-insider attacks and offline password-guessing attacks [90].

Key Management System
The authors in [91] proposed a novel key-management system framework for AMI systems based on a key graph, as shown in Figure 9. The key graph includes key management for unicast, multicast, and broadcast modes. The main issue with this scheme is that "it is based on nonvolatile memory technologies which are vulnerable to spoofing and invasive attacks" [8].

Needham-Schroeder-Based Symmetric Key
In Ref. [92], the authors proposed a novel key-management scheme that combines symmetric-key techniques with elliptic-curve public-key techniques. The proposed scheme presents strong security along with great scalability and accessibility. However, the scheme is still vulnerable to man-in-the-middle attacks [59].
Although different schemes have been presented to address issues with SG communication, most of them fail to provide functionality and security features such as perfect secrecy, secret-key security, protection against offline password-guessing attacks, strong SM credential privacy, and dynamic SM addition and update phases [59]. The authors of [59] try to address these issues and present their own scheme: a three-factor user-authentication scheme for renewable-energy-based SG environments (TUAS-RESG). To sum up, Table 1 identifies the pros and cons of the addressed key-management mechanisms.
Table 1. Common types of key-management schemes.

Diffie-Hellman [80]. Advantages: can effectively deal with key management. Disadvantages: vulnerable to man-in-the-middle attacks.
SKM [81,82]. Advantages: end-to-end encryption; key generation; key freshness; forward and backward secrecy; integrity; confidentiality; authentication. Disadvantages: high computational time.
LKH [83]. Advantages: suitable for large SGs with dynamic demand-response projects; allows multiple DR projects to share new key sets; solves the scalability issue; lower storage and communication costs. Disadvantages: in case of compromise, rekeying a multicast group requires balancing the number of transmissions and storage.
ICN [84]. Advantages: suitable for a large number of SMs; controls network congestion; supports mobility. Disadvantages: establishing an experimental ICN environment is challenging; relies on name-based routing; implementation is not easy; high memory usage leads to performance degradation.
REMP [85,86]. Advantages: improves end-to-end security, privacy, integrity, message source authentication, and key-exposure resilience; less computational processing. Disadvantages: faces end-to-end security issues between the application server and the WAP terminal when the client and trusted third parties lack WTLSCert and X.509 certificates [86,93,94].
NIKE and NIKE+ [81,87-89]. Advantages: not based on pairing; very low overhead. Disadvantages: keys between the corrupted user and benevolent ones are not leaked immediately [94].
Anonymous key distribution [90]. Advantages: utilizes identity-based signature and identity-based encryption. Disadvantages: insecure against ephemeral secret leakage attacks; fails to provide strong privacy credentials for SMs [90].
KMS [8,91]. Advantages: suitable for unicast, multicast, and broadcast modes. Disadvantages: relies on nonvolatile memory technologies, which are vulnerable to spoofing and invasive attacks.
Needham-Schroeder-based symmetric key [59,92]. Advantages: high scalability and accessibility; efficient protection against offline password-guessing attacks. Disadvantages: vulnerable to man-in-the-middle attacks.

Authentication Schemes
This section introduces the main authentication schemes in the literature. More particularly, seven mechanisms have been considered, namely, PUFs, blockchain-based authentication, lightweight message and attribute-based authentications, Merkle-tree-based authentication, mutual authentication for unicast and multicast communications, TUAS-RESG and two-factor authentications, and Markov chain and game theory.

Physical Unclonable Functions
In Ref. [95], the authors proposed a scheme that combines PUFs and a PKG technique to provide secure wireless communication. PUFs are a hardware implementation of security for devices; in other words, they are a hardware solution to the authentication problems that plague hardware-authentication schemes. They also address the need to store cryptographic keys on a system without requiring additional hardware installation.
In authentication, there is an issue with devices being replicated. Once a device is replicated, bad actors can pretend to be using the original device, gaining unintended access to systems granted to the real one. Accordingly, by utilizing randomness, PUFs give devices a unique signature that cannot be replicated: the hardware of a specific device is used to generate unreplicable data. The scheme offers strong encryption and authentication. However, "no information about realtime requirements of computation and secure communication subsystems is provided", and the precise cost of such a protocol has yet to be identified or established. It should be noted that even if the same hardware specifications were present in multiple instances of a device, each would still output a unique piece of data.
In this regard, in Ref. [96], the authors tried to address some of the major security issues, scalability, and efficient communication between SMs and utilities within AMIs. Their proposed scheme is based on a combination of PUFs and ID-based authentication that combines the best of symmetric cryptography with identity-based cryptosystems. Moreover, the proposed scheme provides security at the application layer, handles ID-based keys, and eliminates the risk of key compromise at the hardware level. It is also able to thwart DoS attacks and reduce the average packet latency by 8-14×. The authors did not identify any limitations of the proposed framework; however, implementing it could be difficult due to the combination of different security systems. The model relies on Hamming codes, which provide better security while sending less information back to the utility company, thereby reducing overhead [8].
The effort was extended in [8], where the authors addressed the issues presented above by proposing a novel authentication and secret-key storage scheme for AMI systems using ROPUFs on FPGAs, without requiring a secure volatile memory system or additional costly hardware in SMs. This scheme eliminates the need to store secret keys in SMs by deriving such keys from the FPGA chips themselves. More particularly, ROPUFs adapt the functionality of ring oscillators and PUFs to provide hardware security: they combine ring oscillators, PUFs, and multiplexers in a logic circuit. ROPUFs obtain their output values by comparing the oscillation frequencies of the ring oscillators included in the system. The output bit value depends on the speeds of the ring-oscillator paths in the circuit; these paths always run at different speeds, creating unpredictability in the outputs. This unpredictability increases the authenticity of systems [97], preventing signals from being replicated and giving devices a unique signature.
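The frequency-comparison step of an RO-PUF can be sketched as follows: pair up ring oscillators and emit 1 when the first of a pair runs faster. This is an illustrative model only; the per-chip seed stands in for the silicon's manufacturing variation, and the frequency distribution is an assumption.

```python
import random

def ropuf_response(n_bits=8, seed=0):
    """Toy RO-PUF: pair up 2*n_bits ring oscillators and output 1 if the
    first oscillator of each pair runs faster than the second. The
    frequencies come from uncontrollable manufacturing variation (modeled
    here by a per-chip seed), so each chip yields its own stable pattern."""
    rng = random.Random(seed)                 # the seed stands in for the chip
    freqs = [100.0 + rng.gauss(0, 1) for _ in range(2 * n_bits)]  # nominal MHz
    return "".join("1" if freqs[2 * i] > freqs[2 * i + 1] else "0"
                   for i in range(n_bits))

chip_a = ropuf_response(seed=1)
print(chip_a == ropuf_response(seed=1))   # True: same chip, stable response
```

Different seeds (chips) generally produce different bit patterns, which is the device-unique signature the scheme exploits; real designs add error correction because temperature and aging can flip marginal comparisons.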

Blockchain-Based Authentication
Recently, blockchain has played a significant role in authentication and authorization for many sensitive applications such as SGs. In Ref. [98], an edge-computing-based SG protocol for mutual authentication and key management was proposed. Without the need for additional complicated cryptographic primitives, the protocol can offer effective conditional anonymity and key management by utilizing blockchain. The model prevents a user's identity from being revealed to the edge server. In addition, because of identity-based registration, the entry and exit of new end users does not affect already-existing end users.

A Lightweight Message and Attribute-Based Authentications
The authors of [99] present a "lightweight message authentication scheme for connecting the SMs distributed at various hierarchical networks" [8]. It is implemented by using the Diffie-Hellman exchange protocol to establish shared session keys [99]. However, this scheme results in a high level of certificate-management overhead due to its use of traditional public-key cryptography.
In Ref. [100], the authors proposed a privacy-preserving authentication scheme for SG environments that utilizes a two-step protocol for authentication between SMs, data-collection units, and AMIs. However, some functionalities and security features are missing in this scheme, and it is susceptible to availability attacks such as DoS attacks.
The authors in [101] presented an attribute-based authentication and authorization scheme for SGs that protects against both outsider and insider threats by "verifying the user authorization and performing user authentication together". This scheme was tested with BAN logic and the ProVerif protocol verifier, showing strong durability with very little computational overhead.

Merkle-Tree-Based Authentication
The authors in [102] addressed vulnerabilities that SGs face, such as message-injection and replay attacks, which can degrade the performance of SGs. To prevent such issues, the authors proposed an authentication scheme that considers SGs with computation-constrained resources and employs the Merkle hash tree technique. The proposed scheme helps reduce computational overhead and prevents replay, message-injection, and message-analysis attacks. However, its resilience toward DoS attacks is still untested and could prove to be critical.
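The Merkle hash tree idea behind such schemes can be sketched generically: a verifier holding only the authenticated root can check any message with a logarithmic-size proof of sibling hashes, so injected or altered messages fail verification. This is a standard Merkle construction for illustration, not the exact scheme of [102]; it assumes a power-of-two number of leaves.

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes from the leaf up to the root."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        sib = index ^ 1
        proof.append((level[sib], sib < index))   # (sibling, sibling-is-left)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    node = h(leaf)
    for sib, sib_is_left in proof:
        node = h(sib + node) if sib_is_left else h(node + sib)
    return node == root

msgs = [b"m0", b"m1", b"m2", b"m3"]        # e.g., a batch of meter messages
root = merkle_root(msgs)                   # authenticated once, up front
print(verify(b"m2", merkle_proof(msgs, 2), root))   # True
print(verify(b"bad", merkle_proof(msgs, 2), root))  # False: injected message
```

Because only hashing is involved, verification stays cheap for computation-constrained SMs, which is the motivation the authors cite.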

Mutual Authentication for Unicast and Multicast Communications
In this scheme, the authors proposed mutual authentication between SG utility networks and HAN SMs [89]. The method also provides "a novel key management protocol for data communication between the utility servers and customer SMs". The proposed scheme prevents brute-force, replay, denial-of-service, and man-in-the-middle attacks and reduces the network overhead caused by security key-management packets. However, it does not address authentication between SMs and appliances.

TUAS-RESG and Two-Factor Authentications
With the emergence of the Internet of Things (IoT), devices can exchange information with each other to increase efficiency. With this growth in the technological field, SGs are emerging, providing more stable and efficient power to end users via three-factor user authentication [59]. IoT devices and SGs work hand in hand, exchanging and interpreting information by utilizing TUAS-RESG [59]. To facilitate this two-way communication between end users and SGs, SMs, sensing devices, and control systems are put in place. The model relies on the well-known ElGamal signature scheme. However, the proposed scheme does not support a password and biometric update phase or a dynamic SM addition phase.
For two-factor authentication, the authors in [103] addressed current issues with SG security by identifying an overlooked fact: the SG is a cyber-physical system, so both its cyber and physical domains need consideration. Overlooking this has enabled substitution and man-in-the-middle attacks. The authors combined a contextual factor based on physical connectivity in the PG with the conventional authentication factor in a challenge-response protocol, creating a two-factor cyber-physical device authentication protocol that defends against coordinated cyber-physical attacks on SGs.

Markov Chain and Game Theory for Authentication
The authors in [104] proposed a dynamic and distributed "trust model based on a Markov chain to formalize the trust metric variation and its stability". This scheme allows vehicles to act as their own monitors and update the trust metrics of their neighbors depending on the behavior of the network. Its flexibility allows the model to be adapted to the application's context, and its strength is shown by its resistance and robustness during testing. While the proposed scheme is very strong, the performance evaluation of the trust model in the real context of VANETs still needs improvement and enhancement.
To decide on a fair cost allocation for the noise that is added to a system, parties might work together under the guidance of game theory. Game theory is a complex branch of intelligent optimization. A game-theoretic model captures a competition between teams of players who may decide to cooperate or compete against one another to improve their outcomes or payoffs through the strategies carried out by progressive player actions. The definitions of the key game parameters can be found in the cited references [105-107].
In the field of security, game theory can be used to spot rogue nodes, lessen the impact of outside incursions, and find nodes that act selfishly and overburden the entire network. The Nash equilibrium (NE) has become a practical concept for wireless networks, and more specifically for the security of wireless nodes, offering a principled solution to such strategic conflicts.
The authors in [108] investigated how to use game theory to shield wireless nodes from selfish or malicious nodes. That study examined several game-theoretic protection tactics for wireless nodes and provided a classification of game-theory strategies based on the nature of the attacks. The significance of evolutionary games for the security of wireless nodes facing clever attacks was then recognized in a trust model employing game theory for decision making. Finally, several game-theory perspectives were put forth to encourage cooperation and data validity among various wireless nodes. As suggested in [109], a Stackelberg game was developed to fight external attack manipulations by weighing the energy defense budget against the corresponding attack budget, in order to avoid disrupting the reported data in clustered sensor networks. As stated in [110], the proposed model can successfully address the hardware issue that arises under attack in sensor-network-based cognitive radio, while also managing energy use effectively. To create a security model for sensor-network-based cognitive radio that defends against the data-falsification attack, Ref. [106] presented a Stackelberg game. This strategy was created for two distinct attack-defense situations, with two scenarios offered based on the threshold level for calculating the interference power.
An efficient Stackelberg game was suggested in [111] to attain data trustworthiness in the PGN. The considered attack scenario frequently manipulates groups of the PGN's deployed nodes, which cannot be controlled by the previously mentioned methods; this scenario is more serious than those considered in earlier studies and was addressed by the presented model. A game-theoretic protection strategy based on a repeated game was put forth for clustered wireless sensor networks in [112]. The suggested method identifies rogue nodes that discard high-priority packets (HPPs) in order to improve the dependability of high-priority data (HPT). The results of this study demonstrate that, in comparison to a noncooperative defensive mechanism, the suggested protection model improves the HPT, achieving the Pareto-optimal HPT. A game-theoretic strategy based on non-zero-sum games is proposed in [113] to provide a strong trust model against the sophisticated threats faced by IoT applications. The collected results demonstrate improved performance in identifying malicious nodes with a simple model.
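The basic equilibrium reasoning used in these attacker-defender models can be illustrated with a tiny 2x2 game solved by checking mutual best responses. The payoff values below are invented for illustration and are not taken from the cited models.

```python
# Toy attacker-defender game: find pure-strategy Nash equilibria of a
# 2x2 payoff matrix by checking mutual best responses.
# Tuple = (attacker payoff, defender payoff); values are illustrative.
payoffs = {
    ("Attack", "Monitor"): (-2,  1),   # attack detected and punished
    ("Attack", "Idle"):    ( 3, -3),   # attack succeeds against idle defender
    ("Wait",   "Monitor"): ( 0,  0),   # nothing happens
    ("Wait",   "Idle"):    ( 0, -1),   # idling still risks undetected probing
}
A, D = ("Attack", "Wait"), ("Monitor", "Idle")

def nash_equilibria():
    """A profile (a, d) is a pure NE if neither player gains by
    unilaterally switching strategies."""
    eq = []
    for a in A:
        for d in D:
            best_a = all(payoffs[(a, d)][0] >= payoffs[(x, d)][0] for x in A)
            best_d = all(payoffs[(a, d)][1] >= payoffs[(a, y)][1] for y in D)
            if best_a and best_d:
                eq.append((a, d))
    return eq

print(nash_equilibria())   # [('Wait', 'Monitor')]: monitoring deters attack
```

With these payoffs the unique equilibrium is a deterrence outcome: the defender monitors, so the attacker's best response is to wait, which is the kind of stable defensive posture the cited Stackelberg and repeated-game models aim for.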

Verification Tools of Protocols
This section highlights the main tools utilized for verifying key-management protocols [114]. Specifically, we concentrate on the automated validation of Internet security protocols and applications (AVISPA) tool [87] and the recent version of ProVerif (Version 2) [115].

AVISPA
In Ref. [87], the researchers utilized the AVISPA tool to verify the security and integrity of their proposed NIKE key-management scheme. AVISPA is a tool providing a suite of applications and modules used to build and analyze the security of Internet protocols and applications [116,117]. Figure 10 shows the process of the AVISPA tool.
As shown in Figure 10, the AVISPA tool has three layers [117]. The top layer is the high-level protocol specification language, in which the protocol designer interacts with the tool, implementing their security protocol together with the security property to be tested [116]. In the middle layer, this input is converted into an intermediate format (IF) code by a translator [117]. The IF code is then analyzed by the bottom layer, the backend analyzers: the on-the-fly model checker (OFMC), the CL-based attack searcher (CL-AtSe), the SAT-based model checker (SATMC), and the tree automata-based protocol analyzer (TA4SP).

•	OFMC performs protocol falsification and bounded verification using a demand-driven method that explores the transition system described in the IF specification. It allows the specification of algebraic properties of cryptographic operators, as well as typed and untyped protocol models [116].
•	CL-AtSe applies constraint solving to the protocol, utilizing simplification heuristics and redundancy-elimination techniques [116].
•	SATMC employs the IF, the initial state, and the set of states representing a violation of the defined security property to build a propositional formula for the protocol [116].
•	TA4SP approximates the attacker's (intruder's) knowledge of the protocol using regular tree languages and rewriting. It can show that a protocol is flawed by underapproximating, or that it is safe for any number of sessions by overapproximating [116].
Upon completing its analysis, the AVISPA tool outputs the result for the defined security protocol, stating whether the input problem was solved (positively or negatively), whether the available resources were exhausted, or whether the problem could not be solved for a particular reason [116].

ProVerif
This model, created by [114], depicts protocols using Horn clauses and employs overapproximation to evaluate an infinite number of sessions. ProVerif accepts two types of input files: Horn clauses and a subset of the pi calculus. The tool performs unbounded verification for a class of protocols by using an abstraction of fresh nonce generation. It can manage an infinite number of protocol sessions as well as a wide variety of cryptographic primitives (shared- and public-key cryptography, hash functions, etc.). Any equational theory can be modeled in ProVerif; however, the tool might not terminate. Although this is true for XOR and Diffie-Hellman exponentiation, ProVerif does support the commutativity of exponentiation alone [118]. Figure 11 shows the process of the ProVerif tool.
Several studies have focused on the efficient utilization of the ProVerif tool [119,120]. Figure 11 depicts the process of verifying the key-management protocol. The Abbreviations section lists the main acronyms used throughout the paper.
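The Horn-clause abstraction described above can be sketched in a few lines. The following is a hypothetical toy model, not ProVerif's actual resolution procedure: attacker knowledge is a set of terms, each derivation rule (pair projection, decryption with a known key) is applied until a fixpoint is reached, and a secrecy query simply checks whether the secret term becomes derivable.

```python
# Toy ProVerif-style Horn-clause saturation (illustrative sketch only).
# Terms are nested tuples; atoms such as 'secret' or 'k1' are strings.

def enc(m, k):  return ('enc', m, k)    # symmetric encryption of m under k
def pair(x, y): return ('pair', x, y)   # pairing of two terms

def saturate(known):
    """Apply attacker derivation rules until no new term is derivable."""
    known = set(known)
    changed = True
    while changed:
        changed = False
        for t in list(known):
            derived = []
            if isinstance(t, tuple) and t[0] == 'pair':
                derived += [t[1], t[2]]               # projection rules
            if isinstance(t, tuple) and t[0] == 'enc' and t[2] in known:
                derived.append(t[1])                  # decryption rule
            for d in derived:
                if d not in known:
                    known.add(d)
                    changed = True
    return known

# Scenario: the attacker observes enc(secret, k1) and the pair (k1, nonce);
# projecting the pair leaks k1, which then decrypts the ciphertext.
knowledge = saturate([enc('secret', 'k1'), pair('k1', 'nonce')])
print('secret' in knowledge)   # prints True: the secrecy query fails
```

Because the rule set only ever adds knowledge and ignores session boundaries, the fixpoint overapproximates what the attacker can learn across unboundedly many sessions, which is the source of both ProVerif's unbounded verification and its possible false attacks.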


Conclusions, Challenges, and Future Vision
The main takeaway from this survey concerns the key-management and authentication mechanisms for SGs. The pros and cons of the most recent key-management techniques were addressed. Among the presented techniques, REMP and NIKE are considered the most effective, albeit with some drawbacks. REMP does not require subscribers in a group to communicate directly with publishers in a group and, as a result, outperforms many other point-to-point schemes. On the other hand, REMP's drawback is that conventional security schemes are insufficient to meet the security requirements of the SG as a large-scale CPS; moreover, REMP was unable to balance end-to-end security strength against the scalability, thinness, and computing-capability requirements. NIKE proposed a three-part key scheme involving setup, installation, and key agreement, but its greatest contribution was shifting the computational load onto the AMI head-end, which is less resource-restricted and can complete calculations much faster. However, it has a high computational load and is susceptible to man-in-the-middle and desynchronization attacks. From the authentication point of view, ROPUFs can be used more frequently than the other approaches. They can store cryptographic keys instead of relying on nonvolatile memory or hardware encryption. This makes the system easy to integrate, because no additional hardware is needed and each meter can have a unique ID that identifies it. ROPUFs can also be reconfigured with the AES encryption scheme, making the system easily deployable. Authentication occurs as the ROPUF offers five levels of security to ensure that the communication between the SM and the utility company is secure before the meter is allowed to connect to the Internet. However, systems authenticated by ROPUFs can be attacked by sending spoofing messages that cause reconfiguration.
The future of SMs will continue to grow with advances in technology. As we have seen, key management and authentication are major open problems in smart metering systems, because their weaknesses may threaten PG stability. Building on the existing authentication schemes, we will continue to address these problems until we reach practical, lightweight, privacy-preserving, and robust key-management and authentication schemes that advance the security of communication and authentication in smart metering systems. Because this survey focuses on the key-management and authentication schemes implemented in smart metering systems, future work can consider the approaches used in other cyber-physical systems, such as vehicular networks, e-health, and transportation systems, since each environment has its own challenges and goals. Moreover, we showed that no existing solution fulfills all the proposed objectives and that there is still much work to be done in key management and authentication. For example, the majority of ROPUFs suffer from rising temperature on PUF-embedded devices, which results in performance degradation; this limitation could be targeted in future designs of PUF-based approaches. We have concluded that the approaches presented in this paper have security flaws. In the future, we will propose a new lightweight, authenticated key-agreement protocol based on a decentralized elliptic curve cryptosystem, and we will verify and analyze the security claims of the newly proposed protocol.

Figure 8. General architecture of anonymous key distribution.