# Deep PUF: A Highly Reliable DRAM PUF-Based Authentication for IoT Networks Using Deep Convolutional Neural Networks


## Abstract


## 1. Introduction

Latency-based DRAM PUFs are formed by reducing DRAM timing parameters below their nominal values (e.g., the activation time (t_RCD), precharge time (t_RP), etc. [12,13]). Reliability and robustness are two fundamental properties of a desirable PUF, proving that the output responses are independent of internal/external noise and ambient conditions. Most existing PUFs use post-processing techniques, which require helper data algorithms and complex error-correcting codes (ECCs) to extract reliable responses and conduct a proper authentication procedure [8,14]. However, these methods cause significant hardware/computational overheads and require additional non-volatile memory (NVM) to store helper data, in addition to their security defects [15,16,17]. Most of the proposed DRAM PUFs employ different pre-selection mechanisms to eliminate dependent or unstable cells and decrease the ECC overheads [12,13]. Pre-selection mechanisms consist of running multiple tests on the PUF and selecting the qualified cells using selection algorithms. These approaches limit the challenge-response pair (CRP) space and entail additional runtime and costs. As a result, resource-constrained nodes in Internet of Things (IoT) applications cannot benefit from DRAM PUF-based authentication.

- We propose deep PUF as a two-stage mechanism, including multi-label classification and challenge verification, to provide a robust and lightweight device authentication without error-correcting codes and other pre-filtering methods.
- We implement two types of latency-based proposals (t_RCD and t_RP PUFs) as fast, runtime-accessible DRAM PUFs and analyze their characteristics to train the CNN.
- Finally, we develop a CNN model using experimental data and analyze the robustness and security of the proposed deep PUF.

## 2. Background and Motivation

#### 2.1. DRAM Operation and Timing Parameters

To access data, the memory controller first issues an activate (ACT) command to open the target row; the row becomes accessible after the activation time (t_RCD). For a subsequent read/write operation on another row, it is necessary to issue a PRE command to deactivate the opened row. The next row will be accessible after a specified time named the precharge time (t_RP). There are also other timing parameters, such as t_RAS and t_CL, that are used by the memory controller to manage DRAM operations [24,25,26,27].
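As a rough illustration of how these parameters combine, the following sketch computes a row-miss read latency from nominal timings. The DDR3-1600 cycle counts and clock period are typical datasheet values we assume for illustration; they are not figures from this paper.

```python
# Typical DDR3-1600 timings in clock cycles (illustrative assumptions,
# not values taken from this paper).
T_RP = 11        # precharge time
T_RCD = 11       # activation (row-to-column) delay
T_CL = 11        # CAS latency
CLOCK_NS = 1.25  # 800 MHz clock period in nanoseconds

def row_miss_read_latency_ns() -> float:
    """Latency of a read that must precharge the open row (t_RP),
    activate the target row (t_RCD), and then wait the CAS latency (t_CL)."""
    return (T_RP + T_RCD + T_CL) * CLOCK_NS

print(row_miss_read_latency_ns())  # 41.25
```

Reducing T_RCD below its nominal value, as the latency PUFs below do, shortens this budget and makes some cells return erroneous data.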

#### 2.2. DRAM PUF Technologies

#### Latency-Based DRAM PUFs

The t_RCD-based PUF is formed by reducing the minimum time period required to activate the rows to be accessed [12]. This structure applies a filtering mechanism that eliminates unstable bits across different iterations to enhance the PUF's robustness and repeatability. A separate DRAM rank is needed to count and store the latency failures of each iteration, and the evaluation time of the PUF responses increases noticeably due to the filtering phase. Even so, this mechanism is not sufficient, and ECC approaches are still required to obtain a reliable PUF.

The t_RP-based PUF [13] applies t_RP reduction and disrupts the precharge procedure to obtain erroneous data. This technique categorizes the cells on the basis of their dependency on input patterns and measurements; only the independent cells are then qualified for use. Next, a specific selection algorithm is designed to choose the acceptable cells and improve the robustness of the PUF. In such a scenario, the CRP space is noticeably contracted, and the effects of environmental variations are not considered.
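The pre-selection idea can be sketched as a toy filter that keeps only cells whose failure behavior is identical across all measurements. This is our simplified model of the selection algorithms in [12,13], not their actual implementation.

```python
from collections import Counter

def select_stable_cells(measurements, threshold=1.0):
    """Toy pre-selection filter: keep bit positions that fail in at least
    `threshold` fraction of the measurements (threshold=1.0 keeps only
    cells that fail every time). Each measurement is a set of failed
    bit positions."""
    counts = Counter()
    for failed in measurements:
        counts.update(failed)
    m = len(measurements)
    return {cell for cell, c in counts.items() if c / m >= threshold}

# Three toy measurements of the same block: cell 12 fails only once
runs = [{1, 5, 9}, {1, 5, 9, 12}, {1, 5, 9}]
print(select_stable_cells(runs))  # {1, 5, 9}
```

Note how the unstable cell (position 12) is discarded; this is exactly the mechanism that shrinks the CRP space and motivates deep PUF's filter-free design.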

#### 2.3. Post-Processing and Pre-Selection Algorithms

#### 2.4. Motivation

## 3. Proposed Deep PUF

#### 3.1. Enrollment

In the enrollment phase, the behavior of the chosen DRAM PUF (t_RCD-based, t_RP-based, etc.) over multiple iterations and under various ambient conditions is analyzed. Then, considering the features necessary to develop a successful classifier, challenges are selected that contain the address of the memory blocks and the input data patterns. The output responses as well as the failure bits for each challenge are categorized without any modification (see Figure 3a). The number of measurements needed to capture the comprehensive features of all possible responses for each challenge can be set effectively based on an intrinsic robustness evaluation.

- Robustness: determines the effects of different operating conditions on the output responses. This property affects the similarity of samples within a single class and the accuracy of the classification results. The robustness of a DRAM PUF can be calculated using intra-Hamming distance (HD) or intra-Jaccard index values.
- Uniqueness: a sufficient difference between two responses from two distinct DRAM blocks provides uniqueness. This factor shows the difference between samples belonging to separate classes and can be determined by computing the inter-class HD.

Figure 3c depicts the developed deep CNN, which is trained on the generated dataset and learns the failure behavior under various measurements.

- Stability of operating conditions: operating the PUF device in a stable environment, where variations in conditions (e.g., temperature, voltage) are not appreciable, yields more consistency within each class and better accuracy. Because the PUF is sensitive to environmental conditions, in an environment with varying temperatures, the number of bit failures in each measurement and the way those failures are distributed may make samples differ far more than usual. In this case, deep PUF must include the responses at all possible temperatures to extract the full set of failure features, thereby leading to an accurate classification.
- Variety of blocks and input patterns: one scenario is to organize the classes using only a single memory block and writing different patterns into it as the challenges; the other is to exploit multiple blocks. If only one memory block is used to implement the PUF, the challenges must be based on different input data patterns. However, when multiple blocks are used, the challenges can be configured with the same data for all blocks.

#### 3.2. Authentication Phase

- The received raw bits are classified by the CNN constructed during the enrollment phase.
- The detected label is compared with the original challenge.
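The two steps above can be sketched as a minimal server-side check; the `classifier` argument is a hypothetical stand-in for the trained CNN.

```python
from typing import Any, Callable

def authenticate(challenge_label: int,
                 raw_response: Any,
                 classifier: Callable[[Any], int]) -> bool:
    """Stage 1: classify the received raw bits with the enrollment-phase CNN.
    Stage 2: accept only if the predicted label matches the issued challenge."""
    predicted_label = classifier(raw_response)
    return predicted_label == challenge_label

# Toy classifier that always predicts class 7 (illustration only)
print(authenticate(7, b"raw-bits", lambda response: 7))  # True
print(authenticate(3, b"raw-bits", lambda response: 7))  # False
```

Because the raw bits are sent unfiltered, no ECC or helper data is needed on the device; all tolerance to noisy responses is absorbed by the classifier.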

## 4. DRAM Experiments and Observations

In this section, we implement both latency-based PUFs (t_RCD and t_RP PUFs). The experimental evaluations are conducted using a DDR3 DRAM module; Figure 5 shows our experimental setup. We examine the characteristics of both latency PUFs to make a better-informed decision considering the CNN requirements, and we read DRAM values under different conditions to evaluate the robustness and uniqueness of the DRAM blocks.

Figure 6a shows the distributions of intra-Jaccard indices for the t_RCD and t_RP reduction-based methods. The intra-Jaccard index determines the similarity of two PUF responses to the same challenge. For two response sets $R_1$ and $R_2$, it is calculated as $\frac{|R_1 \cap R_2|}{|R_1 \cup R_2|}$, where $|R_1 \cap R_2|$ is the number of shared failures and $|R_1 \cup R_2|$ is the total number of failures in $R_1$ and $R_2$. A Jaccard index close to 1 indicates greater similarity between $R_1$ and $R_2$. In this work, this metric is used to check the repeatability and robustness of the DRAM PUF responses. The results are averages gathered by checking multiple samples at each temperature. We have also tested the sensitivity of the PUF responses to temperature variations using intra-HD calculations; the results, shown in Figure 6b, indicate the reliability of the DRAM PUF responses as well as the similarity of the samples forming a class. Another principal factor affecting the performance of deep PUF is uniqueness, which measures the difference between the failure distributions of two different memory blocks. We have analyzed this factor by comparing multiple samples belonging to various blocks of the DRAM module using the inter-HD. Table 2 presents the average uniqueness for the t_RCD- and t_RP-based methods, considering the average number of bit failures in each block.
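These similarity metrics are straightforward to compute; the following is a minimal sketch over toy failure-bit sets (the positions are illustrative, not measured data).

```python
def jaccard_index(r1: set, r2: set) -> float:
    """Intra-Jaccard index |R1 ∩ R2| / |R1 ∪ R2| of two failure sets."""
    if not r1 and not r2:
        return 1.0  # two empty responses are identical
    return len(r1 & r2) / len(r1 | r2)

def fractional_hd(a: bytes, b: bytes) -> float:
    """Fractional Hamming distance between two equal-length responses."""
    assert len(a) == len(b)
    differing_bits = sum(bin(x ^ y).count("1") for x, y in zip(a, b))
    return differing_bits / (8 * len(a))

# Two measurements of the same challenge sharing 3 of 5 failure positions
r1, r2 = {3, 17, 42, 99}, {3, 17, 42, 120}
print(jaccard_index(r1, r2))            # 0.6
print(fractional_hd(b"\x0f", b"\x0e"))  # 1 differing bit / 8 = 0.125
```

High intra-Jaccard (close to 1) and low intra-HD indicate a repeatable response; a high inter-class HD between different blocks indicates uniqueness.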

By analyzing the characteristics of the t_RCD and t_RP PUFs, we find that they have the desirable properties for developing a classifier and organizing deep PUF: similarity among the samples within each class and variety among samples from different classes. Table 3 summarizes the generic HD values for stable and unstable conditions, the two possible scenarios during a deep PUF configuration. We focus on the t_RCD-based PUF, which comparatively has more intra-class consistency.

## 5. Development of CNN Model

#### 5.1. Dataset Creation

- The same input pattern (all “1”s) is used to characterize all blocks and the operating conditions are stable (room temperature and nominal voltage).
- Different input patterns (0x00, 0x01… 0xFF) are used for different blocks and the conditions are stable.
- The same input pattern is used to characterize all blocks and the operating conditions are unstable.
- Different input patterns are used for different blocks and the conditions are unstable.

#### 5.2. Training the Classifier

**Algorithm 1** Convolutional neural network (CNN)-based classification

```text
Dataset generation
Input:  a set of challenges, each containing the address of a PUF segment
        and an input pattern: (C_1, C_2, ..., C_N)
Output: collections of images corresponding to the challenges: (S_1, S_2, ..., S_N)
Process:                       // build N folders: folder_1, folder_2, ..., folder_N
for i = 1 to N do              // N: number of classes
    for k = 1 to M do          // M: number of measurements
        Write the input pattern to the address given by C_i
        Change the timing parameter (t_RCD)
        Read the DRAM segment -> R_k
        Visualize(R_k)         // convert R_k to integer values and a gray-scale image
    end for
    Store R_1, R_2, ..., R_M into folder_i
end for
Output: a dataset of N folders with M images in each folder

Training process
Input:  a collection of labeled images (responses): (s_1, ..., s_M, s_{M+1}, ..., s_{MN})
Output: a collection of features assigned to different labels
Process:
for k = 1 to e do              // e: number of epochs
    for i = 1 to M*N do        // N: number of classes, M: samples per class
        Get the sample with its label -> (s_i, y_i)
        Gain the features -> f_i          // after applying the defined layers
        Assign the features to the label (f_i, y_i)
    end for
    for j = 1 to N do
        Build the collection of features for each class -> (F_j, Y_j)
    end for
    Build F = (F_1, F_2, ..., F_N)
    Update the features -> F
end for
Build the final collection of features
Output: (F_1, Y_1), (F_2, Y_2), ..., (F_N, Y_N)

Testing process
Input:  x                      // testing sample
Process:
Apply the CNN to x
Select (x', y') such that p(x', y') = max{p_1(x', y_1), p_2(x', y_2), ..., p_N(x', y_N)}
                               // p: the probability vector calculated by the softmax function
Assign y' to x
```
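The `Visualize` step in Algorithm 1 (converting a raw read-out into a gray-scale image) might look like the following sketch. The byte-per-pixel mapping and the zero-padding of short buffers are our assumptions; the 222 × 222 resolution comes from the dataset parameters below.

```python
import numpy as np

def visualize(readout: bytes, side: int = 222) -> np.ndarray:
    """Convert a raw DRAM read-out into a square gray-scale image.

    Each byte of the read-out becomes one 8-bit pixel; the buffer is
    truncated or zero-padded to side*side bytes. (Resolution follows the
    dataset table; the byte-per-pixel mapping is an illustrative choice.)
    """
    buf = np.frombuffer(readout, dtype=np.uint8)
    n = side * side
    if buf.size < n:
        buf = np.pad(buf, (0, n - buf.size))  # pad short read-outs with zeros
    return buf[:n].reshape(side, side)

# Hypothetical read-out of a block written with all "1"s
img = visualize(bytes([0xFF] * 50000))
print(img.shape)  # (222, 222)
```

The resulting images can then be stored per-challenge folder exactly as the dataset-generation loop describes.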

#### 5.3. Performance Metrics

As Figure 7 shows, the probability of classification error can be reduced to the order of $10^{-1}$ and even near $10^{-2}$ by adjusting the number of measurements during the enrollment phase.
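One way to see why additional measurements reduce the error probability is a majority-vote model over independent classifications. This binomial sketch is our illustration of the trend, not the paper's exact analysis.

```python
from math import comb

def majority_error(p_single: float, m: int) -> float:
    """Probability that a majority vote over m independent classifications
    is wrong, given a per-measurement error probability p_single
    (binomial model; an illustrative assumption)."""
    need = m // 2 + 1  # wrong votes needed for the majority to be wrong
    return sum(comb(m, k) * p_single**k * (1 - p_single)**(m - k)
               for k in range(need, m + 1))

print(majority_error(0.08, 1))  # 0.08: a single measurement
print(majority_error(0.08, 9))  # far below 0.01 with nine measurements
```

Under this model, an error rate near $10^{-1}$ for one measurement already drops below $10^{-2}$ after a handful of votes, consistent with the trend in Figure 7.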

## 6. Security Analysis and Discussion

#### 6.1. Security and Robustness

#### 6.2. Performance Comparisons

We compare deep PUF with the t_RCD-based PUF mechanism for generating raw DRAM data proposed in [12]. That method uses a filtering procedure to extract the reliable cells and form the output response, which significantly increases the evaluation period. Deep PUF achieves a lower evaluation time than the t_RCD-based PUF technology by removing the filtering mechanism. The evaluation period of deep PUF can be measured in a way similar to t_RCD-based PUFs, as expressed by Equation (1).

The result is considerably lower than the t_RCD-based PUF's evaluation time of 88.2 ms. Note that the evaluation time is measured for the PUF operation on the device and does not include the authentication process on the server side. Furthermore, the t_RCD-based PUF needs at least two DRAM ranks: one for the PUF operation and one for counting the latency failures, whereas the proposed deep PUF operates with only one rank and is therefore appropriate for low-cost systems. Additionally, both the t_RCD-based [12] and t_RP-based [13] PUFs require post-processing error-correction algorithms that cause significant time and hardware overheads. Retention-based PUFs [12,28], on the other hand, require a long period of time (on the order of minutes) to extract sufficient failure bits and generate reliable signatures, which makes the DRAM rank unavailable for a long time.

#### 6.3. Security Discussion and Countermeasures against Possible Attacks

## 7. Conclusion and Future Work

## Author Contributions

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Data Availability Statement

## Acknowledgments

## Conflicts of Interest

## References

- Helfmeier, C.; Boit, C.; Nedospasov, D.; Seifert, J. Cloning physically unclonable functions. In Proceedings of the IEEE International Symposium on Hardware-Oriented Security and Trust, Austin, TX, USA, 2–3 June 2013; pp. 1–6.
- Herder, C.; Yu, M.; Koushanfar, F.; Devadas, S. Physical unclonable functions and applications: A tutorial. Proc. IEEE **2014**, 102, 1126–1141.
- Kaveh, M.; Martín, D.; Mosavi, M.R. A lightweight authentication scheme for V2G communications: A PUF-based approach ensuring cyber/physical security and identity/location privacy. Electronics **2020**, 9, 1479.
- Kaveh, M.; Mosavi, M.R. A lightweight mutual authentication for smart grid neighborhood area network communications based on physically unclonable function. IEEE Syst. J. **2020**, 14, 4535–4544.
- Yanambaka, V.P.; Mohanty, S.P.; Kougianos, E.; Puthal, D. PMsec: Physical unclonable function-based robust and lightweight authentication in the internet of medical things. IEEE Trans. Consum. Electron. **2019**, 65, 388–397.
- Xiao, K.; Rahman, M.T.; Forte, D.; Huang, Y.; Su, M.; Tehranipoor, M. Bit selection algorithm suitable for high-volume production of SRAM-PUF. In Proceedings of the 2014 IEEE International Symposium on Hardware-Oriented Security and Trust (HOST), Arlington, VA, USA, 6–7 May 2014; pp. 101–106.
- Xiong, W.; Schaller, A.; Anagnostopoulos, N.A.; Saleem, M.U.; Gabmeyer, S.; Katzenbeisser, S.; Szefer, J. Run-time accessible DRAM PUFs in commodity devices. In Cryptographic Hardware and Embedded Systems—CHES 2016; Gierlichs, B., Poschmann, A.Y., Eds.; Springer: Berlin/Heidelberg, Germany, 2016; pp. 432–453.
- Schaller, A.; Xiong, W.; Anagnostopoulos, N.A.; Saleem, M.U.; Gabmeyer, S.; Skoric, B.; Katzenbeisser, S.; Szefer, J. Decay-based DRAM PUFs in commodity devices. IEEE Trans. Dependable Secur. Comput. **2018**, 16, 462–475.
- Rosenblatt, S.; Chellappa, S.; Cestero, A.; Robson, N.; Kirihata, T.; Iyer, S.S. A self-authenticating chip architecture using an intrinsic fingerprint of embedded DRAM. IEEE J. Solid State Circuits **2013**, 48, 2934–2943.
- Tang, Q.; Zhou, C.; Choi, W.; Kang, G.; Park, J.; Parhi, K.K.; Kim, C.H. A DRAM based physical unclonable function capable of generating >10^32 challenge response pairs per 1 Kbit array for secure chip authentication. In Proceedings of the 2017 IEEE Custom Integrated Circuits Conference, Austin, TX, USA, 30 April–3 May 2017; pp. 1–4.
- Chen, S.; Li, B.; Cao, Y. Intrinsic Physical Unclonable Function (PUF) sensors in commodity devices. Sensors **2019**, 11, 2428.
- Kim, J.S.; Patel, M.; Hassan, H.; Mutlu, O. The DRAM Latency PUF: Quickly evaluating physical unclonable functions by exploiting the latency-reliability tradeoff in modern commodity DRAM devices. In Proceedings of the IEEE International Symposium on High Performance Computer Architecture, Vienna, Austria, 24–28 February 2018; pp. 194–207.
- Talukder, B.M.S.B.; Ray, B.; Forte, D.; Rahman, M.T. PreLatPUF: Exploiting DRAM latency variations for generating robust device signatures. IEEE Access **2019**, 7, 81106–81120.
- Delvaux, J.; Gu, D.; Schellekens, D.; Verbauwhede, I. Helper data algorithms for PUF-based key generation: Overview and analysis. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. **2015**, 34, 889–902.
- Yu, M.; Devadas, S. Secure and robust error correction for physical unclonable functions. IEEE Des. Test Comput. **2010**, 27, 48–65.
- Paral, Z.; Devadas, S. Reliable and efficient PUF-based key generation using pattern matching. In Proceedings of the 2011 IEEE International Symposium on Hardware-Oriented Security and Trust, San Diego, CA, USA, 5–6 June 2011; pp. 128–143.
- Hiller, M.; Merli, D.; Stumpf, F.; Sigl, G. Complementary IBS: Application specific error correction for PUFs. In Proceedings of the 2012 IEEE International Symposium on Hardware-Oriented Security and Trust, San Francisco, CA, USA, 3–4 June 2012; pp. 1–6.
- Yue, M.; Karimian, N.; Yan, W.; Anagnostopoulos, N.A.; Tehranipoor, F. DRAM-based authentication using deep convolutional neural networks. IEEE Consum. Electron. Mag. **2020**, 1.
- Banerjee, S.; Odelu, V.; Kumar, A.; Chattopadhyay, S.; Rodregues, J.J.P.C.; Park, Y. Physically secure lightweight anonymous user authentication protocol for internet of things using physically unclonable functions. IEEE Access **2019**, 7, 85627–85644.
- Byun, J.W. End-to-end authenticated key exchange based on different physical unclonable functions. IEEE Access **2019**, 7, 102951–102965.
- Hashemian, M.S.; Singh, B.; Wolff, F.; Weyer, D.; Clay, S.; Papachristou, C. A robust authentication methodology using physically unclonable functions in DRAM arrays. In Proceedings of the Design, Automation and Test in Europe Conference, Grenoble, France, 9–13 March 2015; pp. 647–652.
- Tehranipoor, F.; Karimian, N.; Yan, W.; Chandy, J.A. DRAM-based intrinsic physically unclonable functions for system-level security and authentication. IEEE Trans. Very Large Scale Integr. (VLSI) Syst. **2017**, 25, 1085–1097.
- Chang, K.K.; Kashyap, A.; Hassan, H.; Ghose, S.; Hsieh, K.; Lee, A.; Li, T.; Pekhimenko, G.; Khan, S.; Mutlu, O. Understanding latency variation in modern DRAM chips: Experimental characterization, analysis, and optimization. In Proceedings of the ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Science, Antibes, France, 14–18 June 2016; pp. 323–336.
- Lee, D.; Kim, Y.; Pekhimenko, G.; Khan, S.; Seshadri, V.; Chang, K.; Mutlu, O. Adaptive-latency DRAM: Optimizing DRAM timing for the common-case. In Proceedings of the IEEE 21st International Symposium on High Performance Computer Architecture, Burlingame, CA, USA, 7–11 February 2015; pp. 489–501.
- Chandrasekar, K.; Goossens, S.; Weis, C.; Koedam, M.; Akesson, B.; Wehn, N.; Goossens, K. Exploiting expendable process-margins in DRAMs for run-time performance optimization. In Proceedings of the Design, Automation and Test in Europe Conference and Exhibition, Dresden, Germany, 24–28 March 2014; pp. 1–6.
- Keller, C.; Gürkaynak, F.; Kaeslin, H.; Felber, N. Dynamic memory-based physically unclonable function for the generation of unique identifiers and true random numbers. In Proceedings of the 2014 IEEE International Symposium on Circuits and Systems, Melbourne, VIC, Australia, 2014; pp. 2740–2743.
- Chang, K.K.; Yağlikçi, A.G.; Ghose, S.; Agrawal, A.; Chatterjee, N.; Kashyap, A.; Lee, A.; O'Connor, M.; Hassan, H.; Mutlu, O. Understanding reduced-voltage operation in modern DRAM devices: Experimental characterization, analysis, and mechanisms. Proc. ACM Meas. Anal. Comput. Syst. **2017**, 1, 1–42.
- Sutar, S.; Raha, A.; Raghunathan, V. D-PUF: An intrinsically reconfigurable DRAM PUF for device authentication in embedded systems. In Proceedings of the IEEE International Conference on Compilers, Architectures, and Synthesis for Embedded Systems (CASES), Pittsburgh, PA, USA, 2–7 October 2016; pp. 1–10.
- Mathew, S.; Satpathy, S.K.; Anders, M.A.; Kaul, H.; Hsu, S.K.; Agarwal, A.; Chen, G.K.; Parker, R.J.; Krishnamurthy, R.K.; De, V. A 0.19 pJ/b PVT-variation-tolerant hybrid physically unclonable function circuit for 100% stable secure key generation in 22 nm CMOS. In Proceedings of the IEEE International Solid-State Circuits Conference Digest of Technical Papers, San Francisco, CA, USA, 9–13 February 2014; pp. 278–279.
- Patel, M.; Kim, J.S.; Mutlu, O. The Reach Profiler (REAPER): Enabling the mitigation of DRAM retention failures via profiling at aggressive conditions. In Proceedings of the 2017 ACM/IEEE 44th Annual International Symposium on Computer Architecture (ISCA), Toronto, ON, Canada, 24–28 June 2017; pp. 255–268.
- Bohm, C.; Hofer, M. Physical Unclonable Functions in Theory and Practice; Springer: Cham, Switzerland, 2012; pp. 239–248.
- Hiller, M.; Yu, M.D.; Sigl, G. Cherry-picking reliable PUF bits with differential sequence coding. IEEE Trans. Inf. Forensics Secur. **2016**, 11, 2065–2076.
- Maes, R. PUF-based entity identification and authentication. In Physically Unclonable Functions: Constructions, Properties and Applications; Springer: Berlin/Heidelberg, Germany, 2013; pp. 117–141.
- Shi, J.; Lu, Y.; Zhang, J. Approximation attacks on strong PUFs. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. **2019**, 39, 2138–2151.
- Ostrovsky, R.; Scafuro, A.; Visconti, I.; Wadia, A. Universally composable secure computation with (malicious) physically unclonable functions. In Advances in Cryptology—EUROCRYPT 2013; LNCS; Springer: Berlin/Heidelberg, Germany, 2013; pp. 702–718.
- Rührmair, U.; van Dijk, M. PUFs in security protocols: Attack models and security evaluations. In Proceedings of the IEEE Symposium on Security and Privacy, Berkeley, CA, USA, 19–22 May 2013; pp. 286–300.

**Figure 2.**DRAM timing at read operation [13].

**Figure 4.** The authentication phase, including the communication process and the verification steps on the server.

**Figure 6.** (**a**) Distributions of intra-Jaccard indices calculated between responses for t_RCD and t_RP PUFs. (**b**) The average intra-Hamming distance (HD) for t_RCD and t_RP PUFs at different temperatures (reference temperature: 25 °C).

**Figure 7.**Probability of error in classification as a function of the number of measurements for the different number of classes (N = 20, 50 and 100).

Parameter | Value
---|---
Reduced time | 5 ns
Block size | 200 Kb
Input pattern | All “1”s
Block address | Various
Number of tested blocks | 200

Mechanism | Average Inter-HD | Average Probability of Failure (for 200 Blocks)
---|---|---
RCD PUF | 12.15% | 6.5%
RP PUF | 20.21% | 11%

Mechanism | Intra-Class HD (Stable) | Inter-Class HD (Stable) | Intra-Class HD (Unstable) | Inter-Class HD (Unstable)
---|---|---|---|---
RCD PUF | 0.32% | 12.7% | 1.44% | 11.64%
RP PUF | 1.05% | 21.1% | 3.41% | 19.33%

Parameter | Value
---|---
Total number of images in each dataset | 9000
Number of classes (challenges) | 100
Number of samples (responses) per class | 90
Tested temperatures | 25–55 °C
Percentage of training data | 80
Percentage of test data | 20
Resolution of images (pixels) | 222 × 222

Layer | Dimension
---|---
Convolution 2D | (222, 222, 128)
Convolution 2D | (220, 220, 32)
Max pooling | (109, 109, 32)
Convolution 2D | (107, 107, 16)
Convolution 2D | (105, 105, 32)
Max pooling | (52, 52, 32)
Convolution 2D | (50, 50, 16)
Max pooling | (24, 24, 16)
Flatten | 9216
Dense | 145
Dense | 75
Dense | Number of classes
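The layer dimensions in the table are mutually consistent if the convolutions are 3 × 3 ('valid', except the first, which uses 'same' padding) and the max-pooling layers use a 3 × 3 window with stride 2; these kernel and stride values are inferred from the dimensions, not stated in the text. A quick check reproduces the flattened size:

```python
def conv_out(n: int, k: int = 3, same: bool = False) -> int:
    """Output side length of a k x k convolution ('valid' unless same=True)."""
    return n if same else n - k + 1

def pool_out(n: int, k: int = 3, s: int = 2) -> int:
    """Output side length of max pooling with a k x k window and stride s."""
    return (n - k) // s + 1

def flatten_size(side: int = 222, last_channels: int = 16) -> int:
    side = conv_out(side, same=True)  # -> 222 (first conv, 'same' padding)
    side = conv_out(side)             # -> 220
    side = pool_out(side)             # -> 109
    side = conv_out(side)             # -> 107
    side = conv_out(side)             # -> 105
    side = pool_out(side)             # -> 52
    side = conv_out(side)             # -> 50
    side = pool_out(side)             # -> 24
    return side * side * last_channels

print(flatten_size())  # 9216, matching the Flatten layer in the table
```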

Experimental Conditions | Same Input Pattern, Augmented Data (%) | Same Input Pattern, Original Data (%) | Different Input Patterns, Augmented Data (%) | Different Input Patterns, Original Data (%)
---|---|---|---|---
Stable temperature | 96.12 | 94.66 | 97.79 | 97.15
Various temperatures | 92.29 | 91.03 | 94.9 | 94.33


© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Najafi, F.; Kaveh, M.; Martín, D.; Reza Mosavi, M. Deep PUF: A Highly Reliable DRAM PUF-Based Authentication for IoT Networks Using Deep Convolutional Neural Networks. *Sensors* **2021**, *21*, 2009.
https://doi.org/10.3390/s21062009
