# Built-In Functional Testing of Analog In-Memory Accelerators for Deep Neural Networks


## Abstract


## 1. Introduction

- We develop test-pattern generation (TPG) methods which generate pseudorandom tests in the form of images whose pixel values are chosen from both normal and uniform distributions. The distributions themselves are created using the statistical properties of the information present within the training dataset for the DNN, and so, tests generated using these distributions are able to better sensitize weights within the DUT and achieve good fault coverage.
- Convolutional layers extract features in the form of edges and contours for the subsequent fully connected hidden layers within DNNs. Based on this observation, we develop TPG methods to generate structured patterns which mimic such features and show that these can augment pseudorandom tests to further improve the fault coverage in convolutional neural networks.
- For DNNs trained to classify color images, we develop a TPG method which uses template images to capture the underlying chrominance information and applies geometric transformations to these templates to create diversified tests.
- Output responses from the DNN for a series of test patterns are observed in the form of one-hot-encoded predicted labels. These are compressed into a signature which can be compared to a reference to detect faults.

## 2. Preliminaries

#### 2.1. System Architecture

#### 2.2. Reliability Issues

## 3. Neural Network and Fault Modeling

#### 3.1. Compressing the DNN Architecture

#### 3.2. Fault Model

- Tests are generated to target faults affecting NVM cells as well as the access transistors associated with the cells.
- Tests are generated assuming at most one physical fault present in the system.
- Faults affecting the NVM cells are permanent in nature, remaining in existence indefinitely if no corrective action is taken.
- The DNN is trained offline and then used to perform only inference operations once deployed in the field.

- Type 1: Suppose ${w}_{l}\in \{LRS_1,LRS_2\}$, but the value read during inference is HRS. This behavior can be caused by physical faults such as the cell’s resistance being stuck at RESET, or the transistor connecting the cell to its crosspoint being stuck at zero due to circuit aging. Alternatively, a read disturbance may occur during or after a read operation: the correct value is read out, but the cell’s state then becomes HRS due to an abrupt change in its conductance [9].
- Type 2: Suppose ${w}_{l}$ was set to $LRS_1$ (or to $LRS_2$), but the value read during inference is $LRS_2$ (respectively, $LRS_1$). This behavior occurs when an NVM cell previously stored $LRS_1$ ($LRS_2$) and the attempt to now store $LRS_2$ ($LRS_1$) does not succeed due to a stuck-at-SET fault affecting the cell.

## 4. Overview of BIST

1. Seed the pseudorandom pattern generator.
2. Initialize the set S of uncovered Type 1 and Type 2 faults; neural weights are assumed to be susceptible to one fault of each type as per our fault model.
3. Generate a pseudorandom test pattern t.
4. Simulate the DUT and calculate the fault coverage in terms of the numbers of Type 1 and Type 2 faults detected by t.
5. Remove the faults covered by test t from S.
6. If the fault coverage is deemed adequate or the testing budget is exhausted, stop. Else, return to Step 3.
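The steps above can be sketched as a simple loop. Here `generate_test` and `simulate` are placeholders for the TPG and fault-simulation procedures; the seed value and function names are illustrative:

```python
import random

def run_bist(fault_universe, generate_test, simulate, budget, target_coverage):
    """Sketch of the BIST flow: generate pseudorandom tests until fault
    coverage is adequate or the testing budget is exhausted."""
    random.seed(0xBEEF)                    # Step 1: seed the pattern generator
    uncovered = set(fault_universe)        # Step 2: initialize S
    coverage = 0.0
    for _ in range(budget):
        t = generate_test()                # Step 3: generate test pattern t
        detected = simulate(t, uncovered)  # Step 4: simulate DUT, find faults detected by t
        uncovered -= detected              # Step 5: remove covered faults from S
        coverage = 1 - len(uncovered) / len(fault_universe)
        if coverage >= target_coverage:    # Step 6: stop when coverage is adequate
            break
    return coverage
```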

- The pattern generator contains logic to generate the pseudorandom test patterns, supplied to the hardware as 2D images. Three types of test patterns are generated: unstructured patterns in which pixel intensity values are chosen from either normal or uniform distributions; structured patterns that mimic edges and contours; and patterns from template images which capture chrominance information.
- The DUT is the quantized model obtained using the procedure described in Section 3. The ternary weights are mapped to the underlying crossbar architecture.
- Because the DUT is trained to classify the input into one of k labels, the response for each test consists of a one-hot-encoded predicted label wherein exactly one out of k output bits is set to 1. The signature generator compresses these bit patterns into a signature using cyclic redundancy checking (CRC) [27]. The signature generated per output line is compared to a previously calculated fault-free signature.
- The controller sequences and schedules tests. Control can be tied to a system reset so that the BIST occurs during system start-up or shutdown. The BIST can also be carried out when the system is idle, with the process being interruptible any time so that normal operation can resume.
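The response-compaction idea behind the signature generator can be sketched as follows. Python's `zlib.crc32` stands in for the hardware CRC logic of [27], and the function names are illustrative:

```python
import zlib

def one_hot(label, k=10):
    """Encode a predicted label as a k-bit one-hot string."""
    bits = ['0'] * k
    bits[label] = '1'
    return ''.join(bits)

def compact(responses, k=10):
    """Compress a stream of one-hot predicted labels into a CRC-32 signature."""
    stream = ''.join(one_hot(r, k) for r in responses).encode()
    return zlib.crc32(stream)

golden = compact([3, 1, 4, 1, 5])   # fault-free reference signature
faulty = compact([3, 1, 4, 2, 5])   # one response misclassified by a fault
print(faulty != golden)             # a signature mismatch flags the fault
```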

## 5. Test-Pattern Generation

#### 5.1. Pseudorandom Testing

#### 5.2. Testing Using Structured Patterns

- The $[0,f]$ primitive sets the intensity value of the current pixel to zero and that of its neighbor to f.
- The $[f,0]$ primitive sets the value of the current pixel to f and that of its neighbor to zero.
- The $[f,f]$ primitive sets values of both the current and neighbor pixels to f.
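A minimal sketch of building a structured pattern by tiling these two-pixel primitives across an image; the tiling order and function names are illustrative assumptions:

```python
import numpy as np

def apply_primitive(row, pos, primitive, f):
    """Write a two-pixel primitive at column `pos`:
    ('0','f') -> (0, f), ('f','0') -> (f, 0), ('f','f') -> (f, f)."""
    a, b = primitive
    row[pos] = f if a == 'f' else 0
    row[pos + 1] = f if b == 'f' else 0

def structured_pattern(h, w, primitives, f=1.0):
    """Tile an h-by-w image with the given primitives, cycling through
    them to mimic edge/contour-like features."""
    img = np.zeros((h, w))
    for r in range(h):
        for c in range(0, w - 1, 2):
            prim = primitives[(r * (w // 2) + c // 2) % len(primitives)]
            apply_primitive(img[r], c, prim, f)
    return img

# A 4x6 pattern cycling through the three primitives.
img = structured_pattern(4, 6, [('0', 'f'), ('f', '0'), ('f', 'f')])
```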

#### 5.3. Testing Using Template Images

**Algorithm 1:** TPG for color images using templates.

```
TP ← { }                        /* Initialize test-pattern set */
for j = 1 : C do
    for template in x^(C_j) do
        angle ← random(0, 360)
        i ← random(1, k)
        I ← τ_i(template, angle)
        TP ← TP ∪ I             /* Add new pattern to test set */
    end for
end for
```
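Algorithm 1 can be rendered in Python as follows. The transform callables stand in for the geometric transformations $\tau_i$ (flips, rotations, affine warps); the seed and all names are illustrative:

```python
import random

def tpg_templates(templates_by_class, transforms, seed=42):
    """Sketch of Algorithm 1: for every class template, draw a random
    rotation angle, pick a random geometric transform tau_i, and add the
    transformed template to the test-pattern set TP."""
    random.seed(seed)
    test_patterns = []                                   # TP <- {}
    for cls, templates in templates_by_class.items():    # for j = 1 : C
        for template in templates:
            angle = random.uniform(0, 360)               # angle <- random(0, 360)
            tau = random.choice(transforms)              # i <- random(1, k)
            test_patterns.append(tau(template, angle))   # I <- tau_i(template, angle)
    return test_patterns
```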

#### 5.4. Test Sequencing

- Initialize the status of all faults to be uncovered.
- Generate N ND-based tests. Obtain the fault-coverage curve and find the point on this curve, say after ${n}_{1}$ tests have been applied, after which coverage levels off. Mark the faults detected up to this point as covered.
- Generate $N-{n}_{1}$ structured patterns and obtain the coverage for the remaining uncovered faults. Determine the point, after ${n}_{2}$ tests have been applied, at which coverage levels off and mark the detected faults as covered.
- Generate $N-{n}_{1}-{n}_{2}$ UD-based tests for the remaining uncovered faults. To reduce the test-set size, we can stop before exhausting the testing budget, when the coverage achieved by these tests stagnates.
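The level-off ("knee") detection used by this sequencing can be sketched as below; the threshold `eps` and both function names are assumed parameters for illustration, not values from the paper:

```python
def knee_point(coverage_curve, eps=0.01):
    """Index after which the fault-coverage curve levels off, i.e. the
    per-test improvement drops below `eps` (an assumed threshold)."""
    for n in range(1, len(coverage_curve)):
        if coverage_curve[n] - coverage_curve[n - 1] < eps:
            return n
    return len(coverage_curve)

def sequence_tests(nd_curve, sp_curve, ud_curve, N):
    """Split a budget of N tests across the three phases: ND-based tests
    up to n1, structured patterns up to n2, then UD-based tests."""
    n1 = knee_point(nd_curve)                    # ND phase
    n2 = knee_point(sp_curve)                    # structured-pattern phase
    n3 = min(knee_point(ud_curve), N - n1 - n2)  # UD phase, within budget
    return n1, n2, n3
```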

## 6. Performance Analysis

#### 6.1. Results for Grayscale Images

#### 6.2. Results for Color Images

#### 6.3. Fault Coverage in the Presence of Multiple Faults

- Transitions to larger synaptic weights ($\uparrow \uparrow $/$\uparrow \uparrow \uparrow $). Each potential fault site is set to a higher synaptic weight value than the original value. That is, $-w\to 0$ or $-w\to +W$.
- Transitions to smaller synaptic weights ($\downarrow \downarrow $/$\downarrow \downarrow \downarrow $). Each potential fault site is set to a lower synaptic weight value than the original value. That is, $+w\to 0$ or $+w\to -W$.
- Mixed Transitions. For double faults, one of the potential fault sites is set to a higher synaptic weight value, whereas the other is set to a lower value. That is, $-{w}_{a}\to \{0,+W\}$ and $+{w}_{b}\to \{0,-W\}$, where ${w}_{a}$ and ${w}_{b}$ are the fault sites involved. For triple faults, we consider all eight combinations involving weights ${w}_{a}$, ${w}_{b}$, and ${w}_{c}$, where ${w}_{c}$ is the third site.

- Transitions to larger synaptic weights ($\uparrow \uparrow $/$\uparrow \uparrow \uparrow $). For a test pattern, assume that the inputs from the penultimate layer are $\{{x}_{1},{x}_{2},{x}_{3},{x}_{4},{x}_{5},{x}_{6}\}=\{0.08,0.15,0.10,0.12,0.07,0.30\}$ and the trained weights leading to an output neuron are $\{{w}_{1},{w}_{2},{w}_{3},{w}_{4},{w}_{5},{w}_{6}\}=\{1,-1,-1,1,-1,-1\}$. The dot product $\mathbf{x}\cdot {\mathbf{w}}^{T}=-0.42$, which when passed through the sigmoid function $1/(1+{e}^{-\mathbf{x}\cdot {\mathbf{w}}^{T}})$ results in a probabilistic value of $p<0.5$. Suppose a double fault causes both ${w}_{3}$ and ${w}_{5}$ to transition to 1. The dot product becomes $-0.08$, which also leads to $p<0.5$; hence, the test is not misclassified and the fault goes undetected. Now, consider a triple fault which causes ${w}_{3}$, ${w}_{5}$, and ${w}_{6}$ to transition to 1. The dot product is $0.52$, resulting in $p>0.5$. This leads to a misclassification and, therefore, detection.
- Transitions to smaller synaptic weights ($\downarrow \downarrow $/$\downarrow \downarrow \downarrow $). For a test pattern, assume that the inputs from the penultimate layer are $\{{x}_{1},{x}_{2},{x}_{3},{x}_{4},{x}_{5},{x}_{6}\}=\{0.30,0.10,0.25,0.10,0.21,0.10\}$ and the trained weights leading to an output neuron are $\{{w}_{1},{w}_{2},{w}_{3},{w}_{4},{w}_{5},{w}_{6}\}=\{1,1,-1,-1,1,1\}$. The dot product is $0.36$, leading to $p>0.5$. Suppose a double fault affects ${w}_{2}$ and ${w}_{6}$, flipping both to 0. The dot product becomes $0.16$, still leading to $p>0.5$, so the test is not misclassified. However, if a triple fault flips ${w}_{2}$, ${w}_{5}$, and ${w}_{6}$ to 0, the dot product is $-0.05$ and $p<0.5$. This leads to the test being misclassified and the fault being detected.
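The arithmetic in both examples can be checked directly. The helper `detected` below simply evaluates the neuron's sigmoid output against the 0.5 decision threshold; the names are illustrative:

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def detected(x, w):
    """Output probability for one neuron; a fault is detected when the
    probability crosses the 0.5 decision threshold."""
    return sigmoid(sum(xi * wi for xi, wi in zip(x, w)))

# Example 1: transitions to larger weights.
x = [0.08, 0.15, 0.10, 0.12, 0.07, 0.30]
w = [1, -1, -1, 1, -1, -1]     # fault-free: dot = -0.42, p < 0.5
w_dbl = [1, -1, 1, 1, 1, -1]   # w3, w5 -> 1: dot = -0.08, p < 0.5 (undetected)
w_tpl = [1, -1, 1, 1, 1, 1]    # w3, w5, w6 -> 1: dot = 0.52, p > 0.5 (detected)

# Example 2: transitions to smaller weights.
x2 = [0.30, 0.10, 0.25, 0.10, 0.21, 0.10]
w2 = [1, 1, -1, -1, 1, 1]      # fault-free: dot = 0.36, p > 0.5
w2_tpl = [1, 0, -1, -1, 0, 0]  # w2, w5, w6 -> 0: dot = -0.05, p < 0.5 (detected)

print(detected(x, w), detected(x, w_tpl), detected(x2, w2_tpl))
```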

## 7. Error Detection via Signature Analysis

## 8. Processing and Storage Overhead

## 9. Related Work

## 10. Conclusions

## Author Contributions

## Funding

## Data Availability Statement

## Conflicts of Interest

## References

- Burr, G.W.; Shelby, R.M.; Sebastian, A.; Kim, S.; Kim, S.; Sidler, S.; Virwani, K.; Ishii, M.; Narayanan, P.; Fumarola, A.; et al. Neuromorphic computing using non-volatile memory. Adv. Phys. X **2017**, 2, 89–124.
- Ambrogio, S.; Narayanan, P.; Tsai, H.; Shelby, R.M.; Boybat, I.; Di Nolfo, C.; Sidler, S.; Giordano, M.; Bodini, M.; Farinha, N.C.; et al. Equivalent-accuracy accelerated neural-network training using analogue memory. Nature **2018**, 558, 60–67.
- Mallik, A.; Garbin, D.; Fantini, A.; Rodopoulos, D.; Degraeve, R.; Stuijt, J.; Das, A.K.; Schaafsma, S.; Debacker, P.; Donadio, G.; et al. Design-technology co-optimization for OxRRAM-based synaptic processing unit. In Proceedings of the 2017 Symposium on VLSI Technology, Kyoto, Japan, 5–8 June 2017.
- Wan, W.; Kubendran, R.; Gao, B.; Joshi, S.; Raina, P.; Wu, H.; Cauwenberghs, G.; Wong, H.P. A Voltage-Mode Sensing Scheme with Differential-Row Weight Mapping for Energy-Efficient RRAM-Based In-Memory Computing. In Proceedings of the 2020 IEEE Symposium on VLSI Technology, Honolulu, HI, USA, 16–19 June 2020.
- Chen, P.Y.; Yu, S. Reliability perspective of resistive synaptic devices on the neuromorphic system performance. In Proceedings of the 2018 IEEE International Reliability Physics Symposium (IRPS), Burlingame, CA, USA, 11–15 March 2018.
- Zhang, J.J.; Gu, T.; Basu, K.; Garg, S. Analyzing and mitigating the impact of permanent faults on a systolic array based neural network accelerator. In Proceedings of the 2018 IEEE 36th VLSI Test Symposium (VTS), San Francisco, CA, USA, 22–25 June 2018; pp. 1–6.
- Kundu, S.; Banerjee, S.; Raha, A.; Natarajan, S.; Basu, K. Toward Functional Safety of Systolic Array-Based Deep Learning Hardware Accelerators. IEEE Trans. Very Large Scale Integr. (VLSI) Syst. **2021**, 29, 485–498.
- Chaudhuri, A.; Liu, M.; Chakrabarty, K. Fault-Tolerant Neuromorphic Computing Systems. In Proceedings of the 2019 IEEE International Test Conference (ITC), Washington, DC, USA, 9–15 November 2019; pp. 1–10.
- Chen, C.Y.; Shih, H.C.; Wu, C.W.; Lin, C.H.; Chiu, P.F.; Sheu, S.S.; Chen, F.T. RRAM Defect Modeling and Failure Analysis Based on March Test and a Novel Squeeze-Search Scheme. IEEE Trans. Comput. **2015**, 64, 180–190.
- Kannan, S.; Rajendran, J.; Karri, R.; Sinanoglu, O. Sneak-Path Testing of Crossbar-Based Nonvolatile Random Access Memories. IEEE Trans. Nanotechnol. **2013**, 12, 413–426.
- Xia, L.; Liu, M.; Ning, X.; Chakrabarty, K.; Wang, Y. Fault-tolerant training with on-line fault detection for RRAM-based neural computing systems. In Proceedings of the Design Automation Conference (DAC), Austin, TX, USA, 18–22 June 2017; pp. 1–6.
- Huang, T.C.; Schroff, J. Precompensation, BIST and Analogue Berger Codes for Self-Healing of Neuromorphic RRAM. In Proceedings of the 2018 IEEE 27th Asian Test Symposium (ATS), Hefei, China, 15–18 October 2018; pp. 173–178.
- Lecun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE **1998**, 86, 2278–2324.
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. In Advances in Neural Information Processing Systems; Pereira, F., Burges, C.J.C., Bottou, L., Weinberger, K.Q., Eds.; 2012; Volume 25. Available online: https://proceedings.neurips.cc/paper/2012/file/c399862d3b9d6b76c8436e924a68c45b-Paper.pdf (accessed on 13 August 2022).
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Liu, C.; Yan, B.; Yang, C.; Song, L.; Li, Z.; Liu, B.; Chen, Y.; Li, H.; Wu, Q.; Jiang, H. A spiking neuromorphic design with resistive crossbar. In Proceedings of the 52nd ACM/EDAC/IEEE Design Automation Conference (DAC), San Francisco, CA, USA, 7–11 June 2015; pp. 1–6.
- Song, S.; Das, A. A Case for Lifetime Reliability-Aware Neuromorphic Computing. In Proceedings of the 63rd IEEE International Midwest Symposium on Circuits and Systems (MWSCAS), Springfield, MA, USA, 9–12 August 2020; pp. 596–598.
- Boukhobza, J.; Rubini, S.; Chen, R.; Shao, Z. Emerging NVM: A Survey on Architectural Integration and Research Challenges. ACM Trans. Des. Autom. Electron. Syst. **2017**, 23, 1–32.
- Titirsha, T.; Song, S.; Das, A.; Krichmar, J.; Dutt, N.; Kandasamy, N.; Catthoor, F. Endurance-Aware Mapping of Spiking Neural Networks to Neuromorphic Hardware. IEEE Trans. Parallel Distrib. Syst. **2022**, 33, 288–301.
- Frankle, J.; Carbin, M. The Lottery Ticket Hypothesis: Training Pruned Neural Networks. In Proceedings of the 7th International Conference on Learning Representations (ICLR), Vancouver, BC, Canada, 30 April–3 May 2018.
- Zhu, C.; Han, S.; Mao, H.; Dally, W.J. Trained Ternary Quantization. arXiv **2016**, arXiv:1612.01064.
- Mishra, A.K.; Chakraborty, M. Does local pruning offer task-specific models to learn effectively? In Proceedings of the Student Research Workshop Associated with RANLP 2021, Online, 1–3 September 2021; pp. 118–125.
- Krishnamoorthi, R. Quantizing deep convolutional networks for efficient inference: A whitepaper. arXiv **2018**, arXiv:1806.08342.
- Prezioso, M.; Merrikh-Bayat, F.; Hoskins, B.D.; Adam, G.C.; Likharev, K.K.; Strukov, D.B. Training and Operation of an Integrated Neuromorphic Network based on Metal-Oxide Memristors. Nature **2015**, 521, 61–64.
- Fouda, M.; Lee, J.; Eltawil, A.; Kurdahi, F. Overcoming Crossbar Nonidealities in Binary Neural Networks through Learning. In Proceedings of the 14th IEEE/ACM International Symposium on Nanoscale Architectures (NANOARCH), Athens, Greece, 17–19 July 2018; pp. 1–3.
- Chen, Y.; Sun, L.; Zhou, Y.; Zewdie, G.M.; Deringer, V.L.; Mazzarello, R.; Zhang, W. Chemical understanding of resistance drift suppression in Ge–Sn–Te phase-change memory materials. J. Mater. Chem. C **2020**, 8, 71–77.
- Abramovici, M.; Breuer, M.A.; Friedman, A.D. Digital Systems Testing and Testable Design; Wiley & Sons: Hoboken, NJ, USA, 1990.
- Levine, L.; Meyers, W. Special feature: Semiconductor memory reliability with error detecting and correcting codes. Computer **1976**, 9, 43–50.
- Patel, J.H.; Fung, L.Y. Concurrent error detection in ALU’s by recomputing with shifted operands. IEEE Trans. Comput. **1982**, 31, 589–595.
- Oh, N.; Shirvani, P.P.; McCluskey, E.J. Error detection by duplicated instructions in super-scalar processors. IEEE Trans. Reliab. **2002**, 51, 63–75.
- Meixner, A.; Bauer, M.E.; Sorin, D. Argus: Low-cost, comprehensive error detection in simple cores. In Proceedings of the 40th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO 2007), Chicago, IL, USA, 1–5 December 2007; pp. 210–222.
- Zhang, M.; Mitra, S.; Mak, T.; Seifert, N.; Wang, N.J.; Shi, Q.; Kim, K.S.; Shanbhag, N.R.; Patel, S.J. Sequential element design with built-in soft error resilience. IEEE Trans. Very Large Scale Integr. (VLSI) Syst. **2006**, 14, 1368–1378.
- Chang, Y.C.; Chiu, C.T.; Lin, S.Y.; Liu, C.K. On the design and analysis of fault tolerant NoC architecture using spare routers. In Proceedings of the 16th Asia and South Pacific Design Automation Conference (ASP-DAC 2011), Yokohama, Japan, 25–28 January 2011; pp. 431–436.
- Tsai, W.C.; Zheng, D.Y.; Chen, S.J.; Hu, Y.H. A fault-tolerant NoC scheme using bidirectional channel. In Proceedings of the 48th Design Automation Conference, San Diego, CA, USA, 5–10 June 2011; pp. 918–923.
- Liu, C.; Hu, M.; Strachan, J.P.; Li, H. Rescuing memristor-based neuromorphic design with high defects. In Proceedings of the 2017 54th ACM/EDAC/IEEE Design Automation Conference (DAC), San Diego, CA, USA, 5–10 June 2017; pp. 1–6.
- Liu, B.; Li, H.; Chen, Y.; Li, X.; Wu, Q.; Huang, T. Vortex: Variation-aware training for memristor X-bar. In Proceedings of the 52nd Annual Design Automation Conference, San Francisco, CA, USA, 7–11 June 2015; pp. 1–6.
- Yeo, I.; Chu, M.; Gi, S.G.; Hwang, H.; Lee, B.G. Stuck-at-fault tolerant schemes for memristor crossbar array-based neural networks. IEEE Trans. Electron Devices **2019**, 66, 2937–2945.
- Ham, S.J.; Mo, H.S.; Min, K.S. Low-Power $V_{DD}/3$ Write Scheme With Inversion Coding Circuit for Complementary Memristor Array. IEEE Trans. Nanotechnol. **2013**, 12, 851–857.
- Zhang, J.J.; Basu, K.; Garg, S. Fault-tolerant systolic array based accelerators for deep neural network execution. IEEE Des. Test **2019**, 36, 44–53.
- Aggarwal, C.C.; Hinneburg, A.; Keim, D.A. On the surprising behavior of distance metrics in high dimensional space. In Proceedings of the International Conference on Database Theory, London, UK, 4–6 January 2001; Springer: Berlin/Heidelberg, Germany, 2001; pp. 420–434.
- LeCun, Y. The MNIST Database of Handwritten Digits. 1998. Available online: http://yann.lecun.com/exdb/mnist/ (accessed on 13 August 2022).
- Xiao, H.; Rasul, K.; Vollgraf, R. Fashion-MNIST: A novel image dataset for benchmarking machine learning algorithms. arXiv **2017**, arXiv:1708.07747.
- Krizhevsky, A.; Hinton, G. Learning Multiple Layers of Features from Tiny Images; Technical Report; University of Toronto: Toronto, ON, Canada, 2009.

**Figure 1.**Crossbar organization showing the top and bottom electrodes. Each synaptic cell consists of an NVM device (resistive element) and an access transistor. The NVM device can be implemented using technologies such as PCM or OxRAM.

**Figure 4.** Images in (**a**,**c**) visualize tests generated using uniform and normal distributions, respectively. Here, pixels are colored using a heat map in which smaller values appear darker. Graphs in (**b**,**d**) count the output labels predicted by the DNNs when supplied with test patterns from ${T}_{\mathrm{UD}}$ and ${T}_{\mathrm{ND}}$, respectively. (**a**) A sample test from ${T}_{\mathrm{UD}}$. (**b**) Labels predicted by DNNs trained on FMNIST for tests $\in {T}_{\mathrm{UD}}$. (**c**) A sample test from ${T}_{\mathrm{ND}}$. (**d**) Labels predicted by DNNs trained on FMNIST for tests $\in {T}_{\mathrm{ND}}$.

**Figure 5.** Standardized pixel values lie within one $\sigma$ of a normal distribution centered at $\mu = 0$.

**Figure 6.** Generation of structured patterns and examples. (**a**) Structured pattern constructed by repeating the $[0,f]$, $[f,0]$, and $[f,f]$ primitive patterns. (**b**,**c**) Examples of structured patterns.

**Figure 7.** Examples of color images generated using templates after applying the various geometric transformations. (**a**) Original image. (**b**) Vertical flip. (**c**) Horizontal flip + affine. (**d**) Rotation + affine. (**e**) Rotation + horizontal flip. (**f**) Rotation + horizontal flip + affine.

**Figure 8.**Fault coverage achieved via test sequencing versus the baseline for CNN architectures trained on FMNIST.

**Figure 9.** Fault coverage achieved using templates versus baselines for AlexNet and ResNet-18 trained on CIFAR10.

**Table 1.** The number of neurons within the fully connected (FC) layers, output (OUT) layer, pooling (P) layers, and convolutional (CONV) layers for the ANN-3 and CNN-2 architectures. For CNN-2, MAX_P1 and MAX_P2 refer to the first and second max-pooling layers, respectively. The number of CONV, P, FC, and OUT layers is provided for the LeNet-5, AlexNet, and ResNet-18 architectures.

| DNN | Architecture |
|---|---|
| ANN-3 | FC1 (128) $\to$ FC2 (128) $\to$ OUT (10) |
| CNN-2 | CONV1 (16) $\to$ MAX_P1 $\to$ CONV2 (32) $\to$ MAX_P2 $\to$ FLATTEN $\to$ OUT (10) |
| LeNet-5 [13] | 2 (CONV), 2 (P), 2 (FC), 1 (OUT) |
| AlexNet [14] | 5 (CONV), 2 (P), 2 (FC), 1 (OUT) |
| ResNet-18 [15] | 20 (CONV), 2 (P), 1 (OUT) |

**Table 2.**Model size in terms of number of weights and accuracy reported for full-precision and compressed versions of the various DNNs.

| Key Metrics | ANN-3 (MNIST) | CNN-2 (MNIST) | LeNet (MNIST) | AlexNet (MNIST) | ANN-3 (FMNIST) | CNN-2 (FMNIST) | LeNet (FMNIST) | AlexNet (FMNIST) | AlexNet (CIFAR-10) | ResNet (CIFAR-10) |
|---|---|---|---|---|---|---|---|---|---|---|
| Num. weights (full precision) | $1.18\times {10}^{5}$ | $1.28\times {10}^{4}$ | $6.17\times {10}^{4}$ | $2.32\times {10}^{7}$ | $1.18\times {10}^{5}$ | $1.28\times {10}^{4}$ | $6.17\times {10}^{4}$ | $2.32\times {10}^{7}$ | $4.26\times {10}^{6}$ | $11.18\times {10}^{6}$ |
| Num. weights (compressed) | 2081 | 1370 | 1834 | 4438 | 2029 | 1831 | 2524 | 4362 | 15,009 | 44,014 |
| Accuracy (full precision) | 97.26% | 97.40% | 98.71% | 98.74% | 86.03% | 87.64% | 88.76% | 90.01% | 78.84% | 94.85% |
| Accuracy (compressed) | 78.68% | 89.30% | 84.38% | 88.10% | 69.11% | 77.09% | 74.21% | 64.61% | 53.74% | 53.74% |

| Workload | Model | Uniform Dist. (UD) | Normal Dist. (ND) |
|---|---|---|---|
| MNIST | ANN-3 | 34.02% (1416/4162) | 92.04% (3831/4162) |
| MNIST | CNN-2 | 42.22% (1157/2740) | 86.49% (2370/2740) |
| MNIST | LeNet-5 | 36.09% (1324/3668) | 77.99% (2861/3668) |
| MNIST | AlexNet | 52.67% (4577/8689) | 93.72% (8144/8689) |
| FMNIST | ANN-3 | 43.83% (1779/4058) | 95.29% (3867/4058) |
| FMNIST | CNN-2 | 58.90% (2157/3662) | 83.01% (3040/3662) |
| FMNIST | LeNet-5 | 63.03% (3182/5048) | 77.85% (3930/5048) |
| FMNIST | AlexNet | 64.74% (5496/8489) | 91.58% (7775/8489) |
| CIFAR10 | AlexNet | 37.78% (11,304/29,922) | 59.63% (17,845/29,922) |
| CIFAR10 | ResNet-18 | 45.99% (38,840/84,447) | 52.59% (44,412/84,447) |

| Workload | Model | Random (10 k) | Our Approach |
|---|---|---|---|
| MNIST | ANN-3 | 90.02% | 98.94% |
| MNIST | CNN-2 | 96.27% | 92.84% |
| MNIST | LeNet-5 | 94.62% | 92.36% |
| MNIST | AlexNet | 91.16% | 96.78% |
| FMNIST | ANN-3 | 91.16% | 96.78% |
| FMNIST | CNN-2 | 98.77% | 96.55% |
| FMNIST | LeNet-5 | 96.94% | 94.39% |
| FMNIST | AlexNet | 94.31% | 95.18% |
| CIFAR10 | AlexNet | 91.54% | 88.34% |
| CIFAR10 | ResNet-18 | 90.46% | 86.34% |

| Workload | Model | $\uparrow \uparrow $ Transitions | $\downarrow \downarrow $ Transitions | Mixed Transitions |
|---|---|---|---|---|
| MNIST | ANN-3 | 96.23% | 99.01% | 98.53% |
| MNIST | CNN-2 | 94.78% | 95.82% | 95.47% |
| MNIST | LeNet-5 | 91.52% | 95.95% | 94.90% |
| MNIST | AlexNet | 97.85% | 98.71% | 99.75% |
| FMNIST | ANN-3 | 96.21% | 98.27% | 99.74% |
| FMNIST | CNN-2 | 95.46% | 99.09% | 96.94% |
| FMNIST | LeNet-5 | 93.48% | 95.62% | 97.03% |
| FMNIST | AlexNet | 97.03% | 97.12% | 98.30% |
| CIFAR10 | AlexNet | 93.48% | 96.88% | 97.03% |
| CIFAR10 | ResNet-18 | 90.04% | 86.76% | 97.66% |

| Workload | Model | $\uparrow \uparrow \uparrow $ Transitions | $\downarrow \downarrow \downarrow $ Transitions | Mixed Transitions |
|---|---|---|---|---|
| MNIST | ANN-3 | 97.90% | 100.00% | 97.66% |
| MNIST | CNN-2 | 97.08% | 98.01% | 94.45% |
| MNIST | LeNet-5 | 94.43% | 97.80% | 94.05% |
| MNIST | AlexNet | 98.24% | 8.93% | 99.06% |
| FMNIST | ANN-3 | 96.94% | 98.27% | 99.36% |
| FMNIST | CNN-2 | 96.55% | 99.09% | 96.57% |
| FMNIST | LeNet-5 | 95.77% | 97.93% | 95.02% |
| FMNIST | AlexNet | 97.62% | 97.46% | 98.16% |
| CIFAR10 | AlexNet | 95.16% | 98.05% | 96.24% |
| CIFAR10 | ResNet-18 | 91.15% | 87.50% | 98.27% |


© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Mishra, A.K.; Das, A.K.; Kandasamy, N.
Built-In Functional Testing of Analog In-Memory Accelerators for Deep Neural Networks. *Electronics* **2022**, *11*, 2592.
https://doi.org/10.3390/electronics11162592
