# Optimization of Memristor Crossbar’s Mapping Using Lagrange Multiplier Method and Genetic Algorithm for Reducing Crossbar’s Area and Delay Time


## Abstract


## 1. Introduction

## 2. Method

Here, $x_1$, $x_2$, and $x_n$ represent variables of the objective function, and they can be optimized for the function $f$ satisfying the constraint function $g$.

The Lagrangian is differentiated with respect to $x_1$, $x_2$, $x_n$, and $\lambda$. The first part, taking the derivative with respect to $x_1$, $x_2$, and $x_n$, is shown in the following Equation (3).
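The Lagrange step can be illustrated with a small, self-contained sketch. Here we assume, consistent with the definitions in this section, that the objective is the total delay $\sum_i M_i/x_i$ and the constraint fixes the total crossbar count $\sum_i x_i = C$; the values of `M` and `C` below are made-up examples, not the paper's.

```python
import math

# Assumed setup: minimize the total delay sum(M_i / x_i) over all layers
# subject to a fixed total crossbar budget sum(x_i) = C.
def lagrange_allocation(M, C):
    """Closed-form Lagrange solution: setting
    d/dx_i [ sum(M_j/x_j) + lam*(sum(x_j) - C) ] = 0
    gives x_i = sqrt(M_i/lam), i.e. x_i is proportional to sqrt(M_i);
    lam is then fixed by the constraint sum(x_i) = C."""
    s = sum(math.sqrt(m) for m in M)
    return [C * math.sqrt(m) / s for m in M]

# Example: three layers with M = [100, 400, 900] sub-image convolutions
# and a total budget of C = 60 crossbars.
print(lagrange_allocation([100, 400, 900], 60))  # -> [10.0, 20.0, 30.0]
```

Layers needing more sub-image convolutions receive proportionally (by square root) more crossbars, which is what balances the per-layer delays.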

Here, $x_1$, $x_2$, and $x_n$ are the numbers of crossbars of layer #1, layer #2, and layer #n, respectively, and $n$ is the total number of layers in the CNN architecture. $M_1$, $M_2$, and $M_n$ are the numbers of sub-image convolutions required for layer #1, layer #2, and layer #n, respectively. Thus, in Equation (5), $M_i$ and $x_i$ are the numbers of sub-image convolutions and crossbars for layer #i, respectively. If layer #i can use as many crossbars as $M_i$ (i.e., $x_i = M_i$), the crossbar delay time is as short as 1. On the other hand, if $x_i$ is as small as 1, the delay time of the sub-image convolutions for layer #i is as long as $M_i/1$. Each layer's delay time can therefore be calculated as $M_i/x_i$ for layer #i, as shown in Equation (5).
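The per-layer delay relation $M_i/x_i$ described above can be checked in a few lines; the `M` values here are illustrative, not taken from the paper.

```python
def layer_delays(M, x):
    """Delay of layer #i is M[i] / x[i]: M[i] sub-image convolutions
    are shared among x[i] crossbars operating in parallel."""
    return [m / xi for m, xi in zip(M, x)]

M = [784, 196]                        # example sub-image convolution counts
print(layer_delays(M, [784, 196]))    # x_i == M_i -> delay 1.0 per layer
print(layer_delays(M, [1, 1]))        # x_i == 1   -> delay M_i per layer
```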

Here, $x_1$, $x_2$, and $x_n$ are the numbers of crossbars of layer #1, layer #2, and layer #n, respectively, and $n$ is the total number of layers in the CNN architecture. $M_1$, $M_2$, and $M_n$ are the numbers of sub-image convolutions required for layer #1, layer #2, and layer #n, respectively.

Here, $x_1$, $x_2$, and $x_n$ are the numbers of crossbars of layer #1, layer #2, and layer #n, respectively, which are the quantities to be optimized. $n$ is the total number of layers in the CNN architecture. $M_1$, $M_2$, and $M_n$ are the numbers of sub-image convolutions required for layer #1, layer #2, and layer #n, respectively; in general, $M_i$ and $x_i$ are the numbers of sub-image convolutions and crossbars for layer #i.
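The allocation problem these definitions set up can also be searched with a genetic algorithm, which the paper uses alongside the Lagrange multiplier method. The sketch below is a generic integer-coded GA (elitism, uniform crossover, ±1 mutation, and a penalty for exceeding an assumed crossbar budget `C`), not the authors' exact implementation; all parameter values are our assumptions.

```python
import random

def ga_allocate(M, C, pop_size=40, generations=200, seed=0):
    """Minimal GA sketch: evolve integer crossbar counts x (one gene per
    layer) minimizing total delay sum(M[i]/x[i]), with a large penalty
    when the total crossbar count exceeds the budget C."""
    rng = random.Random(seed)
    n = len(M)

    def fitness(x):
        delay = sum(m / xi for m, xi in zip(M, x))
        penalty = 1e6 * max(0, sum(x) - C)   # constraint violation
        return delay + penalty

    pop = [[rng.randint(1, C) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]          # keep the better half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = [rng.choice(g) for g in zip(a, b)]  # uniform crossover
            i = rng.randrange(n)                         # point mutation
            child[i] = max(1, child[i] + rng.choice([-1, 1]))
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

best = ga_allocate([100, 400, 900], 60)
print(best)  # a feasible allocation close to the Lagrange optimum [10, 20, 30]
```

Because the genes are integers from the start, the GA complements the Lagrange result, which is floating-point and must be rounded afterwards.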

From this procedure, $x_1$, $x_2$, and $x_n$ are obtained. Here, $n$ means the number of layers. The optimized values of $x_1$, $x_2$, and $x_n$ are floating-point numbers, not integers. Because the number of crossbars per layer should be an integer, the floating-point values of $x_1$, $x_2$, and $x_n$ should be converted to the integer numbers $x_1^*$, $x_2^*$, and $x_n^*$.

Figure 3b shows the pseudo-code for converting the floating-point numbers $x_1$, $x_2$, and $x_n$, calculated from the Lagrange multiplier method and GA using Equations (1)–(9), to the integer numbers $x_1^*$, $x_2^*$, and $x_n^*$. In the pseudo-code in Figure 3b, first, calculate $x_i$ (floating-point) with Equations (1)–(9) using the Lagrange multiplier method and the GA. Then, iterate over all the layers to find $x_i'$ (integer), which satisfies both the minimum of the objective function and the constraint, by rounding $x_i$ (floating-point) up or down. Next, calculate $R_i^*$ (integer) for layer #i by rounding up $M_i/x_i'$. Here, $M_i$ and $x_i'$ are the numbers of sub-image convolutions and crossbars for layer #i, respectively, and $R_i^*$, calculated by rounding up $M_i/x_i'$, is the number of crossbar operations required to complete the sub-image convolutions for layer #i when $M_i$ and $x_i'$ are given. Finally, obtain $x_i^*$ (integer) by rounding up $M_i/R_i^*$. The $x_i^*$ obtained from the pseudo-code can be regarded as the optimized number of crossbars for layer #i.
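The rounding procedure just described can be sketched as follows. This is our reconstruction of the Figure 3b pseudo-code, not the authors' exact code: step 1 here uses plain nearest-integer rounding, whereas the paper checks both the rounded-up and rounded-down candidates against the objective and the constraint.

```python
import math

def to_integer_allocation(M, x_float):
    """Convert floating-point crossbar counts x_i to integers x_i*.
    For layer #i:
      x_i' : x_i rounded to an integer (kept >= 1)
      R_i* = ceil(M_i / x_i')  -- crossbar operations needed to finish
                                  all M_i sub-image convolutions
      x_i* = ceil(M_i / R_i*)  -- final count; may shrink x_i' when
                                  fewer crossbars finish in the same R_i*
    """
    result = []
    for m, xf in zip(M, x_float):
        xi_prime = max(1, round(xf))
        r_star = math.ceil(m / xi_prime)
        result.append(math.ceil(m / r_star))
    return result

print(to_integer_allocation([100, 400, 900], [10.0, 20.0, 30.0]))  # -> [10, 20, 30]
```

The final step matters when $M_i$ is not divisible by $x_i'$: for example, $M_i = 7$ and $x_i = 2.4$ give $x_i' = 2$, $R_i^* = 4$, and $x_i^* = 2$, so no crossbar is allocated that would sit idle during the last operation.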

## 3. Results

The fabricated memristors use an Mn$_2$O$_3$ switching layer. The memristor cells were measured using the Keithley 4200 (Solon, OH, USA). More information about the fabricated memristors can be found in reference [36].

## 4. Conclusions

## Author Contributions

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Data Availability Statement

## Acknowledgments

## Conflicts of Interest

## References

- Lecun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE
**1998**, 86, 2278–2324. [Google Scholar] [CrossRef] - Tajbakhsh, N.; Shin, J.Y.; Gurudu, S.R.; Hurst, R.T.; Kendall, C.B.; Gotway, M.B.; Liang, J. Convolutional neural networks for medical image analysis: Full training or fine tuning? IEEE Trans. Med. Imaging
**2016**, 35, 1299–1312. [Google Scholar] [CrossRef] [PubMed] - Chauhan, R.; Ghanshala, K.K.; Joshi, R.C. Convolutional neural network (CNN) for image detection and recognition. In Proceedings of the 2018 First International Conference on Secure Cyber Computing and Communication (ICSCCC), Jalandhar, India, 15–17 December 2018. [Google Scholar]
- Gu, J.; Wang, Z.; Kuen, J.; Ma, L.; Shahroudy, A.; Shuai, B.; Liu, T.; Wang, X.; Wang, G.; Cai, J.; et al. Recent advances in convolutional neural networks. Pattern Recognit.
**2018**, 77, 354–377. [Google Scholar] [CrossRef] - O’shea, K.; Nash, R. An introduction to convolutional neural networks. arXiv
**2015**, arXiv:1511.08458. [Google Scholar] - Cai, E.; Juan, D.-C.; Stamoulis, D.; Marculescu, D. Neuralpower: Predict and deploy energy-efficient convolutional neural networks. In Proceedings of the Asian Conference on Machine Learning, Seoul, Republic of Korea, 15−17 November 2017. [Google Scholar]
- Rashid, N.; Demirel, B.U.; Al Faruque, M.A. AHAR: Adaptive CNN for energy-efficient human activity recognition in low-power edge devices. IEEE Internet Things J.
**2022**, 9, 13041–13051. [Google Scholar] [CrossRef] - Hodak, M.; Gorkovenko, M.; Dholakia, A. Towards power efficiency in deep learning on data center hardware. In Proceedings of the 2019 IEEE International Conference on Big Data (Big Data), Los Angeles, CA, USA, 9–12 December 2019. [Google Scholar]
- Li, Y.; Wang, Z.; Midya, R.; Xia, Q.; Yang, J.J. Review of memristor devices in neuromorphic computing: Materials sciences and device challenges. J. Phys. D Appl. Phys.
**2018**, 51, 503002. [Google Scholar] [CrossRef] - Burr, G.W.; Shelby, R.M.; Sebastian, A.; Kim, S.; Kim, S.; Sidler, S.; Virwani, K.; Ishii, M.; Narayanan, P.; Fumarola, A.; et al. Neuromorphic computing using non-volatile memory. Adv. Phys. X
**2016**, 2, 89–124. [Google Scholar] [CrossRef] - Huh, W.; Lee, D.; Lee, C. Memristors based on 2D materials as an artificial synapse for neuromorphic electronics. Adv. Mater.
**2020**, 32, 2002092. [Google Scholar] [CrossRef] [PubMed] - Upadhyay, N.K.; Jiang, H.; Wang, Z.; Asapu, S.; Xia, Q.; Yang, J.J. Emerging memory devices for neuromorphic computing. Adv. Mater. Technol.
**2019**, 4, 1800589. [Google Scholar] [CrossRef] - Sung, C.; Hwang, H.; Yoo, I.K. Perspective: A review on memristive hardware for neuromorphic computation. J. Appl. Phys.
**2018**, 124, 151903. [Google Scholar] [CrossRef] - Chen, J.; Li, J.; Li, Y.; Miao, X. Multiply accumulate operations in memristor crossbar arrays for analog computing. J. Semicond.
**2021**, 42, 013104. [Google Scholar] [CrossRef] - Xia, L.; Gu, P.; Li, B.; Tang, T.; Yin, X.; Huangfu, W.; Yu, S.; Cao, Y.; Wang, Y.; Yang, H. Technological exploration of RRAM crossbar array for matrix-vector multiplication. J. Comput. Sci. Technol.
**2016**, 31, 3–19. [Google Scholar] [CrossRef] - Li, B.; Gu, P.; Shan, Y.; Wang, Y.; Chen, Y.; Yang, H. RRAM-based analog approximate computing. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst.
**2015**, 34, 1905–1917. [Google Scholar] [CrossRef] - Hu, M.; Graves, C.E.; Li, C.; Li, Y.; Ge, N.; Montgomery, E.; Davila, N.; Jiang, H.; Williams, R.S.; Yang, J.J.; et al. Memristor-based analog computation and neural network classification with a dot product engine. Adv. Mater.
**2018**, 30, 1705914. [Google Scholar] [CrossRef] [PubMed] - Xiao, Y.; Jiang, B.; Zhang, Z.; Ke, S.; Jin, Y.; Wen, X.; Ye, C.; Chen, X. A review of memristor: Material and structure design, device performance, applications and prospects. Sci. Technol. Adv. Mater.
**2023**, 24, 2162323. [Google Scholar] [CrossRef] [PubMed] - Strukov, D.B.; Snider, G.S.; Stewart, D.R. The missing memristor found. Nature
**2008**, 453, 80–83. [Google Scholar] [CrossRef] [PubMed] - Ielmini, D. Resistive switching memories based on metal oxides: Mechanisms, reliability and scaling. Semicond. Sci. Technol.
**2016**, 31, 063002. [Google Scholar] [CrossRef] - Mohammad, B.; Jaoude, M.A.; Kumar, V.; Al Homouz, D.M.; Nahla, H.A.; Al-Qutayri, M.; Christoforou, N. State of the art of metal oxide memristor devices. Nanotechnol. Rev.
**2016**, 5, 311–329. [Google Scholar] [CrossRef] - Oh, S.; An, J.; Min, K. Area-Efficient Mapping of Convolutional Neural Networks to Memristor Crossbars Using Sub-Image Partitioning. Micromachines
**2023**, 14, 309. [Google Scholar] [CrossRef] - Peng, X.; Liu, R.; Yu, S. Optimizing weight mapping and data flow for convolutional neural networks on RRAM based processing-in-memory architecture. In Proceedings of the 2019 IEEE International Symposium on Circuits and Systems (ISCAS), Sapporo, Japan, 26–29 May 2019. [Google Scholar]
- Murali, G.; Sun, X.; Yu, S.; Lim, S.K. Heterogeneous mixed-signal monolithic 3-D in-memory computing using resistive RAM. IEEE Trans. Very Large Scale Integr. (VLSI) Syst.
**2020**, 29, 386–396. [Google Scholar] [CrossRef] - Sah, M.P.; Yang, C.; Kim, H.; Muthuswamy, B.; Jevtic, J.; Chua, L. A generic model of memristors with parasitic components. IEEE Trans. Circuits Syst. I Regul. Pap.
**2015**, 62, 891–898. [Google Scholar] [CrossRef] - Nguyen, T.V.; An, J.; Min, K. Memristor-cmos hybrid neuron circuit with nonideal-effect correction related to parasitic resistance for binary-memristor-crossbar neural networks. Micromachines
**2021**, 12, 791. [Google Scholar] [CrossRef] [PubMed] - LeCun, Y. The MNIST Database of Handwritten Digits. 1998. Available online: http://yann.lecun.com/exdb/mnist/ (accessed on 12 July 2024).
- Krizhevsky, A. Learning Multiple Layers of Features from Tiny Images. (Master’s Thesis, University of Toronto). Available online: https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf (accessed on 12 July 2024).
- Tran, H.G.; Ton-That, L.; Thao, N.G.M. Lagrange Multiplier-Based Optimization for Hybrid Energy Management System with Renewable Energy Sources and Electric Vehicles. Electronics
**2023**, 12, 4513. [Google Scholar] [CrossRef] - Everett, H., III. Generalized Lagrange multiplier method for solving problems of optimum allocation of resources. Oper. Res.
**1963**, 11, 399–417. [Google Scholar] [CrossRef] - Lagrange Multiplier. Wikipedia, The Free Encyclopedia. Available online: https://en.wikipedia.org/wiki/Lagrange_multiplier (accessed on 8 July 2024).
- Schmitt, L.M. Theory of genetic algorithms. Theor. Comput. Sci.
**2001**, 259, 1–61. [Google Scholar] [CrossRef] - Mirjalili, S.; Mirjalili, S. Genetic algorithm. In Evolutionary Algorithms and Neural Networks: Theory and Applications; 2019; pp. 43–55. Available online: https://link.springer.com/chapter/10.1007/978-3-319-93025-1_4 (accessed on 12 July 2024).
- Lambora, A.; Gupta, K.; Chopra, K. Genetic algorithm—A literature review. In Proceedings of the 2019 International Conference on Machine Learning, Big Data, Cloud and Parallel Computing (COMITCon), Faridabad, India, 14–16 February 2019. [Google Scholar]
- Cho, S.-M.; Oh, S.; Yoon, R.; Min, K.-S. Compact Verilog-A Model of Current-Voltage and Transient Behaviors of Memristors for Fast Circuit Simulation. J. IKEEE
**2023**, 27, 180–186. [Google Scholar] - Yang, M.K.; Kim, G.H. Post-Annealing Effect on Resistive Switching Performance of a Ta/Mn
_{2}O_{3}/Pt/Ti Stacked Device. Phys. Status Solidi (RRL)-Rapid Res. Lett.**2018**, 12, 1800031. [Google Scholar] [CrossRef] - Truong, S.N.; Van Pham, K.; Yang, W.; Shin, S.; Pedrotti, K.; Min, K.-S. New pulse amplitude modulation for fine tuning of memristor synapses. Microelectron. J.
**2016**, 55, 162–168. [Google Scholar] [CrossRef]

**Figure 1.** Mapping of convolution operation to memristor crossbar. (**a**) 28 × 28 MNIST image with 3 × 3 convolution kernel. (**b**) Memristor crossbar performing convolution operation [22].

**Figure 3.** (**a**) The flowchart for optimizing the mapping of sub-image convolutions to memristor crossbars. (**b**) The pseudo-code for converting the floating-point x to the integer x*.

**Figure 5.**Normalized delay-time comparison between the crossbar mapping without and with optimization for the normalized area constraint.

**Figure 6.**Normalized area comparison between the crossbar mapping without and with optimization for the normalized delay-time constraint.

**Figure 7.**Normalized area–delay-time product comparison between the crossbar mapping without and with optimization. Here, the optimization is performed without any constraint.

**Table 1.**Comparison of average reduction percentages of normalized delay time and area for ResNet-18, ResNet-34, and VGG-Net.

| | ResNet-18 | ResNet-34 | VGG-Net |
|---|---|---|---|
| (Case 1) Average reduction percentage of normalized delay time | 8% | 11.8% | 20% |
| (Case 2) Average reduction percentage of normalized area | 11% | 14% | 22% |

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Cho, S.-M.; Yoon, R.; Yoon, I.; Moon, J.; Oh, S.; Min, K.-S.
Optimization of Memristor Crossbar’s Mapping Using Lagrange Multiplier Method and Genetic Algorithm for Reducing Crossbar’s Area and Delay Time. *Information* **2024**, *15*, 409.
https://doi.org/10.3390/info15070409
