Article

Optimization of 6T-SRAM Cell Based on CNN-Informed NSGA-II with Consideration of Parasitic Resistance

1 School of Advanced Technology, Xi’an Jiaotong-Liverpool University, Suzhou 215123, China
2 Department of Electrical Engineering and Electronics, University of Liverpool, Liverpool L69 3GJ, UK
* Author to whom correspondence should be addressed.
Electronics 2025, 14(20), 4002; https://doi.org/10.3390/electronics14204002
Submission received: 6 September 2025 / Revised: 3 October 2025 / Accepted: 9 October 2025 / Published: 13 October 2025

Abstract

Optimizing static random-access memory (SRAM) cells requires considering parasitic effects, as their impact on circuits at advanced nodes becomes increasingly complex. In this paper, a Convolutional Neural Network-Informed Non-dominated Sorting Genetic Algorithm II (CNN-Informed NSGA-II) is proposed to optimize 7 nm FinFET 6T-SRAM cells while taking parasitic resistance into account. CNN-Informed NSGA-II integrates a trained CNN model into the conventional NSGA-II, thereby reducing its computational complexity. This approach provides a generally applicable solution that significantly improves optimization efficiency while balancing competing performance metrics. Compared to the ideal (parasitic-free) 6T-SRAM cell design, the optimized 6T-SRAM cell design (considering parasitic effects) achieves a reduction of 81.60% in Write Dynamic Power and 64.65% in Write Time; HSNM and RSNM are improved by 11.92% and 6.42%, respectively. The optimized 7 nm FinFET 6T-SRAM cell structure presented in this paper outperforms the parasitic-free structure on the performance parameters above, even when parasitic effects are taken into account.

1. Introduction

SRAM is widely used in memory caches due to its high read/write speed [1]. As circuit dimensions shrink and device density increases, the impact of parasitic effects becomes increasingly apparent and can even cause read and write faults in SRAM cells [2]. Maddela et al. [3] conducted detailed studies on the different kinds of faults caused by parasitic effects in SRAM at different technology nodes. Bhoj and Jha [4] analyzed the topology of 22 nm 6T FinFET SRAM and explored the effect of parasitic capacitance on its performance, showing that symmetric gate-workfunction 6T FinFET SRAMs have better dynamic write-ability and DC metrics. Thus, parasitic effects need to be considered when modeling SRAM at advanced nodes. Gupta et al. [5] studied the effects of different device structures on parasitic resistance and capacitance by introducing a novel device, the complementary field-effect transistor, at the 7 nm node. Due to their three-dimensional structure, 7 nm FinFETs exhibit gate capacitance together with parasitic capacitance and resistance; parasitic effects must therefore be analyzed in device and circuit design at advanced FinFET nodes [6]. Rather than waiting for post-layout simulation, Luo et al. [7,8] considered parasitic capacitance and parasitic resistance in the pre-layout simulation stage. This measure helps to analyze the impact of parasitic parameter values on the performance of SRAM cells, so that the layout can be optimized accordingly. However, some parasitic parameters have competing effects on specific performance metrics. To balance the various performance metrics of SRAM cells quickly, a faster method is required to analyze multiple parasitic parameters.
Low-cost machine learning (ML) models enable fast prediction of circuit parameters. Jeon et al. [9] focused on parasitic parameters and accelerated the extraction of SRAM’s RC netlist through Bayesian optimization. Both Bouhlila et al. [10] and Sudharsan et al. [11] applied ML models to the rapid analysis and optimization of SRAM cell performance metrics. In particular, in a recent study, Shen et al. [12] used Graph Neural Network (GNN) classifiers and Multi-Layer Perceptron (MLP) regressors to perform parasitic extraction on SRAM in the pre-layout simulation stage, accelerating the parasitic extraction process. However, performance metrics such as the power consumption and read/write times of SRAM cells often conflict with one another [13]. For example, designs with larger cell areas have higher static noise margins (SNMs) but incur higher power consumption and larger latency. Genetic algorithms, which mimic natural evolution, have been proposed to obtain optimal solutions for multi-objective optimization problems [14]. Zhang et al. [15] applied a genetic algorithm to optimize various wearout-related parameters of SRAM cells. Some parasitic resistances in SRAM cells have competing impacts on different performance metrics [7], yet no prior work has balanced parasitic-related performance metrics through genetic algorithms. This paper focuses on parasitic resistance and uses an improved genetic algorithm to optimize the performance of SRAM cells.
This paper improves the algorithm by combining NSGA-II with a trained CNN model and proposes CNN-Informed NSGA-II for higher efficiency. The algorithm achieves high computational efficiency even under large population sizes and strict constraints. The parasitic resistances between the key nodes and the transistor ports of a 7 nm FinFET 6T-SRAM cell are analyzed in this paper. Even with parasitic effects included, the optimized SRAM cell’s performance remains superior to that of the parasitic-free 7 nm FinFET 6T-SRAM. CNN-informed NSGA-II significantly enhances the efficiency of multi-objective optimization for SRAM cells.

2. Methodology

This paper proposes a method for the rapid analysis and optimization of the parasitic resistance of SRAM cells. Figure 1 outlines the methodology. First, in Section 2.1, HSPICE simulations are used to analyze how the parasitic resistances between the key nodes and the transistor ports of a 7 nm FinFET 6T-SRAM cell affect its performance, in order to determine which resistances need to be optimized. Multiple HSPICE simulations are then conducted over these resistances to generate a dataset. In Section 2.2, this dataset is used to train several neural network models, among which a one-dimensional CNN model performs best. In Section 2.3, the best-trained CNN model is integrated into the conventional NSGA-II to obtain Pareto fronts over different resistance configurations. CNN-informed NSGA-II is used to find an appropriate resistance configuration based on the performance metrics of the 6T-SRAM cell, and the selected Pareto-optimal solution is used to design the 6T-SRAM cell.

2.1. Analysis of Parasitic Resistances in 6T-SRAM and Generation of Dataset

Usually, parasitic effects are considered during post-layout simulation, after which the circuit layout can be optimized. This process, however, is expensive and time-consuming. Analyzing parasitic effects during pre-layout circuit simulation and designing the layout around optimization goals for specific parasitic parameters can balance the various performance parameters of SRAM cells. Several studies have investigated parasitic extraction for 6T-SRAM cell circuits; parasitic capacitance has been shown to degrade all performance parameters of SRAM cells [7,8]. Parasitic resistance, in contrast, affects different SRAM performance metrics in opposing ways, so this paper mainly analyzes and balances parasitic resistance. The parasitic resistance networks are shown in Figure 2, which illustrates the parasitic resistances between the key nodes of the 6T-SRAM cell (storage nodes O1/O2, power supply (VDD), ground (VSS), bit line and bit line bar (BL/BLB), and word line (WL)) and the transistor ports, while Table 1 classifies these parasitic resistances.
A 7 nm FinFET 6T-SRAM circuit schematic was developed in Cadence Virtuoso (6.1.8) using the ASAP7 PDK library [16]. The PU:PD:AX ratio was set to 1:5:2 to obtain SRAM cells with high stability [17] and high read/write speed [18], and the supply voltage (VDD) was set to 0.7 V [19]. Transient and DC simulations of the 6T-SRAM cell in the hold, read, and write states were then conducted using HSPICE. In the relevant experiments, the parasitic resistances between the key nodes of the 6T-SRAM cell were extracted [7]. To reflect realistic conditions and allow NSGA-II to search the parameter space in sufficient detail, the value of each resistance type was varied uniformly from 0 to 25 kΩ. Each type of parasitic resistance listed in Table 1 was swept in this way to investigate the impact of the dominant parasitic effects on the performance metrics of 6T-SRAM cells.
The simulation results in Figure 3 show how eight performance metrics of the 6T-SRAM cell vary as each parasitic resistance is individually swept from 0 to 25 kΩ. The performance metrics are Write Dynamic Power (average power in transient simulation, with the SRAM cell operating at 250 kHz), Write Time (time from the 50% point of the rising edge of WL to the 90% point of the falling edge), Write/Read Peak-to-Peak Power (the difference between the maximum and minimum values of the power curve during DC simulation), Write/Read Average Power (the average value of the power curve during DC simulation), and Hold/Read Static Noise Margin (H/RSNM, where the SNM is defined as the diagonal length of the largest inner square in the butterfly curve). Notably, in the hold state, the WL voltage is 0 and the access transistors are cut off, so the associated parasitic resistances are not considered. The simulation results in Figure 3 include some counterintuitive trends: increasing several types of parasitic resistance (Rbax, Roax, and Ropd) can decrease power. These phenomena have multiple physical causes. For example, an increase in parasitic resistance prolongs the charging and discharging times of the storage nodes, which in turn lowers the average power within one period. An increase in parasitic resistance can also redistribute the current in the circuit, significantly reducing the current in high-power branches and potentially lowering the overall power consumption of the circuit.
Analyzing such phenomena typically requires substantial domain expertise and many simulations in order to balance the power consumption of SRAM cells against their other performance metrics (SNM and delay). This paper therefore combines machine learning with a genetic algorithm to quickly find suitable parameters, providing guidance for the layout of SRAM cells. The optimization guidance for each resistance type is recorded in Table 2. Based on Table 2, Rbax, Roax, and Rpds were selected to balance the performance metrics of the read and write states; a detailed analysis of these three resistance types is conducted in the following sections.
During dataset generation, the PVT (Process, Voltage, Temperature) condition was the standard process corner, with a voltage of 0.7 V and a temperature of 27 °C. The values of the three parasitic resistance types (Rbax, Roax, and Rpds) were varied from 0 to 25 kΩ in HSPICE simulations. After removing samples that failed to write “0” (due to parasitic effects), 5604 samples remained; these form the dataset used to train the neural network models. Considering only the read and write states, each sample’s input features are the values of Rbax, Roax, and Rpds, while the output features comprise seven performance metrics: Write Peak-to-Peak Power, Write Average Power, Write Dynamic Power, Write Time, Read Peak-to-Peak Power, Read Average Power, and RSNM.
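As an illustration, the short sketch below assembles such a dataset from tabulated sweep results. The file name, the write-failure flag, and the column names are hypothetical stand-ins; the paper does not specify its data format.

```python
import numpy as np
import pandas as pd

# Hypothetical tabulated HSPICE sweep: one row per (Rbax, Roax, Rpds)
# configuration with the seven simulated performance metrics.
df = pd.read_csv("hspice_sweep_results.csv")

# Drop configurations in which the cell failed to write "0"
# (flagged here by a hypothetical 'write_ok' column), leaving 5604 samples.
df = df[df["write_ok"] == 1]

X = df[["Rbax", "Roax", "Rpds"]].to_numpy(dtype=np.float32)
y = df[["wr_pp_power", "wr_avg_power", "wr_dyn_power", "wr_time",
        "rd_pp_power", "rd_avg_power", "rsnm"]].to_numpy(dtype=np.float32)
```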

2.2. The Training of the Neural Networks

In this section, a one-dimensional convolutional neural network (1D-CNN) was trained to predict the performance metrics from the parasitic resistances. Of the 5604 samples in the dataset, 5% were held out for validation; the remaining data were split into 75% for training and 25% for testing. In the preprocessing stage, MinMaxScaler was applied to both the inputs and the outputs. The hyperparameters of each neural network model were determined through experimentation and tuning. The three neural network models in this paper were implemented in PyTorch (2.3.0) [20] with its cuDNN backend, and the Adam optimizer [21] was employed with a learning rate of 0.001. The loss function was defined as the mean squared error (MSE) combined with an L2 regularization term, which can accelerate model convergence:
$$\mathrm{Loss} = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2 + \lambda \sum_{p \,\in\, \mathrm{parameters}} p^2$$
where $\lambda$ is the regularization strength, set to 0.00001, and $\mathrm{parameters}$ denotes the weights and biases of the model. The CNN model was trained for 50 epochs to prevent overfitting.
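A minimal sketch of this preprocessing and loss setup is given below, assuming the arrays X and y from the dataset step above; the order of the splits and the fixed random seed are our assumptions.

```python
import torch
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

# Hold out 5% for validation, then split the rest 75%/25% into
# training and test sets, as described above.
X_rest, X_val, y_rest, y_val = train_test_split(X, y, test_size=0.05, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X_rest, y_rest, test_size=0.25, random_state=0)

# MinMax scaling of both inputs and outputs.
x_scaler, y_scaler = MinMaxScaler().fit(X_train), MinMaxScaler().fit(y_train)
X_train_s, y_train_s = x_scaler.transform(X_train), y_scaler.transform(y_train)

def training_loss(pred, target, model, lam=1e-5):
    # MSE term plus an explicit L2 penalty over all weights and biases,
    # with lambda = 0.00001 as in the equation above.
    mse = torch.mean((pred - target) ** 2)
    l2 = sum(p.pow(2).sum() for p in model.parameters())
    return mse + lam * l2
```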
The core of the 1D-CNN model is a set of filters (convolutional kernels) that slide along the input data [22]. Before the convolution operation, unsqueeze and expand operations expand each sample into three channels, producing a data shape suitable for the 1D-CNN. During the convolution operation, each filter performs a dot product with the local region of the input data it covers:
$$y[n] = b + \sum_{i=1}^{C_{\mathrm{in}}} \left(x_i * w_i\right)[n]$$
where $y[n]$ is the output of the convolution layer, $C_{\mathrm{in}}$ is the number of input channels, $b$ is the bias, $w_i$ is the filter for the $i$-th channel, $x_i$ is the $i$-th channel of the input sequence, and $*$ denotes the sliding dot-product (convolution) operation. Figure 4c shows the structure of the CNN model as 3-cnn1 (filters: 32; kernel size: 3; padding: 1; stride: 1)-maxpool (kernel size: 2; stride: 2)-cnn2 (filters: 64; kernel size: 3; padding: 1; stride: 1)-cnn3 (filters: 128; kernel size: 3; padding: 1; stride: 1)-cnn4 (filters: 256; kernel size: 3; padding: 1; stride: 1)-flatten-fc256-7. It has four convolutional layers, with a max-pooling layer after the first convolutional layer and a flatten layer and a fully connected layer after the fourth. The activation function of the four convolutional layers is ReLU. In this multi-channel 1D-CNN, each filter produces a different feature map, which allows features to be extracted comprehensively and improves the performance of the 1D-CNN [23].
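The PyTorch sketch below is our reading of this architecture string; the exact unsqueeze/expand handling of the three-resistance input is an assumption.

```python
import torch
import torch.nn as nn

class SRAMCNN(nn.Module):
    # 3-cnn1(32)-maxpool-cnn2(64)-cnn3(128)-cnn4(256)-flatten-fc256-7,
    # read as: four ReLU conv layers, flatten to 256 features, then one
    # fully connected layer mapping to the 7 performance metrics.
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(3, 32, kernel_size=3, padding=1, stride=1), nn.ReLU(),
            nn.MaxPool1d(kernel_size=2, stride=2),
            nn.Conv1d(32, 64, kernel_size=3, padding=1, stride=1), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=3, padding=1, stride=1), nn.ReLU(),
            nn.Conv1d(128, 256, kernel_size=3, padding=1, stride=1), nn.ReLU(),
        )
        self.fc = nn.Linear(256, 7)

    def forward(self, x):                     # x: (batch, 3) resistances
        x = x.unsqueeze(1).expand(-1, 3, -1)  # -> (batch, 3 channels, 3)
        x = self.features(x)                  # -> (batch, 256, 1)
        return self.fc(x.flatten(1))          # -> (batch, 7)
```

Under this reading, the per-sample multiply-accumulate count sums to 131,680 (864 + 6144 + 24,576 + 98,304 + 1792), which matches the per-sample complexity figure quoted in Section 3.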
Two other neural network models were also trained: a Multi-Layer Perceptron (MLP) and a Long Short-Term Memory (LSTM) network. The MLP model performs well on nonlinear problems [24]. Figure 4a shows the structure of the MLP model as 3-fc64-fc128-fc128-fc64-7, with four fully connected hidden layers whose activation function is the rectified linear unit (ReLU) [25]. The LSTM model is a recurrent neural network variant that alleviates vanishing gradients through gating [26]. Figure 4b shows the structure of the LSTM model as 3-LSTM (hidden size: 128)-fc128-7, with a sigmoid activation function.
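For completeness, hedged sketches of the two baselines follow, under the same reading of the layer notation; feeding the three resistances to the LSTM as a length-3 sequence of scalars is our assumption, as the paper does not spell out the input arrangement.

```python
import torch
import torch.nn as nn

# MLP: 3-fc64-fc128-fc128-fc64-7 with ReLU activations.
mlp = nn.Sequential(
    nn.Linear(3, 64), nn.ReLU(),
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 7),
)

# LSTM: 3-LSTM(hidden size: 128)-fc128-7 with a sigmoid output, which
# keeps predictions in [0, 1] and matches the MinMax-scaled targets.
class SRAMLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=128, batch_first=True)
        self.fc = nn.Linear(128, 7)

    def forward(self, x):                    # x: (batch, 3)
        out, _ = self.lstm(x.unsqueeze(-1))  # -> (batch, 3, 1) sequence
        return torch.sigmoid(self.fc(out[:, -1]))
```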
After training, the performance of the neural network models was evaluated on the held-out 5% validation split. The MSE and R-squared (R2) values of the three models are presented in Table 3; these metrics are defined as
$$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2$$
$$R^2 = 1 - \frac{SS_{\mathrm{res}}}{SS_{\mathrm{tot}}}$$
where $SS_{\mathrm{res}}$ is the residual sum of squares, i.e., the sum of squared differences between the actual values and the model’s predictions, and $SS_{\mathrm{tot}}$ is the total sum of squares, i.e., the sum of squared differences between the actual values and the mean of the dataset. All three models achieve a very small MSE, but the CNN model performs best in terms of R2.
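A small helper evaluating both metrics exactly as defined above might look as follows; whether the paper computes them jointly over all seven outputs or averages per-metric values is not stated, so the joint form here is an assumption.

```python
import numpy as np

def mse_r2(y_true, y_pred):
    # MSE and R^2 per the two equations above, computed jointly over
    # all samples and all seven output metrics.
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return ss_res / y_true.size, 1.0 - ss_res / ss_tot
```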

2.3. CNN-Informed NSGA-II to Obtain Pareto Front

CNN-Informed NSGA-II is introduced in this section to optimize the performance metrics and identify the optimal configuration of parasitic resistance values. Figure 5a shows a flowchart of NSGA-II. The algorithm encodes the decision variables in binary or real-number format and then applies crossover and mutation operations. In each iteration, non-dominated sorting, crowding-distance assignment, and an elitist strategy are used to explore the parameter space [27]. Non-dominated sorting, a key component of NSGA-II, works as follows. Let $P = \{x_1, x_2, \ldots, x_N\}$ be a population of $N$ individuals. For each individual $x_i$, let $n_i$ be its domination count and $S_i = \{x_j \in P \mid x_i \succ x_j\}$ the set of individuals dominated by $x_i$. For $i = 1, 2, \ldots, N$, we iterate through all pairs $(x_i, x_j)$ with $j = 1, 2, \ldots, N$ and $j \ne i$: if $x_i \succ x_j$, then $S_i \leftarrow S_i \cup \{x_j\}$; if $x_j \succ x_i$, then $n_i \leftarrow n_i + 1$. Based on the domination counts, individuals are assigned to different fronts; the individuals with $n_i = 0$ constitute the first (Pareto) front.
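For concreteness, a minimal sketch of this domination bookkeeping, extracting the first front with all objectives to be minimized, is given below.

```python
import numpy as np

def first_pareto_front(F):
    # F: (N, M) array of objective values, all minimized.
    # n[i] is the domination count n_i from the description above;
    # individuals with n_i = 0 form the first (Pareto) front.
    N = len(F)
    n = np.zeros(N, dtype=int)
    for i in range(N):
        for j in range(N):
            if i != j and np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                n[i] += 1  # x_j dominates x_i
    return np.where(n == 0)[0]
```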
However, NSGA-II has limitations: exploring the entire parameter space can yield unrealistic solutions to practical problems, and the quality of the solutions depends heavily on the constraints and stopping conditions. Strict constraints and stopping conditions may take a large number of expensive iterations to satisfy, and NSGA-II may also fail to reach Pareto-optimal solutions because it converges to local minima. This section proposes a generally applicable approach to address these limitations.
Figure 5b shows a flowchart of CNN-informed NSGA-II. The following are the improvements over the traditional NSGA-II:
  • First, 4000 samples are randomly selected from the simulation data as the initial population. This avoids the slow convergence associated with randomly generated initial populations.
  • Second, the non-dominated sorting part is replaced by a trained CNN model. During each iteration, the CNN model takes the decision variables (resistances) directly as input and outputs the objective function values (performance metrics). Constraints derived from practical requirements are then used to screen the population: individuals that satisfy the constraints are retained and proceed to crossover and mutation.
  • Third, population diversity is preserved to prevent the algorithm from converging to local minima. In each iteration, the crowding distance of the individuals is calculated and used as a “Diversity Threshold” (a sketch of this computation is given after this list). When this value is no less than 0.1, 100 samples randomly selected from the simulation data are combined with the population to serve as parents for the next iteration. The Diversity Threshold is defined as follows:
    $$\mathrm{Diversity\ Threshold} = \frac{2}{n(n-1)} \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} d_{ij}$$
    where $n$ is the population size and $d_{ij}$ is the Euclidean distance between solutions $i$ and $j$. The improvements described above were implemented on top of the NSGA-II provided by the Platypus library [28]. The algorithm was run for 100 iterations to minimize Write Peak-to-Peak Power, Write Average Power, Write Dynamic Power, Write Time, Read Peak-to-Peak Power, and Read Average Power while maximizing RSNM.
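A minimal sketch of two of these building blocks, the diversity measure and the CNN surrogate evaluation, is given below. The scaler handling assumes the MinMax scalers from Section 2.2; the constraint screening and the sample-injection step around them are omitted.

```python
import numpy as np
import torch
from scipy.spatial.distance import pdist

def diversity_threshold(population):
    # Mean pairwise Euclidean distance: pdist returns all n(n-1)/2
    # distances, so their mean equals the Diversity Threshold above.
    return pdist(np.asarray(population, dtype=float)).mean()

def surrogate_eval(cnn, x_scaler, y_scaler, resistances):
    # The trained CNN stands in for HSPICE inside the GA loop: scale the
    # decision variables, predict the seven metrics, undo output scaling.
    x = torch.as_tensor(x_scaler.transform(resistances), dtype=torch.float32)
    with torch.no_grad():
        y = cnn(x).numpy()
    return y_scaler.inverse_transform(y)
```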

3. Results and Discussion

The learning curve of the CNN model during training is shown in Figure 6a. After training, the performance of the CNN was evaluated on the 5% validation set. A comparison of the predictions of the trained CNN model with the actual simulation results is shown in Figure 6b–h. The MSE and R-squared values of the trained CNN model are $2.11 \times 10^{-7}$ and 0.9690, respectively. The CNN model can accurately predict the performance metrics of the 6T-SRAM cell from the values of the parasitic resistances Rbax, Roax, and Rpds.
The Pareto-optimal solutions obtained from CNN-informed NSGA-II are presented in Table 4. Simulations were conducted to validate the effectiveness of the proposed method, and a comparison was made with the parasitic-free 7 nm FinFET 6T-SRAM cell. Additionally, optimization strategies were provided for the parasitic resistances.
HSPICE transient and DC simulations were performed for the 13 Pareto-optimal solutions, yielding the performance metrics (Write Peak-to-Peak Power, Write Average Power, Write Dynamic Power, Write Time, Read Peak-to-Peak Power, Read Average Power, and RSNM) for each solution, as shown in Figure 7.
The 13th solution, which yielded the lowest Write Time, was selected for further simulation. This solution was compared to the ideal (parasitic-free) 7 nm FinFET 6T-SRAM cell [29] in terms of Write Dynamic Power, Write Time, HSNM, and RSNM, with the results displayed in Table 5. Despite including parasitic effects, CNN-informed NSGA-II’s Pareto-optimal solution demonstrates better performance. Specifically, Write Dynamic Power and Write Time were reduced by 81.60% and 64.65%, respectively, while HSNM and RSNM increased by 11.92% and 6.42% compared to the ideal SRAM [29]. The 6T-SRAM cell designed in this study is mainly intended for applications that require frequent read and write operations. Compared with RRAM/PCM [30,31], the 6T-SRAM cell designed in this paper features fast read/write speeds, low power consumption fluctuations, and a high noise margin. Consequently, the designed SRAM holds broad application prospects in high-precision In-Memory Computing (IMC) scenarios, such as control logic storage and temporary computing data caching.
The proposed method analyzes parasitic effects during pre-layout simulation and provides optimization strategies. A detailed analysis was conducted on the impact of the parasitic resistances (Rbax, Roax, and Rpds) on the performance metrics of 6T-SRAM, and optimal configurations for these parasitic resistances were obtained. According to Table 4, optimization efforts should focus on Rbax, while Roax and Rpds may, in some cases, even contribute positively to performance. Rbax represents the resistance between BL/BLB and the drain of access transistors. The following are suggested optimization strategies for Rbax:
  • Optimize the interconnects. Use a wider BL/BLB or arrange interconnects in different metal layers to reduce parasitic resistance.
  • Use the buried power rail structure [32] to reduce interconnect parasitic resistance.
  • Improve the manufacturing process. Use ruthenium instead of copper for interconnects to reduce resistance [33].
In addition, the proposed CNN-informed NSGA-II can be generalized to other practical multi-objective optimization problems that require a large population size and strict constraints. In this approach, the computationally intensive non-dominated sorting step is replaced by a trained CNN model. For NSGA-II, the algorithmic complexity per iteration is $O(MN^2)$, where $M$ is the number of objective functions and $N$ is the population size; non-dominated sorting of the combined parent-offspring population costs $O(M(2N)^2)$ [27]. By comparison, the CNN model’s cost for predicting $N$ samples is $O(N \times 131{,}680)$ multiply-accumulate operations. With an initial population size of 4000, traditional NSGA-II pays the $O(M(2N)^2)$ sorting cost in every iteration, whereas in CNN-informed NSGA-II many individuals are filtered out by the constraints, so except for the first iteration the number of samples per iteration is far smaller than 4000. Consequently, the computational complexity of CNN-informed NSGA-II during the iterative process is much lower than that of traditional NSGA-II, especially for large population sizes.
Table 6 compares the runtime of the CNN-informed NSGA-II framework with that of the MOIL framework (which uses an ANN surrogate model together with NSGA-II) [34] for optimizing SRAM cells. The MOIL framework employs the traditional NSGA-II in its genetic algorithm stage, whereas this work adopts CNN-informed NSGA-II. In Table 6, the runtime of the genetic algorithm stage in this work is much shorter than that of the MOIL framework, confirming that the algorithmic complexity of CNN-informed NSGA-II is indeed reduced. Furthermore, Bayesian optimization with a sample size of 800 takes 26.7 h [34], significantly longer than CNN-informed NSGA-II.
CNN-informed NSGA-II is also advantageous under strict constraints. In traditional NSGA-II, strict constraints and stopping conditions are what deliver high-quality Pareto-optimal solutions for practical problems, but they can demand excessive iterations and risk convergence to local minima, causing the algorithm to fail to obtain Pareto-optimal solutions. CNN-informed NSGA-II avoids these problems by introducing a simulation-data-driven CNN predictive model, achieving high computational efficiency even under strict constraints.

4. Conclusions

This paper presents a comprehensive analysis of the parasitic effects caused by the parasitic resistances between the key nodes and the transistor ports of 7 nm FinFET 6T-SRAM cells. Specific parasitic resistances were observed to affect the SRAM cell’s performance metrics in competing ways. By combining predictive neural networks with CNN-informed NSGA-II, a trade-off was achieved among seven performance metrics of the 6T-SRAM cell (Write Peak-to-Peak Power, Write Average Power, Write Dynamic Power, Write Time, Read Peak-to-Peak Power, Read Average Power, and RSNM), yielding 13 Pareto-optimal solutions, each a configuration of the parasitic resistances Rbax, Roax, and Rpds. A detailed simulation was conducted for one of the Pareto-optimal solutions, accounting for the significant parasitic effects of Rbax, Roax, and Rpds. Compared to the ideal (parasitic-free) 7 nm FinFET 6T-SRAM cell design, the optimized 6T-SRAM design achieved reductions of 81.60% in Write Dynamic Power and 64.65% in Write Time, and increases of 11.92% in HSNM and 6.42% in RSNM. This method considers parasitic effects during pre-layout simulation and accelerates the process using CNN-informed NSGA-II, significantly speeding up the chip design flow compared to the traditional workflow. The approach also provides a framework for analyzing parasitic effects at advanced technology nodes.
The CNN-informed NSGA-II proposed in this paper is a generally applicable approach with high computational efficiency when dealing with practical problems with large populations and strict constraints. The method efficiently manages trade-offs among parameters with competitive relationships and achieves multi-objective optimization, showing broad application potential in circuit design. In this work, the PVT conditions were set to the standard process corner, specifically at 0.7 V and 27 °C. To further enhance PVT robustness verification and meet the requirements of practical chip applications in future work, we will expand to a multi-PVT dataset, retrain the CNN to enable PVT-generalizable performance prediction, and implement PVT-aware multi-objective optimization. These steps will significantly improve the application value of our framework in the practical design of 7 nm FinFET SRAM.

Author Contributions

Conceptualization, Q.Z.; methodology, Q.Z. and Y.W.; software, Q.Z.; validation, Q.Z.; formal analysis, Y.W.; investigation, Q.Z.; resources, Y.W., C.Z. and J.Z.; data curation, Q.Z.; writing—original draft preparation, Q.Z.; writing—review and editing, Q.Z. and Y.W.; visualization, Q.Z.; supervision, Y.W., C.Z. and J.Z.; project administration, Y.W.; funding acquisition, Y.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work is partially supported by the XJTLU Research Development Funding RDF-21-02-072.

Data Availability Statement

Data are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Chen, J.; Zhao, W.; Wang, Y.; Shu, Y.; Jiang, W.; Ha, Y. A Reliable 8T SRAM for High-Speed Searching and Logic-in-Memory Operations. IEEE Trans. Very Large Scale Integr. (VLSI) Syst. 2022, 30, 769–780.
2. Carlo, S.D.; Savino, A.; Scionti, A.; Prinetto, P. Influence of Parasitic Capacitance Variations on 65 nm and 32 nm Predictive Technology Model SRAM Core-Cells. In Proceedings of the 2008 17th Asian Test Symposium, Hokkaido, Japan, 24–27 November 2008; pp. 411–416.
3. Maddela, V.; Sinha, S.K.; Parvathi, M.; Sharma, V. Comparative Analysis of Open and Short Defects in Embedded SRAM Using Parasitic Extraction Method for Deep Submicron Technology. Wirel. Pers. Commun. 2023, 132, 2123–2141.
4. Bhoj, A.N.; Jha, N.K. Parasitics-Aware Design of Symmetric and Asymmetric Gate-Workfunction FinFET SRAMs. IEEE Trans. Very Large Scale Integr. (VLSI) Syst. 2014, 22, 548–561.
5. Gupta, M.K.; Weckx, P.; Schuddinck, P.; Jang, D.; Chehab, B.; Cosemans, S.; Ryckaert, J.; Dehaene, W. The Complementary FET (CFET) 6T-SRAM. IEEE Trans. Electron Devices 2021, 68, 6106–6111.
6. Mushtaq, U.; Sharma, V.K. Design and Analysis of INDEP FinFET SRAM Cell at 7-nm Technology. Int. J. Numer. Model. Electron. Netw. Devices Fields 2020, 33, e2730.
7. Luo, Y.; Cao, L.; Zhang, Q.; Cao, Y.; Zhang, Z.; Yao, J.; Yan, G.; Zhang, X.; Gan, W.; Huo, J.; et al. Layout Optimization of Complementary FET 6T-SRAM Cell Based on a Universal Methodology Using Sensitivity With Respect to Parasitic R- and C-Values. IEEE Trans. Electron Devices 2022, 69, 6095–6101.
8. Luo, Y.; Yan, G.; Cao, L.; Huo, J.; Zhang, X.; Wei, Y.; Tian, G.; Zhang, Q.; Wu, Z.; Yin, H. Influence of Parasitic Capacitance and Resistance on Performance of 6T-SRAM for Advanced CMOS Circuits Design. In Proceedings of the 2022 China Semiconductor Technology International Conference (CSTIC), Shanghai, China, 20–21 June 2022; pp. 1–3.
9. Jeon, I.; Park, H.; Yoon, T.; Jeong, H. High Efficiency Variation-Aware SRAM Timing Characterization via Machine-Learning-Assisted Netlist Extraction. IEEE Trans. Circuits Syst. II Express Briefs 2024, 71, 1391–1395.
10. Bouhlila, J.; Last, F.; Buchty, R.; Berekovic, M.; Mulhem, S. Machine Learning for SRAM Stability Analysis. In Proceedings of the 2024 IEEE International Symposium on Circuits and Systems (ISCAS), Singapore, 19–22 May 2024; pp. 1–5.
11. Sudharsan, B.; Yadav, P.; Breslin, J.G.; Intizar Ali, M. An SRAM Optimized Approach for Constant Memory Consumption and Ultra-fast Execution of ML Classifiers on TinyML Hardware. In Proceedings of the 2021 IEEE International Conference on Services Computing (SCC), Chicago, IL, USA, 5–10 September 2021; pp. 319–328.
12. Shen, S.; Yang, D.; Xie, Y.; Pei, C.; Yu, B.; Yu, W. Deep-Learning-Based Pre-Layout Parasitic Capacitance Prediction on SRAM Designs. In Proceedings of the Great Lakes Symposium on VLSI 2024, Tampa Bay Area, FL, USA, 12–14 June 2024; pp. 440–445.
13. Kumar, H.; Tomar, V. A Review on Performance Evaluation of Different Low Power SRAM Cells in Nano-Scale Era. Wirel. Pers. Commun. 2021, 117, 1959–1984.
14. Lambora, A.; Gupta, K.; Chopra, K. Genetic Algorithm: A Literature Review. In Proceedings of the 2019 International Conference on Machine Learning, Big Data, Cloud and Parallel Computing (COMITCon), Faridabad, India, 14–16 February 2019; pp. 380–384.
15. Zhang, R.; Liu, Z.; Yang, K.; Liu, T.; Cai, W.; Milor, L. Inverse Design of FinFET SRAM Cells. In Proceedings of the 2020 IEEE International Reliability Physics Symposium (IRPS), Dallas, TX, USA, 28 April–30 May 2020; pp. 1–6.
16. Clark, L.T.; Vashishtha, V.; Shifren, L.; Gujja, A.; Sinha, S.; Cline, B.; Ramamurthy, C.; Yeric, G. ASAP7: A 7-nm FinFET Predictive Process Design Kit. Microelectron. J. 2016, 53, 105–115.
17. Chang, L.; Fried, D.; Hergenrother, J.; Sleight, J.; Dennard, R.; Montoye, R.; Sekaric, L.; McNab, S.; Topol, A.; Adams, C.; et al. Stable SRAM Cell Design for the 32 nm Node and Beyond. In Proceedings of the Digest of Technical Papers, 2005 Symposium on VLSI Technology, Kyoto, Japan, 14–16 June 2005; pp. 128–129.
18. Mohammed, M.U.; Nizam, A.; Chowdhury, M.H. Performance Stability Analysis of SRAM Cells Based on Different FinFET Devices in 7 nm Technology. In Proceedings of the 2018 IEEE SOI-3D-Subthreshold Microelectronics Technology Unified Conference (S3S), Burlingame, CA, USA, 15–18 October 2018; pp. 1–3.
19. Mushtaq, U.; Sharma, V.K. Design of 6T FinFET SRAM Cell at 7 nm. In Proceedings of the 2019 International Conference on Communication and Electronics Systems (ICCES), Coimbatore, India, 17–19 July 2019; pp. 104–108.
20. Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In Proceedings of the Advances in Neural Information Processing Systems 32 (NeurIPS 2019), Vancouver, BC, Canada, 8–14 December 2019; Volume 32.
21. Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2014, arXiv:1412.6980.
22. Kiranyaz, S.; Avci, O.; Abdeljaber, O.; Ince, T.; Gabbouj, M.; Inman, D.J. 1D Convolutional Neural Networks and Applications: A Survey. Mech. Syst. Signal Process. 2021, 151, 107398.
23. Xiong, Q.; Kong, Q.; Xiong, H.; Chen, J.; Yuan, C.; Wang, X.; Xia, Y. Zero-Shot Knowledge Transfer for Seismic Damage Diagnosis through Multi-Channel 1D CNN Integrated with Autoencoder-Based Domain Adaptation. Mech. Syst. Signal Process. 2024, 217, 111535.
24. Popescu, M.C.; Balas, V.E.; Perescu-Popescu, L.; Mastorakis, N. Multilayer Perceptron and Neural Networks. WSEAS Trans. Circuits Syst. 2009, 8, 579–588.
25. Khorrami, P.; Nabavi, A. Evaluating the Performance of 6T SRAM Cells by Deep Learning. Microelectron. Reliab. 2024, 156, 115374.
26. Sherstinsky, A. Fundamentals of Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM) Network. Phys. D Nonlinear Phenom. 2020, 404, 132306.
27. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A Fast and Elitist Multiobjective Genetic Algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197.
28. Hadka, D. Platypus: Multiobjective Optimization in Python. 2020. Available online: https://platypus.readthedocs.io/ (accessed on 7 October 2024).
29. Abbasian, E.; Birla, S.; Gholipour, M. A Comprehensive Analysis of Different SRAM Cell Topologies in 7-nm FinFET Technology. Silicon 2021, 14, 6909–6920.
30. Chithambara Moorthii, J.; Mourya, M.V.; Bansal, H.; Verma, D.; Suri, M. RRAM IMC Based Efficient Analog Carry Propagation and Multi-Bit MVM. In Proceedings of the 2024 8th IEEE Electron Devices Technology & Manufacturing Conference (EDTM), Bangalore, India, 3–6 March 2024; pp. 1–3.
31. Antolini, A.; Lico, A.; Scarselli, E.F.; Carissimi, M.; Pasotti, M. Phase-Change Memory Cells Characterization in an Analog In-Memory Computing Perspective. In Proceedings of the 2022 17th Conference on Ph.D Research in Microelectronics and Electronics (PRIME), Villasimius, Italy, 12–15 June 2022; pp. 233–236.
32. Gupta, A.; Mertens, H.; Tao, Z.; Demuynck, S.; Bömmels, J.; Arutchelvan, G.; Devriendt, K.; Pedreira, O.V.; Ritzenthaler, R.; Wang, S.; et al. Buried Power Rail Integration with Si FinFETs for CMOS Scaling beyond the 5 nm Node. In Proceedings of the 2020 IEEE Symposium on VLSI Technology, Honolulu, HI, USA, 16–19 June 2020; pp. 1–2.
33. Zhang, X.; Huang, H.; Patlolla, R.; Wang, W.; Mont, F.W.; Li, J.; Hu, C.K.; Liniger, E.G.; McLaughlin, P.S.; Labelle, C.; et al. Ruthenium Interconnect Resistivity and Reliability at 48 nm Pitch. In Proceedings of the 2016 IEEE International Interconnect Technology Conference/Advanced Metallization Conference (IITC/AMC), San Jose, CA, USA, 23–26 May 2016; pp. 31–33.
34. Lee, J.; Park, J.; Kim, S.; Jeong, H. Bayesian Learning Automated SRAM Circuit Design for Power and Performance Optimization. IEEE Trans. Circuits Syst. I Regul. Pap. 2023, 70, 4949–4961.
Figure 1. A flowchart of the research methodology proposed in this paper.
Figure 2. Schematic diagram of parasitic resistance between key nodes and transistor ports of 6T-SRAM cells [7].
Figure 3. The impact of parasitic resistance between key nodes and the transistor ports of 6T-SRAM cells on performance metrics: (a) Write Dynamic Power, (b) Write Time, (c) Write Peak-to-Peak Power, (d) Write Average Power, (e) Read Average Power, (f) Read Peak-to-Peak Power, (g) RSNM, (h) HSNM.
Figure 4. Schematic diagram of (a) Multi-Layer Perceptron (MLP) model, (b) Long Short-Term Memory (LSTM), and (c) Convolutional Neural Network (CNN) model.
Figure 5. (a) Flowchart of NSGA-II, (b) Flowchart of CNN-informed NSGA-II.
Figure 6. Training results of CNN model: (a) Learning curve of CNN model during training process. Comparison of prediction results from trained CNN model with actual simulation results: (b) Write Peak-to-Peak Power, (c) Write Average Power, (d) Write Dynamic Power, (e) Write Time, (f) Read Peak-to-Peak Power, (g) Read Average Power, and (h) RSNM.
Figure 7. The HSPICE simulation results (Write Peak-to-Peak Power, Write Average Power, Write Dynamic Power, Write Time, Read Peak-to-Peak Power, Read Average Power, and RSNM) for the 13 Pareto solutions.
Table 1. Parasitic resistances between key nodes and the transistor ports of 6T-SRAM cells [7].

Res.     Key Nodes    Transistor Ports
Rbax     BL/BLB       AX_D
Roax     O1/O2        AX_S
Ropu     O1/O2        PU_D
Ropd     O1/O2        PD_D
Ropug    O1/O2        PU_G
Ropdg    O1/O2        PD_G
Rwax     WL           AX_G
Rpds     GND          PD_S
Rpud     VDD          PU_S
Table 2. Optimization guidance for each type of parasitic resistance, derived from the impact of the resistances on 6T-SRAM performance metrics. For each metric, the columns list the resistances whose decrease or increase improves that metric.

Performance Metric          Decrease                  Increase
Write Dynamic Power         -                         Roax, Ropd, Rbax, Rpds, Rwax
Write Time                  Rbax, Roax                -
Write Peak-to-Peak Power    -                         Roax, Ropd, Rbax, Rpds
Write Average Power         -                         Roax, Ropd, Rbax, Rpds
Read Average Power          -                         Roax, Ropd, Rbax, Rpds
Read Peak-to-Peak Power     Rpds, Rpud                Rbax, Roax
RSNM                        Rpds                      Roax, Rbax
HSNM                        Rpds, Ropd, Ropu, Rpud    -
Table 3. The performance evaluation metrics for the MLP, CNN, and LSTM models: the mean squared error (MSE) and the R-squared (R2) value.

Model    MSE            R2
MLP      5.94 × 10⁻⁷    0.9204
CNN      2.11 × 10⁻⁷    0.9690
LSTM     2.04 × 10⁻⁷    0.9109
Table 4. The Pareto front (with 13 solutions) obtained from CNN-informed NSGA-II (unit: Ω).

Pareto Front    Rbax       Roax         Rpds
1               124.60     20,884.71    20,504.20
2               1895.09    18,443.30    24,354.19
3               980.50     17,337.85    23,617.78
4               514.25     18,785.59    24,215.14
5               435.39     19,451.52    23,007.56
6               1123.00    20,741.90    24,631.09
7               123.39     19,868.92    20,355.95
8               1019.97    18,991.99    24,109.28
9               189.61     20,791.24    24,642.21
10              1158.71    21,208.81    21,186.57
11              899.72     20,332.68    23,186.61
12              433.42     22,114.40    20,551.84
13              370.29     15,871.56    24,628.51
Table 5. A comparison of Write Dynamic Power, Write Time, HSNM, and RSNM for the 13th solution with the ideal (parasitic-free) 7 nm FinFET 6T-SRAM design [29], at a supply voltage (VDD) of 0.5 V.

             Write Dynamic Power    Write Time    HSNM        RSNM
This work    12.49 µW               33.33 ps      0.3099 V    0.1658 V
Ref. [29]    67.87 µW               94.28 ps      0.2769 V    0.1558 V
Table 6. A comparison of the runtime between the proposed CNN-informed NSGA-II framework and the MOIL framework [34] for optimizing SRAM cells. The number of samples used in this work is 5604 for simulation and NN training and 4000 for the genetic algorithm; the MOIL framework uses 800 samples.

Unit: s                   This Work    Ref. [34]
Simulation time           17,498.8     3276.0
NN training time          70.8         504.0
Genetic algorithm time    11.3         3312.0
Total time                17,580.9     7092.0