Analog Design and Machine Learning: A Review
Abstract
1. Introduction
2. ML: An Overview
2.1. Terminology and Learning Paradigms
2.2. Supervised, Semi-Supervised, Unsupervised, and RL Tasks
2.2.1. Supervised Learning
2.2.2. Semi-Supervised Learning
2.2.3. Unsupervised Learning
2.2.4. RL
2.3. Learning Models
2.3.1. Bayesian Models
2.3.2. ANNs
2.3.3. SVMs
2.3.4. Linear and Logistic Regression
Linear Regression
Logistic Regression
2.3.5. Decision Trees
2.4. Evaluation Metrics
2.4.1. Classification Metrics
2.4.2. Regression Metrics
3. Review for Analog Design Based on ML Methods
3.1. Literature Search Strategy and Studies Categorization
3.2. Modeling and Abstraction of Analog Circuits
Overview of Modeling and Abstraction of Analog Circuits Techniques
3.3. Optimization and Sizing Techniques
Overview of Optimization and Sizing Techniques
3.4. Specification-Driven Design
Overview of Specification-Driven Design Techniques
3.5. AI-Assisted Design Automation
Overview of AI-Assisted Design Automation Techniques
4. Discussion
5. Future Work
6. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
- Mina, R.; Jabbour, C.; Sakr, G.E. A Review of Machine Learning Techniques in Analog Integrated Circuit Design Automation. Electronics 2022, 11, 435. [Google Scholar] [CrossRef]
- Afacan, E.; Lourenço, N.; Martins, R.; Dündar, G. Review: Machine learning techniques in analog/RF integrated circuit design, synthesis, layout, and test. Integration 2021, 77, 113–130. [Google Scholar] [CrossRef]
- Cao, W.; Benosman, M.; Zhang, X.; Ma, R. Domain knowledge-infused deep learning for automated analog/radio-frequency circuit parameter optimization. In Proceedings of the 59th ACM/IEEE Design Automation Conference, San Francisco, CA, USA, 10–14 July 2022. [Google Scholar] [CrossRef]
- Budak, A.F.; Bhansali, P.; Liu, B.; Sun, N.; Pan, D.Z.; Kashyap, C.V. DNN-Opt: An RL inspired optimization for analog circuit sizing using deep neural networks. In Proceedings of the 2021 58th ACM/IEEE Design Automation Conference (DAC), San Francisco, CA, USA, 5–9 December 2021. [Google Scholar] [CrossRef]
- Settaluri, K.; Haj-Ali, A.; Huang, Q.; Hakhamaneshi, K.; Nikolic, B. AutoCkt: Deep reinforcement learning of analog circuit designs. In Proceedings of the 2020 Design, Automation and Test in Europe Conference and Exhibition (DATE), Grenoble, France, 9–13 March 2020. [Google Scholar] [CrossRef]
- Sarker, I.H. Machine Learning: Algorithms, Real-World Applications and Research Directions. SN Comput. Sci. 2021, 2, 160. [Google Scholar] [CrossRef]
- LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
- Jordan, M.I.; Mitchell, T.M. Machine learning: Trends, perspectives, and prospects. Science 2015, 349, 255–260. [Google Scholar] [CrossRef]
- Kotsiantis, S.B.; Zaharakis, I.; Pintelas, P. Supervised Machine Learning: A Review of Classification Techniques. Emerg. Artif. Intell. Appl. Comput. Eng. 2007, 160, 2–24. [Google Scholar] [CrossRef]
- Goodfellow, I.; Bengio, Y.; Courville, A. Regularization for Deep Learning. In Deep Learning; MIT Press: Cambridge, MA, USA, 2016; pp. 216–261. [Google Scholar]
- Bishop, C.M. Pattern Recognition and Machine Learning; Springer: New York, NY, USA, 2006. [Google Scholar]
- Zhu, X. Semi-Supervised Learning Literature Survey; Technical Report 1530; Computer Sciences, University of Wisconsin-Madison: Madison, WI, USA, 2008; Available online: http://pages.cs.wisc.edu/~jerryzhu/pub/ssl_survey.pdf (accessed on 31 July 2025).
- Chapelle, O.; Scholkopf, B.; Zien, A. Semi-Supervised Learning (Chapelle, O. et al., Eds.; 2006) [Book reviews]. IEEE Trans. Neural Netw. 2009, 20, 542. [Google Scholar] [CrossRef]
- van Engelen, J.E.; Hoos, H.H. A survey on semi-supervised learning. Mach. Learn. 2020, 109, 373–440. [Google Scholar] [CrossRef]
- Chen, Y.; Mancini, M.; Zhu, X.; Akata, Z. Semi-Supervised and Unsupervised Deep Visual Learning: A Survey. IEEE Trans. Pattern Anal. Mach. Intell. 2024, 46, 1327–1347. [Google Scholar] [CrossRef]
- Murphy, K.P. Machine Learning: A Probabilistic Perspective; MIT Press: Cambridge, MA, USA, 2012. [Google Scholar]
- Jain, A.K.; Murty, M.N.; Flynn, P.J. Data clustering: A review. ACM Comput. Surv. 1999, 31, 264–323. [Google Scholar] [CrossRef]
- Jolliffe, I.T.; Cadima, J. Principal component analysis: A review and recent developments. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 2016, 374, 20150202. [Google Scholar] [CrossRef]
- Sutton, R.S.; Barto, A.G. Reinforcement Learning: An Introduction; MIT Press: Cambridge, MA, USA, 1998. [Google Scholar]
- Kaelbling, L.P.; Littman, M.L.; Moore, A.W. Reinforcement learning: A survey. J. Artif. Intell. Res. 1996, 4, 237–285. [Google Scholar] [CrossRef]
- Szepesvári, C. Algorithms for Reinforcement Learning; Springer Nature: Berlin/Heidelberg, Germany, 2010. [Google Scholar] [CrossRef]
- Bertsekas, D.P. Approximate Dynamic Programming. In Dynamic Programming and Optimal Control, 3rd ed.; Athena Scientific: Belmont, MA, USA, 2010; Volume II, Chapter 6. [Google Scholar]
- Rasmussen, C.E. Gaussian Processes in machine learning. In Advanced Lectures on Machine Learning; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2004; Volume 3176. [Google Scholar] [CrossRef]
- Seeger, M. Gaussian processes for machine learning. Int. J. Neural Syst. 2004, 14, 69–106. [Google Scholar] [CrossRef]
- Shahriari, B.; Swersky, K.; Wang, Z.; Adams, R.P.; De Freitas, N. Taking the human out of the loop: A review of Bayesian optimization. Proc. IEEE 2016, 104, 148–175. [Google Scholar] [CrossRef]
- Schölkopf, B.; Smola, A.J. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond; MIT Press: Cambridge, MA, USA, 2002. [Google Scholar]
- Cortes, C.; Vapnik, V. Support-Vector Networks. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
- Freedman, D.A. Statistical Models: Theory and Practice; Cambridge University Press: Cambridge, UK, 2009. [Google Scholar] [CrossRef]
- Hosmer, D.W.; Lemeshow, S. Applied Logistic Regression; Wiley: New York, NY, USA, 1989. [Google Scholar]
- Quinlan, J.R. Induction of Decision Trees. Mach. Learn. 1986, 1, 81–106. [Google Scholar] [CrossRef]
- Breiman, L.; Friedman, J.H.; Olshen, R.A.; Stone, C.J. Classification and Regression Trees; Chapman and Hall: New York, NY, USA, 2017. [Google Scholar] [CrossRef]
- Chen, T.; Guestrin, C. XGBoost: A scalable tree boosting system. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, New York, NY, USA, 13–18 August 2016. [Google Scholar] [CrossRef]
- Powers, D.M.W. Evaluation: From precision, recall and f-measure to ROC, informedness, markedness & correlation. arXiv 2020, arXiv:2010.16061. [Google Scholar]
- Saito, T.; Rehmsmeier, M. The precision-recall plot is more informative than the ROC plot when evaluating binary classifiers on imbalanced datasets. PLoS ONE 2015, 10, e0118432. [Google Scholar] [CrossRef]
- Chai, T.; Draxler, R.R. Root mean square error (RMSE) or mean absolute error (MAE)? -Arguments against avoiding RMSE in the literature. Geosci. Model Dev. 2014, 7, 1247–1250. [Google Scholar] [CrossRef]
- Willmott, C.J.; Matsuura, K. Advantages of the mean absolute error (MAE) over the root mean square error (RMSE) in assessing average model performance. Clim. Res. 2005, 30, 79–82. [Google Scholar] [CrossRef]
- James, G.; Witten, D.; Hastie, T.; Tibshirani, R. An Introduction to Statistical Learning, 2nd ed.; Springer: New York, NY, USA, 2021. [Google Scholar]
- Li, X.; Zhang, W.; Wang, F.; Sun, S.; Gu, C. Efficient parametric yield estimation of analog/mixed-signal circuits via Bayesian model fusion. In Proceedings of the IEEE/ACM International Conference on Computer-Aided Design, Digest of Technical Papers, ICCAD, San Jose, CA, USA, 4–9 November 2012. [Google Scholar] [CrossRef]
- Wang, F.; Zaheer, M.; Li, X.; Plouchart, J.O.; Valdes-Garcia, A. Co-learning Bayesian model fusion: Efficient performance modeling of analog and mixed-signal circuits using side information. In Proceedings of the 2015 IEEE/ACM International Conference on Computer-Aided Design, ICCAD 2015, Austin, TX, USA, 2–6 November 2015. [Google Scholar] [CrossRef]
- Wang, F.; Cachecho, P.; Zhang, W.; Sun, S.; Li, X.; Kanj, R.; Gu, C. Bayesian model fusion: Large-scale performance modeling of analog and mixed-signal circuits by reusing early-stage data. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 2016, 35, 1255–1268. [Google Scholar] [CrossRef]
- Hasani, R.M.; Haerle, D.; Baumgartner, C.F.; Lomuscio, A.R.; Grosu, R. Compositional neural-network modeling of complex analog circuits. In Proceedings of the International Joint Conference on Neural Networks, Anchorage, AK, USA, 14–19 May 2017. [Google Scholar] [CrossRef]
- Alawieh, M.; Wang, F.; Li, X. Efficient Hierarchical Performance Modeling for Integrated Circuits via Bayesian Co-Learning. In Proceedings of the 54th Annual Design Automation Conference, Austin, TX, USA, 18–22 June 2017. [Google Scholar] [CrossRef]
- Yang, Y.; Zhu, H.; Bi, Z.; Yan, C.; Zhou, D.; Su, Y.; Zeng, X. Smart-MSP: A Self-Adaptive Multiple Starting Point Optimization Approach for Analog Circuit Synthesis. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 2018, 37, 531–544. [Google Scholar] [CrossRef]
- Gao, Z.; Tao, J.; Yang, F.; Su, Y.; Zhou, D.; Zeng, X. Efficient performance trade-off modeling for analog circuit based on Bayesian neural network. In Proceedings of the IEEE/ACM International Conference on Computer-Aided Design, Digest of Technical Papers, ICCAD, Westminster, CO, USA, 4–7 November 2019. [Google Scholar] [CrossRef]
- Islamoğlu, G.; Çakici, T.O.; Afacan, E.; Dundar, G. Artificial Neural Network Assisted Analog IC Sizing Tool. In Proceedings of the SMACD 2019—16th International Conference on Synthesis, Modeling, Analysis and Simulation Methods and Applications to Circuit Design, Lausanne, Switzerland, 15–18 July 2019. [Google Scholar] [CrossRef]
- Hakhamaneshi, K.; Werblun, N.; Abbeel, P.; Stojanovic, V. BagNet: Berkeley analog generator with layout optimizer boosted with deep neural networks. In Proceedings of the IEEE/ACM International Conference on Computer-Aided Design, Digest of Technical Papers, ICCAD, Westminster, CO, USA, 4–7 November 2019. [Google Scholar] [CrossRef]
- Zhang, S.; Lyu, W.; Yang, F.; Yan, C.; Zhou, D.; Zeng, X. Bayesian optimization approach for analog circuit synthesis using neural network. In Proceedings of the 2019 Design, Automation and Test in Europe Conference and Exhibition (DATE), Florence, Italy, 25–29 March 2019. [Google Scholar] [CrossRef]
- Alawieh, M.B.; Williamson, S.A.; Pan, D.Z. Rethinking sparsity in performance modeling for analog and mixed circuits using spike and slab models. In Proceedings of the 56th Annual Design Automation Conference, Las Vegas, NV, USA, 2–6 June 2019. [Google Scholar] [CrossRef]
- Alawieh, M.B.; Tang, X.; Pan, D.Z. S2-PM: Semi-supervised learning for efficient performance modeling of analog and mixed signal circuits. In Proceedings of the 24th Asia and South Pacific Design Automation Conference, ASP-DAC, Tokyo, Japan, 21–24 January 2019. [Google Scholar] [CrossRef]
- Gabourie, A.; Mcclellan, C.; Suryavanshi, S. Analog Circuit Design Enhanced with Artificial Intelligence. 2022. Available online: https://web.stanford.edu/class/archive/cs/cs221/cs221.1192/2018/restricted/posters/cjmcc/poster.pdf (accessed on 31 July 2025).
- Wang, M.; Lv, W.; Yang, F.; Yan, C.; Cai, W.; Zhou, D.; Zeng, X. Efficient Yield Optimization for Analog and SRAM Circuits via Gaussian Process Regression and Adaptive Yield Estimation. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 2018, 37, 1929–1942. [Google Scholar] [CrossRef]
- Lyu, W.; Xue, P.; Yang, F.; Yan, C.; Hong, Z.; Zeng, X.; Zhou, D. An efficient Bayesian optimization approach for automated optimization of analog circuits. IEEE Trans. Circuits Syst. I Regul. Pap. 2018, 65, 1954–1967. [Google Scholar] [CrossRef]
- Lyu, W.; Yang, F.; Yan, C.; Zhou, D.; Zeng, X. Batch Bayesian optimization via multi-objective acquisition ensemble for automated analog circuit design. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholm, Sweden, 10–15 July 2018. [Google Scholar]
- Pan, P.C.; Huang, C.C.; Chen, H.M. Late breaking results: An efficient learning-based approach for performance exploration on analog and RF circuit synthesis. In Proceedings of the 56th Annual Design Automation Conference, Las Vegas, NV, USA, 2–6 June 2019. [Google Scholar] [CrossRef]
- Li, Y.; Wang, Y.; Li, Y.; Zhou, R.; Lin, Z. An Artificial Neural Network Assisted Optimization System for Analog Design Space Exploration. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 2020, 39, 2640–2653. [Google Scholar] [CrossRef]
- Zhao, Z.; Zhang, L. Deep reinforcement learning for analog circuit sizing. In Proceedings of the IEEE International Symposium on Circuits and Systems, Seville, Spain, 12–14 October 2020. [Google Scholar] [CrossRef]
- Wang, H.; Yang, J.; Lee, H.-S.; Han, S. Learning to Design Circuits. arXiv 2020, arXiv:1812.02734. [Google Scholar] [CrossRef]
- Devi, S.; Tilwankar, G.; Zele, R. Automated Design of Analog Circuits using Machine Learning Techniques. In Proceedings of the 2021 25th International Symposium on VLSI Design and Test (VDAT), Surat, India, 16–18 September 2021. [Google Scholar] [CrossRef]
- Yin, Y.; Wang, Y.; Xu, B.; Li, P. ADO-LLM: Analog design bayesian optimization with in-context learning of large language models. In Proceedings of the IEEE/ACM International Conference on Computer-Aided Design, Digest of Technical Papers, ICCAD, New York, NY, USA, 27–31 October 2024. [Google Scholar] [CrossRef]
- Dumesnil, E.; Nabki, F.; Boukadoum, M. RF-LNA circuit synthesis by genetic algorithm-specified artificial neural network. In Proceedings of the 2014 21st IEEE International Conference on Electronics, Circuits and Systems (ICECS), Marseille, France, 7–10 December 2014. [Google Scholar] [CrossRef]
- Fukuda, M.; Ishii, T.; Takai, N. OP-AMP sizing by inference of element values using machine learning. In Proceedings of the 2017 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS), Xiamen, China, 6–9 November 2017. [Google Scholar] [CrossRef]
- Lourenço, N.; Rosa, J.; Martins, R.; Aidos, H.; Canelas, A.; Póvoa, R.; Horta, N. On the exploration of promising analog ic designs via artificial neural networks. In Proceedings of the SMACD 2018—15th International Conference on Synthesis, Modeling, Analysis and Simulation Methods and Applications to Circuit Design, Prague, Czech Republic, 2–5 July 2018. [Google Scholar] [CrossRef]
- Wang, Z.; Luo, X.; Gong, Z. Application of deep learning in analog circuit sizing. In Proceedings of the ACM International Conference Proceeding Series, Shenzhen, China, 8–10 December 2018. [Google Scholar] [CrossRef]
- Lourenço, N.; Afacan, E.; Martins, R.; Passos, F.; Canelas, A.; Póvoa, R.; Horta, N.; Dundar, G. Using polynomial regression and artificial neural networks for reusable analog IC sizing. In Proceedings of the SMACD 2019—16th International Conference on Synthesis, Modeling, Analysis and Simulation Methods and Applications to Circuit Design, Lausanne, Switzerland, 15–18 July 2019. [Google Scholar] [CrossRef]
- Harsha, V.M.; Harish, B.P. Artificial neural network model for design optimization of 2-stage Op-amp. In Proceedings of the 2020 24th International Symposium on VLSI Design and Test (VDAT), Bhubaneswar, India, 23–25 July 2020. [Google Scholar] [CrossRef]
- Murphy, S.D.; McCarthy, K.G. Automated design of CMOS operational amplifier using a neural network. In Proceedings of the 2021 32nd Irish Signals and Systems Conference (ISSC), Athlone, Ireland, 10–11 June 2021. [Google Scholar] [CrossRef]
- Zhang, J.; Huang, L.; Wang, Z.; Verma, N. A seizure-detection IC employing machine learning to overcome data-conversion and analog-processing non-idealities. In Proceedings of the 2015 IEEE Custom Integrated Circuits Conference (CICC), San Jose, CA, USA, 28–30 September 2015. [Google Scholar] [CrossRef]
- Hakhamaneshi, K.; Werblun, N.; Abbeel, P.; Stojanović, V. Late breaking results: Analog circuit generator based on deep neural network enhanced combinatorial optimization. In Proceedings of the 56th Design Automation Conference, Las Vegas, NV, USA, 2–6 June 2019. [Google Scholar] [CrossRef]
- Basso, D.; Bortolussi, L.; Videnovic-Misic, M.; Habal, H. Fast ML-driven Analog circuit layout using reinforcement learning and steiner trees. In Proceedings of the 2024 20th International Conference on Synthesis, Modeling, Analysis and Simulation Methods and Applications to Circuit Design (SMACD), Volos, Greece, 2–5 July 2024. [Google Scholar] [CrossRef]
- Mehradfar, A.; Zhao, X.; Niu, Y.; Babakniya, S.; Alesheikh, M.; Aghasi, H.; Avestimehr, S. AICircuit: A multi-level dataset and benchmark for ai-driven analog integrated circuit design. arXiv 2024, arXiv:2407.18272. [Google Scholar] [CrossRef]
Cite | Data Size | Observed Features | AI Method | Evaluation | Results |
---|---|---|---|---|---|
[38] | RO: 4000 samples SRAM: 1000 samples | Power consumption, oscillation frequency, phase noise, and propagation delay | Semi-supervised BMF regressor | Prediction error | ~0.2–2% |
[39] | RO: 750 fundamental frequencies and 10 phase noise samples, LNA: 13 forward gain and 1 noise figure samples | Device-level process variations like threshold voltage, oscillation frequency, noise figure, and forward gain | Semi-supervised CL-BMF regressor | Relative error | ~0.28–0.33 dB |
[40] | Schematic: 3000 samples, Post-layout RO: 300 samples, Post-layout SRAM: 300 samples | Device-level variations and layout extracted parasitics | Supervised BMF regressor | Relative error | <2% |
[41] | Trimming: 695 samples, Load jumps: 433 samples, Line jumps: 501 samples | Time-series signals of trimming behavior and dynamic load/line transitions | Supervised NARX and TDNN regressors | MSE and R | Trimming: 7 × 10⁻⁵, Load: 5.6 × 10⁻³, Line: 1.4 × 10⁻³, R > 0.96 |
[42] | Delay-line: 100 labeled and 300 unlabeled samples, Amplifier: 250 labeled and 450 unlabeled samples | Propagation delay and power consumption across hierarchical circuit blocks | Semi-supervised BCL regressor | Relative error | Delay-line: 0.13%, Amplifier: 0.55% |
[43] | Simulation-oriented programmable amplifier: 3000–100,000 samples | Transistor sizing width and length, biasing parameters, voltage gain, and phase margin | Supervised G-OMP regressor | No explicit modeling error metric reported | Authors confirm equivalent or improved solution quality |
[44] | Charge pump: 200 samples, Amplifier: 260 samples | Transistor width parameters and circuit-level performance metrics | Supervised BNN regressor | Hypervolume (HV) and Weighted gain (WG) | Charge pump: 15.04 HV and 0.19 WG, Amplifier: 10.72 HV and 102 WG |
[45] | Over iterative design generations: 20,000 samples | Transistor sizing ratios width and length, bias currents, and power consumption | Supervised FNN regressor | No explicit modeling error metric reported | Authors confirm ANN outputs closely match SPICE simulation results |
[46] | Optical receiver: 435 samples, DNN queries: 77,487 samples | Voltage gain, phase margin, eye-diagram margin, and post-layout design specifications | Supervised DNN classifier | Sample compression efficiency | 300× |
[47] | Operational amplifier: 130 samples, Charge pump: 890 samples | Design specifications like voltage gain, phase margin, transistor dimensions (width and length), and passive component resistance and capacitance | Supervised Ensemble (BO/GP/NN) regressor | Figure of Merit (FOM) | Operational amplifier: FOM not reported; Charge pump: 3.17 FOM |
[48] | Labeled: 90 samples, Gibbs: 2800 samples | High-dimensional process parameters targeting power performance | Supervised ANN regressor | Relative error | 2.39% |
[49] | Comparator: 50 orthogonal matching pursuit (OMP) samples and 70 S2-PM samples, Voltage-controlled oscillator: 50 OMP samples and 40 S2-PM samples, Unlabeled: 20 samples | Process variation parameters, including threshold voltage shifts | Semi-supervised Sparse regressor | Relative error | Comparator: OMP: 2.50%, S2-PM: 2.53% and Voltage-controlled oscillator: OMP: 1.55%, S2-PM: 1.6% |
[50] | PySPICE: 4000 samples | Input impedance, voltage gain, power consumption, transistor dimensions (width and length), and passive component resistance and capacitance | Supervised FNN regressor | MSE | 10⁻⁴ with ideal model, 10⁻² with PySpice |
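The evaluation columns above mix several regression metrics (MSE, MAE, relative error, R). As an illustration only, with invented surrogate-model predictions rather than values from any cited study, these metrics can be computed as:

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error."""
    return float(np.mean((y_true - y_pred) ** 2))

def mae(y_true, y_pred):
    """Mean absolute error."""
    return float(np.mean(np.abs(y_true - y_pred)))

def relative_error(y_true, y_pred):
    """Mean relative error in percent."""
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)

def r2(y_true, y_pred):
    """Coefficient of determination R^2."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Toy predictions for a hypothetical gain specification (dB)
y_true = np.array([60.0, 62.5, 58.0, 61.0])
y_pred = np.array([59.5, 62.0, 58.5, 61.5])
print(mse(y_true, y_pred), mae(y_true, y_pred))
print(relative_error(y_true, y_pred), r2(y_true, y_pred))
```

These definitions are the standard ones; individual studies may normalize or average differently, which is worth checking before comparing numbers across rows.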
Cite | Data Size | Observed Features | AI Method | Evaluation | Results |
---|---|---|---|---|---|
[51] | 24 parameters and 50,000 Monte Carlo simulations used for yield estimation | Design parameters include sizing, yield, and process corner metrics | Supervised BO–GPR regressor | Failure rate | 1% |
[52] | Simulation budget of 100–1000 per circuit, including 20–40 initial design-of-experiment (DoE) samples | Design variables like transistor sizes, passive dimensions, biasing levels, and performance metrics like gain, efficiency, and area | Supervised BO–GPR regressor | No explicit modeling error metric reported | Demonstrated equal or superior Pareto quality with significantly reduced simulation cost |
[53] | Operational amplifier: 500 samples, Power amplifier: 500 samples | Includes performance specifications such as gain, phase margin, unity gain frequency, power-added efficiency, and output power, along with design variables such as transistor dimensions, resistors, and capacitor values | Supervised BO–GPR regressor | No explicit modeling error metric reported | Demonstrated equal or superior Pareto front quality with 4–5× fewer simulations compared to non-dominated sorting genetic algorithm—II (NSGA-II) and genetic algorithm-based sizing of parameters for analog design (GASPAD) |
[54] | Genetic algorithm: 260 population samples and 16 sampling segments | Device sizing parameters such as transistor geometry and biasing levels, along with performance metrics including gain, power, and bandwidth | Supervised BLR regressor and SVM classifier | No explicit modeling error metric reported | Demonstrated comparable or improved performance metrics with up to 1245–1518% speed-up in circuit synthesis |
[55] | Operational amplifier: 158 samples, Cascode band-pass filter circuit: 632 samples | Includes transistor widths, resistances, and capacitances, along with performance metrics such as gain, unity gain bandwidth, bandwidth, and phase margin | Supervised ANN regressor | Model error and R | <1% and 0.99 |
[56] | 50,000 training tuples | Design variables such as transistor lengths, widths, and bias voltages, as well as performance attributes like DC gain, bandwidth, phase margin, and gain margin | Reinforcement PGNN regressor | No explicit prediction error metrics reported | Model performance evaluated via convergence of the reward function, defined as a weighted combination of normalized DC gain, bandwidth, phase margin, and gain margin |
[57] | Three-stage transimpedance amplifier: 40,000 SPICE samples, Two-stage transimpedance amplifier: 50,000 SPICE samples | Circuit operating characteristics, including DC operating points and AC magnitude/phase responses, along with transistor-level parameters such as threshold voltage, transconductance, and saturation voltage | Reinforcement DDPG regressor | No explicit surrogate-model error metrics reported | Three-stage transimpedance amplifier: satisfied FOM score with a 250× reduction. Two-stage transimpedance amplifier: reached 97.1% of expert-designed bandwidth with a 25× improvement in sample efficiency |
[5] | Transimpedance amplifier: 500 samples, Operational amplifier: 1000 samples, Passive-element filter: 40 samples | Design objectives include gain, bandwidth, phase margin, power, and area, evaluated against design-space parameters such as transistor widths/lengths, capacitor values, and resistor values | Reinforcement DNN-PPO regressor | No explicit surrogate-model error metric | Achieved 40× higher sample efficiency than a genetic-algorithm baseline |
[58] | Width-over-length sweep: 65,534 samples, Transconductance-to-drain-current sweep: 7,361 samples, GA-guided local refinement: 632 samples | Design objectives include voltage gain, phase margin, unity gain frequency, power consumption, and slew rate | Supervised ANN and reinforcement DDPG regressors | Prediction score | Width-over-length sweep: 90%, Transconductance-to-drain-current sweep: 93%, GA-guided local refinement: 75.8%; DDPG 25–250× less sample-efficient than the supervised/GA pipeline |
[59] | Five seed points plus twenty iterations, each evaluating one LLM-generated and four GP-BO candidates: 105 samples | Amplifier: 14 continuous design variables (all transistor widths/lengths plus compensation resistor Rz and capacitor Cc); Comparator: 12 continuous design variables (width and length of six transistors) | Supervised LLM-guided GP–BO regressor | No explicit surrogate-model error metric | Satisfies every design specification for both amplifier and comparator, achieves the top figure-of-merit among all methods, and reaches convergence with a 4× reduction in optimization iterations |
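Several rows in the table above rely on BO-GPR: a Gaussian-process surrogate queried through an acquisition function so that each expensive simulation is spent where it is most informative. A minimal numpy-only sketch of that loop follows, with a one-dimensional toy objective standing in for a SPICE evaluation; all constants (kernel length scale, grid, iteration counts) are illustrative, not taken from any cited work.

```python
import numpy as np
from math import erf

def rbf(a, b, length=0.3):
    """Squared-exponential kernel between 1-D point sets."""
    d = a.reshape(-1, 1) - b.reshape(1, -1)
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-6):
    """GP posterior mean and std at x_test given observations."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf(x_train, x_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mu = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.clip(np.diag(rbf(x_test, x_test)) - np.sum(v**2, axis=0),
                  1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sigma, best):
    """EI for minimization: expected gain below current best."""
    z = (best - mu) / sigma
    phi = np.exp(-0.5 * z**2) / np.sqrt(2 * np.pi)
    Phi = 0.5 * (1 + np.vectorize(erf)(z / np.sqrt(2)))
    return (best - mu) * Phi + sigma * phi

def objective(x):
    # Stand-in for a simulator call (e.g., a negated figure of merit)
    return np.sin(3 * x) + 0.5 * x

rng = np.random.default_rng(0)
x_train = rng.uniform(0, 2, 4)          # initial design-of-experiment
y_train = objective(x_train)
grid = np.linspace(0, 2, 201)           # candidate design points

for _ in range(10):                     # one "simulation" per iteration
    mu, sigma = gp_posterior(x_train, y_train, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y_train.min()))]
    x_train = np.append(x_train, x_next)
    y_train = np.append(y_train, objective(x_next))

print("best x:", x_train[np.argmin(y_train)], "f:", y_train.min())
```

The practical appeal, as the table's "fewer simulations" results suggest, is that the surrogate is cheap to evaluate everywhere while the true objective is only sampled a handful of times.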
Cite | Data Size | Observed Features | AI Method | Evaluation | Results |
---|---|---|---|---|---|
[60] | Low-noise amplifier: 235 samples | Bandwidth, 1-dB compression point, center frequency, third-order intercept point, noise figure, forward gain, transistor width/length ratio, source inductance, bias voltage | Supervised ANN regressor | Prediction accuracy | Matching networks: 99.22% accuracy; transistor geometries: 95.23% accuracy |
[61] | 20,000 samples after cleaning and normalization | Current and power consumption, direct current gain, phase margin, gain–bandwidth product, slew rate, total harmonic distortion, common-mode rejection ratio, power supply rejection ratio, output and input voltage range, output resistance, input referred noise | Supervised FNN regressor | Average prediction accuracy | 92.3% |
[62] | Original: 16,600 samples, Augmented: 700,000 samples | Direct current gain, supply current consumption, gain bandwidth, phase margin | Supervised ANN regressor | MSE and MAE | 0.0123 and 0.0750 |
[63] | Not precisely specified; 5,000–40,000 samples | Gain, bandwidth, power consumption | Supervised DNN regressor | Average match rate and RMSE | 95.6% and ≈30 |
[64] | Pareto-optimal circuit sizing solutions: 700 samples | Load capacitance, circuit performance measures such as gain, unity gain frequency, power consumption, phase margin, widths and lengths | Supervised MP and ANN regressors | MAE | MP: 0.00807 and ANN: 0.00839 |
[65] | SPICE: 7,409 samples | Gain, phase margin, unity gain bandwidth, area, slew rate, power consumption, widths and lengths of selected transistors, bias current, and compensation capacitor | Supervised ANN regressor | MSE and R² | ≈4.26 × 10⁻¹⁰ and 94.29% |
[66] | LT-SPICE: 40,000 samples | DC gain, phase margin, unity gain bandwidth, slew rate, area, transistor widths and lengths, compensation capacitor value, and bias current | Supervised FNN regressor | MAE | 0.160 |
Cite | Data Size | Observed Features | AI Method | Evaluation | Results |
---|---|---|---|---|---|
[67] | Electroencephalogram segments: 4,098 samples | Frequency-domain energy in multiple bands and time-domain characteristics | Supervised ANN classifier | Accuracy, Recall | ≈96% and ≈95% |
[68] | Post-layout simulated designs: 338 samples | Design parameters of an optical link receiver front-end in 14 nm technology, including resistor dimensions, capacitor dimensions, and transistor properties such as number of fins and number of fingers; performance metrics such as gain, bandwidth, and other specification errors | Supervised DNN classifier | No explicit surrogate-model evaluation metric reported | Specification-compliant design in 338 simulations, 7.1 h, over 200× faster than baseline |
[69] | Synthetic floorplans with 5–20 devices for training, an 11-device circuit for evaluation, and no explicit total sample count reported | Topological relationships between devices represented as sequence pairs; device parameters such as dimensions, shape configurations, symmetry, and alignment; performance objectives including occupied area and half-perimeter wirelength | Reinforcement DNN classifier | No explicit model-level evaluation metric reported | Reduced refinement effort by 95.8% and produced layouts 13.8% smaller and 14.9% shorter than manual designs |
[70] | No explicit dataset size reported, multiple analog circuit topologies, including telescopic and folded cascode operational amplifiers | Gain, phase margin, power consumption, slew rate, transistor dimensions, bias currents, and compensation capacitance | Reinforcement DNN regressor | No explicit model-level evaluation metric reported | Fewer simulations, demonstrating higher sample efficiency in sizing multiple analog amplifier topologies |
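The RL-based flows summarized in this table share a propose, simulate, score, and update loop. As a hedged stand-in for those methods (not an implementation of any cited work), a cross-entropy search over two hypothetical sizing parameters shows the loop's shape, with a toy quadratic reward replacing the simulator:

```python
import numpy as np

def reward(params, target=np.array([4.0, 0.5])):
    # Stand-in for a SPICE evaluation of a candidate sizing (w, l);
    # the target values are invented for illustration.
    return -np.sum((params - target) ** 2, axis=-1)

rng = np.random.default_rng(1)
mu = np.array([1.0, 1.0])      # initial sizing guess
sigma = np.array([2.0, 2.0])   # search spread

for _ in range(30):
    cand = rng.normal(mu, sigma, size=(64, 2))   # propose candidate sizings
    r = reward(cand)                             # "simulate" and score
    elite = cand[np.argsort(r)[-8:]]             # keep the top 8
    mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-3  # update

print("found sizing:", mu)  # approaches the target (4.0, 0.5)
```

The cited methods replace this naive search distribution with learned policies or value networks, which is where their sample-efficiency gains over simulator-in-the-loop baselines come from.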
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Liakos, K.G.; Plessas, F. Analog Design and Machine Learning: A Review. Electronics 2025, 14, 3541. https://doi.org/10.3390/electronics14173541