Multi-Fidelity Surrogate Models for Accelerated Multi-Objective Analog Circuit Design and Optimization
Abstract
1. Introduction
- Multi-surrogate model layer: Rather than pre-training a large foundation model, we introduce a multi-surrogate reconfigurable model based on ensemble learning and graph neural networks (GNNs), trained and incrementally updated on run-specific Ngspice [35] data (with optional warm start) covering multiple circuit families and parameter distributions. This model is designed for rapid adaptation (fine-tuning) to new targets with minimal additional simulations.
- Simulator-in-the-loop orchestration: The surrogate is embedded in a multi-fidelity optimization loop that periodically validates and corrects predictions with Ngspice simulations, ensuring that model drift and overconfidence are detected and mitigated. The loop also includes deterministic caching, deduplication, and selective parallel dispatch of uncached simulations, and it drives early stopping via dual convergence metrics, hypervolume (HV) expansion and raw IGD reduction, to monitor Pareto spread and proximity while detecting drift or overconfidence (a minimal sketch of the HV-based stopping test follows this list).
- Uncertainty-aware control: Predictions are augmented with explicit epistemic uncertainty estimates, which are incorporated into both penalization and fidelity selection, improving robustness against erroneous surrogate optima.
- Active acquisition and co-tuning: Optional active learning queries and DoE (Design of Experiments) seeding (correlated LHS, mixed strategies) are coupled with joint tuning of surrogate and NSGA-II hyperparameters through Optuna, a general-purpose, define-by-run hyperparameter optimization framework [36], adaptively steering both data collection and multi-objective search dynamics online.
- Extensibility and reproducibility: The platform is designed as a modular, component-oriented system with strong governance features—event manifests, structured logging, and schema-controlled data storage—supporting reproducibility, auditability, and rapid experimentation.
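To make the HV-based stopping test concrete, the sketch below computes the dominated hypervolume of a two-objective front from first principles and stops when its relative expansion falls below a tolerance (0.01 in the experiments of Table 2). This is a minimal illustration under the assumption that both objectives are cast as minimization; it is not the platform's actual implementation.

```python
import numpy as np

def hypervolume_2d(front: np.ndarray, ref: np.ndarray) -> float:
    """Hypervolume dominated by a 2-objective front (minimization)
    with respect to a reference point `ref` worse than every point."""
    # Keep only points that strictly dominate the reference point.
    pts = front[np.all(front < ref, axis=1)]
    if len(pts) == 0:
        return 0.0
    # Sort by the first objective and sweep rectangles left to right.
    pts = pts[np.argsort(pts[:, 0])]
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:  # only non-dominated steps contribute area
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

def should_stop(hv_history: list, tol: float = 0.01) -> bool:
    """Early stopping when relative HV expansion stalls below `tol`."""
    if len(hv_history) < 2 or hv_history[-2] == 0.0:
        return False
    return abs(hv_history[-1] - hv_history[-2]) / hv_history[-2] < tol
```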
2. Materials and Methods
2.1. System Architecture
2.2. Application Lifecycle
2.3. Orchestration and Modules Interaction
2.4. Cache Subsystem
2.5. Closed-Loop Optimization Algorithm
- Candidate Generation: A pool $C_t$ of size $N_c$ is drawn from the design domain $\mathcal{X}$, typically using Latin Hypercube Sampling or a uniform fallback.
- Surrogate Inference: Each candidate $x \in C_t$ is evaluated by the surrogate model(s), producing mean predictions $\hat{y}(x)$ and uncertainty estimates $\hat{\sigma}(x)$.
- Acquisition Scoring: The candidates are scored according to the active learning acquisition functions (uncertainty, diversity, hybrid, or hypervolume improvement). The scoring step identifies a batch $B_t$ of promising individuals for the evolutionary operators.
- Evolutionary Update: The batch $B_t$ is combined with the current population $P_t$ and evolved under the configured evolutionary algorithm (e.g., NSGA-II, MOEA/D, or SPEA2), producing a new population $P_{t+1}$ of size $N$.
- Fidelity Selection: From the evolved set, a verification mask is applied to decide which individuals should be evaluated by high-fidelity SPICE simulations. The mask depends on both uncertainty thresholds and verification quotas, with an optional bias toward predicted non-dominated solutions. The resulting set is denoted $V_t$.
- SPICE Evaluation and Dataset Update: All $x \in V_t$ are evaluated with SPICE, yielding ground-truth objectives $y(x)$. The dataset is updated as $\mathcal{D}_{t+1} = \mathcal{D}_t \cup \{(x, y(x)) : x \in V_t\}$.
- Surrogate Retraining: The surrogate parameters $\theta$ are updated using the expanded dataset $\mathcal{D}_{t+1}$. This step reduces predictive bias and variance in regions that were previously uncertain.
- Archive Maintenance: The non-dominated set (Pareto archive) is updated as $\mathcal{A}_{t+1} = \mathrm{ND}(\mathcal{A}_t \cup P_{t+1})$, where $\mathrm{ND}(\cdot)$ returns the non-dominated subset.
Algorithm 1. Closed-loop surrogate optimization.

1. Inputs: design domain $\mathcal{X}$; objectives $f(\cdot)$ via SPICE; surrogates $S$ (Ensemble, GNN); population size $N$; pool size $N_c$; uncertainty threshold $\tau$; verification quota $q$; generation budget $T$; HV tolerance $\varepsilon$
2. Procedure:
3. # Initialization
4. $\mathcal{D}_0 \leftarrow$ DoE seed drawn from $\mathcal{X}$ (LHS)  # warm-start candidate data
5. $y_0 \leftarrow f(\mathcal{D}_0)$  # evaluate with SPICE
6. Train $S$ on $(\mathcal{D}_0, y_0)$
7. $P_0 \leftarrow$ initial population; $\mathcal{A}_0 \leftarrow \mathrm{ND}(P_0)$
8. # Optimization loop
9. for $t = 0, \dots, T-1$ do
10.   $C_t \leftarrow$ sample $N_c$ candidates from $\mathcal{X}$  # candidate pool
11.   $(\hat{y}, \hat{\sigma}) \leftarrow S(C_t)$  # surrogate inference
12.   $B_t \leftarrow$ top-scoring batch of $C_t$  # acquisition
13.   $P_{t+1} \leftarrow \mathrm{EA}(P_t \cup B_t)$  # evolutionary step
14.   # Fidelity selection
15.   $V_t \leftarrow \{x \in P_{t+1} : \hat{\sigma}(x) > \tau\}$, capped by quota $q$  # threshold + quota
16.   $y_t \leftarrow f(V_t)$  # expensive evaluations
17.   # Surrogate update
18.   $\mathcal{D}_{t+1} \leftarrow \mathcal{D}_t \cup \{(x, y_t(x)) : x \in V_t\}$  # dataset update
19.   Retrain $S$ on $\mathcal{D}_{t+1}$
20.   # Archive update
21.   $\mathcal{A}_{t+1} \leftarrow \mathrm{ND}(\mathcal{A}_t \cup P_{t+1})$
22.   # Stopping condition
23.   if relative HV change $< \varepsilon$ then
24.     break
25.   end if
26. end for
27. return $\mathcal{A}_T$  # return best/terminal Pareto set
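The following Python sketch mirrors the structure of Algorithm 1. Every helper here (`sample_pool`, `spice`, `surrogate`, `evolve`, `non_dominated`) is a hypothetical stand-in for the platform's components, not its actual API; the numeric defaults loosely follow the settings of Table 2.

```python
import numpy as np

def closed_loop(spice, surrogate, sample_pool, evolve, non_dominated,
                generations=150, pool_size=200, tau=0.2, quota=0.25):
    # --- Initialization: DoE warm start evaluated at high fidelity ---
    X = sample_pool(100)
    Y = spice(X)
    surrogate.fit(X, Y)
    pop = sample_pool(50)

    for _ in range(generations):
        cand = sample_pool(pool_size)                 # candidate pool
        mu, sigma = surrogate.predict(cand)           # inference + uncertainty
        batch = cand[np.argsort(-sigma.max(axis=1))[:20]]  # uncertainty acquisition
        pop = evolve(pop, batch, surrogate)           # evolutionary step

        # --- Fidelity selection: uncertainty threshold plus a hard quota ---
        _, s = surrogate.predict(pop)
        mask = s.max(axis=1) > tau
        budget = int(quota * len(pop))
        verify = pop[mask][:budget]

        if len(verify):                               # expensive evaluations
            y_true = spice(verify)
            X = np.vstack([X, verify])                # dataset update
            Y = np.vstack([Y, y_true])
            surrogate.fit(X, Y)                       # retrain on expanded data

    return non_dominated(pop, spice(pop))             # terminal Pareto set
```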
2.5.1. Problem Statement
- If one design is strictly better than another in every metric, the worse one is discarded.
- If two designs trade off (i.e., one exhibits better gain, the other better power consumption), both remain as valid Pareto-optimal solutions.
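A minimal sketch of this dominance rule, assuming all objectives are cast as minimization (a maximized metric such as gain would be negated first); the `pareto_front` helper is a naive O(n²) illustration, not the platform's archive implementation.

```python
import numpy as np

def dominates(a: np.ndarray, b: np.ndarray) -> bool:
    """True if design `a` dominates `b` (all objectives minimized):
    `a` is no worse in every metric and strictly better in at least one."""
    return bool(np.all(a <= b) and np.any(a < b))

def pareto_front(F: np.ndarray) -> np.ndarray:
    """Indices of the non-dominated rows of the objective matrix F."""
    keep = []
    for i, fi in enumerate(F):
        if not any(dominates(fj, fi) for j, fj in enumerate(F) if j != i):
            keep.append(i)
    return np.array(keep)
```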
2.5.2. Surrogate Models
2.5.3. Ensemble Surrogate Model
The ensemble surrogate combines heterogeneous base regressors:
- Multilayer Perceptrons (MLPs) (when PyTorch is available) or Random Forests (RFs) (when scikit-learn is available), or
- lightweight linear models when computational resources are constrained or no external libraries are present.

The base regressors are aggregated through one of the following strategies:
- Bagging (Bootstrap Aggregating): each regressor is trained on a resampled dataset.
- Boosting (e.g., AdaBoost): regressors are trained sequentially, with later ones focusing on samples that earlier models mispredicted.
- Stacking: a meta-model is trained to combine the outputs of the base regressors.

For a design point $x$, the ensemble prediction for objective $j$ and its epistemic uncertainty follow from the $M$ base regressors:
$$\hat{y}_j(x) = \frac{1}{M}\sum_{m=1}^{M} \hat{y}_j^{(m)}(x), \qquad \sigma_j(x) = \sqrt{\frac{1}{M}\sum_{m=1}^{M}\left(\hat{y}_j^{(m)}(x) - \hat{y}_j(x)\right)^2}.$$
For each performance metric $j$ (say, power, area, or gain error), the corresponding $\sigma_j(x)$ measures how much the ensemble disagrees about that objective's value at $x$:
- A small $\sigma_j(x)$ means all regressors roughly agree: the surrogate is confident about its prediction for that metric.
- A large $\sigma_j(x)$ means the regressors disagree: the surrogate is uncertain, so we should be cautious.
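A minimal sketch of this per-objective disagreement estimate, assuming one bagged Random Forest per metric and using the per-tree spread as $\sigma_j(x)$; the platform's actual ensemble classes and aggregation strategies differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

class EnsembleSurrogate:
    """One bagged regressor per objective; predict() returns
    (mean, std) with one column per performance metric."""

    def __init__(self, n_objectives: int):
        self.models = [RandomForestRegressor(n_estimators=50)
                       for _ in range(n_objectives)]

    def fit(self, X: np.ndarray, Y: np.ndarray) -> None:
        for j, m in enumerate(self.models):
            m.fit(X, Y[:, j])

    def predict(self, X: np.ndarray):
        mu, sigma = [], []
        for m in self.models:
            # Per-tree predictions expose the ensemble spread sigma_j(x).
            per_tree = np.stack([t.predict(X) for t in m.estimators_])
            mu.append(per_tree.mean(axis=0))
            sigma.append(per_tree.std(axis=0))
        return np.column_stack(mu), np.column_stack(sigma)
```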
2.5.4. GNN-Based Surrogate Model
2.5.5. Integration in the Optimization Loop
2.5.6. Candidate Generation and Acquisition
2.5.7. Evolutionary Algorithm Layer
2.5.8. Adaptive Fidelity Controller
2.6. Uncertainty-Aware Penalization
- Uncertainty threshold $\tau$ (user-defined through the CLI): global cutoff used by the adaptive fidelity controller. If $\sigma(x) > \tau$, the candidate is marked for SPICE verification (unless quota rules apply).
- Penalty threshold ($\tau_p$): cutoff used inside the penalization term. By default, $\tau_p = \tau$. If the user provides a separate value via the CLI, it overrides the default.
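The sketch below illustrates how these two thresholds could interact, assuming a simple additive penalty on uncertainty beyond $\tau_p$ and a quota-capped verification mask; the exact penalty shape and coefficients used by the platform are not reproduced here.

```python
import numpy as np

def penalized_objectives(mu, sigma, tau_p, k=1.0):
    """Inflate predicted (minimized) objectives where the surrogate is
    uncertain, discouraging convergence toward spurious surrogate optima."""
    excess = np.maximum(sigma - tau_p, 0.0)   # only penalize beyond tau_p
    return mu + k * excess

def verification_indices(sigma, tau, quota, n_pop):
    """Fidelity mask: candidates exceeding tau, capped at a quota."""
    worst = sigma.max(axis=1)                 # most uncertain objective
    idx = np.where(worst > tau)[0]
    budget = int(quota * n_pop)
    order = np.argsort(-worst[idx])           # prioritize highest uncertainty
    return idx[order][:budget]
```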
3. Results
3.1. Circuits Under Test
3.2. Simulation Set-Up
3.3. Results and Analysis
3.3.1. MOEA/D Algorithm
3.3.2. Surrogate-Guided Optimization
3.3.3. Multi-Fidelity Adaptive Optimization
4. Discussion
4.1. Summary of Key Observations
4.2. Improving Result Quality
4.2.1. Algorithm-Specific Tuning
4.2.2. Adaptive Data Enrichment
4.2.3. Verification Cadence and Batch Size
4.3. Improving Runtime Performance
4.3.1. Process-Based Parallelism
4.3.2. Mitigating I/O Bottlenecks
4.3.3. Inference Optimization
4.4. Overall Assessment
5. Conclusions
Supplementary Materials
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
| AST | Abstract Syntax Tree |
| BO | Bayesian Optimization |
| CDF | Cumulative Distribution Function |
| CLI | Command Line Interface |
| DoE | Design of Experiments |
| DSL | Domain Specific Language |
| DUT | Device Under Test |
| EA | Evolutionary Algorithm |
| EBNF | Extended Backus–Naur Form |
| EI | Expected Improvement |
| GA | Genetic Algorithm |
| GAT | Graph Attention Network |
| GCN | Graph Convolutional Network |
| GNN | Graph Neural Network |
| GP | Gaussian Process |
| HF | High Fidelity |
| HPO | Hyperparameter Optimization |
| HV | Hypervolume |
| IGD | Inverted Generational Distance |
| LHS | Latin Hypercube Sampling |
| LF | Low Fidelity |
| ML | Machine Learning |
| MLP | Multilayer Perceptron |
| MOEA | Multi-objective Evolutionary Algorithm |
| MOEA/D | Multi-objective Evolutionary Algorithm based on Decomposition |
| MSM | Multi-fidelity Surrogate Model |
| NSGA-II/-III | Non-dominated Sorting Genetic Algorithm II/III |
| PDF | Probability Density Function |
| RBF | Radial Basis Function |
| RF | Random Forest |
| SAEA | Surrogate-assisted Evolutionary Algorithm |
| SAO | Surrogate-assisted Optimization |
| SBX | Simulated Binary Crossover |
| SPEA2 | Strength Pareto Evolutionary Algorithm 2 |
| TTL | Time To Live |
References
- Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A Fast and Elitist Multiobjective Genetic Algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197. [Google Scholar] [CrossRef]
- Zitzler, E.; Thiele, L.; Laumanns, M.; Fonseca, C.M.; da Fonseca, V.G. Performance Assessment of Multiobjective Optimizers: An Analysis and Review. IEEE Trans. Evol. Comput. 2003, 7, 117–132. [Google Scholar] [CrossRef]
- Liu, S.; Wang, H.; Peng, W. Surrogate-Assisted Evolutionary Algorithms for Expensive Combinatorial Optimization: A Survey. Complex. Intell. Syst. 2024, 10, 5933–5949. [Google Scholar] [CrossRef]
- Yu, H.; Gong, Y.; Kang, L.; Sun, C.; Zeng, J. Dual-Drive Collaboration Surrogate-Assisted Evolutionary Algorithm by Coupling Feature Reduction and Reconstruction. Complex. Intell. Syst. 2024, 10, 171–191. [Google Scholar] [CrossRef]
- Chugh, T.; Rahat, A.; Volz, V.; Zaefferer, M. Towards Better Integration of Surrogate Models and Optimizers. In High-Performance Simulation-Based Optimization; Bartz-Beielstein, T., Filipič, B., Korošec, P., Talbi, E.G., Eds.; Studies in Computational Intelligence; Springer: Cham, Switzerland, 2020; Volume 833, pp. 119–139. [Google Scholar] [CrossRef]
- Jin, Y. Surrogate-Assisted Evolutionary Computation: Recent Advances and Future Challenges. Swarm Evol. Comput. 2011, 1, 61–70. [Google Scholar] [CrossRef]
- Ruan, X.; Li, K.; Derbel, B.; Liefooghe, A. Surrogate Assisted Evolutionary Algorithm for Medium Scale Multi-Objective Optimisation Problems. In Proceedings of the 2020 Genetic and Evolutionary Computation Conference (GECCO ’20), New York, NY, USA, 8–12 July 2020; pp. 560–568. [Google Scholar] [CrossRef]
- Binois, M.; Wycoff, N. A Survey on High-dimensional Gaussian Process Modeling with Application to Bayesian Optimization. ACM Trans. Evol. Learn. Optim. 2022, 2, 1–26. [Google Scholar] [CrossRef]
- Gammel, W.P.; Sauppe, J.P.; Bradley, P. A Gaussian Process Based Surrogate Approach for the Optimization of Cylindrical Targets. Phys. Plasmas 2024, 31, 072705. [Google Scholar] [CrossRef]
- Zhang, S.; Lyu, W.; Yang, F.; Yan, C.; Zhou, D.; Zeng, X. Bayesian Optimization Approach for Analog Circuit Synthesis Using Neural Network. In Proceedings of the 2019 Design, Automation & Test in Europe Conference & Exhibition (DATE), Florence, Italy, 25–29 March 2019; pp. 1463–1468. [Google Scholar] [CrossRef]
- Huang, G.; Hu, J.; He, Y.; Liu, J.; Ma, M.; Shen, Z.; Wu, J.; Xu, Y.; Zhang, H.; Zhong, K.; et al. Machine Learning for Electronic Design Automation: A Survey. ACM Trans. Des. Autom. Electron. Syst. 2021, 26, 40. [Google Scholar] [CrossRef]
- Rashid, R.; Krishna, K.; George, C.P.; Nambath, N. Machine Learning Driven Global Optimisation Framework for Analog Circuit Design. Microelectron. J. 2024, 151, 106362. [Google Scholar] [CrossRef]
- Fan, S.; Lu, H.; Zhang, S.; Cao, N.; Zhang, X.; Li, J. Graph-Transformer-based Surrogate Model for Accelerated Converter Circuit Topology Design. In Proceedings of the 61st ACM/IEEE Design Automation Conference (DAC ’24), New York, NY, USA, 23–27 June 2024; Article 172, pp. 1–6. [Google Scholar] [CrossRef]
- Ho, J.; Boyle, J.A.; Liu, L.; Gerstlauer, A. LASANA: Large-Scale Surrogate Modeling for Analog Neuromorphic Architecture Exploration. arXiv 2025, arXiv:2507.10748. [Google Scholar] [CrossRef]
- Hakhamaneshi, K.; Nassar, M.; Phielipp, M.; Abbeel, P.; Stojanovic, V. Pretraining Graph Neural Networks for Few-Shot Analog Circuit Modeling and Design. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 2023, 42, 2163–2173. [Google Scholar] [CrossRef]
- El Sayed, Z.; Wang, Z.; Selmani, H.; Knechtel, J.; Sinanoglu, O.; Alrahis, L. Graph Neural Networks for Integrated Circuit Design, Reliability, and Security: Survey and Tool. ACM Comput. Surv. 2025, 58, 1–44. [Google Scholar] [CrossRef]
- Ren, H.; Kokai, G.F.; Turner, W.J.; Ku, T.-S. ParaGraph: Layout Parasitics and Device Parameter Prediction using Graph Neural Networks. In Proceedings of the 57th ACM/IEEE Design Automation Conference (DAC), San Francisco, CA, USA, 20–24 July 2020; pp. 1–6. [Google Scholar] [CrossRef]
- Liu, M.; Turner, W.J.; Kokai, G.F.; Khailany, B.; Pan, D.Z.; Ren, H. Parasitic-Aware Analog Circuit Sizing with Graph Neural Networks and Bayesian Optimization. In Proceedings of the 2021 Design, Automation & Test in Europe Conference & Exhibition (DATE), Grenoble, France, 1–5 February 2021; pp. 1372–1377. [Google Scholar] [CrossRef]
- Li, C.; Hu, D.; Zhang, X. Pre-Layout Parasitic-Aware Design Optimizing for RF Circuits Using Graph Neural Network. Electronics 2023, 12, 465. [Google Scholar] [CrossRef]
- Dong, Z.; Cao, W.; Zhang, M.; Tao, D.; Chen, Y.; Zhang, X. CKTGNN: Circuit Graph Neural Network for Electronic Design Automation. In Proceedings of the International Conference on Learning Representations (ICLR 2023), Kigali, Rwanda, 1–5 May 2023; Available online: https://par.nsf.gov/servlets/purl/10408869 (accessed on 18 November 2025).
- Lyu, W.; Yang, F.; Yan, C.; Zhou, D.; Zeng, X. An Efficient Bayesian Optimization Approach for Automated Optimization of Analog Circuits. IEEE Trans. Circuits Syst. I Regul. Pap. 2018, 65, 1954–1967. [Google Scholar] [CrossRef]
- Lyu, W.; Yang, F.; Yan, C.; Zhou, D.; Zeng, X. Batch Bayesian Optimization via Multi-Objective Acquisition Ensemble for Automated Analog Circuit Design. Proceedings of Machine Learning Research. In Proceedings of the 35th International Conference on Machine Learning, PMLR, Stockholm, Sweden, 10–15 July 2018; Volume 80, pp. 3306–3314. Available online: https://proceedings.mlr.press/v80/lyu18a.html (accessed on 14 August 2025).
- Yin, Y.; Wang, Y.; Xu, B.; Li, P. ADO-LLM: Analog Design Bayesian Optimization with In-Context Learning of Large Language Models. In Proceedings of the 43rd IEEE/ACM International Conference on Computer-Aided Design (ICCAD ’24), Newark, NJ, USA, 27–31 October 2024; Association for Computing Machinery: New York, NY, USA, 2024; Article 81, pp. 1–9. [Google Scholar] [CrossRef]
- Hernández-Lobato, D.; Hernández-Lobato, J.; Shah, A.; Adams, R. Predictive Entropy Search for Multi-Objective Bayesian Optimization. Proceedings of Machine Learning Research. In Proceedings of the 33rd International Conference on Machine Learning, PMLR, New York, NY, USA, 19–24 June 2016; Volume 48, pp. 1492–1501. Available online: https://proceedings.mlr.press/v48/hernandez-lobatoa16.html (accessed on 14 August 2025).
- Poddar, S.; Oh, Y.; Lai, Y.; Zhu, H.; Hwang, B.; Pan, D.Z. INSIGHT: Universal Neural Simulator for Analog Circuits Harnessing Autoregressive Transformers. arXiv 2024. [Google Scholar] [CrossRef]
- Deb, K.; Thiele, L.; Laumanns, M.; Zitzler, E. Scalable Test Problems for Evolutionary Multiobjective Optimization. In Evolutionary Multiobjective Optimization; Abraham, A., Jain, L., Goldberg, R., Eds.; Advanced Information and Knowledge Processing; Springer: London, UK, 2005; pp. 105–145. [Google Scholar] [CrossRef]
- Forrester, A.I.J.; Sóbester, A.; Keane, A.J. Multi-Fidelity Optimization via Surrogate Modelling. Proc. R. Soc. A Math. Phys. Eng. Sci. 2007, 463, 3251–3269. [Google Scholar] [CrossRef]
- Fernández-Godino, M.G. Review of Multi-Fidelity Models. Adv. Comput. Sci. Eng. 2023, 1, 351–400. [Google Scholar] [CrossRef]
- Leng, J.-X.; Feng, Y.; Huang, W.; Shen, Y.; Wang, Z.-G. Variable-Fidelity Surrogate Model Based on Transfer Learning and Its Application in Multidisciplinary Design Optimization of Aircraft. Phys. Fluids 2024, 36, 017131. [Google Scholar] [CrossRef]
- Míchal, J.; Dobeš, J. Parallelized A Posteriori Multiobjective Optimization in RF Design. Electronics 2023, 12, 2343. [Google Scholar] [CrossRef]
- Xu, Z.; Zhao, Z.; Liu, J. Deterministic Multi-Objective Optimization of Analog Circuits. Electronics 2024, 13, 2510. [Google Scholar] [CrossRef]
- Wei, Y.; Qi, G.; Wang, Y.; Yan, N.; Zhang, Y.; Feng, L. Efficient Microwave Filter Design by a Surrogate-Model-Assisted Decomposition-Based Multi-Objective Evolutionary Algorithm. Electronics 2022, 11, 3309. [Google Scholar] [CrossRef]
- Li, Y.; Sun, J.; Wang, Z. Multiobjective Parallel Optimization of Wideband Microwave Filters Using Adaptive Kriging and Evolutionary Algorithms. Electronics 2023, 12, 4607. [Google Scholar] [CrossRef]
- Raza, A.; Yang, L.; Abbas, R. A Surrogate-Assisted Evolutionary Multiobjective Framework for RF Power Amplifier Design. Electronics 2023, 12, 2471. [Google Scholar] [CrossRef]
- Ngspice Development Team. Ngspice: Open-Source Spice Simulator. SourceForge. Available online: https://ngspice.sourceforge.io/ (accessed on 14 August 2025).
- Akiba, T.; Sano, S.; Yanase, T.; Ohta, T.; Koyama, M. Optuna: A Next-Generation Hyperparameter Optimization Framework. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD ’19), Anchorage, AK, USA, 4–8 August 2019; Association for Computing Machinery: New York, NY, USA, 2019; pp. 2623–2631. [Google Scholar] [CrossRef]
- Cornetta, G.; Touhafi, A.; Contreras, J.; Zaragoza, A. Supplementary Materials for “Multi-Fidelity Surrogate Models for Accelerated Multi-Objective Analog Circuit Design and Optimization”. 2025. Available online: https://zenodo.org/records/17503736 (accessed on 20 November 2025).
- Salvaire, F. PySpice v1.5: Python Interface to Ngspice and Xyce Circuit Simulators. PySpice Documentation. 2021. Available online: https://pyspice.fabrice-salvaire.fr/releases/v1.5/ (accessed on 14 August 2025).
- Rocklin, M. Dask: Parallel Computation with Blocked Algorithms and Task Scheduling. In Proceedings of the 14th Python in Science Conference (SciPy 2015), Austin, TX, USA, 6–12 July 2015; pp. 130–136. Available online: https://proceedings.scipy.org/articles/Majora-7b98e3ed-013.pdf (accessed on 29 October 2025).
- Zhou, X.; Wang, X.; Gu, X. A Decomposition-Based Multiobjective Evolutionary Algorithm with Weight Vector Adaptation. Swarm Evol. Comput. 2021, 61, 100825. [Google Scholar] [CrossRef]
- Asilian Bidgoli, A.; Rahnamayan, S.; Erdem, B.; Mohammadi, A. Machine Learning-Based Framework to Cover Optimal Pareto-Front in Many-Objective Optimization. Complex. Intell. Syst. 2022, 8, 5287–5308. [Google Scholar] [CrossRef]
- Luan, Y.; He, J.; Yang, J.; Lan, X.; Yang, G. Uniformity-Comprehensive Multiobjective Optimization Evolutionary Algorithm Based on Machine Learning. Int. J. Intell. Syst. 2023, 2023, 1666735. [Google Scholar] [CrossRef]
- Zhang, X.; Li, G.; Lin, X.; Zhang, Y.; Chen, Y.; Zhang, Q. Gliding over the Pareto Front with Uniform Designs. In Advances in Neural Information Processing Systems; Globerson, A., Mackey, L., Belgrave, D., Fan, A., Paquet, U., Tomczak, J., Zhang, C., Eds.; Curran Associates, Inc.: New York, NY, USA, 2024; Volume 37, pp. 2215–2245. [Google Scholar] [CrossRef]
- Komosinski, M.; Mensfelt, A. Enhancing Quality–Diversity Optimization Through Domain-Specific Dissimilarity as Crowding Distance. In Proceedings of the Genetic and Evolutionary Computation Conference, Málaga, Spain, 14–18 July 2025; ACM: New York, NY, USA, 2025. [Google Scholar] [CrossRef]
| Design | Device(s) | Sampling Range |
|---|---|---|
| Folded cascode (1 μm) | M1, M2, M5, M6, M9–M12 | W: 1–40 μm, L: 1–3 μm |
| | M3, M4 | W: 1–80 μm, L: 1–3 μm |
| | M7, M8 | W: 1–20 μm, L: 1–3 μm |
| Telescopic cascode (1 μm) | M1–M4 | W: 1–20 μm, L: 1–3 μm |
| | M5–M10, M12, M13 | W: 1–40 μm, L: 1–3 μm |
| | M11 | W: 1–20 μm, L: 1–10 μm |
| RC-compensated op-amp (50 nm) | M1–M9 | W: 0.1–1 μm, L: 50–200 nm |
| | R | 1–100 kΩ |
| | C | 0.1–0.5 pF |
| Transistor-compensated op-amp (50 nm) | M1, M2, M7, M10–M13 | W: 1–5 μm, L: 50–200 nm |
| | M3–M6, M8, M9 | W: 1–10 μm, L: 50–200 nm |
| | C | 0.1–0.5 pF |
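The DoE seeding described in Section 1 draws correlated LHS samples over ranges like those above. A Gaussian-copula sketch of that sampling follows, with the weak correlation $\rho \in [0.1, 0.2]$ of Table 2; the exact platform implementation (and whether it uses a copula or a rank-based method) may differ, and the example ranges are those of M1 in the folded cascode.

```python
import numpy as np
from scipy.stats import norm, qmc

def correlated_lhs(n, lo, hi, rho=0.15, seed=101):
    """LHS in [0,1]^d pushed through a Gaussian copula to impose a
    weak positive correlation, then scaled to the device ranges."""
    d = len(lo)
    u = qmc.LatinHypercube(d=d, seed=seed).random(n)   # stratified samples
    z = norm.ppf(u)                                    # to normal scores
    corr = np.full((d, d), rho) + (1.0 - rho) * np.eye(d)
    z = z @ np.linalg.cholesky(corr).T                 # impose correlation
    u = norm.cdf(z)                                    # back to [0, 1]
    return np.asarray(lo) + u * (np.asarray(hi) - np.asarray(lo))

# e.g., 100 (W, L) samples for M1 of the folded cascode (values in metres)
samples = correlated_lhs(100, lo=[1e-6, 1e-6], hi=[40e-6, 3e-6])
```

Note that the Cholesky transform slightly perturbs the exact LHS stratification; this is the usual trade-off of copula-based correlated sampling.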
| Parameter | Value/Setting | Description |
|---|---|---|
| Hardware | MacBook Pro (2.6 GHz Intel Core i7, 16 GB RAM) | Host system for all simulations |
| Objectives | Gain (20–100 dB), Bandwidth (1 kHz–1 GHz) | Primary optimization targets |
| Algorithms | NSGA-II/NSGA-III/SPEA2/MOEA/D | Multi-objective evolutionary algorithms |
| Population/Generations | 200/150 (SGO & MF); 100/50 (SPICE-only) | GA configuration for main and baseline runs |
| Crossover/Mutation | 0.9/0.1 | Probabilities for genetic operators |
| Tournament Size | 2 | Number of competitors per selection round |
| Early Stopping | 0.01 (relative hypervolume change) | Convergence criterion |
| AC Analysis | 1 Hz–1 GHz (100 points, 27 °C) | Frequency response for gain and bandwidth |
| Transient Analysis | 1 ns step, 2 µs duration | Slew-rate evaluation |
| SPICE Tolerances | reltol = 1 × 10⁻³, abstol = 1 × 10⁻¹², vntol = 1 × 10⁻⁶ | Numerical solver settings |
| Parallelism | Enabled (processes or local Dask cluster) | Concurrent circuit evaluations |
| Cache Precision | 6 decimal places | Design deduplication threshold |
| DoE Sampling | 100 Monte Carlo samples | Training data for surrogate and MF models |
| Parameter Ranges | See Table 1 | Monte Carlo design space |
| Correlation | ρ ∈ [0.1, 0.2] | Weak positive correlation |
| Random Seed | Fixed (across all runs) and set to 101 | Ensures fair cross-algorithm comparison |
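The "Cache Precision" row implies that deterministic caching keys each design point on its parameters rounded to six decimal places, so numerically identical designs deduplicate to a single simulation. A minimal sketch of such a key (assuming parameters expressed in their natural units, e.g., μm and pF); the real cache subsystem of Section 2.4 is more elaborate.

```python
import hashlib
import json

def cache_key(params: dict, precision: int = 6) -> str:
    """Deterministic key: sort parameter names, round values to the
    configured precision, and hash the canonical JSON encoding."""
    canonical = {k: round(float(v), precision)
                 for k, v in sorted(params.items())}
    blob = json.dumps(canonical, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

cache: dict = {}

def evaluate_cached(params, spice_eval):
    """Dispatch only uncached designs to the (expensive) simulator."""
    key = cache_key(params)
    if key not in cache:
        cache[key] = spice_eval(params)
    return cache[key]
```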
| Design | Algorithm | Average Throughput per Simulation Mode (Sequential/Processes/Dask) [sim/s] | Average Wall-Clock Time per Simulation Mode (Sequential/Processes/Dask) [s] | Speedup (Processes) | Speedup (Dask) |
|---|---|---|---|---|---|
| Miller-compensated op-amp | NSGA-II | 15.22/36.76/30.69 | 5.24/2.21/2.62 | 2.37 | 2.00 |
| | NSGA-III | 15.74/36.48/31.50 | 5.03/2.20/2.54 | 2.29 | 1.98 |
| | SPEA2 | 15.54/36.69/31.65 | 4.98/2.13/2.47 | 2.34 | 2.02 |
| Transistor-compensated op-amp | NSGA-II | 14.48/34.78/28.48 | 2.91/1.25/1.49 | 2.33 | 1.95 |
| | NSGA-III | 12.47/33.95/28.78 | 3.44/1.29/1.51 | 2.67 | 2.28 |
| | SPEA2 | 11.58/33.71/28.13 | 3.11/1.08/1.28 | 2.88 | 2.43 |
| Folded cascode amplifier | NSGA-II | 20.27/40.78/35.20 | 4.10/2.07/2.38 | 1.98 | 1.72 |
| | NSGA-III | 17.33/40.09/33.90 | 4.49/1.97/2.32 | 2.28 | 1.94 |
| | SPEA2 | 15.17/40.43/34.96 | 5.53/2.10/2.42 | 2.63 | 2.28 |
| Telescopic cascode amplifier | NSGA-II | 17.22/40.95/35.08 | 4.73/2.01/2.33 | 2.35 | 2.03 |
| | NSGA-III | 15.62/39.33/34.48 | 5.27/2.12/2.41 | 2.49 | 2.19 |
| | SPEA2 | 13.91/40.31/34.75 | 5.97/2.11/2.42 | 2.83 | 2.47 |
| Design | Wall Clock Time [s] | Total Individuals | Total Simulated | Total Cached | Cache Hit Rate | Throughput [sim/s] |
|---|---|---|---|---|---|---|
| Miller-compensated op-amp | 110.96 | 5000 | 1438 | 3562 | 0.712 | 12.96 |
| Transistor-compensated op-amp | 67.34 | 5000 | 437 | 4563 | 0.913 | 6.49 |
| Folded cascode amplifier | 133.92 | 5000 | 1396 | 3604 | 0.721 | 10.42 |
| Telescopic cascode amplifier | 132.43 | 5000 | 1307 | 3693 | 0.739 | 9.97 |
| Design | Wall Clock Time [s] | Total Individuals | Total Simulated | Total Cached | Cache Hit Rate | Throughput [sim/s] |
|---|---|---|---|---|---|---|
| Miller-compensated op-amp | 120.31 | 5000 | 1583 | 3417 | 0.683 | 13.16 |
| Transistor-compensated op-amp | 59.40 | 5000 | 339 | 4661 | 0.932 | 5.71 |
| Folded cascode amplifier | 133.35 | 5000 | 1376 | 3624 | 0.725 | 10.32 |
| Telescopic cascode amplifier | 122.08 | 5000 | 1165 | 3835 | 0.767 | 9.54 |
| Design | Initial MAE (Min–Max, Median) | Final MAE (Min–Max, Median) | Best MAE (Min–Max, Median) | Mean Improvement [%] |
|---|---|---|---|---|
| Miller-compensated op-amp | 0.3780–0.5197 (median 0.4146) | 0.2061–0.3550 (median 0.2217) | 0.0543–0.3550 (median 0.1276) | 35.3% |
| Transistor-compensated op-amp | 0.0957–0.1204 (median 0.1052) | 0.0163–0.1234 (median 0.0756) | 0.0019–0.0767 (median 0.0590) | 42.8% |
| Folded cascode amplifier | 0.0277–0.1121 (median 0.0312) | 0.0059–0.0371 (median 0.0222) | 0.0017–0.0312 (median 0.0222) | 43.5% |
| Telescopic cascode amplifier | 0.3091–0.3372 (median 0.3299) | 0.0839–0.2463 (median 0.2155) | 0.0159–0.2155 (median 0.1825) | 44.1% |
| Design | Initial MAE (Min–Max, Median) | Final MAE (Min–Max, Median) | Best MAE (Min–Max, Median) | Mean Improvement [%] |
|---|---|---|---|---|
| Miller-compensated op-amp | 0.3780–0.5197 (median 0.4146) | 0.2061–0.3550 (median 0.2217) | 0.0543–0.3550 (median 0.1276) | 63.6% |
| Transistor-compensated op-amp | 0.0957–0.1204 (median 0.1052) | 0.0163–0.1234 (median 0.0756) | 0.0019–0.0767 (median 0.0590) | 41.4% |
| Folded cascode amplifier | 0.0277–0.1121 (median 0.0312) | 0.0059–0.0371 (median 0.0222) | 0.0017–0.0312 (median 0.0222) | 63.6% |
| Telescopic cascode amplifier | 0.3091–0.3372 (median 0.3299) | 0.0839–0.2463 (median 0.2155) | 0.0159–0.2155 (median 0.1825) | 41.4% |
| Design | Algorithm | Mean Wall-Clock Time [s/gen] | Total Wall-Clock [s] | Wall-Clock Speedup | Parallel Efficiency [%] | Time Reduction [%] | Per-Generation Speedup |
|---|---|---|---|---|---|---|---|
| Differential op-amp | NSGA-II | 0.91 → 0.32 | 151.4 → 65.3 | 2.32× | 29.0 | 56.9 | 2.83× |
| | NSGA-III | 1.10 → 0.33 | 188.8 → 67.9 | 2.78× | 34.8 | 64.1 | 3.37× |
| | SPEA2 | 1.06 → 0.32 | 180.2 → 72.1 | 2.50× | 31.3 | 60.0 | 3.25× |
| Transistor-compensated op-amp | NSGA-II | 1.17 → 0.43 | 188.8 → 78.0 | 2.42× | 30.3 | 58.7 | 2.71× |
| | NSGA-III | 1.29 → 0.39 | 209.7 → 71.0 | 2.95× | 36.9 | 66.1 | 3.33× |
| | SPEA2 | 1.30 → 0.41 | 213.2 → 77.1 | 2.77× | 34.6 | 63.8 | 3.17× |
| Folded cascode | NSGA-II | 0.73 → 0.31 | 121.6 → 59.6 | 2.04× | 25.5 | 51.0 | 2.36× |
| | NSGA-III | 0.72 → 0.26 | 121.7 → 52.3 | 2.33× | 29.1 | 57.1 | 2.71× |
| | SPEA2 | 0.82 → 0.27 | 143.1 → 59.7 | 2.40× | 30.0 | 58.3 | 3.06× |
| Telescopic cascode | NSGA-II | 0.87 → 0.30 | 144.4 → 54.7 | 2.64× | 33.0 | 62.2 | 3.14× |
| | NSGA-III | 0.86 → 0.27 | 166.6 → 58.2 | 2.86× | 35.8 | 65.1 | 3.49× |
| | SPEA2 | 0.98 → 0.28 | 188.8 → 78.0 | 2.42× | 30.3 | 58.7 | 2.71× |
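The derived columns follow directly from the raw timings and are numerically consistent with $p = 8$ parallel workers (an assumption; the worker count is not stated in this table):
$$S = \frac{T_{\text{seq}}}{T_{\text{par}}}, \qquad \eta = \frac{S}{p}, \qquad \Delta T = 1 - \frac{T_{\text{par}}}{T_{\text{seq}}}.$$
For the first row, $S = 151.4/65.3 \approx 2.32\times$, $\eta \approx 2.32/8 = 29.0\%$, and $\Delta T \approx 1 - 65.3/151.4 = 56.9\%$, matching the tabulated values.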
| Feature | SPICE-Only | Surrogate-Guided Optimization (SGO) | Multi-Fidelity Optimization (MFO) |
|---|---|---|---|
| Evaluation method | Pure SPICE | Hybrid (surrogate + periodic SPICE) | Adaptive hybrid (uncertainty-driven SPICE fraction) |
| Speedup | Baseline (1×) | ≈20× (≈95% reduction) | ≈10× (≈90% reduction) |
| Accuracy guarantee | Perfect (All SPICE) | Approximate (final sweep verifies) | Near-perfect (adaptive verification ensures quality) |
| Computational cost | High | Low-Medium | Medium |
| Model training | None | Online (continuous) | Online with rollback |
| Uncertainty quantification | Not applicable | Yes (MC-dropout/ensemble) | Yes (with adaptive penalty) |
| Best suited for | Small design problems | Large design spaces/expensive simulations | Accuracy-critical applications |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.