1. Introduction
In recent years, with the continuous development and innovation in the field of integrated circuit design, traditional manual circuit parameter tuning has become increasingly unable to meet emerging application requirements [1]. Therefore, there is great demand for optimizing circuit parameters through electronic design automation (EDA) tools, which can largely liberate engineers from repetitive manual tuning, improve design speed, and enhance circuit performance.
Circuit parameter optimization can generally be categorized into stochastic methods and deterministic methods based on algorithmic strategies [2]. Stochastic methods, inspired by natural phenomena, such as particle swarm optimization (PSO) [3], simulated annealing (SA) [4], evolutionary algorithms (EA) [5], genetic algorithms (GA) [6], Bayesian optimization (BO) [7], and others, have shown promising results that benefit from random variation. In contrast, deterministic optimization methods seek the optimal circuit parameters by computing specific mathematical functions and equations, without resorting to randomness at any point in the process [8]. In practical circuit design, deterministic methods allow engineers to conduct repeated experiments on a circuit: each experiment yields the same solution, regardless of the number of repetitions. This helps engineers pin down a specific set of circuit parameter values that meets their requirements.
Based on the number of performance metrics to be optimized, circuit parameter optimization can be divided into single-objective optimization and multi-objective optimization [9]. Single-objective optimization focuses on a single circuit performance metric and seeks its extremum. Multi-objective optimization instead considers multiple circuit performance metrics, which often compete with one another. It is more challenging than single-objective optimization because it requires seeking a balance among multiple circuit performance metrics. In this scenario, we introduce the concept of Pareto optimality [10]. Pareto optimality refers to a state in a multi-objective optimization problem in which no objective can be further improved without sacrificing others. If, for a particular solution, none of the objectives can be improved without worsening another, that solution is Pareto optimal. Pareto optimality emphasizes the need to balance trade-offs among conflicting objectives and to determine the best achievable solutions, making it a core concept in multi-objective optimization. The Pareto optimal front reflects the attainable trade-offs among circuit performance metrics, and obtaining the Pareto front solutions as accurately as possible is the primary objective of multi-objective optimization.
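To make the dominance relation behind Pareto optimality concrete, a minimal sketch in Python follows; the objective points are illustrative, and all objectives are assumed to be minimized:

```python
def dominates(p, q):
    """True if point p dominates q (all objectives minimized):
    p is no worse in every objective and strictly better in at least one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def pareto_front(points):
    """Keep only the non-dominated points."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Illustrative trade-off between, e.g., power and delay (both minimized).
points = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
front = pareto_front(points)  # (3.0, 3.0) is dominated by (2.0, 2.0)
```

Here (3.0, 3.0) is removed because (2.0, 2.0) is at least as good in both objectives and strictly better in both; the remaining points form the Pareto front.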
Reference [11] proposes the classical weighted sum method (WSM) to obtain the Pareto front. WSM multiplies each objective function by a weighting factor and then sums all the weighted objective functions, transforming the multiple objectives into one aggregated objective function. By systematically varying the weighting coefficients according to a certain pattern, each resulting single-objective optimization determines a different optimal solution, which together form the set of solutions on the Pareto front. Reference [12] applies multi-objective optimization to the design of analog integrated circuits and radio frequency integrated circuits, seeking Pareto design points among various circuit performance metrics. Reference [13] utilizes NSGA-II [14] to obtain the Pareto solution set and applies it to the design of ring oscillators. Reference [15] proposes an efficient surrogate-assisted constrained multi-objective evolutionary algorithm for analog circuit sizing, which has been shown to achieve better results than NSGA-II. As research advances, the application of multi-objective optimization methods to circuit parameter optimization has become increasingly sophisticated, and the algorithms for obtaining the Pareto front continue to improve.
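The weight-sweeping procedure of WSM can be sketched in a few lines; the two convex objectives and the grid search standing in for a single-objective optimizer are illustrative assumptions, not taken from the referenced works:

```python
import numpy as np

# Two competing convex objectives of one design variable x (illustrative).
f1 = lambda x: (x - 1.0) ** 2
f2 = lambda x: (x + 1.0) ** 2

xs = np.linspace(-2.0, 2.0, 2001)          # dense grid stands in for the optimizer
pareto = []
for w in np.linspace(0.0, 1.0, 11):        # systematically varied weighting factor
    agg = w * f1(xs) + (1.0 - w) * f2(xs)  # aggregated objective function
    x_star = xs[np.argmin(agg)]            # optimum of this single-objective problem
    pareto.append((f1(x_star), f2(x_star)))  # one point on the Pareto front
```

Each weight yields one Pareto point, so sweeping `w` traces the front; note that the plain WSM can miss points on non-convex fronts.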
However, up to now, deterministic methods for multi-objective optimization of circuit parameters have not been fully investigated. Current research primarily uses deterministic methods to optimize circuit parameters for a single objective, since it is challenging to avoid random variation throughout the entire process of addressing multi-objective optimization, and such variation inevitably produces stochastic outcomes. In this paper, we improve an existing deterministic optimization algorithm to boost its efficiency and effectiveness, enabling it to efficiently identify globally optimal solutions for a single objective. We then integrate the WSM with our deterministic optimization method to realize deterministic multi-objective optimization, which generates a deterministic Pareto front among various circuit performances. As our experimental results indicate, the Pareto front obtained by our method is superior in quality to that obtained by NSGA-II.
The remainder of this paper is organized as follows. Section 2 presents our proposed methods, including the group-based deterministic optimization algorithm and deterministic multi-objective optimization. Section 3 discusses the experimental results, while Section 4 concludes this paper.
2. Proposed Methods
Starting from an initial set of circuit parameters, the deterministic optimization algorithm proposed in [16] utilizes the characteristic boundary curve (CBC) algorithm to determine the correction values for the circuit parameters at each optimization iteration, continuously refining the circuit parameter values to converge towards the optimal solution. However, this method suffers from several limitations. Firstly, its optimization efficiency and effectiveness are poor when circuit biases are included as parameters to be optimized, since it originally targets optimizing device sizes with fixed circuit biases provided. Secondly, it lacks a means of escaping from local optima, so it mostly converges to a local optimum. Thirdly, it can only optimize a single objective.
In this paper, we are motivated to propose a novel deterministic multi-objective optimization method, which has the following three features:
- (1) Boost the efficiency and effectiveness of single-objective deterministic optimization through our proposed schemes;
- (2) Enhance single-objective deterministic optimization to be able to derive globally optimal solutions;
- (3) Extend the deterministic optimization from single-objective to multi-objective.
2.1. Design Parameter Classification and Correction
The key to deterministic optimization is to determine the parameter correction (i.e., parameter change) at each optimization iteration. In this paper, we propose a new method to calculate parameter correction.
Firstly, we classify the design parameters by type: bias voltage, bias current, transistor length, transistor width, resistor, and capacitor. The parameter correction is then calculated for each type (indexed by $i$) of design parameters via the modified generalized boundary curve (GBC) algorithm proposed in [17], which has the following objective function:

$$\min_{\Delta v_i}\; \alpha\,\varepsilon_i(\Delta v_i)^2 + \lambda\,\lVert \Delta v_i \rVert_2^2 \qquad (1)$$

where $\Delta v_i$ represents the parameter correction of the $i$th type of design parameters, the factor $\alpha$ is a positive constant for scaling purposes, the variable $\lambda$ controls the weight of the squared Euclidean norm of $\Delta v_i$, and $\varepsilon_i$ is the linearized error of the $i$th type of design parameters:

$$\varepsilon_i(\Delta v_i) = Spec - f(v) - g_i^{T}\,\Delta v_i \qquad (2)$$
where $Spec$ refers to the performance specification, $g$ is the vector of performance gradients at the current design parameter values (i.e., $v$), calculated by linearizing the circuit performance $f(v)$ at $v$ (i.e., Equation (3)), and $g_i$ is the sub-vector of $g$ that holds the performance gradients of the $i$th type of design parameters:

$$g = \left.\frac{\partial f(v)}{\partial v}\right|_{v} \qquad (3)$$
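A single correction step can be sketched numerically; the closed-form ridge solution below assumes a quadratic penalty on the linearized error (a sketch, not the paper's exact GBC formulation), and the toy performance function, `spec`, and parameter names are illustrative:

```python
import numpy as np

def correction_step(f, v, spec, idx, alpha=1.0, lam=1e-2, h=1e-6):
    """One regularized correction for the parameter type selected by idx.

    Assumes the objective  alpha * eps^2 + lam * ||dv||^2  with the
    linearized error  eps = spec - f(v) - g_i^T dv.
    """
    r = spec - f(v)                               # residual to the specification
    g = np.zeros_like(v)
    for j in idx:                                 # finite-difference gradient, this type only
        e = np.zeros_like(v)
        e[j] = h
        g[j] = (f(v + e) - f(v)) / h
    return alpha * r * g / (lam + alpha * g @ g)  # closed-form minimizer along g

# Toy "performance" of two parameters, to be driven toward spec = 5.0
f = lambda v: 2.0 * v[0] + v[1]
v = np.array([1.0, 1.0])
dv = correction_step(f, v, spec=5.0, idx=[0, 1])
```

Iterating `v ← v + dv` drives `f(v)` toward `spec`; the regularization term `lam` damps the step when the gradient magnitude is small, which keeps the update stable.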
Compared with the original CBC algorithm used to derive the parameter correction, the modified GBC algorithm features much higher efficiency and can address the case of a strongly nonlinear cost function [17]. Moreover, since different types of design parameters have distinct sensitivities to a parameter change, calculating the parameter correction per type of design parameters, rather than over all design parameters at once, ensures smoother and more accurate updating of the design parameters during the iterative optimization process, giving it much better optimization efficiency and effectiveness than the work of [16].
2.2. Bias-Aware Updating Scheme
Through our experimental studies, we have found that tuning the biases is more critical than adjusting device sizes when transistors operate in improper regions, in which case the circuit performance is usually far from the specification. This observation has inspired the following bias-aware weight scheme, which can largely enhance the optimization efficacy:
- (1) When a lower-bound specification $Spec_L$ is applied:

$$w_{bias,i} = 1,\; w_{other,i} = 0 \;\; \text{if } f(v) < a\,Spec_L; \qquad w_{bias,i} = \left(\frac{Spec_L - f(v)}{(1-a)\,Spec_L}\right)^{b},\; w_{other,i} = 1 - w_{bias,i} \;\; \text{otherwise} \qquad (4)$$

- (2) When an upper-bound specification $Spec_U$ is applied:

$$w_{bias,i} = 1,\; w_{other,i} = 0 \;\; \text{if } f(v) > Spec_U/a; \qquad w_{bias,i} = \left(\frac{f(v) - Spec_U}{(1-a)\,f(v)}\right)^{b},\; w_{other,i} = 1 - w_{bias,i} \;\; \text{otherwise} \qquad (5)$$

where $Spec_L$ and $Spec_U$ refer to the lower- and upper-bound performance requirements, respectively, $a$ belongs to $[0, 1)$, $b$ is a natural number, and $w_{bias,i}$ and $w_{other,i}$ refer to the weights of the $i$th type of design parameters for bias-type parameters (e.g., bias voltage, bias current, or device sizes of any bias circuits) and other-type design parameters, respectively.
At each optimization iteration, after deriving the parameter correction of each type of design parameters as described in the previous section, the proposed bias-aware weight scheme is applied to update the design parameter values as follows:

$$v_i \leftarrow v_i + w_i\,\Delta v_i \qquad (6)$$

where $w_i$ is $w_{bias,i}$ for bias-type design parameters and $w_{other,i}$ for the other types.
As one can see, when the current performance is quite distant from the specification, only bias-type design parameters can be tuned. Once it passes a certain threshold, both bias-type and other-type design parameters are adjusted simultaneously with distinct weights. Here, a and b are two crucial hyperparameters of our proposed group-based deterministic optimization algorithm. The hyperparameter a determines the region of circuit performance when only bias-type design parameters can be adjusted. In contrast, the hyperparameter b is used to decide the exact weights for bias-type and other-type design parameters when they can be tuned simultaneously. Specifically, a larger value of b would result in a more dramatic decrease and increase of weights for bias-type and other-type design parameters, respectively, as the circuit performance gets close to the target specification.
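One plausible realization of this weight schedule for a lower-bound specification is sketched below; the exact formula is an assumption that follows the described behavior (biases only below the threshold `a * spec_l`, then a decay controlled by the exponent `b`), not the paper's equations:

```python
def bias_aware_weights(f_v, spec_l, a=0.5, b=2):
    """Bias-aware weights under a lower-bound spec (illustrative schedule):
    only bias-type parameters move while f_v < a * spec_l; afterwards the
    bias weight decays toward 0 (faster for larger b) as f_v nears spec_l."""
    if f_v < a * spec_l:
        return 1.0, 0.0                      # far from spec: tune biases only
    t = min(f_v / spec_l, 1.0)               # normalized progress in [a, 1]
    w_bias = ((1.0 - t) / (1.0 - a)) ** b    # decays as performance nears spec
    return w_bias, 1.0 - w_bias              # (bias-type, other-type) weights
```

For example, with `a = 0.5` and `b = 2`, a performance at 75% of `spec_l` gives weights (0.25, 0.75), so device sizes already dominate the update near the specification.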
2.3. Group-Based Exploration Scheme
The deterministic optimization algorithm proposed in [16] easily gets stuck at a local optimum and has no means of jumping out of it. In this paper, in addition to the terminating condition (7), the terminating condition expressed in (8) is proposed to give the deterministic optimization a chance to escape from local optima:

$$f(v^{(\mu)}) \;\text{reaches}\; Spec \qquad (7)$$

$$\left| f(v^{(\mu)}) - f(v^{(\mu-\tau)}) \right| < \delta \qquad (8)$$

where $\mu$ is the index of the optimization iteration. Condition (8) denotes that the performance can hardly be improved further over a successive user-defined number (i.e., $\tau$) of optimization iterations, where the improvement amount is controlled by a user-specified parameter $\delta$.
However, although condition (8) helps deterministic optimization escape from local optima to some extent, the improvement is still limited, since the exploration space of deterministic sizing is restricted by the initial design point. Thus, an inferior initial design point may ultimately lead to a poor optimization result. To address this problem, we propose a group-based exploration scheme. Specifically, we create a group of deterministic optimization individuals, each of which starts optimization separately from a unique, randomly produced initial design point. In this way, the exploration space is greatly enlarged and global optima can be identified.
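The group-based exploration idea can be sketched as follows; the greedy coordinate search standing in for one deterministic individual and the multi-modal toy objective are illustrative assumptions:

```python
import random

def deterministic_opt(f, v0, iters=200, step=0.1):
    """Greedy coordinate-descent stand-in for one deterministic individual:
    an identical v0 always yields the identical result (no randomness inside)."""
    v = list(v0)
    for _ in range(iters):
        for j in range(len(v)):
            for d in (+step, -step):
                if f([x + d if k == j else x for k, x in enumerate(v)]) < f(v):
                    v[j] += d
    return v

def group_optimize(f, dim, m=8, seed=42):
    """Group-based exploration: m individuals start from seeded random points,
    so reruns reproduce the same initial points and hence the same optimum."""
    rng = random.Random(seed)                 # fixed seed -> reproducible starts
    starts = [[rng.uniform(-4, 4) for _ in range(dim)] for _ in range(m)]
    results = [deterministic_opt(f, v0) for v0 in starts]
    return min(results, key=f)                # best individual of the group

# Multi-modal toy objective: local minimum near x = +2, global near x = -2
f = lambda v: (v[0] ** 2 - 4) ** 2 + 0.5 * v[0]
best = group_optimize(f, dim=1)
```

Individuals started in the positive basin get stuck near the local optimum at `x ≈ +2`, while those started in the negative basin reach the global one, so the group as a whole escapes the local optimum while remaining fully reproducible.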
Our proposed group-based deterministic optimization algorithm is illustrated in Algorithm 1, which integrates the three aspects presented in Section 2.1, Section 2.2 and Section 2.3. As shown at Lines 1–2, a group of individuals is created to start optimization with randomly generated initial design points. The for-loop between Line 9 and Line 13 carries out our proposed parameter correction calculation and bias-aware design parameter updating methods. It is worth noting that although the initial design points are randomly generated beforehand, the optimized results are deterministic as long as these initial design points remain unchanged.
Algorithm 1: Group-Based Deterministic Optimization Algorithm
1. Initialize a group Gro including M individuals;
2. Randomly produce M sets of initial design points P;
3. For each individual Ind in Gro:
4.   Start optimization with one of P;
5.   While iteration index µ < max number of iterations allowed:
6.     Evaluate the performance f(v) of Ind for the current design parameter values v;
7.     Update the best performance of Gro (i.e., f_best);
8.     If any Ind's performance reaches Spec: break While;
9.     For each type (index i) of design parameters of Ind:
10.      Calculate its parameter correction Δv_i;
11.      Calculate its updating weight w_i;
12.      Update the circuit parameter values using w_i and Δv_i;
13.    End For
14.  End While
15. End For
16. Return f_best and its corresponding circuit parameter values
2.4. WSM-Based Multi-Objective Optimization
The weighted sum method (WSM) can solve multi-objective optimization problems in a relatively short amount of time. When seeking the Pareto front by minimizing its aggregated objective function, no additional equality or inequality constraints are required. This characteristic makes WSM a perfect fit for integration with deterministic optimization to form deterministic multi-objective optimization. Specifically, the WSM is modified by employing our group-based deterministic optimization to minimize each aggregated objective function, each built from the circuit performance attributes.
Algorithm 2 illustrates our multi-objective optimization. The values of the circuit performance attributes derived from evaluating the design parameter values v may exhibit considerable disparities, which motivates us to normalize the performance function of each circuit performance attribute. The algorithm first utilizes the group-based deterministic optimization algorithm to calculate the extremum of each circuit performance f_n and the corresponding circuit parameters v_n* (Line 1). Then, the algorithm makes use of the obtained extrema and circuit parameter values for normalization. In the formula of Line 2 of the algorithm, the numerator ensures that the normalized function is non-negative and the denominator ensures that the range of the normalized function lies within the unit interval, where f_n(v_n*) represents the extremum of the nth circuit performance attribute, while f_n(v_x*) represents the value of the nth circuit performance evaluated at the circuit parameters v_x* that achieve the extremum of the xth circuit performance attribute.
Algorithm 2: Deterministic Multi-Objective Optimization Algorithm
1. Calculate the extremum of each circuit performance attribute and the corresponding circuit parameter values v_n* by the group-based deterministic optimization algorithm;
2. For the performance function of each (index n) performance attribute (i.e., f_n(v)), convert it to its normalized function f̂_n(v) using:
   $$\hat{f}_n(v) = \frac{f_n(v) - f_n(v_n^*)}{\max_x f_n(v_x^*) - f_n(v_n^*)}$$
3. While iteration index m < max number of iterations allowed:
4.   Construct the weight vector W and the objective function F(v) using:
   $$F(v) = W^{T}\hat{F}(v) = \sum_{n} w_n\,\hat{f}_n(v)$$
5.   Minimize F(v) by the group-based deterministic optimization algorithm to obtain the design parameter values v_m*;
6.   Calculate the coordinates of the Pareto solution $(f_1(v_m^*), \ldots, f_N(v_m^*))$;
7. End While
After normalization, the algorithm iteratively constructs the weight vector W and the objective function F(v), determines the extremum of the objective function and its corresponding design parameter values v_m*, and calculates the coordinates of the individual Pareto solutions based on these design parameter values to derive the Pareto front. In Line 4, F̂(v) represents the vector composed of the normalized functions of all circuit performance attributes, while w_n represents the weight corresponding to each normalized circuit performance attribute, which must satisfy the following conditions:

$$\sum_{n} w_n = 1, \qquad w_n \geq 0$$

This construction ensures the directional consistency of the various circuit performance attributes and ultimately yields a Pareto front with no duplicate solutions. It is worth noting that in Line 5 of the algorithm, the objective function is composed of the normalized performance functions rather than the raw performance functions of the circuit performance attributes, which contributes to more accurate results.
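The payoff-table normalization can be sketched as follows; the utopia/worst-over-table form and the toy objectives are assumptions consistent with the description above, not the paper's exact code:

```python
import numpy as np

def normalize_payoff(F, V_star):
    """Given objective functions F = [f_1..f_N] (all minimized) and their
    single-objective optimizers V_star = [v_1*..v_N*], return a function
    mapping each f_n into [0, 1] over the payoff table."""
    payoff = np.array([[f(v) for f in F] for v in V_star])  # row x: values at v_x*
    f_min = payoff.min(axis=0)               # extremum of each f_n (numerator offset)
    f_max = payoff.max(axis=0)               # worst value over the table (denominator)
    def f_hat(v):
        fv = np.array([f(v) for f in F])
        return (fv - f_min) / (f_max - f_min)
    return f_hat

# Toy bi-objective problem with known single-objective optima at x = 1 and x = -1
f1 = lambda v: (v[0] - 1.0) ** 2
f2 = lambda v: (v[0] + 1.0) ** 2
f_hat = normalize_payoff([f1, f2], V_star=[np.array([1.0]), np.array([-1.0])])
```

At each single-objective optimum one normalized objective is 0 and the other is 1, so all weighted sums of the normalized objectives operate on a comparable scale.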
We choose the hypervolume, the spacing metric [18], and the number of simulations (i.e., N_sim) as three metrics to evaluate the quality of the multi-objective optimization. As depicted in Figure 1, the hypervolume measures the volume of the hypercube formed by the obtained Pareto front and a reference point in the objective space, with a larger value indicating a better optimization. The spacing metric S measures the distribution of the non-dominated solutions, with smaller values indicating better distribution and diversity, and is formally expressed as follows:

$$S = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(\bar{d} - d_i\right)^2}$$
where n is the number of non-dominated solutions in the data set, d_i is the sum, over the objectives, of the differences in objective function values between solution i and its nearest neighbors, and d̄ is the average of all d_i. N_sim reflects the speed of the optimization by recording the number of circuit simulations the algorithm performs. The experimental results were normalized before calculating the hypervolume.
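The spacing computation can be made concrete with a short sketch of Schott's nearest-neighbor formulation (the fronts below are illustrative; the paper's exact variant may differ slightly):

```python
import math

def spacing(front):
    """Schott's spacing metric: standard deviation of nearest-neighbor
    distances (Manhattan distance summed over objectives).
    Smaller values indicate a more evenly spread front."""
    n = len(front)
    d = []
    for i, p in enumerate(front):
        d.append(min(sum(abs(a - b) for a, b in zip(p, q))
                     for j, q in enumerate(front) if j != i))
    d_bar = sum(d) / n                       # average nearest-neighbor distance
    return math.sqrt(sum((d_bar - di) ** 2 for di in d) / (n - 1))

evenly = [(0.0, 3.0), (1.0, 2.0), (2.0, 1.0), (3.0, 0.0)]   # uniform spread
uneven = [(0.0, 3.0), (0.1, 2.9), (2.0, 1.0), (3.0, 0.0)]   # two clustered points
```

A perfectly uniform front gives S = 0, while clustering inflates S, which matches the interpretation that smaller values indicate better distribution and diversity.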