Fast and Efficient Sensitivity Aware Multi-Objective Optimization of Analog Circuits

This article introduces a novel approach for generating low-sensitivity Pareto fronts of analog circuit performances. The main idea consists of taking advantage of the social interaction between particles within a multi-objective particle swarm optimization algorithm by progressively guiding the global leading process towards low-sensitivity solutions in the landscape. We show that the proposed approach significantly outperforms previously published techniques for generating sensitivity-aware Pareto fronts, not only in terms of computing time, but also with regard to the number of solutions forming the tradeoff surface. The performance of our approach is highlighted via the design of two analog circuits.


Introduction
Due to unending technology evolution and the ever-growing need for higher performance, electronic circuit sizing has become very complex. Only skilled designers are able to propose a priori 'optimal' sizings for such circuits. Thanks to metaheuristics (combined with circuit simulators [1][2][3] for evaluating performances/constraints), designers and engineers (not only highly experienced specialists) have become able to handle complex circuit performances and thus to generate 'optimal' sizings that maximize/minimize circuit/system performances while satisfying 'complex' intrinsic/extrinsic constraints [4][5][6][7][8]. Evolutionary algorithms and swarm intelligence techniques are examples of such nature-inspired optimization techniques [6][7][8][9][10].
Due to the variability of CMOS technology parameters and the tolerance of component values, sensitivity analysis has become unavoidable, as stressed in [15][16][17][18]. The literature offers a plethora of papers proposing different approaches for sizing analog circuits while taking sensitivity into consideration. Most of them use the Richardson extrapolation technique within an in-loop sizing approach; see for instance [17][18][19][20]. However, the published techniques generally proceed a posteriori when generating low-sensitivity solutions for multi-objective optimization problems: the Pareto front (the resulting archive) is first generated by applying a multi-objective metaheuristic; afterwards, the sensitivity of the feasible solutions is evaluated. Solutions presenting sensitivity values higher than a certain acceptable threshold are discarded, while only the interesting ones are retained.
As stressed in [15], the resulting feasible set may contain a considerably reduced (or even null) number of solutions. The authors of [15] proposed a novel idea allowing a full archive of low-sensitivity solutions to be obtained. It consists of embedding sensitivity within the inheritance process of an evolutionary algorithm, such as the non-dominated sorting genetic algorithm NSGA-II; the approach considers sensitivity values when ranking Pareto fronts during the algorithm execution. Different inheritance strategies were considered, and it was shown that the most interesting one consists of starting with a high acceptable sensitivity threshold, then linearly reducing it as iterations go on. The viability of that approach was showcased via two CMOS analog circuits, namely a second-generation current conveyor (CCII) and a voltage follower. However, its significant drawback is its high computing time (around ten hours for optimizing the analog circuits).
In this article we propose a new approach, based on a swarm intelligence technique, to alleviate the aforementioned burden. The idea consists of taking advantage of the social interaction between particles by altering the choice of the swarm's leader, within a particle swarm optimization approach, so that sensitivity values guide the global move of the swarm. As shown below, our approach reduces the Pareto front generation time to roughly one sixth of that of the approach proposed in [15].
The rest of the article is structured as follows: Section 2 gives a brief overview of particle swarm optimization technique. Section 3 details the proposed multi-objective optimization approach. Section 4 presents the results obtained when applying the proposed approach for optimizing two CMOS analog circuits. Section 5 offers a brief discussion on the results obtained in Section 4. Finally, the conclusions are set out in Section 6.

Particle Swarm Optimization Technique
Metaheuristics are nature-inspired optimization techniques. A taxonomy of such stochastic optimization techniques is proposed in [21], where metaheuristics are classified into seven categories according to their intrinsic mechanisms: (i) stochastic algorithms, such as tabu search [22]; (ii) evolutionary algorithms, such as genetic algorithms [23]; (iii) physical algorithms, such as harmony search [24]; (iv) probabilistic algorithms, such as Bayesian algorithms [25]; (v) swarm algorithms (also known as swarm intelligence (SI) techniques), such as particle swarm optimization (PSO) [26]; (vi) immune algorithms, such as the dendritic cell algorithm [27]; and (vii) neural algorithms, such as the Hopfield network [28]. All are known to be efficient and robust techniques.
PSO mimics the swarming habits of animal species living in large colonies, namely fish and birds. It is known to be a very fast algorithm, and it is easy to implement in a computer program. Its convergence mechanism consists of moving the swarm's particles, i.e., the problem variables, through the fitness landscape via two simple equations representing the velocity and position of each particle, see Equations (1) and (2), which exploit a psycho-social tradeoff between self-trust and the particle's social relationships:

v_i(t+1) = w·v_i(t) + c1·r1·(p_b(t) − x_i(t)) + c2·r2·(g_b(t) − x_i(t))    (1)

x_i(t+1) = x_i(t) + v_i(t+1)    (2)

where i is the particle index, w is the inertia coefficient (0.8 ≤ w ≤ 1.2), c1 and c2 are respectively the cognitive and social acceleration coefficients (0 ≤ c1, c2 ≤ 2), and r1, r2 are uniformly generated random values (0 ≤ r1, r2 ≤ 1), regenerated at every velocity update. v_i(t) and x_i(t) are, respectively, the particle's velocity and position at time t. p_b(t) and g_b(t) are the particle's individual best and the swarm's best solution as of time t, respectively. Figure 1 illustrates the basic concept of the PSO algorithm, which lies in using random weights to accelerate particles towards their individual and swarm's best locations. MOPSO-CD is a variant of PSO that can handle multi-objective problems. It uses a crowding-distance-based routine to ensure a good spread of the solutions along the Pareto front [29]. Its flowchart is depicted in Figure 2. In the following, MOPSO-CD is used.
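The update rules of Equations (1) and (2) can be sketched as follows. This is a minimal single-objective illustration, not the MOPSO-CD setup used later; the toy fitness function and parameter values are placeholders chosen only for demonstration:

```python
import random

random.seed(0)  # for reproducibility of this sketch

def fitness(x):
    return (x - 3.0) ** 2  # toy 1-D objective (placeholder), minimum at x = 3

def pso_step(swarm, w=0.7, c1=1.0, c2=1.0):
    """One PSO iteration: update each particle's velocity (Eq. 1) and
    position (Eq. 2), then refresh each particle's personal best."""
    g_b = min((p["p_b"] for p in swarm), key=fitness)  # swarm's best roost
    for p in swarm:
        r1, r2 = random.random(), random.random()      # regenerated each update
        p["v"] = (w * p["v"]
                  + c1 * r1 * (p["p_b"] - p["x"])      # cognitive (self-trust) pull
                  + c2 * r2 * (g_b - p["x"]))          # social pull towards g_b
        p["x"] += p["v"]                               # position update (Eq. 2)
        if fitness(p["x"]) < fitness(p["p_b"]):
            p["p_b"] = p["x"]                          # update personal best

swarm = [{"x": random.uniform(-10, 10), "v": 0.0} for _ in range(20)]
for p in swarm:
    p["p_b"] = p["x"]
for _ in range(100):
    pso_step(swarm)
best = min((p["p_b"] for p in swarm), key=fitness)     # converges near x = 3
```

Note the interplay of the two random weights r1 and r2: they stochastically balance the pull towards a particle's own best position against the pull towards the swarm's best, which is precisely the lever the proposed approach acts on when it alters the choice of g_b.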

Proposed Multi-Objective Optimization Approach
As introduced above, g_b represents the current most promising location amongst the particle's neighborhood. It constitutes one side of the metaheuristic's "exploration/intensification" balance: the global search part (contrary to p_b, which represents the other side, i.e., intensification or local search).
Each particle's position change is influenced by the swarm's best roost g_b, i.e., by the experience of the whole swarm (or of its neighborhood, depending on the chosen communication strategy). The proposed idea consists of altering the evolution process of PSO by including sensitivity within the choice of g_b. Thus, at each iteration, the overall swarm search process is influenced by the most promising solution obtained so far, i.e., the current least sensitive particle. p_b is not subject to the same process, otherwise the program could easily converge prematurely. The Richardson extrapolation technique [17][18][19][20] is applied herein within the in-loop algorithm to compute sensitivity values.
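Richardson extrapolation estimates a derivative by combining finite differences taken at two step sizes, cancelling the leading error term. A minimal sketch of how the relative sensitivity of a performance f to a parameter x could be computed this way is given below; the toy performance function is a stand-in, not a circuit simulation:

```python
def richardson_derivative(f, x, h=1e-3):
    """Richardson-extrapolated derivative: combines central differences
    at step sizes h and h/2 to cancel the O(h^2) error term."""
    d_h  = (f(x + h) - f(x - h)) / (2 * h)          # step h
    d_h2 = (f(x + h / 2) - f(x - h / 2)) / h        # step h/2
    return (4 * d_h2 - d_h) / 3                     # O(h^4)-accurate estimate

def relative_sensitivity(f, x, h=1e-3):
    """Normalized sensitivity S = (x / f(x)) * df/dx, the usual
    dimensionless form used for circuit performances."""
    return (x / f(x)) * richardson_derivative(f, x, h)

# Toy performance model (placeholder): f(x) = x**3, whose relative
# sensitivity is exactly 3 for any x != 0.
S = relative_sensitivity(lambda x: x ** 3, 2.0)
```

In the in-loop algorithm, f would be a simulated circuit performance and x a technology or component parameter; each sensitivity evaluation therefore costs several extra simulations, which is why this step dominates the computation time (see the Discussion).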
For choosing g_b, different strategies have been considered (see Cases #0-4 below). For comparison, we present g_b choice schemes similar to those in [15]. Likewise, the same application examples were considered and the same computing machines were used, i.e., Intel i7 3 GHz, 8 GB RAM, 64-bit PCs. The number of iterations (100) is the stopping criterion. The archive and population sizes are both equal to 50. Regarding PSO, we used the following parameters: c1 = c2 = 1, w = 0.4.

• Case #0: a direct generation of the Pareto front, using MOPSO-CD. This case serves as a reference for the computing time.

• Case #1: the Pareto front is generated, then solutions presenting sensitivity values higher than the predefined threshold are eliminated. This case serves as a reference for the number of (remaining) valid solutions forming the Pareto front.

• Case #2: sensitivity is considered as a constraint (a penalty technique is applied).

• Case #3: the algorithm is executed ignoring sensitivity for the first half of the total number of iterations; then, at each iteration, g_b is chosen as the least sensitive particle found so far.

• Case #4: a linear decrease of the sensitivity threshold is considered, starting from 1 down to the predefined acceptable value. At each iteration, the first particle found offering a sensitivity value lower than the 'dynamic' threshold is taken as g_b. Figure 3 shows the flowchart of Case #4.
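Case #4's dynamic threshold and leader choice could be sketched as follows. The archive-member structure, the `sensitivity` field, and the fallback rule are illustrative assumptions for this sketch, not the authors' exact implementation:

```python
def dynamic_threshold(it, max_it, final_threshold=0.01):
    """Linearly decrease the acceptable sensitivity threshold from 1.0
    down to the predefined value (here 1%) over the run (Case #4)."""
    return 1.0 - (1.0 - final_threshold) * it / max_it

def choose_gb(archive, it, max_it):
    """Pick as swarm leader g_b the first archive member whose sensitivity
    lies below the current dynamic threshold; fall back to the least
    sensitive member if none qualifies (assumed fallback)."""
    thr = dynamic_threshold(it, max_it)
    for sol in archive:
        if sol["sensitivity"] < thr:
            return sol
    return min(archive, key=lambda s: s["sensitivity"])

# Illustrative archive of non-dominated solutions with their sensitivities.
archive = [{"x": [0.1], "sensitivity": 0.8},
           {"x": [0.2], "sensitivity": 0.05},
           {"x": [0.3], "sensitivity": 0.002}]
leader_early = choose_gb(archive, it=0,  max_it=100)  # loose threshold: any low-enough member
leader_late  = choose_gb(archive, it=99, max_it=100)  # tight threshold: only very low sensitivity
```

Early in the run the loose threshold lets high-sensitivity but well-performing particles lead the swarm (preserving exploration); late in the run only low-sensitivity particles can lead, which progressively steers the whole front towards robust regions of the landscape.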

Application to Analog Circuit Optimization
As introduced above, both circuits of [15], namely a second-generation CMOS current conveyor (CCII, shown in Figure 4) and a CMOS voltage follower (VF, shown in Figure 5), are optimized herein to highlight the performance of the proposed approach. The sizing of these circuits is performed as in [15], using AMS 0.35 µm technology and the following bias conditions: Vdd = -Vss = 1.5 V and Ibias = 50 µA for the VF; and Vdd = -Vss = 2.5 V and Ibias = 100 µA for the CCII.
The CCII is optimized to minimize the input parasitic impedance (Rx) and to maximize the current transfer bandwidth (Fci). The optimization of the VF consists in minimizing the voltage offset (Offset) and maximizing the voltage transfer bandwidth (Fcv). In both cases, the sensitivity threshold is 1%. Table 1 summarizes the results obtained for the aforementioned cases and compares them with those obtained by applying the approach of [15]; Compt-time and Nb-PF refer to the computing time and the number of points forming the final archive, respectively. The values reported in Table 1 correspond to averages over 30 runs.
Figures 6 and 7 show the Pareto fronts obtained for the different considered cases for both the CCII and the VF. Figures 8 and 9 compare, for the CCII and the VF respectively, the fronts obtained by Case #4 with those of [15].

Discussion
Table 1 clearly highlights that the proposed multi-objective optimization approach outperforms those proposed in published works dealing with the generation of sensitivity-aware Pareto fronts, particularly [15]: it yields a full archive (as in [15]) in around one sixth of the time. The approach of [15] exploits the inheritance process within genetic algorithms to convey the considered features from one generation to the next. However, within NSGA-II, the ranking processes as well as the evolutionary operators are very time consuming. Here, we exploit the fact that information sharing between particles within a PSO swarm is very fast, hence the significant time reduction. The computation time becomes reasonable, making our approach more suitable for a CAD program (the overall computation time is mainly limited by the execution time of the Richardson extrapolation technique, as can be noticed when comparing the Case #0 and Case #4 results in Table 1). Further, as shown in Figures 8 and 9, the proposed approach generates more interesting Pareto fronts. Our approach is thus well suited to sizing analog circuits and ensures convergence to the optimal tradeoff surface without impairing the expected overall performance of the metaheuristic.

Figure 10. Boxplot corresponding to the computation times for 30 runs for both CCII and VF.

Conclusions
An MOPSO-CD sensitivity-aware optimization approach has been proposed. It allows Pareto fronts linking conflicting performances of analog circuits to be generated. The basic idea consists of 'positively' altering the exploration process of the considered metaheuristic by introducing sensitivity analysis within the social influence of the swarm on a particle's move decision. The proposed multi-objective optimization approach outperforms those proposed in the current literature. In addition, it is well suited to implementation within a sizing CAD tool. Similar philosophies can be applied to handle other complex objectives, such as yield optimization, as well as other engineering problems.
The Richardson extrapolation technique remains a limitation regarding computing time reduction, as stressed in Section 5. Our current work focuses on using metamodels for sensitivity evaluation.