Article

A Novel ANN-PSO Method for Optimizing a Small-Signal Equivalent Model of a Dual-Field-Plate GaN HEMT

1 Innovation Center for Electronic Design Automation Technology, Hangzhou Dianzi University, Hangzhou 310018, China
2 Empyrean Technology Co., Ltd., Beijing 100102, China
* Author to whom correspondence should be addressed.
Micromachines 2024, 15(12), 1437; https://doi.org/10.3390/mi15121437
Submission received: 24 October 2024 / Revised: 20 November 2024 / Accepted: 25 November 2024 / Published: 28 November 2024

Abstract

This study introduces a novel method that integrates artificial neural networks (ANNs) with the Particle Swarm Optimization (PSO) algorithm to enhance the efficiency and precision of parameter optimization for the small-signal equivalent model of dual-field-plate GaN HEMT devices. We initially train an ANN model to predict the S-parameters of the device, and subsequently utilize the PSO algorithm for parameter optimization. Comparative analysis with the NSGA2 and DE algorithms, based on convergence speed and accuracy, underscores the superiority of the PSO algorithm. Ultimately, this ANN-PSO approach is employed to automatically optimize the internal parameters of a 4 × 250 μm dual-field-plate GaN HEMT equivalent circuit model within the frequency range of 1–18 GHz. The method’s effectiveness under varying bias conditions is validated through comparison with traditional physical formula analysis methods. The results demonstrate that the ANN-PSO method significantly enhances the automation and efficiency of parameter optimization while maintaining model accuracy, providing a reference for the optimization of other device models.

1. Introduction

GaN HEMTs hold significant potential for development in the RF domain [1,2,3] due to their remarkable properties such as high electron mobility, saturation velocity, and thermal stability [4]. However, GaN HEMT devices still encounter challenges like breakdown and leakage currents under high-electric-field and power operation conditions. The field plate (FP) is a widely adopted electric field optimization technology [5], and various types of GaN HEMT devices [6,7,8] featuring field-plate structures have been proposed and fabricated. Research has demonstrated that GaN HEMTs, after applying an FP, can effectively balance the internal electric field of the device [9,10], significantly improve the breakdown voltage [11], and suppress current collapse [12,13]. Notably, devices with dual field plates (DFPs), integrating both a source field plate (SFP) and a gate field plate (GFP), exhibit higher breakdown voltage [14] and lower dynamic on-resistance [15] compared to those with a single FP.
As the manufacturing of GaN HEMTs with FPs becomes increasingly prevalent, the conventional small-signal equivalent model for GaN HEMTs [16,17,18] proves increasingly inadequate for accurately depicting their physical behavior. Substantial research has been dedicated to modeling FPs in GaN HEMTs [19,20,21]. Notably, the research conducted by JY Wang et al. [22] introduced an innovative small-signal model for GaN HEMTs by incorporating an equivalent circuit for the FP. While this advancement significantly enhances the model’s accuracy, it also increases the complexity of the equivalent circuit. Owing to the reliance on intricate physical formula derivation and the extensive knowledge possessed by engineers, the time required to extract and optimize the parameters of the new small-signal model is substantially greater than that for traditional models.
To enhance the efficiency of optimizing this novel small-signal equivalent model, we propose utilizing an ANN as a surrogate for traditional simulators. Neural networks are well regarded in the realm of RF and microwave modeling and design [23] due to their robust learning capabilities in capturing nonlinear input–output relationships. There is a wide range of research on GaN technology using AI applications [24,25,26,27]; in the context of GaN HEMTs, extensive research has employed neural networks to predict device breakdown characteristics [28], current–voltage (I–V) characteristics [29], and small-signal (S-parameter) characteristics [30,31,32]. However, the majority of these neural network training studies are based on three-dimensional electromagnetic field simulations in TCAD or other software, with relatively fewer studies focusing on SPICE simulations, particularly those involving DFP equivalent structures.
In the domain of model parameter extraction algorithms, the majority of research gravitates towards evolutionary algorithms, including popular ones such as genetic algorithms [33,34,35] and PSO [36,37]. A minority of researchers have proposed approaches that integrate algorithms with neural networks [38], achieving commendable results. However, there is still a significant gap in research that effectively marries well-trained neural network models with optimization techniques to further enhance the optimization of GaN HEMT model parameters.
This paper first introduces the DFP GaN HEMT device and its small-signal equivalent model. It then presents the ANN model trained for this equivalent model. Finally, it proposes a method that couples the ANN model with the PSO algorithm to optimize the small-signal model parameters of the DFP GaN HEMT equivalent circuit, and compares its precision and time cost with those of models constructed through conventional analytical methods. Verification is conducted on a DFP GaN HEMT fabricated in a 450 nm process, featuring four gate fingers, each measuring 250 μm in width. The results demonstrate that, compared to the traditional parameter optimization method based on physical formula analysis, this approach not only preserves model accuracy but also automates parameter optimization, thereby enhancing efficiency.

2. Materials and Methods

2.1. Novel Device Model

Figure 1 illustrates a cross-section of a DFP GaN HEMT manufactured in China, which integrates a GFP and an SFP to simultaneously suppress current collapse and improve the breakdown and gain characteristics [22]. The gate length (Lg) is 0.45 μm, the GFP length (Lgfp) is 0.3 μm, and the SFP length (Lsfp) is 1.2 μm. The source access region (Lgs) is 1 μm long, and the drain access region (Lgd) is 3 μm long. These characteristic dimensions are taken from Reference [22] and are typically provided by the manufacturer upon device delivery, since SPICE device modeling usually follows actual fabrication. Our modeling was therefore carried out with precise knowledge of the device’s characteristic dimensions. Compared to the conventional GaN HEMT model, this model has a more complex topology, and the analytical effort required to derive reasonable parameters with traditional methods is excessively high. We therefore chose this model to demonstrate and validate our work.
We selected the 450 nm device because we had already measured devices of this size using a Keysight N5247A PNA-X microwave network analyzer (Keysight Technologies), which provides more reliable data than purely theoretical predictions. The measurement setup was calibrated using the short–open–load–through (SOLT) method. It is noteworthy that the method proposed in this paper is based on modifications to the equivalent structure of the DFP HEMT, and this equivalent structure is not directly related to the device’s dimensions. In fact, the gate length does not directly influence the internal parameter extraction during the automated optimization process; as long as it lies within a reasonable range for manufacturable devices, the method can be applied.
The latest small-signal equivalent model structure of the device [22] is depicted in Figure 2, which is segmented into four principal parts. The area outside the dashed line signifies the peripheral equivalent, the area within the red box denotes the equivalent subcircuit of the intrinsic transistor, and the blue and green boxes represent the equivalent subcircuits of the GFP and the SFP, respectively. This equivalent structure applies to all GFP devices. Consequently, based on this equivalent topological structure, the optimization methods proposed in our work are not limited to the devices in Reference [22] but are universally applicable to all FP devices.
After introducing the two field plates, the number of intrinsic parameters in the novel HEMT equivalent circuit model reaches 14 (Cgs1, Cgd1, Cds1, Cgs2, Cgd2, Cds2, Cds3, Rds1, Rds2, Rds3, gm1, gm2, τ1, τ2); these model parameters are continuously optimized to tune the model accuracy.

2.2. Model Optimization Method

The optimization method for the small-signal equivalent model of DFP GaN HEMT devices can be succinctly summarized as follows: training a neural network model and optimizing with PSO. As illustrated in Figure 3, the principal steps involved are producing datasets, training the ANN model, and invoking the PSO algorithm.

2.2.1. Producing Datasets

The small-signal equivalent circuit utilized in this study is depicted in Figure 2. Notably, the parameters of the peripheral components (Cpda, Cgda, Cpga, Cpgi, Cpdi, Cgdi, Lg, Ld, Ls, Rg, Rd, Rs) are independent of the bias. Modelers can determine these parameters under “zero drain bias pinched-off conditions” using low-frequency Y-parameters [39]. The equivalent subcircuits within the three dashed boxes are influenced by the bias conditions under which the device operates. We extract these bias-dependent internal parameters from the circuit and use them as inputs to the trained neural network, with the simulated S-parameter values of the device model serving as the output. We used the Advanced Design System (ADS) 2017 simulator from Keysight Technologies to sample the datasets. The range of input parameter values is detailed in Table 1 and corresponds to the ranges set by engineers during optimization. We randomly sampled and simulated 10,000 sets of data under bias conditions of Vds = 28 V, 40 V, and 48 V, corresponding to Vgs = 1 V, −1 V, and −3 V, which comprise one on-state and two off-state operating conditions. These three conditions represent potential voltage scenarios encountered during HEMT operation, and most other bias conditions fall within the scope described by these three scenarios. In exceptional circumstances, one can follow the same procedure: first, gather samples to construct the database; then, train the ANN; and finally, employ PSO to derive the model parameters. Throughout this sequence, the configurations can remain constant.
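As a concrete illustration, the sampling step could be sketched in Python roughly as follows. The bounds are those of Table 1; the uniform-sampling assumption and all function names are ours (the paper does not state the sampling distribution), and the call into the ADS simulator is only indicated by a comment.

```python
import numpy as np

# Lower/upper bounds for the 14 intrinsic parameters (values from Table 1);
# units follow the table.
BOUNDS = {
    "Cgs1": (500, 2000), "Cgd1": (200, 400), "Cds1": (5, 50),
    "Cgs2": (200, 700),  "Cgd2": (10, 50),   "Cds2": (1, 30),
    "Cds3": (1, 200),    "Rds1": (5, 20),    "Rds2": (100, 500),
    "Rds3": (20, 70),    "gm1": (100, 400),  "gm2": (50, 250),
    "tau1": (3000, 5500), "tau2": (2500, 5000),
}

def sample_parameter_sets(n_sets=10_000, seed=0):
    """Draw n_sets random intrinsic-parameter vectors within the Table 1 bounds."""
    rng = np.random.default_rng(seed)
    lb = np.array([b[0] for b in BOUNDS.values()], dtype=float)
    ub = np.array([b[1] for b in BOUNDS.values()], dtype=float)
    return rng.uniform(lb, ub, size=(n_sets, len(BOUNDS)))

params = sample_parameter_sets()
# Each row would then be written into the equivalent-circuit netlist and swept
# over 1-18 GHz in the ADS simulator to obtain the corresponding S-parameters.
```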
During model training, using a dataset with high internal consistency can significantly enhance training accuracy. To ensure the model’s precision, we carefully curated the datasets, excluding interference data arising under resonance and other anomalous conditions, as shown in Figure 4. We set a specific threshold (0.5 in Figure 4) to identify and exclude interference curves that deviate significantly from the majority. This strategy filters out potential outliers that might degrade model performance, thereby ensuring robustness and accuracy in the training results. Following this data refinement, 5245 sample sets were retained for the bias condition of Vds = 28 V, Vgs = 1 V, while the other two bias conditions retained all 10,000 sets each. These operations effectively mitigate the “garbage in, garbage out” issue in training and bolster the model’s reliability. Subsequent results indicated that, after data cleaning, the model training accuracy increased from 78% to 99%.
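The cleaning step can be illustrated with a short sketch. The paper only specifies a 0.5 threshold for curves that deviate from the majority (Figure 4), so the deviation-from-the-median criterion and all names below are illustrative assumptions.

```python
import numpy as np

def clean_dataset(s11_real_curves, threshold=0.5):
    """Flag curves that deviate strongly from the bulk of the dataset.

    s11_real_curves: array of shape (n_samples, n_freq_points) holding the
    real part of S11 over the 1-18 GHz sweep for each sampled parameter set.
    """
    median_curve = np.median(s11_real_curves, axis=0)                    # reference trend
    max_dev = np.max(np.abs(s11_real_curves - median_curve), axis=1)     # worst-case deviation
    return max_dev < threshold                                           # keep mask (0.5 as in Figure 4)

# keep_mask = clean_dataset(curves)   # curves from the ADS sweep
# filtered = curves[keep_mask]
```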
We meticulously divided our dataset into an 80% training set and 20% test set. This ratio aims to balance the thoroughness of model training with the accuracy of validation. By utilizing the majority of the data for training, we ensure that the neural network can learn the complex patterns and relationships within the data. Meanwhile, the retained 20% test set provides an independent and representative sample set for the validation of model performance. This division strategy not only provides our model with ample training data to capture subtle features within the data but also ensures that we can accurately assess the model’s generalization capabilities through a significant test set. In this way, we can ensure that the model maintains a high level of performance and predictive accuracy when dealing with unseen samples.

2.2.2. Training the ANN Model

After a meticulous data calibration process, our training set demonstrated significant similarity and consistency in the sample data, laying a solid foundation for building a robust neural network model. In light of this, we adopted the widely recognized hold-out method for model training and validation. This method is highly regarded for its simplicity and the absence of any prior assumptions about data distribution, ensuring that our research possessed high adaptability and universality.
During the training phase of the neural network, we normalized all data, which facilitated an accelerated convergence speed and enhanced both the model’s generalization capability and numerical stability. The loss function employed was the mean squared error (MSE), as illustrated in Formula (1). This loss function enabled smoother weight adjustments, mitigating significant weight changes caused by individual outliers. Formula (2) depicts the calculation method for the model’s R-squared (R2), a widely used metric to assess model precision. The closer its value is to 1, the higher the predictive accuracy of the model. We selected the Adam optimizer due to its ability to adaptively adjust the learning rate, thereby expediting the convergence process.
$\mathrm{MSE} = \frac{1}{N}\sum_{i=1}^{N}\left(\hat{y}_i - y_i\right)^2$ (1)

$R^2 = 1 - \frac{\sum_{i=1}^{N}\left(y_i - \hat{y}_i\right)^2}{\sum_{i=1}^{N}\left(y_i - \bar{y}\right)^2}$ (2)
In Formulas (1) and (2), $\hat{y}_i$ is the model’s predicted value, $y_i$ is the corresponding simulated (target) value, and $\bar{y}$ denotes the mean of the target values.
The final architecture of the ANN comprises 1 input layer, 2 hidden layers, and 1 output layer, as illustrated in Figure 5. The input layer contains 15 nodes, representing the 14 internal parameters (Cgs1, Cgd1, Cds1, Cgs2, Cgd2, Cds2, Cds3, Rds1, Rds2, Rds3, gm1, gm2, τ1, τ2) and one frequency parameter (freq). The output layer comprises 8 nodes, each corresponding to the real and imaginary parts of S11, S12, S21, and S22. Both hidden layers are composed of 256 neurons each. The ReLU activation function is used in the input and hidden layers to enhance the network’s nonlinear expressiveness.
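For illustration, a network of this shape could be defined and trained as sketched below. The framework (PyTorch), the number of epochs, and the learning rate are our assumptions and are not specified in the paper; only the 15-256-256-8 topology, ReLU, MSE loss, and the Adam optimizer follow the text.

```python
import torch
from torch import nn

# 15 inputs (14 intrinsic parameters + frequency) -> 8 outputs
# (real and imaginary parts of S11, S12, S21, S22).
model = nn.Sequential(
    nn.Linear(15, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 8),
)

def train(model, x_train, y_train, epochs=500, lr=1e-3):
    """Fit the network on the normalized 80% training split."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)  # adaptive learning rate
    loss_fn = nn.MSELoss()                                   # Formula (1)
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(x_train), y_train)
        loss.backward()
        optimizer.step()
    return model

def r_squared(y_pred, y_true):
    """Formula (2): coefficient of determination over all outputs."""
    ss_res = torch.sum((y_true - y_pred) ** 2)
    ss_tot = torch.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```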

2.2.3. Invoking the PSO Algorithm

The trained neural network model can function as a black-box model, synergized with other optimization algorithms. In this study, we employ the PSO algorithm in conjunction with the trained ANN to deduce the internal parameters of the equivalent model. All optimization algorithms utilized in this work are sourced from the Python open-source library Pymoo, a renowned repository of optimization algorithms. The algorithms within this library are highly flexible and scalable, making them suitable for addressing a diverse array of optimization challenges. To validate the efficacy and superiority of the PSO algorithm selected for parameter optimization in this paper, we also incorporated the Differential Evolution (DE) algorithm and the Non-dominated Sorting Genetic Algorithm II (NSGA2) from the Pymoo library to compare their effectiveness in the parameter optimization process.
The PSO algorithm is a heuristic search technique grounded in swarm intelligence [37,40], which addresses optimization problems by emulating the collective behaviors of birds and fish in nature. Within the PSO framework, the solution space of a problem is conceptualized as an n-dimensional domain, where each potential solution, termed a particle, possesses its own velocity and position within this domain. The algorithm progressively refines the positions and velocities of particles by considering both the individual optimal positions and the global optimal position of the entire swarm, thereby striving to discover the optimal solution. The PSO algorithm boasts advantages such as rapid convergence and high precision.
In the PSO algorithm, we configure a swarm of 200 particles. The number of particles is an important parameter, as it affects the algorithm’s search capability and convergence speed. The choice of 200 particles is based on experimentation and experience: while it increases computational cost, a larger swarm expands the coverage of the search space and increases the probability of finding the global optimum, which improves the overall efficiency of the algorithm. The solution space is defined by the 14 internal parameters of the model, and the frequency input required by the neural network for prediction is swept over 1–18 GHz in 0.1 GHz increments. In the PSO update strategy, we set the inertia factor w of the particles to 0.8, a commonly used value that dictates how strongly a particle’s current velocity influences its velocity in the next iteration. A larger inertia factor keeps particles at higher speeds, favoring global search; a smaller one slows the particles, favoring local search. The value of 0.8 balances global and local search. The acceleration constants c1 and c2, which weight the random acceleration of a particle towards its personal best (pbest) and the global best (gbest) positions, are both set to 2. According to Reference [41], setting the acceleration coefficients to 2 generates better solutions; it is also a compromise that accelerates convergence while maintaining a reasonable level of stability. The formulas governing the updates of particle velocity and position are presented in Equations (3) and (4), respectively.
$v_i^{k+1} = w\,v_i^{k} + c_1\,\mathrm{rand}_1(0,1)\left(pbest_i^{k} - x_i^{k}\right) + c_2\,\mathrm{rand}_2(0,1)\left(gbest_i^{k} - x_i^{k}\right)$ (3)

$x_i^{k+1} = x_i^{k} + v_i^{k+1}$ (4)
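For reference, Equations (3) and (4) translate directly into the following update step. This is a standalone NumPy sketch; the paper itself relies on Pymoo’s PSO implementation, and boundary handling of the 14 parameters is omitted here.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.8, c1=2.0, c2=2.0, rng=None):
    """One PSO iteration following Equations (3) and (4).

    x, v, pbest: arrays of shape (n_particles, 14); gbest: shape (14,).
    """
    if rng is None:
        rng = np.random.default_rng()
    r1 = rng.random(x.shape)  # rand1(0, 1)
    r2 = rng.random(x.shape)  # rand2(0, 1)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # Equation (3)
    x_new = x + v_new                                              # Equation (4)
    return x_new, v_new
```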
The DE algorithm [42] is a population-based, single-objective optimization technique that iteratively refines the population through mutation, crossover, and selection operations. It is adept at solving global optimization problems involving continuous functions. The DE algorithm is distinguished by its straightforward operation, minimal parameter requirements, and ease of implementation. Initially, the DE algorithm generates mutated individuals from the initial population using a differential strategy. Subsequently, these mutated individuals are combined with target individuals according to specific rules to produce trial individuals. Finally, the fitness of the trial individuals is compared with that of the current population, and the superior individuals are retained for the next generation. This iterative process continues until the termination condition is satisfied. A pivotal parameter in the DE algorithm is the crossover constant (CR), which ranges between 0 and 1. In this experiment, we utilized the DE algorithm package from Pymoo and set the crossover constant to 0.9. This configuration enhances the interdependence of the optimization parameters and facilitates the algorithm’s convergence.
The NSGA2 [43] is an advanced genetic algorithm that seeks optimal solutions for optimization problems by emulating natural selection and genetic mechanisms. Initially, the NSGA2 algorithm performs non-dominated sorting and crowding distance calculations on the initial population. It then generates offspring through processes of selection, crossover, and mutation. These offspring are subsequently combined with the parent generation, followed by a repeat of non-dominated sorting and crowding distance calculations to select the most superior individuals to form a new parent population. This iterative process continues until the termination conditions are met. Through these steps, NSGA2 effectively identifies a set of solutions that balance multiple objectives. In this study, the implementation of NSGA2 is achieved by utilizing the default NSGA2 method available in the Pymoo library.
For the objective (fitness) function, we use the weighted sum of the errors of the scattering parameters, as delineated in Formula (5). In measured data, the S12 component is significantly smaller than the other S-parameters, which can easily lead to a poor fit of this component when the PSO algorithm is used to match the target. Consequently, the S12 term in the function is multiplied by a weight coefficient of 3 to ensure the accuracy of the optimization in this part [44].
$\mathrm{Fitness} = \Delta S_{11} + 3\,\Delta S_{12} + \Delta S_{21} + \Delta S_{22}$ (5)
In Formula (5), each ΔS term is defined in Formula (6).
$\Delta S_{ij} = \sum_{n=1}^{N}\left|S_{ij,n}^{\mathrm{measured}} - S_{ij,n}^{\mathrm{predicted}}\right|, \quad i,j = 1,2;\; n = 1,2,\dots,N$ (6)
The N in Formula (6) represents the number of frequency points within the range of 1–18 GHz.
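A sketch of this fitness evaluation is given below. It assumes the predicted S-parameters come from the trained ANN and that the error in Formula (6) is the magnitude of the complex difference at each frequency point; the array layout and function names are illustrative.

```python
import numpy as np

def fitness(s_measured, s_predicted, weights=(1.0, 3.0, 1.0, 1.0)):
    """Formulas (5)-(6): weighted sum of S-parameter errors over all N frequency points.

    s_measured, s_predicted: complex arrays of shape (N, 2, 2) holding
    S11, S12, S21, S22 at each of the N frequencies in 1-18 GHz.
    """
    total = 0.0
    for (i, j), w in zip([(0, 0), (0, 1), (1, 0), (1, 1)], weights):
        delta = np.sum(np.abs(s_measured[:, i, j] - s_predicted[:, i, j]))  # Formula (6)
        total += w * delta                         # S12 weighted by 3, per Formula (5)
    return total
```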
In the algorithm comparison experiment of this study, to ensure a fair comparison, we set the population size of the three optimization algorithms to 200 and the maximum number of iterations to 200. We verified the convergence-speed and accuracy advantages of the PSO algorithm in small-signal model parameter optimization through the fitness curves over the optimization iterations. Finally, the optimal parameters obtained by the algorithm are applied to the small-signal model and compared with the traditional optimization results derived from analytical calculations using physical formulas.
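The comparison setup can be sketched with Pymoo roughly as follows, assuming a recent Pymoo release (0.6.x) and reusing the fitness sketch above; the class and argument names are ours, and constructor options may differ between Pymoo versions.

```python
import numpy as np
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.soo.nonconvex.pso import PSO
from pymoo.algorithms.soo.nonconvex.de import DE
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize

class HemtFitProblem(ElementwiseProblem):
    """Single-objective problem: find the 14 intrinsic parameters that minimize
    the Formula (5) fitness, with the ANN standing in for the circuit simulator."""

    def __init__(self, ann_predict, s_measured, fitness_fn, lb, ub):
        # lb, ub: length-14 arrays with the Table 1 bounds
        super().__init__(n_var=14, n_obj=1, xl=lb, xu=ub)
        self.ann_predict = ann_predict   # maps a parameter vector to S-parameters over 1-18 GHz
        self.s_measured = s_measured
        self.fitness_fn = fitness_fn     # e.g. the fitness() sketch above

    def _evaluate(self, x, out, *args, **kwargs):
        out["F"] = self.fitness_fn(self.s_measured, self.ann_predict(x))

# problem = HemtFitProblem(ann_predict, s_measured, fitness, lb, ub)
# for algo in (PSO(pop_size=200), DE(pop_size=200, CR=0.9), NSGA2(pop_size=200)):
#     res = minimize(problem, algo, ("n_gen", 200), seed=1, verbose=False)
#     print(type(algo).__name__, res.F)
```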

3. Results

The verification device was a China-produced DFP GaN HEMT fabricated using a 450 nm process, featuring four gate fingers, each 450 nm in length and 250 μm in width. The operational bias conditions were set to Vds = 28 V, 40 V, and 48 V, corresponding to Vgs = 1 V, −1 V, and −3 V, respectively, with S-parameter measurements conducted over 1–18 GHz. Under the condition of Vds = 28 V, Vgs = 1 V, the gate–source voltage was positive and the device operated in the on-state; under the condition of Vds = 40 V, Vgs = −1 V, the gate–source voltage was negative and the device was turned off. The device is designed for high-power applications in the S-band to Ku-band at a drain voltage of 28 V, so an additional condition with a high drain bias of Vds = 48 V, Vgs = −3 V was added. To validate the proposed optimization method for the small-signal equivalent circuit model, this paper optimized the parameters under the aforementioned three bias conditions and compared the results with those derived manually by engineers.

3.1. ANN Accuracy

Figure 6 illustrates the changes in test accuracy, training accuracy, and training loss during the training of the ANN model under the three bias conditions. Table 2 presents the training results. The training error (Train_Loss) is computed using the MSE (Formula (1)), while the training accuracy (Train_Accuracy) and the testing accuracy (Test_Accuracy) are both determined using R² (Formula (2)).
Additionally, we randomly selected data under the three bias conditions that lie outside the training set and compared the predicted outputs of the ANN model with the simulated data. The comparison and the error between the two sets are illustrated in Figure 7.

3.2. Optimization Algorithm Comparison

The trained neural network, in conjunction with the PSO algorithm, was employed to optimize the internal parameters of the small-signal equivalent circuit model under the three bias conditions. The algorithm targets the fitting of the measured S-parameters of the device. By using the neural network to rapidly predict the S-parameter response corresponding to the 15 model inputs and iterating, a set of circuit parameters suitable for simulating the model netlist is ultimately obtained. Optimization tests were carried out using the DE, NSGA2, and PSO algorithms. The trend in fitness during the optimization and the final fitness of the three algorithms are documented and illustrated in Figure 8.

3.3. ANN-PSO Optimization Results

Table 3 presents the optimization results of the model parameters using PSO under the three bias operating conditions, alongside the extraction results obtained through the conventional analytical derivation approach [22], and enumerates the absolute errors.
Moreover, the model parameters derived from PSO optimization and traditional analytical derivation are employed to simulate the equivalent circuit within the frequency range of 1–18 GHz. A comparison between the measured parameters and the simulated parameters is illustrated in Figure 9.
Additionally, we introduced the relative error of the scattering parameters [22] to assess the accuracy of the small-signal model. The error is calculated as shown in Formula (7), where n denotes the number of frequency points. The total percentage error ETOT of the model was determined by averaging the evaluated errors of the four Sxy components of the device. The error calculation results are displayed in Table 4.
$E_{xy}\big|_{x,y=1,2} = \sqrt{\dfrac{\sum_{i=1}^{n}\left|S_{xy}^{\mathrm{sim}}(i) - S_{xy}^{\mathrm{meas}}(i)\right|^{2}}{\sum_{i=1}^{n}\left|S_{xy}^{\mathrm{meas}}(i)\right|^{2}}}$ (7)
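Formula (7), as reconstructed above, corresponds to the following computation (a sketch; the percentage scaling matches how Table 4 reports the errors, and ETOT is the average over the four components).

```python
import numpy as np

def relative_error(s_sim, s_meas):
    """Formula (7): relative error of one S-parameter component over n frequency points."""
    num = np.sum(np.abs(s_sim - s_meas) ** 2)
    den = np.sum(np.abs(s_meas) ** 2)
    return 100.0 * np.sqrt(num / den)  # reported as a percentage, as in Table 4

# E_TOT is then the average of relative_error over S11, S12, S21, and S22.
```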

4. Discussion

In the experimental results, the R² values for both training and testing in Table 2 exceed 0.99. Furthermore, in the comparison of randomly selected experimental data with simulation results (Figure 7), the average error in the S-parameters remains within 0.15%. This high precision is attributed to the adoption of a rather complex neural network architecture. The magnitude of the error between the neural network predictions and the simulator results is crucial for the subsequent optimization work of the algorithm. To make the neural network’s prediction accuracy approach that of the simulator, we configured 256 neurons in each hidden layer. In practice, during testing, we found that this configuration exceeded the requirements, as approximately 32 neurons could already achieve an accuracy of around 98%. However, this 1% difference in accuracy could lead to significant modeling errors when combined with the PSO algorithm for optimization. Therefore, we decided to increase the number of neurons to 256. With the support of modern computing resources, this configuration does not significantly increase the time and computational cost of training, while enhancing the applicability and robustness of our approach. Hence, sacrificing a little speed to gain this additional 1% accuracy is highly worthwhile. Additionally, we opted for ReLU as the activation function. Compared to the tanh and sigmoid activation functions, which require complex exponential operations, ReLU is implemented simply through a max() operation, making it computationally simpler and more cost-effective. The ReLU function also exhibits sparse activation, allowing some neurons in the network to remain inactive for specific inputs, thereby reducing unnecessary computations. In this way, we mitigate some of the computational burden introduced by the number of neurons.
The curves in Figure 7 represent the results of our randomly selected tests and provide evidence of the success of our ANN architecture. The data in the figure are fitted very well, indicating that the model can nearly perfectly predict the outcomes for both the training and testing data and demonstrating excellent predictive performance and generalization capability. Under these circumstances, ANN models can efficiently supplant traditional simulators for the swift and accurate prediction of the simulated S-parameters of the GaN HEMT, thereby reducing simulation time and enhancing parameter tuning efficiency. For a simple circuit, a simulator typically takes several seconds per simulation, whereas the neural network requires less than 0.001 s. During the optimization process, simulation values are queried continuously; as the number of iterations increases, the speed advantage of a precise ANN model, like the one in Figure 7, becomes increasingly apparent, thereby enhancing overall parameter adjustment efficiency.
In the horizontal comparison of the optimization algorithms, the results in Figure 8 illustrate the superiority of the PSO algorithm on the GaN HEMT small-signal model. During the optimization with the DE algorithm, premature convergence occurred, which may be attributed to the fact that, in the optimization of the small-signal model, the value of S12 is small and the value of S21 is large compared to the other S-parameters. Although the weights were adjusted in the fitness function, it remains challenging for the DE algorithm to significantly enhance the diversity of the population, leading to a reduction in search capability. The NSGA2 algorithm exhibits a slight deficiency in model accuracy, potentially due to its limited ability to handle multi-objective optimization problems with constraints. Compared to these two algorithms, the PSO algorithm not only converges more rapidly but also demonstrates superior accuracy in solving the model parameter optimization problem presented in this paper.
The parameter optimization results of the ANN-PSO method and the traditional approach are presented in Table 3. It is evident that the parameters optimized via the neural network model are remarkably close to the outcomes derived from traditional analytical methods, indicating that the final parameters optimized by the ANN-PSO method can also be physically interpreted. Additionally, under the 40 V bias condition, the error in the value of Cgs2 is somewhat significant. According to Reference [22], the Cgs2 in the classic model is 528.5; the manually tuned value was adjusted from this based on experience. The optimized Cgs2 result is, in fact, closer to the result derived from physical theory calculations, which indicates that our optimization method can reduce the errors introduced by conventional empirical adjustments to some extent.
When engineers employ traditional methods to optimize parameters, they must consider the actual physical significance to balance the relationships between the parameters of various components. The analysis and parameter adjustment work requires extensive trial and error, often taking several days to complete. In contrast, training an ANN model requires only a few hours, and it can directly yield the output results from the input parameters, without the necessity of understanding and explicating the specific relationship between the internal output values and the input values. Consequently, modelers are no longer required to engage in a complex derivation process of physical formulas to calculate parameters, thereby lowering the entry requirements for modeling optimization tasks and reducing the time cost.
In general, the integration of the ANN and PSO facilitates the automation of model parameter tuning, providing significant advantages in time efficiency and cost over traditional methods. Our research was conducted under three bias conditions, and the S-parameter fitting under each condition proved satisfactory. However, it is noteworthy that at Vds = 28 V the S21 error reached 12.36%, compared to a manually adjusted error of 6.872%. Although the overall error remains within the 5% range, improving the S21 fit could further enhance the model’s accuracy. This discrepancy might stem from the objective function (Formula (5)) in the optimization algorithm, where only the S12 parameter was weighted; in fact, S21 is significantly larger than the other S-parameters, and we also divided it by 20 when plotting the curves. Future work could incorporate a specific coefficient for S21 in the objective function to improve optimization precision. From the comparative results, it is evident that the parameters optimized by the ANN-PSO method fit the S11 and S12 parameters of the model more accurately. Figure 9 visually depicts the S-parameter performance of the GaN HEMT small-signal model optimized by the ANN-PSO method on Smith charts. As shown in Table 4, the average error of the model’s S-parameters, compared to the measured data across 1–18 GHz, is below 5%. This indicates that the small-signal model optimized by ANN-PSO aligns very well with the measured data, and such an error margin is acceptable for modeling purposes.
However, we believe that there remains potential for further improvement in this accuracy. Incorporating more relevant data during training might slightly enhance the ANN model’s precision. Nevertheless, given the already high accuracy of the neural network, expanding the dataset seems unnecessary as it would also prolong the time required for parameter extraction. Our aim is to train a sufficiently precise network with minimal data. To enhance accuracy, the focus should shift towards improving the structure of the ANN or refining the search algorithms. In the future, we might consider integrating the device’s bias conditions as input parameters for neural network training. Nevertheless, this approach would complicate the creation of a high-quality dataset, as variations between curves under different conditions would become more distinct. On the other hand, this study has not significantly improved the PSO algorithm itself; future efforts could refine PSO and integrate it with ANN to boost optimization accuracy. Additionally, exploring simpler, more precise small-signal equivalent topological structures of DFP GaN HEMT models could be beneficial.

5. Conclusions

A rapid and efficient method has been proposed to optimize the novel small-signal equivalent circuit model of a DFP GaN HEMT. The trained neural network model achieves an impressive simulation prediction accuracy of 99.9% for the S-parameters, seamlessly supplanting the simulator. The neural network was coupled with the PSO algorithm to facilitate the automatic optimization of the equivalent circuit model parameters. The optimized small-signal model exhibits an overall average error of 4.43% when compared to the measured data of a 4 × 250 μm DFP GaN HEMT under various bias conditions, closely matching the results obtained through traditional analytical methods. Furthermore, the optimization’s impact on the performance of S11 and S22 surpasses that of conventional optimization techniques. Establishing a GaN HEMT model with this approach requires only a short time, typically one to two hours, and the model’s accuracy is also satisfactory. This method holds promise for accelerating the equivalent-circuit modeling of various FP devices in the future.

Author Contributions

Conceptualization, J.L.; methodology, H.S., W.Z., J.W. (Jinye Wang) and H.J.; writing—original draft preparation, H.S.; writing—review and editing, Y.W. and J.W. (Junchao Wang); supervision, J.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding authors.

Acknowledgments

Profound gratitude is extended to Junchao Wang for his sagacious guidance and invaluable assistance in the formatting of this paper.

Conflicts of Interest

Author Wenyong Zhou was employed by the company Empyrean Technology Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Ma, R.; Wang, Z.; Yang, X.; Lanfranco, S. Implementation of a current-mode class-S RF power amplifier with GaN HEMTs for LTE-Advanced. In Proceedings of the WAMICON 2012 IEEE Wireless & Microwave Technology Conference, Cocoa Beach, FL, USA, 15–17 April 2012; pp. 1–6. [Google Scholar]
  2. Ye, H.; Wu, C.; Venkatesan, N.; Wang, J.; Cao, Y.; Xie, A.; Beam, E.; Fay, P. Ferroelectric-gated GaN HEMTs for RF and mm-wave switch applications. In Proceedings of the 2022 International Symposium on VLSI Technology, Systems and Applications (VLSI-TSA), Hsinchu, Taiwan, 18–21 April 2022; pp. 1–2. [Google Scholar]
  3. Jarndal, A.; Hussein, A.; Crupi, G.; Caddemi, A. Reliable noise modeling of GaN HEMTs for designing low-noise amplifiers. Int. J. Numer. Model. Electron. Netw. Devices Fields 2020, 33, e2585. [Google Scholar] [CrossRef]
  4. He, J.; Cheng, W.C.; Wang, Q.; Cheng, K.; Yu, H.; Chai, Y. Recent advances in GaN-based power HEMT devices. Adv. Electron. Mater. 2021, 7, 2001045. [Google Scholar] [CrossRef]
  5. Berzoy, A.; Lashway, C.R.; Moradisizkoohi, H.; Mohammed, O.A. Breakdown voltage improvement and analysis of GaN HEMTs through field plate inclusion and substrate removal. In Proceedings of the 2017 IEEE 5th Workshop on Wide Bandgap Power Devices and Applications (WiPDA), Albuquerque, NM, USA, 30 October–1 November 2017; pp. 138–142. [Google Scholar]
  6. Ahsan, S.A.; Ghosh, S.; Sharma, K.; Dasgupta, A.; Khandelwal, S.; Chauhan, Y.S. Capacitance modeling in dual field-plate power GaN HEMT for accurate switching behavior. IEEE Trans. Electron Devices 2015, 63, 565–572. [Google Scholar] [CrossRef]
  7. Kellogg, K.; Khandelwal, S.; Craig, N.; Dunleavy, L. Improved charge modeling of field-plate enhanced algan/gan hemt devices using a physics based compact model. In Proceedings of the 2018 IEEE BiCMOS and Compound Semiconductor Integrated Circuits and Technology Symposium (BCICTS), San Diego, CA, USA, 15–17 October 2018; pp. 102–105. [Google Scholar]
  8. Bothe, K.M.; Ganguly, S.; Guo, J.; Liu, Y.; Niyonzima, A.; Tornblad, O.; Fisher, J.; Gajewski, D.A.; Sheppard, S.T.; Noori, B. Improved X-band performance and reliability of a GaN HEMT with sunken source connected field plate design. IEEE Electron Device Lett. 2022, 43, 354–357. [Google Scholar] [CrossRef]
  9. Bahat-Treidel, E.; Hilt, O.; Brunner, F.; Sidorov, V.; Würfl, J.; Tränkle, G. AlGaN/GaN/AlGaN DH-HEMTs breakdown voltage enhancement using multiple grating field plates (MGFPs). IEEE Trans. Electron Devices 2010, 57, 1208–1216. [Google Scholar] [CrossRef]
  10. Wu, H.; Fu, X.; Hu, S. A 650 V Enhancement Mode GaN HEMT Device with Field Plate for Power Electronic Applications. In Proceedings of the 2021 International Conference on Electrical, Communication, and Computer Engineering (ICECCE), Kuala Lumpur, Malaysia, 12–13 June 2021; pp. 1–5. [Google Scholar]
  11. Godfrey, D.; Nirmal, D.; Arivazhagan, L.; Roy, B.; Chen, Y.L.; Yu, T.H.; Yeh, W.K.; Godwinraj, D. Investigation of AlGaN/GaN HEMT Breakdown analysis with Source field plate length for High power applications. In Proceedings of the 2020 5th International Conference on Devices, Circuits and Systems (ICDCS), Coimbatore, India, 5–6 March 2020; pp. 244–246. [Google Scholar]
  12. Chanchal, C.; Visvkarma, A.K.; Malik, A.; Laishram, R.; Rawal, D.; Saxena, M. Dependence of Gate Leakage Current on Efficacy of Gate Field Plate in AlGaN/GaN HEMT. In Proceedings of the 2022 IEEE VLSI Device Circuit and System (VLSI DCS), Kolkata, India, 26–27 February 2022; pp. 265–268. [Google Scholar]
  13. Hasan, M.T.; Asano, T.; Tokuda, H.; Kuzuhara, M. Current collapse suppression by gate field-plate in AlGaN/GaN HEMTs. IEEE Electron Device Lett. 2013, 34, 1379–1381. [Google Scholar] [CrossRef]
  14. Neha; Kumari, V.; Gupta, M.; Saxena, M. Breakdown Voltage Analysis of Different Field Plate AlGaN/GaN HEMTs: TCAD based Assessment. In Proceedings of the 2018 IEEE Electron Devices Kolkata Conference (EDKCON), Kolkata, India, 24–25 November 2018; pp. 407–412. [Google Scholar]
  15. Hu, Q.; Zeng, F.; Cheng, W.C.; Zhou, G.; Wang, Q.; Yu, H. Reducing dynamic on-resistance of p-GaN gate HEMTs using dual field plate configurations. In Proceedings of the 2020 IEEE International Symposium on the Physical and Failure Analysis of Integrated Circuits (IPFA), Singapore, 20–23 July 2020; pp. 1–4. [Google Scholar]
  16. Wen, Z.; Xu, Y.; Wang, C.; Zhao, X.; Xu, R. An efficient parameter extraction method for GaN HEMT small-signal equivalent circuit model. Int. J. Numer. Model. Electron. Netw. Devices Fields 2017, 30, e2127. [Google Scholar] [CrossRef]
  17. Khandelwal, S.; Chauhan, Y.S.; Hodges, J.; Albahrani, S.A. Non-linear rf modeling of gan hemts with industry standard asm gan model. In Proceedings of the 2018 IEEE BiCMOS and Compound Semiconductor Integrated Circuits and Technology Symposium (BCICTS), San Diego, CA, USA, 15–17 October 2018; pp. 93–97. [Google Scholar]
  18. Jarndal, A.; Kompa, G. A new small-signal modeling approach applied to GaN devices. IEEE Trans. Microw. Theory Tech. 2005, 53, 3440–3448. [Google Scholar] [CrossRef]
  19. Hodges, J.; Albahrani, S.A.; Khandelwal, S. A computationally efficient modelling methodology for field-plates in gan hemts. In Proceedings of the 2019 IEEE BiCMOS and Compound semiconductor Integrated Circuits and Technology Symposium (BCICTS), Nashville, TN, USA, 3–6 November 2019; pp. 1–4. [Google Scholar]
  20. Menokey, A.; Ajoy, A. Analytical model for off-state channel potential and electric field distribution in an N-polar GaN-based field-plated MIS-HEMT. In Proceedings of the 2022 IEEE International Conference on Emerging Electronics (ICEE), Bangalore, India, 11–14 December 2022; pp. 1–5. [Google Scholar]
  21. Čučak, D.; Vasić, M.; García, O.; Oliver, J.A.; Alou, P.; Cobos, J.A.; Wang, A.; Martín-Horcajo, S.; Romero, M.F.; Calle, F. Physics-based analytical model for input, output, and reverse capacitance of a GaN HEMT with the field-plate structure. IEEE Trans. Power Electron. 2016, 32, 2189–2202. [Google Scholar] [CrossRef]
  22. Wang, J.; Liu, J.; Zhao, Z. A novel small-signal equivalent circuit model for GaN HEMTs incorporating a dual-field-plate. J. Semicond. 2024, 45, 052302. [Google Scholar] [CrossRef]
  23. Zhang, Q.J.; Gupta, K.C.; Devabhaktuni, V.K. Artificial neural networks for RF and microwave design-from theory to practice. IEEE Trans. Microw. Theory Tech. 2003, 51, 1339–1350. [Google Scholar] [CrossRef]
  24. Wang, Z.; Li, L.; Yao, Y. A machine learning-assisted model for GaN ohmic contacts regarding the fabrication processes. IEEE Trans. Electron Devices 2021, 68, 2212–2219. [Google Scholar] [CrossRef]
  25. Wang, Z.; Li, L.; Leon, R.C.; Yang, J.; Shi, J.; van der Laan, T.; Usman, M. Improving Semiconductor Device Modeling for Electronic Design Automation by Machine Learning Techniques. IEEE Trans. Electron Devices 2023, 71, 263–271. [Google Scholar] [CrossRef]
  26. Yee, N.; Lu, A.; Wang, Y.; Porter, M.; Zhang, Y.; Wong, H.Y. Rapid Inverse Design of GaN-on-GaN Diode with Guard Ring Termination for BV and (V F Q)- 1 Co-Optimization. In Proceedings of the 2023 35th International Symposium on Power Semiconductor Devices and ICs (ISPSD), Hong Kong, China, 28 May–1 June 2023; pp. 143–146. [Google Scholar]
  27. Wu, T.L.; Kutub, S.B. Machine learning-based statistical approach to analyze process dependencies on threshold voltage in recessed gate AlGaN/GaN MIS-HEMTs. IEEE Trans. Electron Devices 2020, 67, 5448–5453. [Google Scholar] [CrossRef]
  28. Liu, S.; Duan, X.; Wang, S.; Zhang, J.; Hao, Y. Optimization of dual field plate AlGaN/GaN HEMTs using artificial neural networks and particle swarm optimization algorithm. IEEE Trans. Device Mater. Reliab. 2023, 23, 204–210. [Google Scholar] [CrossRef]
  29. Abubakr, A.; Hassan, A.; Ragab, A.; Yacout, S.; Savaria, Y.; Sawan, M. High-temperature modeling of the IV characteristics of GaN150 HEMT using machine learning techniques. In Proceedings of the 2018 IEEE International Symposium on Circuits and Systems (ISCAS), Florence, Italy, 27–30 May 2018; pp. 1–5. [Google Scholar]
  30. Marinković, Z.; Crupi, G.; Caddemi, A.; Marković, V. On the neural approach for FET small-signal modelling up to 50 GHz. In Proceedings of the 10th Symposium on Neural Network Applications in Electrical Engineering, Belgrade, Serbia, 23–25 September 2010; pp. 89–92. [Google Scholar]
  31. Khusro, A.; Hashmi, M.S.; Ansari, A.Q.; Auyenur, M. A new and reliable decision tree based small-signal behavioral modeling of GaN HEMT. In Proceedings of the 2019 IEEE 62nd International Midwest Symposium on Circuits and Systems (MWSCAS), Dallas, TX, USA, 4–7 August 2019; pp. 303–306. [Google Scholar]
  32. Khusro, A.; Husain, S.; Hashmi, M.S.; Ansari, A.Q. Small signal behavioral modeling technique of GaN high electron mobility transistor using artificial neural network: An accurate, fast, and reliable approach. Int. J. RF Microw. Comput.-Aided Eng. 2020, 30, e22112. [Google Scholar] [CrossRef]
  33. Ryu, W.; Yim, M.J.; Ahn, S.; Lee, J.; Kim, W.; Paik, K.W.; Kim, J. High-frequency SPICE model of anisotropic conductive film flip-chip interconnections based on a genetic algorithm. IEEE Trans. Components Packag. Technol. 2000, 23, 542–545. [Google Scholar]
  34. Li, Y.; Cho, Y.Y. Parallel genetic algorithm for SPICE model parameter extraction. In Proceedings of the 20th IEEE International Parallel & Distributed Processing Symposium, Rhodes, Greece, 25–29 April 2006; p. 8. [Google Scholar]
  35. Li, R.; Li, D.; Du, H.; Hai, C.; Han, Z. SOI MOSFET model parameter extraction via a compound genetic algorithm. Bandaoti Xuebao (Chin. J. Semicond.) 2006, 27, 796–803. [Google Scholar]
  36. Wu, Y. Parallel hybrid evolutionary algorithm based on chaos-GA-PSO for SPICE model parameter extraction. In Proceedings of the 2009 IEEE International Conference on Intelligent Computing and Intelligent Systems, Shanghai, China, 20–22 November 2009; Volume 1, pp. 688–692. [Google Scholar]
  37. Rizzo, S.A.; Salerno, N.; Raciti, A.; Bazzano, G.; Raffa, A.; Veneziano, P. Parameters optimization of a behavioural SPICE model of an automotive grade SiC MOSFET using Particle Swarm Optimization algorithm. In Proceedings of the 2020 International Symposium on Power Electronics, Electrical Drives, Automation and Motion (SPEEDAM), Sorrento, Italy, 24–26 June 2020; pp. 381–386. [Google Scholar]
  38. Sarvaghad-Moghaddam, M.; Orouji, A.A.; Ramezani, Z.; Elhoseny, M.; Farouk, A.; Arun Kumar, N. Modelling the spice parameters of SOI MOSFET using a combinational algorithm. Clust. Comput. 2019, 22, 4683–4692. [Google Scholar] [CrossRef]
  39. White, P.M.; Healy, R.M. Improved equivalent circuit for determination of MESFET and HEMT parasitic capacitances from “Coldfet” measurements. IEEE Microw. Guid. Wave Lett. 1993, 3, 453–454. [Google Scholar] [CrossRef]
  40. Cai, H.; Zhang, J.; Liu, M.; Yang, S.; Wang, S.; Liu, B.; Zhang, J. Adaptive particle swarm optimization based hybrid small-signal modeling of GaN HEMT. Microelectron. J. 2023, 137, 105834. [Google Scholar] [CrossRef]
  41. He, Y.; Ma, W.J.; Zhang, J.P. The parameters selection of PSO algorithm influencing on performance of fault diagnosis. MATEC Web Conf. 2016, 63, 02019. [Google Scholar] [CrossRef]
  42. Price, K. Differential Evolution: A Practical Approach to Global Optimization; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
  43. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197. [Google Scholar] [CrossRef]
  44. Majumdar, A.; Chatterjee, S.; Chatterjee, S.; Chaudhari, S.S.; Poddar, D.R. Optimization of intrinsic elements from small signal model of GaN HEMT by using PSO. In Proceedings of the 2015 IEEE Applied Electromagnetics Conference (AEMC), Guwahati, India, 18–21 December 2015; pp. 1–2. [Google Scholar]
Figure 1. Cross-section of the DFP GaN HEMT. The letters D, S, and G indicate the drain, source, and gate, respectively. SIN2 and SIN1 represent two different silicon nitride layers.
Figure 2. The novel complete small-signal equivalent circuit of a DFP GaN HEMT.
Figure 3. The internal parameter optimization method for the DFP equivalent circuit utilizing ANN-PSO. This method is divided into three steps: producing datasets, training the ANN model, and invoking the PSO algorithm.
Figure 4. Diagram of dataset curves before and after cleaning (partial real-part data of S11).
Figure 5. The architecture of the ANN: 1 input layer, 2 hidden layers, 1 output layer. The input layer encompasses 15 inputs, each hidden layer contains 256 neurons, and the output layer consists of 8 outputs.
Figure 6. The training curves under the bias conditions of (a) Vds = 28 V, Vgs = 1 V; (b) Vds = 40 V, Vgs = −1 V; (c) Vds = 48 V, Vgs = −3 V. An epoch represents a complete pass over the entire training dataset during neural network training.
Figure 7. The predicted data from the model under randomly selected parameters, compared with simulated data from the simulator. (a–c) Random comparison under the condition of Vds = 28 V, Vgs = 1 V; (d–f) random comparison under the condition of Vds = 40 V, Vgs = −1 V; (g–i) random comparison under the condition of Vds = 48 V, Vgs = −3 V.
Figure 8. The final fitness and fitness curves of the three algorithms under bias conditions of (a) Vds = 28 V, Vgs = 1 V; (b) Vds = 40 V, Vgs = −1 V; (c) Vds = 48 V, Vgs = −3 V. The epoch represents the iteration number of the algorithm.
Figure 9. (Color online) Measured and simulated S-parameters for a 4 × 250 μm DFP GaN HEMT under the bias conditions of (a) Vds = 28 V, Vgs = 1 V; (b) Vds = 40 V, Vgs = −1 V; (c) Vds = 48 V, Vgs = −3 V. The frequency range is 1–18 GHz.
Table 1. The maximum and minimum values of the intrinsic elements delineated for sampling.
Intrinsic Element | LB | UB
Cgs1 (fF) | 500 | 2000
Cgd1 (fF) | 200 | 400
Cds1 (fF) | 5 | 50
Cgs2 (fF) | 200 | 700
Cgd2 (fF) | 10 | 50
Cds2 (fF) | 1 | 30
Cds3 (fF) | 1 | 200
Rds1 (Ω) | 5 | 20
Rds2 (Ω) | 100 | 500
Rds3 (Ω) | 20 | 70
gm1 (mS) | 100 | 400
gm2 (mS) | 50 | 250
τ1 (ps) | 3000 | 5500
τ2 (ps) | 2500 | 5000
Table 2. The training results of the ANN model under three bias conditions.
Bias Condition | Train_Loss (MSE) | Train_Accuracy (R²) | Test_Accuracy (R²)
Vds = 28 V, Vgs = 1 V | 0.006170 | 0.999919 | 0.999920
Vds = 40 V, Vgs = −1 V | 0.006420 | 0.999825 | 0.999823
Vds = 48 V, Vgs = −3 V | 0.005304 | 0.999917 | 0.999917
Table 3. The manually tuned values and PSO optimization values of the small-signal model parameters for a 4 × 250 μm DFP GaN HEMT.
Parameter | Manually Tuned | PSO-Optimized | Absolute Error *

Bias condition at Vds = 28 V, Vgs = 1 V:
Cgs1 (fF) | 950.9 | 960.7 | 9.8
Cgd1 (fF) | 229.0 | 228.7 | 0.3
Cds1 (fF) | 46.39 | 52.4 | 6.01
Cgs2 (fF) | 338.7 | 359.8 | 21.1
Cgd2 (fF) | 35.40 | 40.74 | 15.34
Cds2 (fF) | 20.31 | 21.93 | 1.62
Cds3 (fF) | 134.5 | 131.4 | 3.1
Rds1 (Ω) | 16.01 | 12.18 | 3.83
Rds2 (Ω) | 133.8 | 156.34 | 22.54
Rds3 (Ω) | 50.01 | 49.52 | 0.49
gm1 (mS) | 202.2 | 213.7 | 11.5
gm2 (mS) | 110.8 | 115.2 | 4.4
τ1 (ps) | 4.309 | 4.181 | 0.128
τ2 (ps) | 3.677 | 3.901 | 0.244

Bias condition at Vds = 40 V, Vgs = −1 V:
Cgs1 (fF) | 976.7 | 942.7 | 34.0
Cgd1 (fF) | 294.8 | 270.2 | 24.6
Cds1 (fF) | 15.50 | 10.17 | 5.33
Cgs2 (fF) | 660.2 | 414.4 | 245.8
Cgd2 (fF) | 10.91 | 12.86 | 1.95
Cds2 (fF) | 13.66 | 11.05 | 2.61
Cds3 (fF) | 104.7 | 106.5 | 1.8
Rds1 (Ω) | 8.317 | 8.887 | 0.57
Rds2 (Ω) | 208.9 | 207.5 | 1.4
Rds3 (Ω) | 28.45 | 33.6 | 5.15
gm1 (mS) | 288.0 | 288.4 | 0.4
gm2 (mS) | 167.1 | 151.7 | 15.4
τ1 (ps) | 4.307 | 4.160 | 0.147
τ2 (ps) | 3.624 | 3.931 | 0.307

Bias condition at Vds = 48 V, Vgs = −3 V:
Cgs1 (fF) | 913.1 | 898.6 | 14.5
Cgd1 (fF) | 244.3 | 264.5 | 20.2
Cds1 (fF) | 9.731 | 8.761 | 0.97
Cgs2 (fF) | 539.1 | 459.7 | 79.4
Cgd2 (fF) | 13.13 | 12.94 | 0.19
Cds2 (fF) | 23.39 | 23.58 | 0.19
Cds3 (fF) | 60.77 | 62.03 | 1.26
Rds1 (Ω) | 6.449 | 6.784 | 0.335
Rds2 (Ω) | 351.8 | 340.1 | 11.7
Rds3 (Ω) | 44.1 | 45.4 | 1.3
gm1 (mS) | 207.6 | 223.9 | 16.3
gm2 (mS) | 179.3 | 164.3 | 15
τ1 (ps) | 4.609 | 4.002 | 0.607
τ2 (ps) | 3.881 | 3.892 | 0.011
* Absolute error: The absolute value of the difference between two sets.
Table 4. The error comparison between the proposed method and the traditional approach under three distinct bias conditions.
Bias Condition | Optimization Method | E11 (%) | E12 (%) | E21 (%) | E22 (%) | ETOT (%)
Vds = 28 V, Vgs = 1 V | Manual | 1.549 | 3.192 | 6.872 | 2.393 | 3.5015
Vds = 28 V, Vgs = 1 V | ANN-PSO | 1.231 | 3.199 | 12.36 | 2.791 | 4.8953
Vds = 40 V, Vgs = −1 V | Manual | 1.176 | 4.797 | 4.072 | 2.173 | 3.0545
Vds = 40 V, Vgs = −1 V | ANN-PSO | 1.637 | 5.369 | 5.391 | 3.790 | 4.0468
Vds = 48 V, Vgs = −3 V | Manual | 1.314 | 9.284 | 4.511 | 2.576 | 4.4213
Vds = 48 V, Vgs = −3 V | ANN-PSO | 1.007 | 9.248 | 5.015 | 2.172 | 4.3605