Article

GaN JBS Diode Device Performance Prediction Method Based on Neural Network

School of Microelectronics, Xidian University, Xi’an 710071, China
* Authors to whom correspondence should be addressed.
Micromachines 2023, 14(1), 188; https://doi.org/10.3390/mi14010188
Submission received: 16 December 2022 / Revised: 6 January 2023 / Accepted: 8 January 2023 / Published: 12 January 2023
(This article belongs to the Special Issue GaN-Based Semiconductor Devices, Volume II)

Abstract

GaN JBS diodes exhibit excellent performance in power electronics. However, device performance is affected by multiple parameters of the P+ region, and the traditional TCAD simulation approach is complex and time-consuming. In this study, we used a neural-network machine learning method to predict the performance of a GaN JBS diode. First, 3018 groups of sample data composed of device structure and performance parameters were obtained using TCAD tools. The data were then input into the established neural network for training, which can quickly predict device performance. The final prediction results show that the mean relative errors of the on-state resistance and reverse breakdown voltage are 0.028 and 0.048, respectively, and the predicted values fit the real values closely. This method can quickly design GaN JBS diodes with target performance and accelerate research on GaN JBS diode performance prediction.

1. Introduction

The vertical GaN Schottky diode has been widely used in high-power electronic circuits due to its low switching voltage and fast switching performance [1]. However, it still has shortcomings; for example, its reverse breakdown voltage is not ideal. The GaN junction barrier Schottky (JBS) diode combines the advantages of both the Schottky diode and the PIN diode: it has low on-state resistance and high reverse breakdown voltage, which can significantly improve the performance of power electronics systems [2].
However, the on-state resistance and breakdown voltage of the GaN JBS diode are affected by multiple parameters of the P+ region [3]. Traditional device simulation and experimental test methods have long cycles and low efficiency, so design requires substantial human resources. The rapid development of neural networks provides another way to rapidly predict the structure or properties of devices and materials, and there is existing research on neural-network performance prediction for MOSFETs, SiC devices, and GaN materials [4,5,6]. A GaN JBS diode has more influencing parameters and a more complex mechanism, so a more complex neural network model is necessary to describe the device accurately; the neural network structure therefore needs to be fully optimized.
This paper proposes a method to predict and optimize the performance of GaN JBS diodes using an optimized neural network. The network inputs are the doping concentration of the drift region (Epidop), the doping concentration of the P+ region (Impdop), the ratio of the P+ region width to the spacing between adjacent P+ regions (L), and the injection depth of the P+ region (Impthickness). The network outputs are the on-state resistance (Ron) and the reverse breakdown voltage (BV). TCAD Sentaurus was then used to accumulate the sample data. After training on the data, accuracy and mean relative error (MRE) were used to evaluate the prediction results.

2. GaN JBS Diode TCAD Modeling and Simulation

2.1. Device Structure Simulation

This paper’s device simulation and sample-data accumulation are based on TCAD Sentaurus. Figure 1a shows the device model of the GaN JBS diode. The anode is defined as a Schottky contact, and the cathode is defined as an ohmic contact. The JBS structure is formed on the device surface by four Gaussian-doped P-type regions, while the drift region and substrate are N-doped. When a reverse voltage is applied to the device, the PN junction composed of the P+ region and the N-type drift region withstands the voltage: the large electric field falls across the P+ region, thus increasing the breakdown voltage. The specific structural parameters of the device are shown in Table 1 [7].
Among them, parameter L was set as half of the width of the P+ region, so that it represents both the width of the P+ region and the spacing between adjacent P+ regions.
TCAD Sentaurus was used to analyze the forward and reverse characteristics. Several critical physical models were applied, including avalanche ionization, high-field mobility, bandgap narrowing, doping dependence, and Auger recombination. With the electrode voltage signal set, the forward and reverse I–V curves are as shown in Figure 1b,c: the on-state resistance (Ron) is 0.938 mΩ, and the breakdown voltage (BV) is 549 V. The simulation results show that the GaN JBS diode model built in TCAD Sentaurus behaves realistically.

2.2. Data Gathering

The P+ region and the drift region strongly influence the forward and reverse characteristics of the GaN JBS diode. To better represent the device, the input parameters were chosen as the doping concentration of the drift region (Epidop), the doping concentration of the P+ region (Impdop), the ratio of the P+ region width to the spacing between adjacent P+ regions (L), and the injection depth of the P+ region (Impthickness). To obtain accurate prediction results, the input parameters must be set reasonably. Figure 2 shows the influence of the input parameters on Ron and BV based on TCAD Sentaurus simulation. According to the simulation results, the reasonable ranges of Epidop, Impdop, L, and Impthickness are 3 × 10¹⁵–1 × 10¹⁶ cm⁻³, 3 × 10¹⁷–1 × 10¹⁸ cm⁻³, 0.2–0.6 µm, and 0.15–0.4 µm, respectively [8]. Table 2 lists the specific values of each input parameter.
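The sweep over the values in Table 2 forms a full factorial grid, which can be enumerated directly; the following is a minimal sketch (variable names are ours, values taken from Table 2), giving 8 × 8 × 8 × 6 = 3072 candidate structures, of which 3018 simulated successfully.

```python
from itertools import product

# Candidate values for each input parameter, from Table 2
epidop = [3e15, 4e15, 5e15, 6e15, 7e15, 8e15, 9e15, 1e16]   # drift doping, cm^-3
impdop = [3e17, 4e17, 5e17, 6e17, 7e17, 8e17, 9e17, 1e18]   # P+ doping, cm^-3
l_ratio = [2/8, 1/3, 3/7, 7/13, 2/3, 9/11, 1/1, 3/2]        # width : spacing ratios
impthickness = [0.15, 0.20, 0.25, 0.30, 0.35, 0.40]          # P+ depth, um

# Full factorial grid of device structures to feed into TCAD
samples = list(product(epidop, impdop, l_ratio, impthickness))
print(len(samples))  # 3072 candidate structures before removing failed simulations
```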
Then, the TCAD Sentaurus tool was used to run simulations over the values of the above variables, and Ron and BV were extracted for each sample from the results. After removing the samples whose simulations failed, a dataset with 3018 samples was finally formed (Supplemental Material). A dataset with sufficient samples can effectively improve the generalization of the model. The 3018 samples were then divided into a training set, a verification set, and a test set at a proportion of 8:1:1. The input and output data of the training and verification sets were standardized and normalized [5]. To truly reflect the neural network’s generalization, the algorithm should not see any information from the test set, so the mean and variance used to normalize the test set were taken from the training data.
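The 8:1:1 split and the training-set-only normalization described above can be sketched as follows; the array contents are random placeholders for the four input parameters, but the split proportions and the rule that test-set statistics come from the training data follow the text.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(3018, 4))          # placeholder for the 4 input parameters

# Shuffle indices and split 8:1:1 into train / verification / test
idx = rng.permutation(len(X))
n_train, n_val = int(0.8 * len(X)), int(0.1 * len(X))
train, val, test = np.split(idx, [n_train, n_train + n_val])

# Standardize with statistics from the TRAINING set only, so no
# information about the test set leaks into preprocessing.
mu, sigma = X[train].mean(axis=0), X[train].std(axis=0)
X_norm = (X - mu) / sigma
```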

3. Establishment of Neural Network Structure

To better extract data features, this paper uses a convolutional neural network [9]. The neural network architecture is composed of an input layer, a hidden layer, and an output layer [10]. The forecasting object determines the input and output layers. Figure 3a shows the basic network structure; the key to the design lies in the hidden layer, which in this paper includes fully connected modules, convolution modules, and batch-normalization layers. Each part of the network structure is described below.
  • Input and output layer. According to the established dataset, the doping concentration of the drift region (Epidop), the ratio of the P+ region width to the adjacent spacing (L), the injection depth of the P+ region (Impthickness), and the injection concentration of the P+ region (Impdop) were set as inputs. The on-state resistance (Ron) and breakdown voltage (BV) were set as outputs. Therefore, the network architecture has four inputs and two outputs.
  • Fully connected module. Since the dimension of the raw input vector of the dataset is small, a fully connected module [11] was added after the input layer for dimension expansion, to facilitate subsequent convolution operations. In addition, a batch-normalization layer [12] was added after each fully connected layer to prevent overfitting.
  • Convolution module. The convolution layer of the neural network architecture established in this paper includes a transposed convolution module, a double-branch convolution module, and a plain convolution module. Unlike the plain convolution module, the transposed convolution module [13] can expand the data dimension, so it was added to expand the input dimension further. The double-branch convolution module can extract data features and prevent gradient vanishing or explosion; its structure is shown in Figure 3b. The output features of the two branches are spliced, and the spliced features are used as the output of the double-branch convolution module. In addition, a batch-normalization layer was added between the convolutional layers to prevent overfitting.
The above three parts constitute the network structure established in this paper. Meanwhile, the number of layers in each module of the hidden layer significantly affects the prediction results, so further optimization is needed. Figure 4 shows the process for determining the number of layers. Using an exhaustive search, multiple neural network structures with different numbers of layers were defined and trained on the data; the errors between the predicted and actual values were compared, and the structure with the smallest error was selected as the optimal neural network. The final network structure is shown in Figure 5. It consists of four input layers, three fully connected modules, two transposed convolutional modules, four double-branch convolutional modules, three convolutional modules, three fully connected modules, and two output layers. The number of neurons in each layer is marked below each layer in the figure.
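Based on the description of Figure 3b, a PyTorch sketch of the double-branch convolution module might look like the following. The channel counts, paddings, and activation placement are our assumptions for illustration, not details taken from the paper.

```python
import torch
import torch.nn as nn

class DoubleBranchConv(nn.Module):
    """Sketch of the double-branch 1-D convolution module of Figure 3b:
    one branch of three cascaded 3x1 convolutions, one branch of a single
    5x1 convolution, outputs spliced along the channel dimension."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        # Branch 1: three cascaded 3x1 convolutions with batch normalization
        self.branch3 = nn.Sequential(
            nn.Conv1d(in_ch, out_ch, 3, padding=1), nn.BatchNorm1d(out_ch), nn.ReLU(),
            nn.Conv1d(out_ch, out_ch, 3, padding=1), nn.BatchNorm1d(out_ch), nn.ReLU(),
            nn.Conv1d(out_ch, out_ch, 3, padding=1), nn.BatchNorm1d(out_ch), nn.ReLU(),
        )
        # Branch 2: a single 5x1 convolution with batch normalization
        self.branch5 = nn.Sequential(
            nn.Conv1d(in_ch, out_ch, 5, padding=2), nn.BatchNorm1d(out_ch), nn.ReLU(),
        )
    def forward(self, x):
        # Splice (concatenate) the two branch outputs along channels
        return torch.cat([self.branch3(x), self.branch5(x)], dim=1)

m = DoubleBranchConv(8, 16)
y = m(torch.randn(2, 8, 32))   # (batch, channels, length)
```

The two branches see the same input at different receptive-field sizes, and concatenation preserves both feature sets for the next layer.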

4. Predicted Results

In this paper, the PyTorch deep learning framework, a Python library built on Torch with GPU acceleration [14], was used; it is easy to use and supports dynamic computation graphs and efficient memory use. Training and prediction were carried out on a computing platform with an RTX 3060 GPU and an R7-5800H CPU. First, the loss of each training batch was calculated (the network uses ReLU activations [15]). Then, the ADAM optimizer [16] was used to backpropagate and update the network parameters until the convolutional neural network converged, yielding the trained model. The prediction model uses the early stopping [17] method to control when training ends: when the prediction error of the model on the verification set stops decreasing, or a certain number of iterations is reached, training stops, and the network weights saved from the best previous iteration are taken as the final model parameters. After training, the mean relative error (MRE) was used to characterize the prediction effect. It is defined as
MRE = (1/N) Σ_{i=0}^{N−1} |y_i − f_i| / y_i
where y_i and f_i represent the predicted value and the true value, respectively.
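A direct transcription of this definition (note that, as written in the paper, the denominator is the predicted value y_i) is straightforward; the numeric values below are made up for illustration, not the paper's data.

```python
def mean_relative_error(y_pred, y_true):
    """MRE = (1/N) * sum(|y_i - f_i| / y_i), with y_i the predicted
    and f_i the true value, following the paper's definition."""
    n = len(y_pred)
    return sum(abs(y - f) / y for y, f in zip(y_pred, y_true)) / n

# Toy check with made-up predicted/true pairs
mre = mean_relative_error([1.0, 2.0, 4.0], [1.1, 1.9, 4.2])
```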
Training the sample data with the determined neural network model, the predicted results (Supplemental Material) show that the mean relative errors of Ron and BV are 0.028 and 0.048, respectively. Figure 7 compares the predicted and real values of Ron and BV. For a clearer view of the predicted results, the test-set data were arranged from smallest to largest, and the corresponding predicted values were reordered accordingly.
Figure 6 shows the training loss during the training process; (a) and (b) are the training losses of Ron and BV, respectively. The loss gradually decreases and finally becomes stable. In Figure 7, the black symbols represent the true values, and the nearest red symbols represent the corresponding predicted values; the closer the red and black marks, the better the prediction. The predicted and real values fit well, and the fitting degree of Ron is higher than that of BV. In Figure 7d, there is a slight deviation between the predicted and real values of BV in the 220–300 V range. This phenomenon can be explained as follows. The breakdown voltage (BV) is the voltage applied to the device when the reverse leakage current density reaches 1 × 10⁻⁶ A/cm². However, for some device structure parameters, the leakage current may not reach this threshold when the device breaks down; in that case, BV is determined instead by the maximum electric field. Such data still truly reflect the performance of the GaN JBS diode (Supplemental Material) and are not invalid. Because BV is collected by these two different criteria, its prediction error is relatively large.
To explain the predicted results more intuitively, a bar chart of relative errors is shown in Figure 8. The number of predictions with small relative errors is much larger than the number with large relative errors. Figure 8a shows that 90% of Ron’s prediction errors are in the range of 0 to 0.06, and all errors are less than 0.08. In Figure 8b, 90% of BV’s prediction errors are in the range of 0 to 0.09, and all errors are less than 0.11. These results show that the predictions are satisfactory.
In addition, a decision tree [18], K-nearest neighbors (KNN) [19], and a support vector machine (SVM) [20] were trained on the same data samples and compared with the predictions of the convolutional neural network. Figure 9 shows the MREs of the three traditional machine learning methods and the convolutional neural network. In Figure 9a, the MREs of the decision tree, KNN, SVM, and the proposed network for Ron are 0.36211, 0.05868, 0.45824, and 0.028, in order. For BV, the corresponding MREs are 0.25708, 0.13592, 0.53633, and 0.048. The neural network structure established in this paper thus has an obvious advantage in predicting the performance of GaN JBS diodes.
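To make the simplest of these baselines concrete, a plain KNN regressor can be written in a few lines. This is our minimal sketch of the method for illustration only, not the implementation the paper benchmarked.

```python
def knn_predict(X_train, y_train, x, k=3):
    """Plain K-nearest-neighbors regression with Euclidean distance:
    predict the mean target of the k training points closest to x."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(row, x)), yi)
        for row, yi in zip(X_train, y_train)
    )
    neighbors = [yi for _, yi in dists[:k]]
    return sum(neighbors) / k

# Toy usage with made-up structure/performance pairs
X = [[0.0], [1.0], [2.0], [3.0]]
y = [0.0, 1.0, 2.0, 3.0]
pred = knn_predict(X, y, [1.1], k=2)   # averages the targets of [1.0] and [2.0]
```

KNN's relatively strong showing in Figure 9 is plausible because the TCAD samples lie on a dense grid, so near neighbors in parameter space have similar Ron and BV.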

5. Conclusions

This paper applies a neural network to the performance prediction of GaN JBS diode devices, so that the input parameters can be adjusted according to the prediction results to optimize the device characteristics. A total of 3018 groups of sample data, including device structure and performance parameters, were obtained by TCAD simulation and used to train the neural network. The results show an excellent fit between the predicted and real values: the mean relative errors of the on-state resistance and reverse breakdown voltage are only 0.028 and 0.048, respectively. The method can quickly and conveniently establish the association between GaN JBS diode device structure and performance, accelerating research on GaN JBS diode performance prediction.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/mi14010188/s1.

Author Contributions

Conceptualization, S.W. and X.D.; methodology, H.M., S.W. and X.D.; software, S.W. and X.D.; validation, H.M., X.D. and S.L.; formal analysis, H.M.; investigation, H.M.; resources, H.M. and X.D.; data curation, H.M.; writing—original draft preparation, H.M.; writing—review and editing, X.D. and S.W.; visualization, H.M.; supervision, J.Z.; project administration, Y.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by the National Natural Science Foundation of China under Grants 62004152 and 62127812, and in part by the Key Research and Development Program of Shaanxi Province under Grant 2020ZDLGY05-05.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Abhinay, S.; Arulkumaran, S.; Ng, G.I.; Ranjan, K.; Deki, M.; Nitta, S.; Amano, H. Improved breakdown voltage in vertical GaN Schottky barrier diodes on free-standing GaN with Mg-compensated drift layer. Jpn. J. Appl. Phys. 2020, 59, 010906.
  2. Koehler, A.D.; Anderson, T.J.; Tadjer, M.J.; Nath, A.; Feigelson, B.N.; Shahin, D.I.; Hobart, K.D.; Kub, F.J. Vertical GaN junction barrier Schottky diodes. ECS J. Solid State Sci. Technol. 2016, 6, Q10.
  3. Liu, Y.; Shu, Y.; Kuang, S. Design and optimization of vertical GaN PiN diodes with fluorine-implanted termination. IEEE J. Electron Devices Soc. 2020, 8, 241.
  4. Kilic, A.; Yildirim, R.; Eroglu, D. Machine Learning Analysis of Ni/SiC Electrodeposition Using Association Rule Mining and Artificial Neural Network. J. Electrochem. Soc. 2021, 168, 062514.
  5. Wei, J.; Wang, H.; Zhao, T.; Jiang, Y.L.; Wan, J. A New Compact MOSFET Model Based on Artificial Neural Network with Unique Data Preprocessing and Sampling Techniques. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 2022.
  6. Wang, Z.; Li, L.; Yao, Y. A Machine Learning-Assisted Model for GaN Ohmic Contacts Regarding the Fabrication Processes. IEEE Trans. Electron Devices 2021, 68, 5.
  7. Zhang, Y.; Liu, Z.; Tadjer, M.J.; Sun, M.; Piedra, D.; Hatem, C.; Anderson, T.J.; Luna, L.E.; Nath, A.; Koehler, A.D.; et al. Vertical GaN junction barrier Schottky rectifiers by selective ion implantation. IEEE Electron Device Lett. 2017, 38, 1097.
  8. Yuan, H.; Wang, C.; Tang, X.; Song, Q.; He, Y.; Zhang, Y.; Zhang, Y.; Xiao, L.; Wang, L.; Wu, Y. Experimental study of high performance 4H-SiC floating junction JBS diodes. IEEE Access 2020, 8, 93039.
  9. Gu, J.; Wang, Z.; Kuen, J.; Ma, L.; Shahroudy, A.; Shuai, B.; Liu, T.; Wang, X.; Wang, G.; Chen, T.; et al. Recent advances in convolutional neural networks. Pattern Recognit. 2018, 77, 354.
  10. Çolak, A.B.; Güzel, T.; Yıldız, O.; Özer, M. An experimental study on determination of the Schottky diode current-voltage characteristic depending on temperature with artificial neural network. Physica B 2021, 608, 412852.
  11. Ha, J.; Lee, G.; Kim, J. Machine Learning Approach for Characteristics Prediction of 4H-Silicon Carbide NMOSFET by Process Conditions. In Proceedings of the 2021 IEEE Region 10 Symposium (TENSYMP), Jeju, Republic of Korea, 23–25 August 2021.
  12. Kalayeh, M.M.; Shah, M. Training faster by separating modes of variation in batch-normalized models. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 42, 1483.
  13. Mei, Q.; Gül, M.; Azim, M.R. Densely connected deep neural network considering connectivity of pixels for automatic crack detection. Autom. Constr. 2020, 110, 103018.
  14. Li, S.; Zhao, Y.; Varma, R.; Salpekar, O.; Noordhuis, P.; Li, T.; Paszke, A.; Smith, J.; Vaughan, B.; Chintala, S.; et al. PyTorch distributed: Experiences on accelerating data parallel training. arXiv 2020, arXiv:2006.15704.
  15. Wang, G.; Giannakis, G.B.; Chen, J. Learning ReLU networks on linearly separable data: Algorithm, optimality, and generalization. IEEE Trans. Signal Process. 2019, 67, 2357.
  16. Khan, A.H.; Cao, X.; Li, S.; Katsikis, V.N.; Liao, L. BAS-ADAM: An ADAM based approach to improve the performance of beetle antennae search optimizer. IEEE/CAA J. Autom. Sin. 2020, 7, 461.
  17. Raskutti, G.; Wainwright, M.J.; Yu, B. Early stopping and non-parametric regression: An optimal data-dependent stopping rule. J. Mach. Learn. Res. 2014, 15, 335.
  18. Rokach, L. Decision Forest: Twenty years of research. Inf. Fusion 2016, 27, 111.
  19. Zhang, Z. Introduction to machine learning: K-nearest neighbors. Ann. Transl. Med. 2016, 4, 218.
  20. Wang, H.Y.; Li, J.H.; Yang, F.L. Overview of support vector machine analysis and algorithm. Appl. Res. Comput. 2014, 31, 1281.
Figure 1. (a) Schematic diagram of the GaN JBS device structure, and forward and reverse characteristics of the GaN JBS device under the structural parameters given in Table 1: (b) forward I–V curve, (c) reverse I–V curve.
Figure 2. Ron and BV vary with different sensitive parameters. The corresponding sensitive parameters in (a–d) are, respectively, the doping concentration of the drift region (Epidop), the doping concentration of the P+ region (Impdop), the ratio of the P+ region width to the spacing between adjacent P+ regions (parameter L), and the injection depth of the P+ region (Impthickness).
Figure 3. Neural network structure. (a) The basic neural network structure comprises input, hidden, and output layers. (b) Structure of the double-branch convolution module: one branch contains three successively cascaded 3 × 1 one-dimensional convolution layers, and the other contains a single 5 × 1 one-dimensional convolution layer.
Figure 4. The flow chart for determining the optimal network structure.
Figure 5. The neural network structure finally determined in this paper includes, from left to right, four input layers, three fully connected modules, two transposed convolutional modules, four double-branch convolutional modules, three convolutional modules, three fully connected modules, and two output layers. The number of neurons is marked at the bottom of each layer.
Figure 6. Training loss in the training process. (a,b) are training loss of Ron and BV, respectively. With the increase in the number of iterations, the loss in the training process gradually decreases and finally becomes stable.
Figure 7. The prediction results of Ron and BV. (a) Unordered prediction results of Ron. (b) Prediction results of Ron sorted in ascending order. (c) Unordered prediction results of BV. (d) Prediction results of BV sorted in ascending order.
Figure 8. The error distribution histogram of the predicted results; (a,b) are the error distribution histograms of Ron and BV, respectively.
Figure 9. The mean relative errors of the predicted results of the neural network established in this paper are compared with those of traditional machine learning methods; (a,b) are the comparison of Ron and BV, respectively.
Table 1. GaN JBS device structure parameters.
Parameter                              Value
Total Width                            10 µm
Drift Region Thickness                 5 µm
Substrate Thickness                    10 µm
P+ Region Thickness (Impthickness)     0.4 µm
P+ Region Width (2L)                   0.8 µm
P+ Gap Width (2 − 2L)                  1.2 µm
Drift Region Doping (Epidop)           5 × 10¹⁵ cm⁻³
Substrate Doping                       1 × 10¹⁹ cm⁻³
P+ Region Doping (Impdop)              1 × 10¹⁸ cm⁻³
Table 2. The control group of each sensitive parameter in the sample data.
Parameter                                          Values
Drift Region Doping (cm⁻³)                         3 × 10¹⁵, 4 × 10¹⁵, 5 × 10¹⁵, 6 × 10¹⁵, 7 × 10¹⁵, 8 × 10¹⁵, 9 × 10¹⁵, 1 × 10¹⁶
P+ Region Doping (cm⁻³)                            3 × 10¹⁷, 4 × 10¹⁷, 5 × 10¹⁷, 6 × 10¹⁷, 7 × 10¹⁷, 8 × 10¹⁷, 9 × 10¹⁷, 1 × 10¹⁸
Ratio of P+ region width to adjacent P+ spacing    2:8, 1:3, 3:7, 7:13, 2:3, 9:11, 1:1, 3:2
P+ Region Thickness (µm)                           0.15, 0.20, 0.25, 0.30, 0.35, 0.40
