
A Fully-Integrated Analog Machine Learning Classifier for Breast Cancer Classification

1 Department of Electrical Engineering, University at Buffalo, Buffalo, NY 14260, USA
2 Department of Biomedical Informatics and the Department of Radiology, Emory University, Atlanta, GA 30322, USA
3 Georgia Tech, Atlanta, GA 30322, USA
* Author to whom correspondence should be addressed.
Electronics 2020, 9(3), 515; https://doi.org/10.3390/electronics9030515
Submission received: 11 February 2020 / Revised: 13 March 2020 / Accepted: 14 March 2020 / Published: 20 March 2020

Abstract

We propose a fully integrated common-source amplifier based analog artificial neural network (ANN). The performance of the proposed ANN with a custom non-linear activation function is demonstrated on a breast cancer classification task. A hardware-software co-design methodology is adopted to ensure good matching between the software AI model and the hardware prototype. A 65 nm prototype of the proposed ANN is fabricated and characterized. The prototype ANN achieves 97% classification accuracy when operating from a 1.1 V supply with an energy consumption of 160 fJ/classification. The prototype consumes 50 μW of power and occupies 0.003 mm² of die area.

1. Introduction

Conventional artificial intelligence (AI) and machine learning systems are implemented on remote “cloud” servers comprising graphics processing units (GPUs). However, with the rapid growth of edge devices, there is a need for condensing and implementing AI algorithms in energy-constrained edge devices. While AI algorithms are conventionally realized in hardware using digital circuits, analog implementation of AI algorithms can potentially reduce energy consumption by several orders of magnitude [1] by eliminating data movement between the central processing unit (CPU) and the memory. However, existing analog AI implementations [1,2,3,4,5] have only demonstrated fundamental multiply-accumulate capabilities [2,3,4] or implemented sub-circuits for linear classification [5,6].
In this work, we propose a fully-integrated artificial neural network (ANN) implemented entirely using analog circuits comprising a custom non-linear activation function and current-domain multiply-and-accumulate (MAC). The proposed CMOS implementation of the ANN can be considered a multi-layer fully connected neural network consisting of an input layer, a non-linear hidden layer and an output layer. The hidden layer of the ANN applies a non-linear transformation of the input from one vector space to another, and such hierarchical non-linear transformation provides a unique way to increase the separation between the decision boundaries and ultimately improve the efficiency of the machine learner over standard linear machine learning algorithms (e.g., logistic regression [7], SVM [8]).
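The topology described above can be sketched in software as a minimal forward pass. This is an illustrative stand-in, not the chip's implementation: tanh substitutes here for the CS-amplifier transfer curve, and the weights are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, w1, w2, f=np.tanh):
    """One forward pass: hidden = f(W1 @ x), output = f(W2 @ hidden)."""
    hidden = f(w1 @ x)      # non-linear transformation to a new vector space
    return f(w2 @ hidden)   # single output, later thresholded by the comparator

x = rng.uniform(0, 1, size=9)     # nine tumor attributes (placeholder values)
w1 = rng.standard_normal((9, 9))  # hidden-layer weights
w2 = rng.standard_normal((1, 9))  # output-layer weights
label = int(forward(x, w1, w2)[0] > 0)  # argmax/comparator: 0 = benign, 1 = malignant
```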
The performance of our analog machine learning (ML) classifier is validated with the Wisconsin Breast Cancer Dataset (WBCD) [9]. The WBCD dataset is obtained through digital pathology, which uses virtual microscopy to digitize biological specimen slides with computer-based technologies. The digitized slides can then be used to perform an accurate analysis of the specimen under observation. The WBCD dataset comprises biopsy information from 699 patients with nine attributes each and is grouped into two classes: benign (458 patients) and malignant (241 patients). The WBCD dataset has been chosen for this work since it is a popular dataset and classification accuracy metrics on it are widely reported. Since the classification complexity, and hence accuracy, varies significantly with the dataset, using a popular dataset is useful for benchmarking. Typically, cancer classification using the WBCD dataset is performed using digital AI algorithms implemented on power-hungry GPUs [10,11,12,13]. Jouni et al. [14] presented an optimal activation function for a CMOS based breast cancer tumor classifier implementation which minimizes classification error as well as the area consumed by the classifier. Zhao et al. [15] presented CMOS based x-ray imagers with low electronic noise for digital breast tomosynthesis (DBT). The motivation of our work is to demonstrate the feasibility of implementing a fully integrated AI chip that is capable of making an accurate classification while consuming little area and power. The proposed classifier comprises one hidden layer with a common-source (CS) amplifier based non-linear activation function [16]. The entire ANN is fully-integrated on-chip and implemented in a 65 nm CMOS process. The prototype consumes 50 μW of power from a 1.1 V supply, leading to a 160 fJ/classification energy efficiency which is several orders of magnitude better than existing CMOS classifiers [1,5,17].
While post-layout simulation results on the proposed cancer classifier architecture were presented in [18], chip measurement results from a 65 nm CMOS prototype are presented in this paper. The rest of this paper is organized as follows: the proposed architecture is discussed in Section 2, measurement results are presented in Section 3, future research directions are discussed in Section 4, and the paper is concluded in Section 5.

2. Proposed Architecture

Figure 1 depicts the architecture of the proposed classifier. The classifier comprises one hidden layer with nine neurons and an output layer with one neuron. The nine attributes of the tumor are extracted from biopsy data and fed into the classifier. The nine attributes recorded from the tumor are as follows: (i) clump thickness, (ii) uniformity of cell size, (iii) uniformity of cell shape, (iv) marginal adhesion, (v) single epithelial cell size, (vi) bare nuclei, (vii) bland chromatin, (viii) normal nucleoli and (ix) mitoses. Output from the last layer is then sent to an argmax layer for deciding the class of the tumor, i.e., benign or malignant. The argmax layer is realized using a comparator whose outputs are “0” or “1” depending on whether the tumor is benign or malignant.
Fundamentally, an ML classifier performs an arithmetic operation which can be defined as f(Σ_i W_i · x_i), where f(·) is a non-linear or linear activation function, W_i denotes the i-th element of the weight vector, and x_i is the i-th element of the input vector to the activation function. The proposed classifier architecture implements the activation function in the analog domain, thus leveraging the full physical properties of the MOS device. This ensures higher area and energy efficiency in comparison to a digital implementation of the activation function, which operates the MOS device as a switch. Figure 2 depicts the neuron employed in the proposed cancer classifier, which is a single-stage CS amplifier. The CS amplifier comprises an NMOS input transistor with a diode-connected PMOS load. The inputs are fed to the gate of the NMOS and the AI model weights are encoded as the widths of the NMOS transistors. The drain current (i_d) of the NMOS is a function of the width (W) and the input voltage (v_g). Hence, the i_d of the NMOS is modulated depending on v_g and W, giving rise to a transfer function as illustrated in Figure 2. We leverage the natural non-linearity of the CS amplifier to implement the activation function for the hidden and output layers. Our approach simplifies the hardware complexity by using the approximate transfer function depicted in Figure 2 instead of designing circuits that match commonly used activation functions like tanh [19].
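The neuron arithmetic f(Σ_i W_i · x_i) can be modeled with a simple behavioral sketch. The saturating, inverting transfer curve below is a hypothetical approximation of the CS amplifier's characteristic (the gain and voltage limits are assumed, not the characterized values from the chip).

```python
import numpy as np

def cs_activation(v, gain=-4.0, v_mid=0.435, v_lo=0.1, v_hi=1.0):
    """Assumed inverting CS-amplifier transfer curve: linear region clipped
    at the supply rails. All parameter values are illustrative."""
    return np.clip(gain * (v - v_mid) + (v_lo + v_hi) / 2, v_lo, v_hi)

def neuron(x, w):
    """MAC followed by the non-linear activation: f(sum_i W_i * x_i)."""
    return cs_activation(np.dot(w, x))
```

In the circuit, each product corresponds to a drain current i_d ≈ g_m · v_g and the diode-connected load performs the summation; here both are folded into the dot product.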
To get a good classification accuracy with the proposed approximate activation function, we use a hardware-software co-design methodology. Figure 3 illustrates the proposed design paradigm. First, a CS amplifier is designed and characterized in SPICE. The V-I characteristics of the CS amplifier are then incorporated into the ANN training model using a lookup table. The ANN is then trained using Matlab. The trained weights are then acquired from Matlab and incorporated into the ANN SPICE model as widths of NMOS transistors in the activation function circuit. Finally, the ANN SPICE model is simulated with the test dataset to validate the classifier.
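The co-design step of folding the SPICE-characterized V-I curve into the training model can be sketched as a lookup table with interpolation. The table values below are illustrative placeholders, not measured SPICE data.

```python
import numpy as np

# Hypothetical SPICE characterization points of the CS amplifier:
# swept gate voltages and the corresponding (inverting) outputs.
vin = np.linspace(0.27, 0.60, 12)   # input sweep over the operating range (V)
vout = np.linspace(1.0, 0.1, 12)    # placeholder output values (V)

def lut_activation(v):
    """Activation used during training: interpolate the SPICE-derived table
    instead of an analytic function such as tanh."""
    return np.interp(v, vin, vout)
```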
Algorithm 1 describes the pseudo-code snippet used for training the ANN in Matlab from the CS amplifier parameters extracted from SPICE simulations. The ANN weights are initialized with random values and the back-propagation algorithm updates the weights as the AI model iterates through each input from the training set. Stochastic gradient descent [20] computes the derivative of the mean squared error with respect to the current weights and computes the new weights as a function of the learning rate, error derivative, and current weights. The WBCD dataset is split randomly into a training set with 419 samples, and a test set with 280 samples. Each sample has nine attributes with integer values in the range 0–10. The attributes are scaled to fit into a 0.27 V–0.6 V range before feeding it into the ANN circuit. The weights are constrained to {0,1} which helps significantly reduce area costs associated with implementing the classifier on-chip.
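The input conditioning and weight constraint described above can be sketched as follows; the mapping is linear per the stated 0–10 integer range and 0.27 V–0.6 V circuit range.

```python
import numpy as np

def scale_attributes(a, v_min=0.27, v_max=0.60):
    """Map integer attributes in the range 0-10 to the 0.27 V-0.6 V input range."""
    return v_min + (np.asarray(a, dtype=float) / 10.0) * (v_max - v_min)

def binarize_weights(w):
    """Constrain trained weights to {0, 1}: positive weights map to 1."""
    return (np.asarray(w) > 0).astype(int)

volts = scale_attributes([0, 5, 10])  # -> [0.27, 0.435, 0.6]
```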
Algorithm 1 ANN Training pseudo-code.
1:  W1 ← weight vector for hidden layer
2:  W2 ← weight vector for output layer
3:  NumberCorrect ← 0
4:  for i < Number of Training Iterations do
5:      for j < Size(Train set) do
6:          Input ← Trainset(j)
7:          HiddenOutput ← f(Bias; W1, Input)
8:          Output ← g(Bias; W2, HiddenOutput)
9:          if Prediction = Trainlabel(j) then
10:             NumberCorrect ← NumberCorrect + 1
11:         end if
12:         delta1 ← (Output − Trainlabel(j)) * (1 − Output^2)
13:         delta2 ← (W2 * delta1) * (1 − HiddenOutput^2)
14:         W1 ← W1 − alpha * (Input * delta2)
15:         W2 ← W2 − alpha * (HiddenOutput * delta1)
16:     end for
17:     accuracy ← NumberCorrect / N
18: end for
19: if W1 > 0 then
20:     W1 ← 1
21: else
22:     W1 ← 0
23: end if
24: if W2 > 0 then
25:     W2 ← 1
26: else
27:     W2 ← 0
28: end if
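Algorithm 1 can be rendered in plain Python as the sketch below. Two assumptions are made: tanh stands in for the SPICE-derived activations f and g, and the data are random placeholders rather than the actual WBCD samples; the learning rate alpha and iteration count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0.27, 0.60, size=(419, 9))      # scaled training attributes (placeholder)
y = rng.integers(0, 2, size=419).astype(float)  # 0 = benign, 1 = malignant (placeholder)
w1 = rng.standard_normal((9, 9))                # hidden-layer weights, random init
w2 = rng.standard_normal(9)                     # output-layer weights, random init
alpha = 0.05                                    # learning rate (assumed)

for _ in range(40):                             # training iterations
    for xj, yj in zip(X, y):
        hidden = np.tanh(w1 @ xj)               # HiddenOutput <- f(W1, Input)
        out = np.tanh(w2 @ hidden)              # Output <- g(W2, HiddenOutput)
        delta1 = (out - yj) * (1 - out ** 2)    # output-layer error term
        delta2 = (w2 * delta1) * (1 - hidden ** 2)  # back-propagated hidden error
        w1 -= alpha * np.outer(delta2, xj)      # SGD update, hidden layer
        w2 -= alpha * delta1 * hidden           # SGD update, output layer

# Final step of Algorithm 1: constrain the weights to {0, 1}.
w1 = (w1 > 0).astype(int)
w2 = (w2 > 0).astype(int)
```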
Figure 4 depicts the transistor-level schematic of the proposed classifier. The classifier comprises one input layer with nine inputs, one hidden layer with five neurons and one output layer with one neuron. As mentioned earlier, weights of the hidden and output layers are encoded into the width of each NMOS transistor as illustrated in Figure 4. MAC functionality is implemented in the current domain and mathematically represented as i_d = g_m · v_g, where i_d denotes the drain current, g_m denotes the transconductance, and v_g denotes the input of each NMOS transistor. Encoding the ANN weights into the width of each NMOS transistor effectively changes the g_m of each transistor, thus scaling the drain current. The drain currents of the NMOS transistors are summed using the diode-connected PMOS load, thus realizing the MAC operation. The large output swing of the pseudo-differential amplifier is fed to the comparator, thus allowing for direct classification of the output as “0/1” indicating benign or malignant tumors. Implementing the activation function of the classifier in the current domain ensures immunity against charge injection errors, unlike switched-capacitor implementations. The proposed classifier circuit allows for a high-speed implementation while maintaining low power consumption. The proposed classifier is also resilient to random mismatches and supply voltage fluctuations, which is further discussed in Section 3.
Figure 5 illustrates the receiver operating characteristic (ROC) of the proposed classifier from the post-layout simulation. The ROC plot shows the true positive rate (TPR) versus the false positive rate (FPR) for different classification thresholds, which are defined as:
TPR = TP / (TP + FN); FPR = FP / (FP + TN)
where TN is the number of true negatives and FN the number of false negatives. In the WBCD dataset, the number of samples the classifier correctly identifies as benign is TN and the number of samples correctly classified as malignant is TP. On the other hand, the number of benign samples incorrectly classified as malignant is FP and the number of malignant samples incorrectly classified as benign is FN. The measure of separability of the two classes, i.e., benign and malignant, can be computed from the area under the ROC curve (AUC). A higher AUC implies that the classifier is better at distinguishing between the two classes. Figure 5 yields an AUC of 0.989, thus validating that the proposed classifier can accurately distinguish between benign and malignant samples irrespective of the sample distribution in the test data. Figure 6 illustrates the training accuracy versus the number of iterations. The ANN achieves a high accuracy within 40 iterations.
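The definitions above can be checked with a worked example. The confusion-matrix counts below are hypothetical, not the measured results:

```python
# Hypothetical confusion-matrix counts for a 280-sample test set.
tp, fn = 95, 2    # malignant samples: correctly / incorrectly classified
tn, fp = 180, 3   # benign samples:    correctly / incorrectly classified

tpr = tp / (tp + fn)                        # true positive rate (sensitivity)
fpr = fp / (fp + tn)                        # false positive rate
accuracy = (tp + tn) / (tp + tn + fp + fn)  # overall classification accuracy
```

Sweeping the comparator threshold and recording (fpr, tpr) pairs traces out the ROC curve whose area gives the AUC.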

3. Measurement Results

A prototype of the proposed cancer classification ANN was fabricated in a 1P9M TSMC 65 nm process. Figure 7 depicts the chip micro-photograph of the proposed cancer classifier. The classifier chip occupies a core area of 0.003 mm².
Figure 8 depicts the test setup used for validating the proposed classifier and Figure 9a depicts the printed circuit board (PCB) for testing the classifier. A National Instruments (NI) off-chip digital-to-analog converter (DAC) is used to convert the nine digital attributes of each sample to analog voltages, which are then fed to the classifier chip. The NI DAC is programmed from a PC using Matlab. The NI DAC comprises a 16-channel, 16-bit DAC which generates the analog inputs to the classifier while functioning as a zero-order hold. A logic analyzer is used to record the digital outputs of the classifier chip. The logic analyzer receives the clock signal from the function generator and captures the data at the rising edges of the clock. A DC power supply provides the 1.1 V supply to the chip. An oscilloscope is used for debugging when analog/digital signals going to or coming from the chip need to be probed. Similarly, a digital multimeter is used to probe the DC voltages on the PCB.
Figure 9b depicts the measured confusion matrix of the proposed classifier. The confusion matrix illustrates the performance of the classifier when tested with biopsy data from 280 patients. The rows and columns of the confusion matrix indicate the predicted and actual classes of the test data. The classifier achieves 97% accuracy with only one false positive classification, which is very important for clinical decisions. Figure 10 depicts the classifier accuracy versus supply voltage (VDD) variation. The classifier achieves an accuracy greater than 96.8% when VDD ≥ 0.9 V, falling to 95.4% for VDD < 0.9 V. The classifier achieves an average accuracy of 96.9% while consuming 50 μW of power from a 1.1 V supply, thus achieving an energy efficiency of 160 fJ/classification. Figure 11 depicts the histogram of measured classification accuracy from four chips. It can be observed that the classifier has an average accuracy of 96.9% with a standard deviation of 1.4%. Hence, the proposed classifier architecture is robust to random mismatches. In addition to the noise contribution of the classifier circuit itself, the off-chip DAC used to convert the digital samples to analog voltages contributes 500 μV of input-referred noise. Even in the presence of both noise sources, the chip achieves an average classification accuracy of 97%, which shows robustness to noise.
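As a sanity check on the reported figures, the 160 fJ/classification efficiency at 50 μW implies the full-throughput classification rate computed below (an implied figure, not one reported by the measurements):

```python
power = 50e-6                     # measured power consumption, W
energy_per_class = 160e-15        # reported energy efficiency, J/classification
rate = power / energy_per_class   # implied classifications per second
# rate works out to 3.125e8, i.e., roughly 312.5 M classifications/s
```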
Table 1 provides a comparison of the performance of the proposed classifier with existing WBCD classifiers. Typically, WBCD classifiers are implemented on GPUs with neural network implementations in Python or Matlab with energy consumption in the range of mJ/classification. In contrast, the proposed classifier achieves similar accuracy while consuming energy of only 160 fJ/classification.

4. Discussion & Future Work

This work has presented a fully integrated machine learning classifier using a custom analog activation function and a hardware-software co-design methodology that incorporates device knowledge into the ANN training phase to ensure that the fabricated prototype closely matches the simulated ANN model. The current version of the classifier is based on class-A circuits which are always turned on. To improve energy efficiency, we will use switched-capacitor circuits to implement the MAC operations and also use a switched-capacitor amplifier to realize the custom activation functions. Another direction of future research is to integrate sensing capability into the same die as the ANN. We will start with an image sensor since imaging is a popular modality and there are many popular image datasets, such as MNIST [22] and CIFAR [23], which can be used for benchmarking our ANN. Integration of an image sensor with the ANN also necessitates some form of image cleaning/pre-processing, and we will employ techniques similar to those described in [24]. Further down the road, we also intend to integrate wireless communication capability [25] on-chip.
Another research aspect we will pursue is adding explainability to our ANNs. Typically, a complex network such as an ANN with many trainable parameters requires a large volume of labeled data for proper training, which, in real life, is difficult to obtain for every targeted task. In addition, given the multiple non-linear transformations of the input, the ANN is limited by its black-box architecture, which lacks the mechanisms necessary for explaining its decisions. However, there has recently been a surge of work in explainable artificial intelligence (XAI) for digital ANNs to increase transparency (e.g., linear proxy models, saliency mapping, rule representation). In future work, we will explore CMOS XAI for adding transparency to our architecture.

5. Conclusions

In this work, we presented a fully integrated analog machine learning classifier comprising CS amplifier based neurons. The classifier was designed using a hardware-software co-design methodology to incorporate a custom non-linear activation function into the neurons. The classifier performance was validated using the WBCD dataset. The proposed classifier prototype achieves a mean accuracy of 96.9% while consuming 50 μW of power, thus achieving an energy efficiency of 160 fJ/classification, which is several orders of magnitude better than existing CMOS ML classifiers.

Author Contributions

Conceptualization, R.H. and A.S.; Data curation, I.B.; Funding acquisition, A.S.; Investigation, I.B. and A.S.; Methodology, R.H. and A.S.; Software, R.H. and I.B.; Supervision, A.S.; Validation, S.T.C.; Writing—original draft, S.T.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wang, Z.; Verma, N. A low-energy machine-learning classifier based on clocked comparators for direct inference on analog sensors. IEEE Trans. Circuits Syst. I Regul. Pap. 2017, 64, 2954–2965. [Google Scholar] [CrossRef]
  2. Lee, E.H.; Wong, S.S. A 2.5 GHz 7.7 TOPS/W switched-capacitor matrix multiplier with co-designed local memory in 40 nm. In Proceedings of the IEEE International Solid-State Circuits Conference (ISSCC), San Francisco, CA, USA, 31 January–4 February 2016; pp. 418–419. [Google Scholar]
  3. Wang, Z.; Zhang, J.; Verma, N. Realizing Low-Energy Classification Systems by Implementing Matrix Multiplication Directly Within an ADC. IEEE Trans. Biomed. Circuits Syst. 2015, 9, 825–837. [Google Scholar] [CrossRef] [PubMed]
  4. Buhler, F.N.; Mendrela, A.E.; Lim, Y.; Fredenburg, J.A.; Flynn, M.P. A 16-channel noise-shaping machine learning analog-digital interface. In Proceedings of the IEEE Symposium on VLSI Circuits (VLSI-Circuits), Honolulu, HI, USA, 15–17 June 2016; pp. 1–2. [Google Scholar]
  5. Zhang, J.; Wang, Z.; Verma, N. A machine-learning classifier implemented in a standard 6T SRAM array. In Proceedings of the IEEE Symposium on VLSI Circuits (VLSI-Circuits), Honolulu, HI, USA, 15–17 June 2016; pp. 1–2. [Google Scholar]
  6. Solomatine, D.P.; Shrestha, D.L. AdaBoost. RT: A boosting algorithm for regression problems. Neural Netw. 2004, 2, 1163–1168. [Google Scholar]
  7. Dreiseitl, S.; Ohno-Machado, L. Logistic regression and artificial neural network classification models: A methodology review. J. Biomed. Inform. 2002, 35, 352–359. [Google Scholar] [CrossRef] [Green Version]
  8. Li, Y.; Bontcheva, K.; Cunningham, H. SVM based learning system for information extraction. In International Workshop on Deterministic and Statistical Methods in Machine Learning; Springer: Berlin/Heidelberg, Germany, 2004; pp. 319–339. [Google Scholar]
  9. Breast Cancer Wisconsin (Original) Data Set. 1995. Available online: https://archive.ics.uci.edu/ml/datasets/breast+cancer+wisconsin+(original) (accessed on 11 February 2020).
  10. Übeyli, E.D. Implementing automated diagnostic systems for breast cancer detection. Expert Syst. Appl. 2007, 33, 1054–1062. [Google Scholar] [CrossRef]
  11. Abonyi, J.; Szeifert, F. Supervised fuzzy clustering for the identification of fuzzy classifiers. Pattern Recognit. Lett. 2003, 24, 2195–2207. [Google Scholar] [CrossRef] [Green Version]
  12. Karabatak, M.; Ince, M.C. An expert system for detection of breast cancer based on association rules and neural network. Expert Syst. Appl. 2009, 36, 3465–3469. [Google Scholar] [CrossRef]
  13. Marcano-Cedeño, A.; Quintanilla-Domínguez, J.; Andina, D. Breast cancer classification applying artificial metaplasticity algorithm. Neurocomputing 2011, 74, 1243–1250. [Google Scholar] [CrossRef]
  14. Jouni, H.; Issa, M.; Harb, A.; Jacquemod, G.; Leduc, Y. Neural Network architecture for breast cancer detection and classification. In Proceedings of the 2016 IEEE International Multidisciplinary Conference on Engineering Technology (IMCET), Beirut, Lebanon, 2–4 November 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 37–41. [Google Scholar]
  15. Zhao, C.; Kanicki, J.; Konstantinidis, A.C.; Patel, T. Large area CMOS active pixel sensor x-ray imager for digital breast tomosynthesis: Analysis, modeling, and characterization. Med. Phys. 2015, 42, 6294–6308. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  16. Jayaraj, A.; Banerjee, I.; Sanyal, A. Common-Source Amplifier Based Analog Artificial Neural Network Classifier. In Proceedings of the 2019 IEEE International Symposium on Circuits and Systems (ISCAS), Sapporo, Japan, 26–29 May 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–5. [Google Scholar]
  17. Wang, Z.; Lee, K.H.; Verma, N. Overcoming computational errors in sensing platforms through embedded machine-learning kernels. IEEE Trans. Very Large Scale Integr. (VLSI) Syst. 2015, 23, 1459–1470. [Google Scholar] [CrossRef]
  18. Hua, R.; Sanyal, A. 39fJ Analog Artificial Neural Network for Breast Cancer Classification in 65 nm CMOS. In Proceedings of the 2019 IEEE 62nd International Midwest Symposium on Circuits and Systems (MWSCAS), Dallas, TX, USA, 4–7 August 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 436–439. [Google Scholar]
  19. Carrasco-Robles, M.; Serrano, L. A novel CMOS current mode fully differential tanh (x) implementation. In Proceedings of the IEEE International Symposium on Circuits and Systems, Seattle, WA, USA, 18–21 May 2008; pp. 2158–2161. [Google Scholar]
  20. Amari, S.I. Backpropagation and stochastic gradient descent method. Neurocomputing 1993, 5, 185–196. [Google Scholar] [CrossRef]
  21. Selvathi, D.; Nayagam, R.D. FPGA implementation of on-chip ANN for breast cancer diagnosis. Intell. Decis. Technol. 2016, 10, 341–352. [Google Scholar] [CrossRef]
  22. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef] [Green Version]
  23. Krizhevsky, A.; Hinton, G. Learning Multiple Layers of Features from Tiny Images; University of Toronto: Toronto, ON, USA, 2009. [Google Scholar]
  24. Na, T.; Ko, J.H.; Mukhopadhyay, S. Noise-robust and resolution-invariant image classification with pixel-level regularization. In Proceedings of the International Conference on Acoustics, Speech and Signal Processing, (ICASSP), Calgary, AB, Canada, 15–20 April 2018. [Google Scholar]
  25. Talaśka, T. Components of Artificial Neural Networks Realized in CMOS Technology to be Used in Intelligent Sensors in Wireless Sensor Networks. Sensors 2018, 18, 4499. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Proposed cancer classifier architecture.
Figure 2. CS amplifier and its transfer function.
Figure 3. Design methodology employed for proposed cancer classifier.
Figure 4. Schematic of ANN for breast cancer classification.
Figure 5. ROC curve of proposed classifier.
Figure 6. Training accuracy versus iterations.
Figure 7. Chip micro-photograph of proposed cancer classifier.
Figure 8. Test setup used to validate proposed classifier.
Figure 9. (a) PCB for validating proposed classifier (b) Measured confusion matrix.
Figure 10. Accuracy versus supply voltage.
Figure 11. Accuracy variation across multiple chips.
Table 1. Comparison with state-of-the-art WBCD classifiers.

Work                      Accuracy (%)    Platform
Übeyli et al. [10]        99.54           GPU
Abonyi et al. [11]        95.57           GPU
Karabatak et al. [12]     97.40           GPU
Marcano et al. [13]       99.58           GPU
Selvathi et al. [21]      90.8            FPGA
This work                 96.9            CMOS IC

Share and Cite

T. Chandrasekaran, S.; Hua, R.; Banerjee, I.; Sanyal, A. A Fully-Integrated Analog Machine Learning Classifier for Breast Cancer Classification. Electronics 2020, 9, 515. https://doi.org/10.3390/electronics9030515