Neural Network for Low-Memory IoT Devices and MNIST Image Recognition Using Kernels Based on Logistic Map
Abstract
1. Introduction
2. Data and Methods
Network Architecture
3. Results
4. Discussion
5. Conclusions
Supplementary Materials
Funding
Acknowledgments
Conflicts of Interest
Appendix A
Algorithm A1. Initialization of basic constants, types, arrays, variables and functions.

```pascal
const
  N_test = 10000;   //Number of elements in the test data array of the MNIST-10 database
  N_train = 60000;  //Number of elements in the training data array of the MNIST-10 database
  Y_max = 784;      //Number of dots in the image of a handwritten digit
  P_max = 25;       //Number of neurons in the hidden layer
  N_max = 9;        //Numbering limit of the output neurons 0..9

type
  InputData = array [0..Y_max] of byte;    //Input data type
  InputData2 = array [0..Y_max] of real;   //Input data type after normalization
  OutputData = array [0..N_max] of real;   //Output data type
  Hiddenlayer = array [0..P_max] of real;  //Data type of the hidden layer

var
  Train_data: array [1..N_train] of InputData; //Training data array from the MNIST-10 database
  Test_data: array [1..N_test] of InputData;   //Test data array from the MNIST-10 database
  Test_label: array [1..N_test] of byte;       //Array of test data labels
  Y: InputData2;                               //Array of input image data
  Sh: Hiddenlayer;                             //Array of hidden layer data
  Sout: OutputData;                            //Array of output layer data
  W1: array [0..Y_max, 1..P_max] of real;      //Array of weights W1
  W1_alg2: array [0..Y_max] of real;           //Auxiliary array for calculating the weights W1 in Algorithm 2
  W1_alg1: real;                               //Auxiliary variable for calculating the weights W1 in Algorithm 1
  W2: array [0..P_max, 0..N_max] of real;      //Array of weights W2
  i, j, k, i_max: integer;                     //Auxiliary integer variables
  Sh_max, Sh_min, Usre: Hiddenlayer;           //Auxiliary arrays for normalizing the hidden layer data
  Accuracy: real;                              //Classification accuracy of the network (percent)

Function VM_transform (YYY: InputData): InputData;   //T-pattern transformation
Function Normalization (XX: InputData): InputData2;  //Normalization of the input data

Function Fout (R: real): real;  //Logistic activation function of the output neurons
begin
  Fout := 1/(1 + exp(-R));
end;
```
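The bodies of VM_transform and Normalization are provided in the Supplementary Materials. As a minimal sketch only, assuming plain scaling of the pixel bytes into [0,1] with element 0 serving as the bias input (an assumption of ours, not necessarily the published scheme), Normalization could look like:

```pascal
//Sketch only: assumes max-scaling into [0,1] and a bias term at index 0;
//the published Normalization (Supplementary Materials) may differ.
Function Normalization (XX: InputData): InputData2;
var
  i: integer;
begin
  Result[0] := 1;            //Bias input of the network
  for i := 1 to Y_max do
    Result[i] := XX[i]/255;  //Scale each pixel byte into [0,1]
end;
```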
Algorithm A2. Loading data files and setting initial conditions.

Zero the arrays to be used. Load the training data Train_data and the test data Test_data (procedure TForm1.Load_mnist_buttonClick in the Supplementary Materials) and the T-pattern files (procedure TForm1.Load_Tpattern_buttonClick in the Supplementary Materials). Enter the values of r, A and B and the number of epochs for training the network.
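The exact loading code is given in the Supplementary Materials. As an illustrative sketch only, assuming the standard IDX files from the MNIST site are used and that pixels occupy indices 1..Y_max (both the file names and the index layout are assumptions of ours), the raw data can be read as follows:

```pascal
//Sketch only: reads the standard MNIST IDX files (16-byte image header,
//8-byte label header); file names and pixel index layout are assumptions.
procedure LoadMNIST;
var
  F: file;
  Header: array [1..16] of byte;
  n: integer;
begin
  AssignFile(F, 'train-images.idx3-ubyte');
  Reset(F, 1);                              //Untyped file with 1-byte records
  BlockRead(F, Header, 16);                 //Skip the 16-byte IDX image header
  for n := 1 to N_train do
    BlockRead(F, Train_data[n][1], Y_max);  //784 pixel bytes per digit
  CloseFile(F);

  AssignFile(F, 't10k-labels.idx1-ubyte');
  Reset(F, 1);
  BlockRead(F, Header, 8);                  //Skip the 8-byte IDX label header
  for n := 1 to N_test do
    BlockRead(F, Test_label[n], 1);
  CloseFile(F);
  //Test_data is read from t10k-images.idx3-ubyte in the same way as Train_data.
end;
```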
Algorithm A3. Filling the arrays W1 and W2.

```pascal
//The initial values of the elements W1[i,1], using Equation (4)
j := 1;
for i := 0 to Y_max do
  W1[i,j] := A*sin((i/Y_max)*Pi/B);
//Applying Equation (2) to the remaining elements of the weight matrix W1
for j := 2 to P_max do
  for i := 0 to Y_max do
    W1[i,j] := 1 - r*W1[i,j-1]*W1[i,j-1];
//The initial values of the weights W2 are set randomly between -0.5 and 0.5
for i := 0 to P_max do
  for j := 0 to N_max do
    W2[i,j] := 0.5 - Random;
```
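In the notation of the main text, the code above seeds the elements W1[i,1] with Equation (4) and generates each subsequent column with the logistic map of Equation (2); reconstructed from the code, these read:

```latex
% Reconstructed from the code of Algorithm A3
W_1[i,1] = A\sin\!\left(\frac{i}{Y_{\max}}\cdot\frac{\pi}{B}\right), \qquad
W_1[i,j] = 1 - r\,W_1[i,j-1]^2, \quad j = 2,\dots,P_{\max}.
```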
Algorithm A4. Calculation of auxiliary data for normalizing the values of the hidden layer.
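A plausible sketch of this step, reconstructed from the normalization formula applied in Algorithm A6: Sh_min[j] and Sh_max[j] are taken to be the minimum and maximum of the raw hidden-layer sums over the training set, and Usre[j] the mean of the range-normalized sums. This reconstruction is an assumption of ours; the exact code is part of the Supplementary Materials.

```pascal
//Sketch only: reconstructs A4 from the normalization used in Algorithm A6.
//First pass: find the range of the raw sums Sh[j] over the training data.
for j := 1 to P_max do
begin
  Sh_min[j] := 1e30;
  Sh_max[j] := -1e30;
  Usre[j] := 0;
end;
for Nom_tren := 1 to N_train do
begin
  Y := Normalization(VM_transform(Train_data[Nom_tren]));
  for j := 1 to P_max do
  begin
    Sh[j] := 0;
    for i := 0 to Y_max do
      Sh[j] := Sh[j] + Y[i]*W1[i,j];
    if Sh[j] < Sh_min[j] then Sh_min[j] := Sh[j];
    if Sh[j] > Sh_max[j] then Sh_max[j] := Sh[j];
  end;
end;
//Second pass: mean of the range-normalized sums, subtracted in A6 to centre the data.
for Nom_tren := 1 to N_train do
begin
  Y := Normalization(VM_transform(Train_data[Nom_tren]));
  for j := 1 to P_max do
  begin
    Sh[j] := 0;
    for i := 0 to Y_max do
      Sh[j] := Sh[j] + Y[i]*W1[i,j];
    Usre[j] := Usre[j] + ((Sh[j]-Sh_min[j])/(Sh_max[j]-Sh_min[j]) - 0.5)/N_train;
  end;
end;
```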
Algorithm A5. Training the array W2 by error backpropagation.

For the training of the array W2 by the error backpropagation method, see procedure TForm1.EpochTrainingClick and procedure TForm1.BackPropagationClick in the Supplementary Materials.
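Since only the output weights W2 are trained, one backpropagation step reduces to the delta rule for a single logistic layer. A minimal sketch (the names TrainStep, Alfa and Target are ours, not identifiers from the paper):

```pascal
//Sketch only: one delta-rule update of W2 for a single training image.
//Alfa (learning rate) and Target (one-hot label encoding) are assumed names.
procedure TrainStep(const Target: OutputData);
const
  Alfa = 0.1;
var
  i, j: integer;
  Delta: real;
begin
  //Forward pass over the output layer (Sh[0..P_max] computed as in Algorithms A6/A7)
  for j := 0 to N_max do
  begin
    Sout[j] := 0;
    for i := 0 to P_max do
      Sout[j] := Sout[j] + Sh[i]*W2[i,j];
    Sout[j] := Fout(Sout[j]);
  end;
  //Backward pass: squared-error gradient through the logistic activation
  for j := 0 to N_max do
  begin
    Delta := (Target[j] - Sout[j])*Sout[j]*(1 - Sout[j]);
    for i := 0 to P_max do
      W2[i,j] := W2[i,j] + Alfa*Delta*Sh[i];
  end;
end;
```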
Algorithm A6. Testing the network.

The classification accuracy on the MNIST-10 test data can be calculated using one of three algorithms: Algorithm 1 (A6.1), Algorithm 2 (A6.2) or Algorithm 3 (A6.3). Each of them runs inside the following loop over the test set:

```pascal
Correct_test := 0;  //Counter of correct classifications on the test data
for Nom_tren := 1 to N_test do  //Loop over the MNIST-10 test data
begin
  //T-pattern transformation and normalization of the input test image
  Y := Normalization(VM_transform(Test_data[Nom_tren]));
  //The loop continues in Algorithm A6.1/A6.2/A6.3 and is closed in Algorithm A7.
```
Algorithm A6.1. Algorithm 1.

```pascal
  for j := 1 to P_max do  //Loop over the neurons of the hidden layer
  begin
    Sh[j] := 0;
    for i := 0 to Y_max do
    begin
      W1_alg1 := A*sin((i/Y_max)*Pi/B);
      for k := 2 to j do
        W1_alg1 := 1 - r*W1_alg1*W1_alg1;
      Sh[j] := Sh[j] + Y[i]*W1_alg1;
    end;
    //Normalizing the hidden layer
    Sh[j] := ((Sh[j]-Sh_min[j])/(Sh_max[j]-Sh_min[j]) - 0.5) - Usre[j];
  end;
```
Algorithm A6.2. Algorithm 2.

```pascal
  for j := 1 to P_max do  //Loop over the neurons of the hidden layer
  begin
    Sh[j] := 0;
    for i := 0 to Y_max do
    begin
      if j = 1 then
        W1_alg2[i] := A*sin((i/Y_max)*Pi/B)
      else
        W1_alg2[i] := 1 - r*W1_alg2[i]*W1_alg2[i];
      Sh[j] := Sh[j] + Y[i]*W1_alg2[i];
    end;
    //Normalizing the hidden layer
    Sh[j] := ((Sh[j]-Sh_min[j])/(Sh_max[j]-Sh_min[j]) - 0.5) - Usre[j];
  end;
```
Algorithm A6.3. Algorithm 3.

```pascal
  for j := 1 to P_max do  //Loop over the neurons of the hidden layer
  begin
    Sh[j] := 0;
    for i := 0 to Y_max do
      Sh[j] := Sh[j] + Y[i]*W1[i,j];  //W1 values are pre-filled, see Algorithm A3
    //Normalizing the hidden layer
    Sh[j] := ((Sh[j]-Sh_min[j])/(Sh_max[j]-Sh_min[j]) - 0.5) - Usre[j];
  end;
```
Algorithm A7. Calculation of network output and classification accuracy.

```pascal
  Sh[0] := 1;  //Value of the bias neuron of the hidden layer
  for j := 0 to N_max do  //Loop over the neurons of the output layer
  begin
    Sout[j] := 0;
    for i := 0 to P_max do
      Sout[j] := Sout[j] + Sh[i]*W2[i,j];
    Sout[j] := Fout(Sout[j]);
  end;
  i_max := 0;  //Index of the output neuron with the maximum value
  for i := 0 to N_max do
    if Sout[i] > Sout[i_max] then i_max := i;
  //Counting the number of correct recognitions
  if i_max = Test_label[Nom_tren] then Correct_test := Correct_test + 1;
end;  //End of the loop over the MNIST-10 test data
Accuracy := (Correct_test/N_test)*100;  //Classification accuracy (percent)
```
References
- Merenda, M.; Porcaro, C.; Iero, D. Edge Machine Learning for AI-Enabled IoT Devices: A Review. Sensors 2020, 20, 2533.
- Abdel Magid, S.; Petrini, F.; Dezfouli, B. Image classification on IoT edge devices: Profiling and modeling. Clust. Comput. 2020, 23, 1025–1043.
- Li, S.; Dou, Y.; Xu, J.; Wang, Q.; Niu, X. mmCNN: A Novel Method for Large Convolutional Neural Network on Memory-Limited Devices. In Proceedings of the 2018 IEEE 42nd Annual Computer Software and Applications Conference (COMPSAC), Tokyo, Japan, 23–27 July 2018; Volume 1, pp. 881–886.
- Gerdes, S.; Bormann, C.; Bergmann, O. Keeping users empowered in a cloudy Internet of Things. In The Cloud Security Ecosystem: Technical, Legal, Business and Management Issues; Elsevier Inc.: Amsterdam, The Netherlands, 2015; pp. 231–247. ISBN 978-0128-017-807.
- Korzun, D.; Balandina, E.; Kashevnik, A.; Balandin, S.; Viola, F. Ambient Intelligence Services in IoT Environments: Emerging Research and Opportunities; IGI Global: Hershey, PA, USA, 2019; pp. 1–199.
- El-Hajj, M.; Chamoun, M.; Fadlallah, A.; Serhrouchni, A. Analysis of Cryptographic Algorithms on IoT Hardware Platforms. In Proceedings of the 2018 2nd Cyber Security in Networking Conference (CSNet 2018), Paris, France, 24–26 October 2018.
- Fernández-Caramés, T.; Fraga-Lamas, P. A Review on the Use of Blockchain for the Internet of Things. IEEE Access 2018, 6, 32979–33001.
- Ghosh, A.; Chakraborty, D.; Law, A. Artificial intelligence in Internet of things. CAAI Trans. Intell. Technol. 2018, 3, 208–218.
- Meigal, A.; Korzun, D.; Gerasimova-Meigal, L.; Borodin, A.; Zavyalova, Y. Ambient Intelligence At-Home Laboratory for Human Everyday Life. Int. J. Embed. Real-Time Commun. Syst. 2019, 10, 117–134.
- Qian, G.; Lu, S.; Pan, D.; Tang, H.; Liu, Y.; Wang, Q. Edge Computing: A Promising Framework for Real-Time Fault Diagnosis and Dynamic Control of Rotating Machines Using Multi-Sensor Data. IEEE Sens. J. 2019, 19, 4211–4220.
- Bazhenov, N.; Korzun, D. Event-Driven Video Services for Monitoring in Edge-Centric Internet of Things Environments. In Proceedings of the Conference of Open Innovation Association (FRUCT), Helsinki, Finland, 5–8 November 2019; pp. 47–56.
- Kulakov, K. An Approach to Efficiency Evaluation of Services with Smart Attributes. Int. J. Embed. Real-Time Commun. Syst. 2017, 8, 64–83.
- Marchenkov, S.; Korzun, D.; Shabaev, A.; Voronin, A. On applicability of wireless routers to deployment of smart spaces in Internet of Things environments. In Proceedings of the 2017 IEEE 9th International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS 2017), Bucharest, Romania, 21–23 September 2017; Volume 2, pp. 1000–1005.
- Korzun, D.; Varfolomeyev, A.; Shabaev, A.; Kuznetsov, V. On dependability of smart applications within edge-centric and fog computing paradigms. In Proceedings of the 2018 IEEE 9th International Conference on Dependable Systems, Services and Technologies (DESSERT 2018), Kiev, Ukraine, 24–27 May 2018; pp. 502–507.
- Korzun, D.; Kashevnik, A.; Balandin, S.; Smirnov, A. The Smart-M3 platform: Experience of smart space application development for Internet of Things. In Internet of Things, Smart Spaces, and Next Generation Networks and Systems; Springer Verlag: Berlin/Heidelberg, Germany, 2015; Volume 9247, pp. 56–67.
- Types of Artificial Neural Networks—Wikipedia. Available online: https://en.wikipedia.org/wiki/Types_of_artificial_neural_networks (accessed on 22 July 2020).
- Kumar, A.; Goyal, S.; Varma, M. Resource-Efficient Machine Learning in 2 KB RAM for the Internet of Things. In Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017; Volume 70, pp. 1935–1944.
- Kusupati, A.; Singh, M.; Bhatia, K.; Kumar, A.; Jain, P.; Varma, M. FastGRNN: A Fast, Accurate, Stable and Tiny Kilobyte Sized Gated Recurrent Neural Network. In Proceedings of the Advances in Neural Information Processing Systems 2018, Montreal, QC, Canada, 3–8 December 2018.
- Friedman, J. Stochastic gradient boosting. Comput. Stat. Data Anal. 2002, 38, 367–378.
- Gupta, C.; Suggala, A.; Goyal, A.; Simhadri, H.; Paranjape, B.; Kumar, A.; Goyal, S.; Udupa, R.; Varma, M.; Jain, P. ProtoNN: Compressed and Accurate kNN for Resource-scarce Devices. In Proceedings of the 34th International Conference on Machine Learning; Precup, D., Teh, Y.W., Eds.; International Convention Centre: Sydney, Australia, 2017; Volume 70, pp. 1331–1340.
- Gural, A.; Murmann, B. Memory-Optimal Direct Convolutions for Maximizing Classification Accuracy in Embedded Applications. In Proceedings of the 36th International Conference on Machine Learning; Chaudhuri, K., Salakhutdinov, R., Eds.; PMLR: Long Beach, CA, USA, 2019; Volume 97, pp. 2515–2524.
- Zhang, J.; Lei, Q.; Dhillon, I. Stabilizing Gradients for Deep Neural Networks via Efficient SVD Parameterization. In Proceedings of the 35th International Conference on Machine Learning; Dy, J., Krause, A., Eds.; PMLR: Stockholm, Sweden, 2018; Volume 80, pp. 5806–5814.
- Tanaka, G.; Yamane, T.; Héroux, J.B.; Nakane, R.; Kanazawa, N.; Takeda, S.; Numata, H.; Nakano, D.; Hirose, A. Recent advances in physical reservoir computing: A review. Neural Netw. 2019, 115, 100–123.
- Velichko, A.; Ryabokon, D.; Khanin, S.; Sidorenko, A.; Rikkiev, A. Reservoir computing using high order synchronization of coupled oscillators. IOP Conf. Ser. Mater. Sci. Eng. 2020, 862, 52062.
- Yamane, T.; Katayama, Y.; Nakane, R.; Tanaka, G.; Nakano, D. Wave-Based Reservoir Computing by Synchronization of Coupled Oscillators. In Neural Information Processing; Arik, S., Huang, T., Lai, W.K., Liu, Q., Eds.; Springer International Publishing: Berlin/Heidelberg, Germany, 2015; pp. 198–205.
- Velichko, A. A Method for Evaluating Chimeric Synchronization of Coupled Oscillators and Its Application for Creating a Neural Network Information Converter. Electronics 2019, 8, 756.
- Donahue, C.; Merkel, C.; Saleh, Q.; Dolgovs, L.; Ooi, Y.; Kudithipudi, D.; Wysocki, B. Design and analysis of neuromemristive echo state networks with limited-precision synapses. In Proceedings of the 2015 IEEE Symposium on Computational Intelligence for Security and Defense Applications (CISDA), Verona, NY, USA, 26–28 May 2015; pp. 1–6.
- Larger, L.; Baylón-Fuentes, A.; Martinenghi, R.; Udaltsov, V.; Chembo, Y.; Jacquot, M. High-Speed Photonic Reservoir Computing Using a Time-Delay-Based Architecture: Million Words per Second Classification. Phys. Rev. X 2017, 7, 11015.
- Ozturk, M.; Xu, D.; Príncipe, J. Analysis and Design of Echo State Networks. Neural Comput. 2006, 19, 111–138.
- Wijesinghe, P.; Srinivasan, G.; Panda, P.; Roy, K. Analysis of Liquid Ensembles for Enhancing the Performance and Accuracy of Liquid State Machines. Front. Neurosci. 2019, 13, 504.
- Lukoševičius, M.; Jaeger, H. Reservoir computing approaches to recurrent neural network training. Comput. Sci. Rev. 2009, 3, 127–149.
- Azarpour, M.; Seyyedsalehi, S.; Taherkhani, A. Robust pattern recognition using chaotic dynamics in Attractor Recurrent Neural Network. In Proceedings of the 2010 International Joint Conference on Neural Networks (IJCNN), Barcelona, Spain, 18–23 July 2010; pp. 1–6.
- Wang, T.; Jia, N. A GCM neural network using cubic logistic map for information processing. Neural Comput. Appl. 2017, 28, 1891–1903.
- Tan, J.P.L. Simulating extrapolated dynamics with parameterization networks. arXiv 2019, arXiv:1902.03440.
- Margaris, A.; Kotsialos, E.; Kofidis, N.; Roumeliotis, M.; Adamopoulos, M. Logistic map neural modelling: A theoretical foundation. Int. J. Comput. Math. 2005, 82, 1055–1072.
- MNIST Handwritten Digit Database, Yann LeCun, Corinna Cortes and Chris Burges. Available online: http://yann.lecun.com/exdb/mnist/ (accessed on 16 August 2020).
- Callan, R. Essence of Neural Networks; Prentice Hall PTR: Upper Saddle River, NJ, USA, 1998; ISBN 0-13-908732-X.
- Luque, B.; Lacasa, L.; Ballesteros, F.; Robledo, A. Feigenbaum Graphs: A Complex Network Perspective of Chaos. PLoS ONE 2011, 6, e22411.
- Montavon, G.; Orr, G.B.; Müller, K.-R. (Eds.) Neural Networks: Tricks of the Trade, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2012; Volume 7700.
- LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324.
- Schaetti, N.; Salomon, M.; Couturier, R. Echo State Networks-Based Reservoir Computing for MNIST Handwritten Digits Recognition. In Proceedings of the 2016 IEEE Intl Conference on Computational Science and Engineering (CSE) and IEEE Intl Conference on Embedded and Ubiquitous Computing (EUC) and 15th Intl Symposium on Distributed Computing and Applications for Business Engineering (DCABES), Paris, France, 24–26 August 2016; pp. 484–491.
- Simard, P.; Steinkraus, D.; Platt, J. Best practices for convolutional neural networks applied to visual document analysis. In Proceedings of the Seventh International Conference on Document Analysis and Recognition, Edinburgh, UK, 3–6 August 2003; pp. 958–963.
- Han, S.; Mao, H.; Dally, W. Deep Compression: Compressing Deep Neural Network with Pruning, Trained Quantization and Huffman Coding. arXiv 2015, arXiv:1510.00149.
- Tsuchiya, T.; Yamagishi, D. The Complete Bifurcation Diagram for the Logistic Map. Zeitschrift für Naturforsch. A 1997, 52, 513–516.
- Krishnagopal, S.; Aloimonos, Y.; Girvan, M. Similarity Learning and Generalization with Limited Data: A Reservoir Computing Approach. Complexity 2018, 2018, 6953836.
- Lu, L.; Li, C.; Zhao, Z.; Bao, B.; Xu, Q. Colpitts Chaotic Oscillator Coupling with a Generalized Memristor. Math. Probl. Eng. 2015, 2015, 249102.
- Tchitnga, R.; Fotsin, H.; Nana, B.; Louodop Fotso, P.; Woafo, P. Hartley’s oscillator: The simplest chaotic two-component circuit. Chaos Solitons Fractals 2012, 45, 306–313.
- List of Datasets for Machine-Learning Research—Wikipedia. Available online: https://en.wikipedia.org/wiki/List_of_datasets_for_machine-learning_research#cite_note-76 (accessed on 23 August 2020).
- CIFAR-10 and CIFAR-100 Datasets. Available online: http://www.cs.utoronto.ca/~kriz/cifar.html (accessed on 23 August 2020).
- The Chars74K image dataset—Character Recognition in Natural Images. Available online: http://www.ee.surrey.ac.uk/CVSSP/demos/chars74k/ (accessed on 23 August 2020).
- Livingstone, S.; Russo, F. The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS): A dynamic, multimodal set of facial and vocal expressions in North American English. PLoS ONE 2018, 13, e0196391.
- Ismail, A.; Abdlerazek, S.; El-Henawy, I.M. Development of Smart Healthcare System Based on Speech Recognition Using Support Vector Machine and Dynamic Time Warping. Sustainability 2020, 12, 2403.
- Anguita, D.; Ghio, A.; Oneto, L.; Parra, X.; Reyes-Ortiz, J.L. Human activity recognition on smartphones using a multiclass hardware-friendly support vector machine. In International Workshop on Ambient Assisted Living; Springer: Berlin/Heidelberg, Germany, 2012; Volume 7657, pp. 216–223.
- Kocić, J.; Jovičić, N.; Drndarević, V. An End-to-End Deep Neural Network for Autonomous Driving Designed for Embedded Automotive Platforms. Sensors 2019, 19, 2064.
- Murshed, M.G.S.; Murphy, C.; Hou, D.; Khan, N.; Ananthanarayanan, G.; Hussain, F. Machine Learning at the Network Edge: A Survey. arXiv 2019, arXiv:1908.00080.
- Sharma, R.; Biookaghazadeh, S.; Li, B.; Zhao, M. Are Existing Knowledge Transfer Techniques Effective for Deep Learning with Edge Devices? In Proceedings of the 2018 IEEE International Conference on Edge Computing (EDGE), San Francisco, CA, USA, 2–7 July 2018; pp. 42–49.
- Li, H.; Ota, K.; Dong, M. Learning IoT in Edge: Deep Learning for the Internet of Things with Edge Computing. IEEE Netw. 2018, 32, 96–101.
Classifier | MNIST-10 Accuracy | Memory M_W |
---|---|---|
LogNNet-784:25:10 | 80.3% | (785 + 26·10)·4 B = 4 kB (Alg. 2) or 1 kB (Alg. 1) |
LogNNet-784:100:10 | 89.5% | (785 + 101·10)·4 B = 7 kB (Alg. 2) or 4 kB (Alg. 1) |
LogNNet-784:200:10 | 91.3% | (785 + 201·10)·4 B = 10.9 kB (Alg. 2) or 7.8 kB (Alg. 1) |
LogNNet-784:100:60:10 | 96.3% | 7455·4 B = 29 kB (Alg. 2) or 26 kB (Alg. 1) |
Lin. 1-Layer 784:10 | 90.6% | 785·10·4 B = 30.7 kB |
Lin. 2-Layer 784:10:10 | 91.9% | 7960·4 B = 31 kB |
Lin. 2-Layer 784:30:10 | 95.6% | 23,860·4 B = 93.2 kB |
LeNet-1 [40] | 98.3% | ≈ 2500·4 B = 9.7 kB |
LeNet-5 [40] | 99.05% | ≈ 60,000·4 B = 234 kB |
ESN [41] | 79.43% | ≈ 41,000·4 B = 160 kB |
Bonsai 16 kB [17] | 90.4% | ≈ 16 kB |
Bonsai 84 kB [17] | 97.01% | ≈ 84 kB |
ProtoNN [20] | 93.8% | ≈ 16 kB |
CNN 2 kB [21] | 99.15% | ≈ 2 kB |
FastGRNN [18] | 98.2% | ≈ 6 kB |
Spectral-RNN [22] | 97.7% | ≈ 6 kB |
NeuralNet Pruning [43] | 81% | ≈ 9 kB |
GBDT [17] | 97.90% | ≈ 5859 kB |
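The memory column for the LogNNet rows in the table above can be reproduced from the architecture: with 4-byte reals, Algorithm 2 keeps one 785-element row of W1 in RAM in addition to the output weights W2, while Algorithm 1 stores W2 only. A small helper sketch (the function name is ours, not from the paper):

```pascal
//Sketch only: reproduces the M_W column for a LogNNet-784:P:10 network.
//UseAlg2 = true additionally counts one row of W1 recomputed in place.
function MemoryBytes(P: integer; UseAlg2: boolean): integer;
begin
  Result := (P + 1)*10*4;        //W2: (P+1) x 10 four-byte reals
  if UseAlg2 then
    Result := Result + 785*4;    //One 785-element row of W1
end;
```

For P = 25 this gives 1040 B ≈ 1 kB (Alg. 1) and 4180 B ≈ 4 kB (Alg. 2), matching the first row of the table.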
P | 25 | 45 | 75 | 100 |
---|---|---|---|---|
t_alg1, ms | 5.18 | 13.65 | 33.30 | 56.44 |
t_alg2, ms | 0.74 | 1.17 | 1.85 | 2.38 |
t_alg3, ms | 0.41 | 0.76 | 1.26 | 1.78 |
t_alg1/t_alg3 | 12.63 | 17.96 | 26.43 | 31.70 |
t_alg2/t_alg3 | 1.80 | 1.54 | 1.47 | 1.34 |

The growth of t_alg1/t_alg3 with P reflects the fact that Algorithm 1 regenerates the full chain of logistic-map iterations for every hidden neuron, whereas Algorithm 2 reuses the previous row of W1 and Algorithm 3 reads the pre-filled matrix.
© 2020 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).