A Method for Medical Data Analysis Using the LogNNet for Clinical Decision Support Systems and Edge Computing in Healthcare

Edge computing is a fast-growing and much-needed technology in healthcare. The problem of implementing artificial intelligence on edge devices is the complexity and high resource intensity of the best-known neural network data analysis methods and algorithms. The difficulty of implementing these methods on low-power microcontrollers with small memory size calls for the development of new, effective algorithms for neural networks. This study presents a new method for analyzing medical data based on the LogNNet neural network, which uses chaotic mappings to transform input information. The method effectively solves classification problems and calculates risk factors for the presence of a disease in a patient according to a set of medical health indicators. The efficiency of LogNNet in assessing perinatal risk is illustrated on cardiotocogram data obtained from the UC Irvine machine learning repository. The classification accuracy reaches ~91%, with ~3–10 kB of RAM used on an Arduino microcontroller. Using the LogNNet network trained on a publicly available database of the Israeli Ministry of Health, a service concept for COVID-19 express testing is provided. A classification accuracy of ~95% is achieved, and ~0.6 kB of RAM is used. In all examples, the model is tested using standard classification quality metrics: precision, recall, and F1-measure. The LogNNet architecture allows the implementation of artificial intelligence on medical peripherals of the Internet of Things with low RAM resources and can be used in clinical decision support systems.


Introduction
The Internet of Things (IoT) consists of intelligent devices that have limited resources and that are capable of collecting, recognizing, and processing data as well as exchanging processed data between network participants [1]. IoT is the backbone technology in various areas, including smart healthcare [2,3], smart homes [4], and smart urbanization [5]. The concept of smart healthcare is actively developing in different countries, and the global market for IoT medical devices is growing every year [6]. Constantly emerging new threats to public health, such as the new coronavirus disease 2019 (COVID-19) pandemic [7], create a continuous stimulus for the development of new technologies. As intelligent data processing requires the use of neural networks and intelligent algorithms, the concept of edge computing is actively developing [8][9][10]. In edge computing, part of the computing load is distributed between local devices with the connected sensors (Figure 1). Next, the processed information goes to the edge node for additional processing, and it then goes to the fog and cloud servers for global registration and processing. If the Internet connection with the fog and cloud servers fails, local intelligent processing helps to make a decision to solve the problem on the spot. Approaches to organizing such computing include the edge-cloud IoT model, the local edge-cloud IoT model [11], nanoEdge [12], and the software-defined network controller in the edge server [10].
Intelligent devices with limited resources [13], such as devices with a small amount of RAM (2-32 kB), a weak processor, and small dimensions, can receive various information about a patient's condition from different sensors. These sensors can be wearable sensors, which measure heart rate, blood pressure, body temperature, and glucose levels, or external sensors installed at control points. In addition, diagnostic information can be obtained as registered symptoms via mobile questionnaires, mobile devices, or public touch panels.
In edge computing, the information that is received is pre-processed by artificial intelligence (AI) on the peripheral device, generating local healthcare services. Local data processing improves the confidentiality, security, and reliability of systems operating on the edge architecture [8].
The key point is the presence of artificial intelligence on devices with limited resources. Therefore, the development of new neural architectures and algorithms capable of working on constrained devices is an important task.
Machine learning helps to create data processing rules for the purposes of clustering, classification, and regression, where unsupervised, supervised, and reinforced learning approaches are used [14]. Popular machine learning algorithms for analyzing medical data are the multilayer perceptron (a feedforward neural network with multiple layers and a linear classifier) [15], support vector machines [16], K-nearest neighbors [17], the random forest method [18], logistic regression [19], and decision trees [20,21]. Clinical decision support systems often use boosting methods, for example, AdaBoost [22] and the XGBoost classifier [23], in which the algorithms are organized into ensembles to increase predictive accuracy. The disadvantage of this approach for edge computing is the lack of a clear understanding by users of how these algorithms are arranged, how to install them on operating systems with truncated libraries, how to reduce the memory and processing power they use, and how to implement them in microcontroller programming languages. Often, a task that can be solved on several neurons is approached using special libraries in the Python language and complex architectures with an excessive number of neurons and connections between them. This calls for the development of simple algorithms that avoid these drawbacks while having an efficiency comparable to that of known algorithms.
Ways to reduce the memory consumption of neural networks in edge computing [24] include pruning connections after learning [25,26], online learning with sparse networks [27], and quantization-aware training, which uses reduced bit precision per weight [28]. An additional method is proposed by the author of this paper for recalculating weights through chaotic mappings [29]. The study from [29] describes a classifier based on the neural network LogNNet using the example of handwritten digit recognition from the MNIST database. LogNNet has a simple structure and can operate on devices with low RAM (2-32 kB).
The operation principle of LogNNet is based on a matrix reservoir, which transforms the feature vector from the first multidimensional space into a second multidimensional space, followed by classification with a linear classifier. This transformation is the key for most machine learning methods, and the difference between the methods lies in the transformation algorithms, the number of algorithms, and their sequence. A distinctive attribute of LogNNet is the chaotic mixing of input features in various combinations in the reservoir, similar to the operation of reservoir neural networks [30], with the "kernel trick" effect [31]. This effect consists of increasing the efficiency of the linear classifier when the dimension of the feature space changes. Chaotic mapping allows an effective combination of features by selecting the optimal parameters in a vast area of chaotic states. Due to an effective combination of features in the reservoir and their subsequent classification, LogNNet has promising opportunities for application in clinical decision support systems.
The objective of this study is to create an effective method for analyzing medical data using the LogNNet neural network. The implementation possibilities of LogNNet on the peripheral devices of the medical IoT with low computing resources are demonstrated. This paper has the following structure: Section 2 describes the basic LogNNet architecture followed by sections describing the method for using the neural network LogNNet for medical data analysis. The training, testing, and application steps for patient data analysis are detailed in flowcharts with text commentary. The final subsection provides an estimate of the RAM occupied by the neural network LogNNet for application in edge computing. Section 3 describes two examples using the methodology for assessing perinatal risk and presents the service for assessing the risk of COVID-19 disease by means of the calculation of basic classification metrics. Section 4 discusses the results and compares them with known developments. In the Conclusion, a general description of the study and its scientific significance is given.

Materials and Methods
One of the most important tasks that needs to be solved by artificial intelligence is the classification of an object represented as a vector of features. Almost any object or phenomenon can be associated with several features, which can be used to predict its behavior or relation to any class. For example, for classification problems using photographs, the feature vector is a vector of the pixels of the image color. For assessing the health of a patient, it is a vector of a set of symptoms and other medical indicators.
This section describes the general principle of operation of the LogNNet neural network and the method of using LogNNet in the classification of medical data.
There are two types of medical databases designed for training neural networks: Type 1 is divided into training and test sets, and Type 2 is not. This paper provides examples of the use of both types of medical databases. Clinical decision support system models were tested using the standard classification quality metrics: precision, recall, and the F1-measure, which is the harmonic mean of precision and recall. For Type 1 databases, the neural network was trained on the training set and tested on the test set. For Type 2 databases, the K-fold cross-validation technique was used [32]: all of the data was divided into K parts (in this work, K = 5), one of the parts was used as the test set, and the remaining K-1 parts were taken as the training set. Then, each of the K parts of the database became a test set in turn, and the average value of the metrics was calculated over all K cases.
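The K-fold scheme and the F1-measure described above can be sketched in plain Python; the sample size and the precision/recall values below are hypothetical and serve only to illustrate the calculation.

```python
def kfold_indices(n, k=5):
    """Split sample indices 0..n-1 into k folds (for Type 2 databases).
    Each fold serves once as the test set; the remaining k - 1 folds
    form the training set, and the metrics are averaged over all k runs."""
    folds, start = [], 0
    for i in range(k):
        size = n // k + (1 if i < n % k else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def f1_measure(precision, recall):
    """The F1-measure is the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

folds = kfold_indices(100, k=5)        # hypothetical database of 100 records
score = f1_measure(0.9, 0.8)           # hypothetical fold metrics
```

With K = 5, each record appears in exactly one test fold, so no test data leaks into training within a single run.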

The Operation Principle of the Neural Network LogNNet
The input object, in the form of a feature vector denoted as d, enters the LogNNet classifier. The feature vector contains N coordinates (d1, d2… dN), where the number N is determined by the user. For example, in study [29] on the handwritten digit recognition of the MNIST database, the value N equals 784, and it corresponds to the number of pixels in a 28 × 28 pixel image. At the output of the classifier, the object class of the input feature vector d is determined. Let us denote the number of possible classes as M. In [29], M = 10, as the number of classes was determined by the value of the digit in the image, from "0" to "9". Inside the classifier, there is a reservoir with a matrix, which is designated as W. First, vector d is transformed into a vector Y of dimension N + 1 with an additional coordinate Y0 = 1, and each component is normalized by dividing by the maximum value of this component in the training base. Second, the matrix W of dimension (N + 1) × P is multiplied by the vector Y. The result is a vector S' with P coordinates, which is normalized [29] and translated into a vector Sh of dimension P + 1 with zero coordinate Sh[0] = 1. The zero coordinate acts as an offset element. Therefore, there is a transformation of the feature vector d into the (P + 1)-dimensional space. Next, the vector Sh is fed to a two-layer linear classifier with H neurons in the hidden layer Sh2 and M outputs in the output layer Sout.
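The transformation described above (reservoir projection followed by a two-layer linear classifier) can be sketched in plain Python. The exact normalization of S' and the hidden-layer activation are not specified here, so the standardization and tanh used below are assumptions, not the forms from [29]; the matrix W is stored as P rows of N + 1 elements so that S' = W · Y.

```python
import math

def lognnet_forward(d, W, W1, W2, d_max):
    """One forward pass of the LogNNet classifier (sketch).
    d: feature vector of N medical indicators; d_max: per-feature maxima
    from the training set, used for normalization.
    W: reservoir matrix (P rows x (N + 1) columns);
    W1: hidden-layer weights (H rows x (P + 1)); W2: output weights
    (M rows x (H + 1)). Normalization/activation forms are assumed."""
    N = len(d)
    Y = [1.0] + [d[i] / d_max[i] for i in range(N)]        # Y0 = 1 (bias)
    S = [sum(w * y for w, y in zip(row, Y)) for row in W]  # S' = W . Y
    mean = sum(S) / len(S)
    std = math.sqrt(sum((s - mean) ** 2 for s in S) / len(S)) + 1e-9
    Sh = [1.0] + [(s - mean) / std for s in S]             # offset Sh[0] = 1
    Sh2 = [1.0] + [math.tanh(sum(w * h for w, h in zip(row, Sh)))
                   for row in W1]                          # hidden layer
    Sout = [sum(w * h for w, h in zip(row, Sh2)) for row in W2]
    return Sout.index(max(Sout))                           # predicted class

# Tiny hypothetical configuration N=3, P=3, H=2, M=2 with toy weights
W  = [[0.1 * (i + j) for i in range(4)] for j in range(3)]
W1 = [[0.05 * (i - j) for i in range(4)] for j in range(2)]
W2 = [[0.2, -0.1, 0.3], [-0.3, 0.2, 0.1]]
cls = lognnet_forward([1.0, 2.0, 3.0], W, W1, W2, d_max=[2.0, 2.0, 4.0])
```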
A block diagram of the filling in of the matrix W is shown in Figure 2b. In [29], the filling method uses two equations. Based on the sine function, the first line of the W matrix is filled according to Equation (1), where parameter i varies from 0 to N, parameter A equals 0.3, and parameter B equals 5.9. Subsequent matrix elements are filled in according to the logistic mapping, Equation (2), where j ranges from 1 to P, and parameter r ranges from 0 to 2. The value of the parameter r affects the classification accuracy of LogNNet [29], and the highest image classification accuracy is achieved when r corresponds to the region of chaotic behavior of the logistic mapping.
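The filling procedure can be sketched as follows. The exact forms of Equations (1) and (2) are not reproduced in this text, so the sine row A·sin(B·i) and the standard logistic map x_{n+1} = r·x_n·(1 − x_n) used below are assumptions consistent with the stated parameters.

```python
import math

A, B = 0.3, 5.9   # sine-function parameters from the text

def logistic(x, r):
    """Standard logistic map x_{n+1} = r * x_n * (1 - x_n);
    the exact form of Equation (2) is assumed here."""
    return r * x * (1 - x)

def fill_W(N, P, r, x0=0.1):
    """Fill the reservoir matrix line by line (P rows of N + 1 elements).
    The first row follows the sine function (assumed form of Equation (1));
    subsequent rows continue the chaotic time series of the logistic map."""
    W = [[A * math.sin(B * i) for i in range(N + 1)]]
    x = x0
    for _ in range(1, P):
        row = []
        for _ in range(N + 1):
            x = logistic(x, r)
            row.append(x)
        W.append(row)
    return W

W = fill_W(N=4, P=3, r=1.9)
```

Choosing r inside the chaotic region makes consecutive rows of W effectively decorrelated, which is what mixes the input features.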
The linear classifier of LogNNet is trained by the error backpropagation method [29].

Method for Using the Neural Network LogNNet for Medical Data Analysis
In the presented method, it is assumed that all of the objects from the training and test sets as well as the user data have the same dimension of the feature vector, that the coordinates of the vectors correspond to the same set and order of medical parameters, and that they do not contain missing values. A feature vector that does not meet these requirements must be removed from the database.

LogNNet Training
A block diagram of the LogNNet training process is shown in Figure 3. Training begins by retrieving a training set from a Type 1 or Type 2 database. Then, the training set is balanced. The balancing stage implies equalizing the number of objects in each class by supplementing the classes with copies of already existing objects and sorting the training set in sequential order. The balancing process can be illustrated using an example. Let us suppose the training set consists of 10 objects. Each object is assigned a feature vector d_z, where z is the object number, z = 1 . . . 10. All of the objects are divided into three classes. For example, we have five objects of Class 1 (d_1, d_2, d_4, d_7, d_10), three objects of Class 2 (d_3, d_8, d_9), and two objects of Class 3 (d_5, d_6). We find the maximum number of objects (MAX) in the classes; in our example, MAX equals 5 for Class 1. We supplement the remaining groups with copies of the already existing objects (duplication) to equalize their number to MAX. Therefore, for Class 2, we acquire the group (d_3, d_8, d_9, d_3, d_8), and for Class 3, the group (d_5, d_6, d_5, d_6, d_5). Then, we compose a balanced training data set, choosing one object from each group in turn. As a result, we achieve the following training set: (d_1, d_3, d_5, d_2, d_8, d_6, d_4, d_9, d_5, d_7, d_3, d_6, d_10, d_8, d_5), which consists of 15 vectors and has the same number of objects in every class.
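The balancing step above can be sketched directly; the code reproduces the worked example from the text (duplication up to MAX, then one object from each class in turn).

```python
from itertools import cycle, islice

def balance(training_set):
    """Balance classes by duplicating existing objects, then interleave.
    training_set: list of (object_id, class_label) pairs."""
    by_class = {}
    for obj, label in training_set:
        by_class.setdefault(label, []).append(obj)
    max_count = max(len(objs) for objs in by_class.values())  # MAX
    # Extend each class to MAX by cycling through its own objects
    groups = {label: list(islice(cycle(objs), max_count))
              for label, objs in by_class.items()}
    # Compose the balanced set, choosing one object from each class in turn
    balanced = []
    for i in range(max_count):
        for label in groups:
            balanced.append(groups[label][i])
    return balanced

# The example from the text: 5 objects of Class 1, 3 of Class 2, 2 of Class 3
data = [("d1", 1), ("d2", 1), ("d3", 2), ("d4", 1), ("d5", 3),
        ("d6", 3), ("d7", 1), ("d8", 2), ("d9", 2), ("d10", 1)]
result = balance(data)
```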
At the next stage, the values of the constant parameters of the model are set: the value P, which determines the dimension of the vectors S' and Sh; the number of layers in the linear classifier; the number of training epochs for the backpropagation method; and, in the case of a two-layer classifier, the number of neurons in its hidden layer.
In addition, it is necessary to select a list of optimized parameters: the parameters of the chaotic mappings and of the other equations used for the row-wise filling of the matrix W.
In the basic version of LogNNet [29], these are parameters A, B and r of Equations (1) and (2). In this study, additional chaotic mappings to fill the matrix W are used (see Table 1). In particular, the Henon map and its modification [33] were applied.

The training of the LogNNet network begins with two nested iterations. The internal iteration trains the output LogNNet classifier by the backpropagation method on the training set. The external iteration optimizes the model parameters using the particle swarm method. The variation limits of the optimized parameters are set according to Table 1; for example, for the modified Henon map (LogNNet/Henon2), these are x0 (0.01 to 1.5), y0 (0.01 to 10), and r1, r2, r3, r4 (0 to 1.5). The constants of the optimization method, i.e., the inertia weight and the local and global weight fractions, are also set. After setting the constants, the particle swarm algorithm generates the values of the model parameters, and the matrix W is filled in. The filling is performed line by line, as shown in Figure 2b. The higher the entropy of the numerical series filling the matrix, the better the classification accuracy [33,34]. Therefore, the procedure for optimizing the parameters of the chaotic mapping plays an important role in the presented method for analyzing medical data using the LogNNet neural network.
After training the linear classifier, the classification metrics are determined on the validation set, which, in the general case, coincides with the training set.
Next, we check for exiting the optimization cycle of parameters. The exit occurs either when the desired values of the classification metrics are reached or when a given number of iterations in the particle swarm method is reached. If the condition is not satisfied, the next iteration occurs, and new model parameters are generated by the particle swarm method. If the condition is satisfied, the training algorithm ends.
As a result, we obtain the optimized parameters of the model (parameters of the chaotic mapping) at the output, which make it possible to obtain the highest classification accuracy possible on the validation set.
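The outer optimization loop above can be sketched as a minimal particle swarm over one parameter interval. The objective below is a stand-in quadratic with a hypothetical optimum at r = 1.7; in the method itself, the objective would be the (negated) validation classification metric of a network whose reservoir is filled with the candidate chaotic-map parameters.

```python
import random

def pso(objective, bounds, n_particles=10, n_iter=30,
        w=0.7, c_local=1.5, c_global=1.5):
    """Minimal particle swarm optimization (minimization) over one interval.
    w: inertia weight; c_local / c_global: local and global weight fractions."""
    lo, hi = bounds
    pos = [random.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    best_p = pos[:]                                   # per-particle bests
    best_p_val = [objective(x) for x in pos]
    g = min(range(n_particles), key=lambda i: best_p_val[i])
    best_g, best_g_val = best_p[g], best_p_val[g]     # global best
    for _ in range(n_iter):
        for i in range(n_particles):
            vel[i] = (w * vel[i]
                      + c_local * random.random() * (best_p[i] - pos[i])
                      + c_global * random.random() * (best_g - pos[i]))
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))  # keep inside bounds
            val = objective(pos[i])
            if val < best_p_val[i]:
                best_p[i], best_p_val[i] = pos[i], val
                if val < best_g_val:
                    best_g, best_g_val = pos[i], val
    return best_g

random.seed(0)  # deterministic run for the example
r_opt = pso(lambda r: (r - 1.7) ** 2, bounds=(0.0, 2.0))
```

The exit condition in Figure 3 corresponds to either reaching the desired metric value or exhausting n_iter.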

LogNNet Testing
The testing algorithm is shown in Figure 4. System testing begins with the operation of retrieving a test set from a Type 1 or Type 2 database. A prerequisite is that the test data should not participate in the training process described in the previous paragraph. The constant parameters of the model are set, corresponding to the same values as they do during training. Next, the parameters obtained after training the model are set: the parameters of the chaotic mapping, the equations for filling the matrix W, and the weight of the output classifier. A matrix W is filled in line by line, and the LogNNet network is tested on the test data. The classification metrics are defined, and the algorithm ends.
After the test data has been verified and the classification metrics meet the criteria for implementing the model in clinical practice, the model can be used to process patient data.

Algorithm for Processing Patient's Medical Data Using LogNNet
The algorithm for processing a patient's data is shown in Figure 5. At the beginning, a feature vector of a patient's medical data is obtained. If some data are missing from the vector, they are added. Next, a check is made for the compliance of the feature vector with the format that was used during training and testing. If the vector format corresponds to the model, the analysis process begins; if not, the vector is corrected.

Before the classification, constant values and optimal parameters are obtained from the training, the values of the classifier weights are set, and the matrix W is filled in. The analyzed vector is fed to the neural network LogNNet, and the object class is determined at the output. Based on the result, the risk factors for the presence of a disease in the patient are assessed, and the algorithm ends.
Figure 5. Algorithm for analyzing a patient's medical data using LogNNet.


Estimation of the RAM Occupied by the Neural Network LogNNet for Application in Edge Computing
To implement neural networks in edge computing, they should work on small microcontrollers with limited computing resources. The LogNNet network can effectively operate on boards of the Arduino family with a RAM size of up to 2 kB, and successful results of recognizing handwritten digits from the MNIST database have been demonstrated [29,35]. Table 2 demonstrates the RAM consumption values on the Arduino microcontroller when implementing the LogNNet N:P:H:M network with a two-layer classifier. The parameters N, P, H, and M are described in Section 2.1. The arrays used can be of "real" type (occupying 4 B) or "integer" type (occupying 2 B). Array Y contains (N + 1) "real" type elements. The "integer" array has the advantage of taking up less memory, and the weights obtained from LogNNet training can be stored in an "integer" array [35].
RAM saving is achieved by not storing the elements of the array W in RAM but by calculating each element during the operation of the network using the processor and the chaotic mapping equations from Table 1. The pseudo code for calculating S' = W · Y is given in Algorithm 1, where the elements Wi,j of the matrix are replaced by the values xn+1 of the chaotic mapping. Although this method reduces the speed of the neural network during object classification, this is not critical for practical implementation, as modern microprocessors are very fast. Algorithms without RAM saving (Algorithm 2) allocate additional memory to store the array W [24]. In Algorithm 2, line-by-line filling of the matrix W with the chaotic mapping time series is performed before calculating S'.
The function F(xn) is one of the chaotic mapping functions from Table 1; in the pseudo code, xn denotes the current value xn of the mapping, xnp1 denotes the next value xn+1, and x0 is the initial value.
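The RAM-saving multiplication of Algorithm 1 can be sketched in Python as follows. The standard logistic map is assumed for F, and the sine-filled first row uses the parameters A = 0.3 and B = 5.9 from the text; the exact equation forms are assumptions. Because the same chaotic sequence is consumed in the same order, the result matches Algorithm 2 (precomputing and storing W) exactly.

```python
import math

A, B = 0.3, 5.9   # sine-function parameters from the text

def F(x, r):
    """One step of the chaotic mapping; the standard logistic map
    x_{n+1} = r * x_n * (1 - x_n) is assumed here."""
    return r * x * (1 - x)

def multiply_on_the_fly(Y, P, r, x0=0.1):
    """Algorithm 1 (sketch): compute S' = W . Y without storing W in RAM.
    Each element of W is regenerated from the chaotic time series at the
    moment it is consumed, saving (N + 1) x P x 4 B of memory at the cost
    of recomputing the mapping on every classification."""
    N = len(Y) - 1
    # First row of W: sine function (assumed form of Equation (1))
    S = [sum(A * math.sin(B * i) * Y[i] for i in range(N + 1))]
    x = x0
    for _ in range(1, P):
        acc = 0.0
        for i in range(N + 1):
            x = F(x, r)          # xnp1 = F(xn)
            acc += x * Y[i]      # use the element immediately, never store it
        S.append(acc)
    return S

Y = [1.0, 0.5, 0.25]             # hypothetical normalized input vector
S = multiply_on_the_fly(Y, P=3, r=1.9)
```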
The LogNNet operational method allows the reduction of the used RAM by the amount of memory allocated to the array W, equal to (N + 1) × P × 4 B. For the configuration of the LogNNet 784:100:60:10 neural network described in [24], the amount of saved RAM reaches (784 + 1) × 100 × 4 B = 314,000 B ≈ 306 kB.
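The saving quoted above follows directly from the array dimensions; a one-line check:

```python
def saved_ram_bytes(N, P, bytes_per_value=4):
    """Memory freed by not storing the matrix W:
    (N + 1) x P values of "real" type, 4 B each."""
    return (N + 1) * P * bytes_per_value

# LogNNet 784:100:60:10 configuration from the text
saved = saved_ram_bytes(784, 100)   # 314,000 B, i.e., ~306 kB
```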

Results
This section presents the results of LogNNet application to two models: a perinatal risk assessment model and a risk assessment model for COVID-19 disease caused by the SARS-CoV-2 virus.

Perinatal Risk Assessment Model
Complications during childbirth are one of the main causes of perinatal mortality [36][37][38]. A fetal cardiotocogram (CTG) can be used as a monitoring tool to identify high-risk women during childbirth [23]. In this example, the goal was to study the accuracy of the machine learning method based on the LogNNet neural network on CTG data when identifying women from high-risk groups. The CTG data of 2126 pregnant women were obtained from the UC Irvine Machine Learning Repository [39]. The database contains a set of features for each patient, presented in Table 3. The output values for each patient are categorized into three risk categories: (1) "N" (Normal); (2) "S" (Suspicious); (3) "P" (Pathology).
For the study, 25 features were selected: features 3-27. The first two fields, the file name and the date, did not participate in the network training process.
The presented database is a Type 2 database. Therefore, during training and testing, the K-fold cross-validation method was used with K = 5.
Balancing was done separately for the training set, while the test set remained unchanged.

Model for Assessing the Risk of COVID-19 Disease Caused by the SARS-CoV-2 Virus
The new coronavirus disease 2019 (COVID-19) pandemic caused by SARS-CoV-2 continues to pose a serious threat to public health [40]. The ability to make clinical decisions quickly and to use health resources efficiently is essential in the fight against the pandemic. One of the modern testing methods for coronavirus infection is analysis based on the polymerase chain reaction (PCR) [41]. The availability of this analysis has long been limited in developing countries, which has contributed to increased infection rates. Therefore, the development of effective screening methods that can quickly diagnose COVID-19 and help reduce the burden on health systems is an important direction in the development of medical diagnostic methods. Even where PCR analysis is available, not many people go to diagnostic laboratories, due to the general workload and poor awareness of the signs of the disease. It is therefore important to develop predictive models for COVID-19 test results. Such models are designed to help medical personnel with patient triage, especially in conditions with limited health resources, and to promote the development of mobile services for self-diagnosis at home.
The Israeli Ministry of Health has published data on individuals who have been tested for SARS-CoV-2 using a nasopharyngeal swab by means of PCR [42]. These data are actively used by scientists to create forecasting models [43]. In addition to the date and result of the PCR test, various information is available in the initial database, including clinical symptoms, gender, information as to whether the person is over 60 years of age, and whether the person has had contact with an infected person. The list of fields is given in Table 6. Information can be presented in the form of answers (Yes or No) to the questions posed or in binary form (0 or 1). Clinical symptoms can be obtained during the initial examination of the patient. The procedure does not require significant medical center resources. The patient can be interviewed at home, or self-examination is used. Based on this data, a LogNNet model was developed to predict COVID-19 test results using the eight binary characteristics presented in Table 6 under numbers 1-8.
The database is classified as a Type 1 database, where the training and test sets are defined, similar to [43]. The training set consisted of 46,872 records of individuals who had been tested, where 3874 cases were COVID-19 positive and 42,998 cases were not confirmed (negative), for the period from 22 March 2020 to 31 March 2020. The test set contained data for the next week, from 1-7 April 2020, and consisted of 43,916 individuals who had been tested, where 3370 cases were confirmed to have COVID-19. In accordance with the algorithm (Figure 3), the training set was balanced, and the sample size increased to 85,996; the numbers of confirmed and unconfirmed COVID-19 diagnoses leveled off at 42,998 each. Next, the constant parameter values of the model were set: the number of layers in the linear classifier was equal to 2, the number of training epochs was 50, and the value P and the number of neurons in the hidden layer H of the linear classifier were determined by two architectures, 8:16:10:2 (P = 16, H = 10) and 8:6:4:2 (P = 6, H = 4). All of the chaotic mappings presented in Table 1 were tested. Further, the parameters were optimized by the particle swarm method. After finding the optimal values, testing was performed as in the algorithm in Figure 4. The testing results for the 8:16:10:2 and 8:6:4:2 architecture models are presented in Tables 7 and 8; the best metrics were achieved by the 8:6:4:2 model with the chaotic sine mapping. The fact that the model with fewer neurons performed better is not a general rule but rather an exception to it, since a system with fewer neurons is more difficult to overfit. In addition, the input data are presented in binary form, and the amount of input information per patient is much less than in the first example. All of these factors can lead to the result that a small number of neurons can optimally solve this problem.
Therefore, for each individual practical task, it is necessary to test several LogNNet architectures with different numbers of neurons in the reservoir and in the hidden layer of the classifier and either to choose the best architecture or to include these parameters in the optimization along with the parameters of the chaotic mapping.
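The architecture-selection step above can be sketched as a search over candidate (P, H) pairs. In this illustration, `train_and_score` is a hypothetical stand-in for the full pipeline (balance the training set, train LogNNet, optimize the chaotic-map parameters, evaluate on the test set), and the scores are placeholders: only the 95.46% figure for 8:6:4:2 comes from the text, the rest are invented.

```python
def train_and_score(p, h):
    """Hypothetical stand-in for training and evaluating an 8:P:H:2 model.

    A real implementation would run the training/optimization pipeline;
    here placeholder accuracies are returned (only (6, 4) -> 0.9546 is
    taken from the paper's results, the others are illustrative).
    """
    dummy = {(16, 10): 0.9512, (6, 4): 0.9546, (12, 8): 0.9490}
    return dummy.get((p, h), 0.0)

candidates = [(16, 10), (6, 4), (12, 8)]   # (P, H) pairs to try
best = max(candidates, key=lambda ph: train_and_score(*ph))
print("best architecture: 8:%d:%d:2" % best)   # -> best architecture: 8:6:4:2
```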

Estimation of RAM Occupied for the Arduino Microcontroller
For the different LogNNet architectures discussed above, the occupied RAM was estimated for implementation on Arduino microcontrollers. Table 9 demonstrates the RAM values for Algorithm 1 with RAM saving and for Algorithm 2 without RAM saving. The LogNNet 25:50:20:3 model for assessing perinatal risk uses about 8 kB of RAM; a more detailed RAM distribution is shown in Figure 6a. A significant part of the memory, ~5 kB, is occupied by the matrix W. When Algorithm 1 is used, this memory can be freed, and the algorithm uses only about 3 kB. The second biggest memory consumer (2 kB) is the array of weights between the Sh/Sh2 layers, as it contains the weights obtained after training the neural network. This neural network can be implemented on microcontrollers with 16-32 kB of memory, for example, the Arduino Nano.
The RAM distribution of the LogNNet 8:6:4:2 model is shown in Figure 6b. As the matrix W occupies ~216 B, if Algorithm 1 is used, this memory can be freed, and the algorithm uses ~600 B. Therefore, the model can be placed on microcontrollers with a RAM size of 1-2 kB, for example, the Arduino Uno. The RAM savings are not as significant as in the LogNNet 25:50:20:3 configuration because of the small number of neurons in the model.
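The dominant contribution of the matrix W can be reproduced with simple arithmetic. Assuming one bias input per reservoir row and 32-bit (4-byte) float weights (both assumptions ours, not stated in this excerpt), the sizes match the ~5 kB of Figure 6a and the ~216 B of Figure 6b:

```python
def matrix_w_bytes(inputs, reservoir_neurons, bytes_per_weight=4):
    """Size of the reservoir matrix W in bytes.

    Assumes one extra bias input and 4-byte float weights; these are
    working assumptions used to reproduce the figures quoted in the text.
    """
    return reservoir_neurons * (inputs + 1) * bytes_per_weight

# LogNNet 25:50:20:3: W is 50 x (25 + 1) -> 5200 B, i.e. the ~5 kB in Figure 6a
print(matrix_w_bytes(25, 50))   # 5200
# LogNNet 8:6:4:2: W is 6 x (8 + 1) -> 216 B, matching Figure 6b
print(matrix_w_bytes(8, 6))     # 216
```

This also makes the scaling argument explicit: W grows with the product of input and reservoir sizes, so regenerating it on the fly (Algorithm 1) pays off most in the larger configuration.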

Discussion
The algorithm for processing patient data is shown in Figure 5. However, not all patients have the required number of health indicators. In the absence of a particular indicator, the classifiers of the system can malfunction, which can lead to critical classification errors and errors in assessing the risk of a disease. The simplest method, in which the missing data are replaced by the average value of the given indicator in the training database, leads to classification errors. The patent [44] proposes a method in which, when certain coordinates of the input feature vector are absent, a classifier is used that has been specifically trained on the features that are present. Therefore, it is necessary either to prepare several classifiers in advance and apply them depending on the presence of certain health indicators or to re-train the classifier on the fly, leaving in the training database only those health features that are present for the patient. Testing a similar method on LogNNet can be a topic for future research.
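The baseline imputation strategy criticized above (replacing a missing indicator with its training-set mean) can be made concrete with a toy example; the indicator values below are invented purely for illustration, with `None` marking a missing measurement:

```python
# Toy training database: three patients, three numeric health indicators
train = [
    [120.0, 80.0, 36.6],
    [140.0, 90.0, 37.2],
    [130.0, 85.0, 36.8],
]

# Per-indicator means computed over the training database
n = len(train[0])
means = [sum(row[j] for row in train) / len(train) for j in range(n)]

def impute(patient):
    """Replace missing indicators (None) with the training-set mean."""
    return [means[j] if v is None else v for j, v in enumerate(patient)]

patient = [125.0, None, 37.0]
print(impute(patient))   # [125.0, 85.0, 37.0]
```

The weakness discussed in the text is visible here: the imputed value carries no information about this patient, which is why feature-subset classifiers, as in [44], are preferable.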
An increase in the space dimension of the input features occurs in the LogNNet reservoir. For example, in the LogNNet 25:100:40:3 configuration, the (25 + 1)-dimensional vector Y is transformed into the (100 + 1)-dimensional vector Sh, which is then classified by the linear classifier. The more complex the chaos, the more diverse the transformation of Y in the reservoir, and the better the linear classifier separates Sh in the 101-dimensional space. In the LogNNet 8:6:4:2 configuration, the space dimension of the input features is reduced in the reservoir, and the (8 + 1)-dimensional vector Y is transformed into the (6 + 1)-dimensional vector Sh. This approach provides good results for tasks such as assessing the risk of COVID-19 (see Section 3.2) or MNIST image recognition [29]: as the main features are distinguished in the reservoir, the minor features are erased, and the output classifier is trained more efficiently.
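The dimensionality change described above can be sketched as a single matrix-vector product. This is a sketch of the shape bookkeeping only, with random weights standing in for the chaotically generated W; the actual LogNNet transform may include additional normalization steps:

```python
import random

def reservoir_transform(W, Y):
    """Compute Sh = W * Y and append a bias component for the classifier.

    Y is the (d + 1)-dimensional input (d features plus a bias element);
    the result has (rows of W) + 1 components.
    """
    S = [sum(w * y for w, y in zip(row, Y)) for row in W]
    return S + [1.0]   # bias for the linear classifier

random.seed(0)
# Expansion, LogNNet 25:100:40:3: 26-dimensional Y -> 101-dimensional Sh
W_big = [[random.uniform(-1, 1) for _ in range(26)] for _ in range(100)]
Y26 = [random.uniform(0, 1) for _ in range(25)] + [1.0]
assert len(reservoir_transform(W_big, Y26)) == 101

# Reduction, LogNNet 8:6:4:2: 9-dimensional Y -> 7-dimensional Sh
W_small = [[random.uniform(-1, 1) for _ in range(9)] for _ in range(6)]
Y9 = [random.uniform(0, 1) for _ in range(8)] + [1.0]
assert len(reservoir_transform(W_small, Y9)) == 7
```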
Previous studies revealed [29,34] that the higher the Lyapunov exponent or the entropy of a chaotic mapping, the higher the accuracy of the LogNNet classification. Despite this finding, the present results call for optimizing the parameters of the chaotic mapping in the LogNNet reservoir to increase the efficiency of transforming the space of the input features. For each specific network configuration and classification problem, it is necessary to perform a separate optimization and selection of the mapping type to find the best solution. Under the same initial conditions, a chaotic mapping generates a repeatable time series. This is more advantageous than using a random number generator, as the chaotic dynamics can be varied during optimization using the control parameters of the mapping. The study of the role of chaotic mappings and chaos parameters in the transformation of input features in reservoir neural networks can be a topic for future research.
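The Lyapunov exponent referred to above can be estimated numerically as the orbit average of log|f'(x)|. As a self-contained illustration, the logistic map x → r·x·(1−x) is used here (the mappings actually tested are those of Table 1, outside this excerpt); at r = 4 the exponent is known analytically to be ln 2 ≈ 0.693:

```python
import math

def lyapunov_logistic(r, x0=0.3, transient=1000, n=100000):
    """Estimate the Lyapunov exponent of the logistic map x -> r*x*(1-x).

    The exponent is the long-run average of log|f'(x_n)| = log|r*(1-2x_n)|
    along the orbit; a transient is discarded first.
    """
    x = x0
    for _ in range(transient):
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return total / n

# At r = 4 the logistic map is fully chaotic; the estimate approaches ln 2
print(lyapunov_logistic(4.0))
```

A positive exponent quantifies the "complexity of the chaos" invoked in the discussion; sweeping r shows how the control parameter moves the map between regular and chaotic regimes.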
The results of the model analysis for assessing the risk of COVID-19 disease showed that the use of almost all chaotic mappings led to a good prediction accuracy of about 95%, while the precision, recall, and F1 indicators were higher for the Negative class. The test results were at the level of [43], where a gradient-boosting predictor trained with LightGBM was used. The best performance was demonstrated by the LogNNet/Sine 8:6:4:2 (A = 95.46%) model, with a chaotic sine mapping and 600 B of RAM occupied. This model can be placed on Arduino Uno microcontrollers with a RAM size of 2 kB.
A service concept for the advanced medical diagnosis of COVID-19 can be proposed (see Figure 7). The technical part contains an Arduino Uno or Arduino Nano board with a connected temperature sensor and a touch panel. The patient is asked questions 1-8 from Table 6, which are displayed on the touch panel, and the patient's temperature is measured. At the output, the system evaluates the risk of COVID-19 disease. Currently, for example, in Thailand, the installation of temperature sensors in public places is widespread, providing preliminary screening of visitors' temperatures. Such temperature modules can be equipped with artificial intelligence based on LogNNet and can offer better express testing for COVID-19. The results of the present study are in line with the concept of nanoEdge [12], where low-power devices at the nodes can provide collective services and process information together, communicating with each other using low-power wireless radios such as BLE, ZigBee, LoRa, or other similar technologies. In the field of healthcare, special attention is paid to low-energy communication devices [50], and this highlights the need to develop neural network algorithms for constrained devices.
Endeavors to create resource-efficient algorithms and neural networks are actively being undertaken in the scientific community. For example, the algorithms Bonsai [51], ProtoNN [52], CNN [53], FastGRNN [54], Spectral-RNN [55], and NeuralNet Pruning [56] can be run for MNIST recognition tasks on devices with RAM in the 2-16 kB range. However, these algorithms have increased coding complexity, and there is no information on their effectiveness in analyzing medical data.
The results of this study open opportunities for the use of LogNNet neural models for mobile diagnostics in clinical decision support systems and in patient self-diagnostics. The described technique is universal and can be tested on a wide variety of medical databases. Further theoretical development and the practical implementation of mobile health services and edge computing open broad prospects for future research.

Conclusions
This paper presents a new algorithm for the implementation of a neural network based on the LogNNet architecture, which has shown its effectiveness in solving problems related to the classification of medical data. The role of chaos is highlighted: optimizing the chaos properties during the transformation of the space of input features in the reservoir affects the efficiency of the LogNNet classification. The algorithm paves the way for the development of edge computing in healthcare. Different types of algorithms and neural architectures are dedicated to effectively solving a certain class of problems. The presented algorithm is an advanced classification algorithm for clinical decision support systems, operating on low-power microcontrollers with a small memory size.
A method for medical data analysis using the LogNNet neural network is presented to calculate risk factors for the presence of a disease in a patient based on a set of medical health indicators. The algorithm illustrates the diagnosis of COVID-19 after training the LogNNet network on a publicly available database of the Israeli Ministry of Health, which publishes data on patients who have been tested for SARS-CoV-2 using a nasopharyngeal swab by means of the PCR method. In addition, the LogNNet network assesses perinatal risk based on the cardiotocogram data of 2126 pregnant women obtained from the machine learning repository of the University of California, Irvine. In all examples, the model was tested by evaluating standard classification quality metrics: precision, recall, and F1-measure.
The results of this study can help to implement artificial intelligence on medical peripheral devices of the Internet of Things with low RAM resources, including clinical decision support systems, remote Internet medicine, and telemedicine.

Patents
An application for invention No. 2021117058 "Method for analyzing medical data using the neural network LogNNet" has been filed.
Funding: This research received no external funding.
Institutional Review Board Statement: Ethical review and approval were waived for this study. The Tel-Aviv University institutional review board (IRB) determined that the Israeli Ministry of Health public dataset used in this study does not require IRB approval for analysis. Therefore, the IRB determined that this study is exempt from approval.

Informed Consent Statement: Not applicable.
Data Availability Statement: All the data used in this study were retrieved from the Israeli Ministry of Health [42] and the UC Irvine machine learning repository [39] websites.