Assessment of Machine Learning Techniques in IoT-Based Architecture for the Monitoring and Prediction of COVID-19

Abstract: Since the end of 2019, the world has been facing the threat of COVID-19. It is predicted that, before herd immunity is achieved globally via vaccination, people around the world will have to tackle the COVID-19 pandemic using precautionary steps. This paper suggests a COVID-19 identification and control system that operates in real-time. The proposed system utilizes the Internet of Things (IoT) platform to capture users' time-sensitive symptom information to detect potential coronavirus cases early on, to track the clinical measures adopted by survivors, and to gather and examine appropriate data to verify the existence of the virus. There are five key components in the framework: symptom data collection and uploading (via communication technology), a quarantine/isolation center, an information processing core (using artificial intelligence techniques), cloud computing, and visualization for healthcare doctors. This research utilizes eight machine/deep learning techniques (Neural Network, Decision Table, Support Vector Machine (SVM), Naive Bayes, OneR, K-Nearest Neighbor (K-NN), Dense Neural Network (DNN), and Long Short-Term Memory (LSTM)) to detect coronavirus cases from time-sensitive information. A simulation was performed to verify the eight algorithms, after selecting the relevant symptoms, on real-world COVID-19 data values. The results showed that five of these eight algorithms obtained an accuracy of over 90%. Conclusively, it is shown that real-world symptomatic information would enable these five algorithms to identify potential COVID-19 cases effectively with enhanced accuracy. Additionally, the framework presents responses to treatment for COVID-19 patients.


Introduction
Nearly 185,390,254 cases of COVID-19 have been confirmed in 222 countries as of July 2021 since its discovery in late December 2019 (Source: https://www.worldometers.info/coronavirus (accessed on 30 May 2021)). Furthermore, a daily increase of roughly 2% has been reported [1]. Of these cases, nearly 4,009,218 deaths have been registered, which represents a 4% mortality rate [2,3]. The novel coronavirus was classified by the World Health Organization (WHO) as a pandemic in March 2020 [4,5]. As of July 2021, the vaccination procedure is ongoing, and it will take a long time to induce herd immunity globally [6,7]. Currently, COVID-19 is controlled by following procedures including regular hand washing, adhering to social distancing, and wearing face masks, which slow the spread of new variants such as Delta and Delta+ [8,9].
The purpose of this research is to investigate the impact of IoT-based technologies and machine learning in tracking and battling COVID-19 in three phases: early diagnosis, evaluation, and recovery. Early identification and diagnosis can result in fewer infections and, as a result, better health care for those who are sick. Isolating sick persons from others, quarantining proven or suspected cases, and imposing lockdowns can help reduce the number of COVID-19 infections. While this disease has a significant risk of spreading quickly when compared to other coronavirus infections, there are several ongoing initiatives and extensive research to limit the transmission of the virus. IoT technology has been demonstrated to be a safe and effective means of dealing with the COVID-19 pandemic in this setting. Following up on COVID-19 patients after they have recovered will aid in the monitoring of recurrence of symptoms and the infectivity of these recovered individuals.

Theoretical Background
Technology for the real-time detection, identification, and tracking of novel events could help delay the spread of disease [10,11]. These tools include in-depth analysis at the fog-cloud level, as well as the incorporation of captured remote tracking information, namely through mobile healthcare, and the continued monitoring of the health condition of patients [12,13]. Figure 1 shows the layered architecture of IoT-fog-cloud computing. This paper suggests a tracking and monitoring system for COVID-19 that would gather information from IoT sensors in a time-sensitive manner [14,15]. The current research presents the incorporation of eight data prediction models (Neural Network, Support Vector Machine (SVM), Deep Neural Network, K-Nearest Neighbor (K-NN), OneR, Naive Bayes, Decision Table, and Long Short-Term Memory (LSTM)) to rapidly classify possible coronavirus instances from real-time data [16]. An IoT framework that can track all potential and confirmed outbreaks, along with the recovery statuses of COVID-19 sufferers, may be introduced using this identification and tracking system [17,18]. This method will lead to a better understanding of the existence of COVID-19 by capturing, reviewing, and storing data in addition to conducting time-sensitive monitoring [19,20]. The suggested framework comprises five important aspects: (1) Data collection of real-time symptoms (using IoT devices); (2) Quarantine/isolation center medication and result information; (3) Data processing center using Artificial Intelligence techniques; (4) Health caregivers and doctors; and (5) Cloud visualization.

Novel Contribution
The goal of the presented model is to minimize death rates using early diagnoses, follow-up information from recovered patients, and improved knowledge of the disease. Specifically, the main contributions of the presented study are as follows:

1. Incorporating an IoT-fog-cloud platform for the analysis of COVID-19 cases over geographical distribution patterns;
2. Presenting a fog computing environment for the prediction of the disease spread of COVID-19;
3. Analyzing state-of-the-art prediction techniques for the assessment of the disease spread of COVID-19 in real-time with a fog computing platform;
4. Delivering real-time information to relevant doctors and caregivers for time-sensitive precautionary decision-making;
5. Validating the proposed model to assess the overall performance enhancement in comparison to the state-of-the-art prediction models.
Specifically, this paper experiments on a real dataset to validate these eight machine learning algorithms. The findings indicate that more than 90% accuracy was achieved by five of these eight algorithms. Based on time-sensitive data values, the use of these five algorithms enables the efficient and reliable prediction and detection of possible COVID-19 events. Figure 3 demonstrates the conceptual overview and layered architecture of the proposed IoT system. This article is arranged as follows. The related literature is discussed in Section 2. The suggested structure is detailed in Section 3. Section 4 focuses on the detection and identification of new events. Finally, the work ends with Section 5.

Related Work
There are numerous significant works on the utilization of the IoT to provide healthcare; several works are presented here that indicate the incorporation of the IoT in the healthcare industry. Moreover, a subsection presents the machine learning approaches used by researchers in the healthcare field.

IoT in Healthcare
A comprehensive literature analysis of the utilization of the IoT in healthcare services was performed by Usak et al. [21]. The work also focused on the key difficulties of incorporating the IoT for provisioning healthcare delivery, with a categorization presented in the literature. A hybrid IoT protection and health control scheme was suggested by Wu et al. [22]. The aim was to increase safety outdoors. The framework comprises two modules: the first module captures patient information and the second module is utilized over the Internet to compile the collected data. To acquire reliable attributes in the ambient environment and patient healthcare data, wearable devices were used. To guarantee the privacy and protection of health records, Hamidi [23] analyzed the authorization of healthcare data values. The research suggested the use of technology for biometric-based authentication. In urban areas, Rath and Pattanayak [24] suggested a smart health care hospital utilizing smart appliances. Concerns regarding the hygiene, safety, and prompt care of COVID-19 sufferers were addressed in the VANET region. Simulators such as NS2 and NetSim were used to test the proposed method. A Cloud-IoT-Health model that combined cloud computing and IoT technology for healthcare, based on the related literature, was suggested by Darwish et al. [25]. As well as emerging developments in Cloud-IoT-Health, the paper addressed the complexities of the incorporation of this technology. These challenges can be split into three layers: infrastructure, connectivity and networking, and intelligence. During physical exercises, Zhong et al. [26] researched the tracking of graduates in the university. The proposed model concentrated on a model of social event identification and tracking, requiring the pre-assessment of data. Several classifiers were evaluated and discussed, including Decision Trees, Neural Networks, and SVM.
An IoT-based smart health tracking and control architecture was suggested by Din and Paul [27]. The architecture consisted of three layers: (1) generating data from battery-powered bio-devices, (2) Hadoop-based processing, and (3) device layers. The work focused on an energy-efficiency technique utilizing piezoelectric instruments physically connected to a human, because of the poor ability of batteries to fuel the sensors. An IoT-inspired framework for the time-sensitive regulation of diabetes was developed by Otoom et al. [5]. Statistical models based on ARIMA and Markov were used to assess sufficient doses of insulin.

Machine Learning in Healthcare
An IoT-empowered gadget for detecting heart-related disease was introduced by Alshraideh et al. [28]. Many machine learning algorithms have been used for disease detection. Nguyen [29] provided a review of the techniques of machine learning used in coronavirus analysis. These approaches were categorized into many groups, incorporating the utilization of IoT technology. Maghdid et al. [30] recommended that sensors on smartphones be used to gather health data, such as temperature. To classify potential COVID-19 cases, Rao and Vazquez [31] suggested the utilization of deep-learning mechanisms. Learning is carried out on user data obtained from literature on the Internet that is accessible from smart devices. Allam and Jones [32] addressed the requirement, inspired by the outbreak of COVID-19, to create common protocols to exchange data between smart cities in pandemics. For example, to classify potential COVID-19 cases, artificial intelligence techniques are extended to matrices obtained from thermal cameras deployed in intelligent regions. An IoT-based technique for the detection of coronavirus cases was suggested by Fatima et al. [33]. The method was inspired by the fuzzy inference technique. A distinction was made between MERS, SARS, and COVID-19 by Peeri et al. [34], using the available literature. They proposed the use of the IoT to track the distribution of infections. To our knowledge, a full system for detecting and tracking COVID-19 using IoT technology has not been established. Table 1 shows the comparative analysis of the presented study with related works.

Proposed Approach
To turn appliances into intelligent devices, the IoT incorporates networking and sensing technology, as well as widespread computing [35,36]. This helps smart programs to be provided to consumers to boost the quality of their lives. There are primarily three levels of IoT architecture: physical, network, and application layers [37]. To capture heterogeneous data, physical structures are fitted with sensors [38]. These sensors have restricted computing power and a limited lifespan [11]. The more knowledge they gather, the more helpful the options provided [39]. The difficulty of data processing, however, is becoming a bottleneck. Connectivity can be used to compensate for these sensors' restricted computing capacity. Several networking systems, including Bluetooth, RFID, IEEE 802.15.4, 6LoWPAN, and Near Field Communication, have been used [11]. Networking is not only utilized to upload accumulated information but is also utilized at the physical level to promote contact between heterogeneous IoT objects. In doing so, as the number of artifacts grows, the network layer can enable scalability with interface exploration and context awareness. More importantly, it can provide IoT devices with protection and privacy. The data uploaded from the IoT units can be thoroughly analyzed to allow effective decision-making. Deep/machine learning techniques are utilized for this purpose, and such effective techniques are replacing more conventional methods. The IoT can be used efficiently in a wide variety of areas, including power grids, agriculture, healthcare, and smart houses. In healthcare, the IoT is often referred to as the Internet of Medical Things (IoMT). It is increasingly replacing orthodox ICT-based approaches, such as telemedicine or telehealth, since the IoMT can provide more sophisticated functionality than these conventional approaches.
For instance, while conventional approaches can allow patients to communicate remotely with medical doctors, the IoMT also supports machine-human and machine-machine interaction, such as AI-based diagnosis. The balance between data privacy/security and patient protection is one significant concern in the design of the IoMT. Eavesdropping on communication networks (to sell the gathered data), interference, interruption, or even the alteration of the service are examples of cyber attacks that threaten such designs. Nevertheless, in contexts such as those of COVID-19 patients, it might be appropriate to relax such privacy precautions so that IoT data can be assessed to provide effective curative services. Deep/machine learning approaches can be used to aid in the assessment of this equilibrium.

Fundamentals of Deep/Machine Learning Approaches
To build a predictive model, this work used a pre-processed dataset to test the detection (or prediction) method. This model aims to estimate the probability that COVID-19 has infected a given individual. Many learning techniques have been used for this purpose, and they can be grouped into various categories. The WEKA software categorizes the chosen classifiers into six classes: (1) Dense Neural Network, (2) Decision Tables, LSTM, and OneR, (3) K-Nearest Neighbors, (4) Support Vector Machines, (5) Naive Bayes, and (6) Neural Networks. This work directly compares the performance of the eight algorithms. The WEKA program was utilized in this work to execute the prediction models. For each of the eight algorithms, the default attribute measures were utilized. A brief overview of the eight algorithms is given below.

Support Vector Machine (SVM)
The Support Vector Machine is an effective supervised learning method. Given labeled training instances, each associated with a positive or negative class, the SVM computes a hyperplane that separates the instances of the two categories and maximizes the margin between the hyperplane and the nearest data points. This hyperplane is then utilized to associate a predicted class with a novel data instance.
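As a minimal sketch of the margin-maximizing idea (illustrative Python, not the WEKA SVM configuration used in the experiments; the toy data, learning rate, and iteration count are assumptions), a linear SVM can be trained by subgradient descent on the hinge loss:

```python
import numpy as np

def train_linear_svm(X, y, lr=0.01, lam=0.01, epochs=500):
    """Train a linear SVM by full-batch subgradient descent on the hinge loss.
    X: (n, d) feature matrix; y: labels in {-1, +1}."""
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1              # instances violating the margin
        grad_w = lam * w                # regularization pushes the margin wide
        grad_b = 0.0
        if viol.any():
            grad_w = grad_w - (y[viol, None] * X[viol]).mean(axis=0)
            grad_b = -y[viol].mean()
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def svm_predict(w, b, X):
    """Assign +1 or -1 depending on which side of the hyperplane X falls."""
    return np.where(X @ w + b >= 0, 1, -1)
```

The learned pair (w, b) defines the separating hyperplane; classification of a new instance only requires the sign of its signed distance.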

Artificial Neural Network (ANN)
The Artificial Neural Network is a supervised learning methodology. Its prediction mechanism seeks to replicate the predictive capacity of the human brain. For this purpose, several layers of nodes are connected, and the edges that link the nodes carry numerical weights. Each node's output is evaluated in terms of the weighted sum of its inputs. Given training data instances that are labeled as positive and negative, the ANN learns the weights that best identify the instances of each category. The trained network is then utilized to associate a category with a new instance: the test example drives the inputs to the initial layer, and a threshold applied to the final layer's outputs decides the label for that test instance.

Naive Bayes
The Naive Bayes approach is a tool for supervised prediction. The learning process follows a probabilistic approach; it utilizes Bayes' theorem to estimate the parameters of the model. Given a series of labeled training instances, the Naive Bayes approach calculates several parameters, including the likelihood of each category label occurring. To predict a category for any specific test case, these parameters are then used. This is achieved by measuring the probability of each of the potential class labels applying to the test case. The label of that test instance is determined by the highest of these probabilities.
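The parameter estimation and labeling steps above can be sketched for binary symptom vectors with a Bernoulli Naive Bayes model (an illustrative Python sketch with Laplace smoothing; the toy data and smoothing constant are assumptions, and the paper's experiments used WEKA's implementation):

```python
import math

def train_naive_bayes(X, y, alpha=1.0):
    """Bernoulli Naive Bayes with Laplace smoothing.
    X: list of binary symptom vectors; y: list of class labels."""
    classes = sorted(set(y))
    n_features = len(X[0])
    prior, cond = {}, {}
    for c in classes:
        rows = [x for x, label in zip(X, y) if label == c]
        prior[c] = len(rows) / len(X)
        # P(feature = 1 | class), smoothed so no probability is exactly 0 or 1
        cond[c] = [(sum(r[j] for r in rows) + alpha) / (len(rows) + 2 * alpha)
                   for j in range(n_features)]
    return prior, cond

def nb_predict(prior, cond, x):
    """Pick the class with the highest posterior log-probability."""
    best, best_lp = None, -math.inf
    for c in prior:
        lp = math.log(prior[c])
        for j, v in enumerate(x):
            p1 = cond[c][j]
            lp += math.log(p1 if v == 1 else 1 - p1)
        if lp > best_lp:
            best, best_lp = c, lp
    return best
```

Working in log-probabilities avoids numerical underflow when many symptoms are multiplied together.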

K-Nearest Neighbors (K-NN)
K-NN is a supervised, instance-based learning method. The learning process is lazy: no model is built in advance. Given a series of labeled training instances, K-NN measures the distances between a given test instance and all of the training instances. The class labels of the k nearest training instances are then used to assign, by majority vote, a class label to the test instance.

Decision Table

The Decision Table is a supervised learning strategy. Given a series of labeled training instances, this approach develops a model by creating a decision table, which comprises a set of conditions and subsequent actions. The table is complete if it considers every feasible combination of input conditions and prescribes the necessary action for each of them.
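The lazy K-NN procedure described above can be sketched for binary symptom vectors using the Hamming distance (illustrative Python, not the WEKA configuration used in the experiments; the toy data and choice of k are assumptions):

```python
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Classify binary symptom vector x by majority vote among its k
    nearest training instances, using Hamming distance."""
    dists = sorted(
        (sum(a != b for a, b in zip(t, x)), label)
        for t, label in zip(train_X, train_y)
    )
    top_labels = [label for _, label in dists[:k]]
    return Counter(top_labels).most_common(1)[0][0]
```

Because all work happens at query time, training is free, but each prediction scans the whole training set.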

Dense Neural Network
A Dense Neural Network is a supervised deep learning technique. Given a set of labeled training instances, this approach computes a model by constructing a neural network in which each node of one layer is connected to every node of the next (fully connected layers). The connection weights are adjusted during training to minimize the prediction error over all training instances, allowing the network to learn complex, non-linear relationships between the symptoms and the class label.

One Rule (OneR)
OneR is a methodology for supervised learning. Given a set of labeled training instances, this approach creates a model by generating one rule for each attribute in the data set and then selecting the rule with the minimum total error.

Long Short-Term Memory (LSTM) Technique
LSTM is a methodology of deep learning based on recurrent neural networks. Given a series of labeled training examples, this procedure creates a model in which memory cells, regulated by input, forget, and output gates, retain information across the time steps of a sequence. This makes the LSTM well suited to time-sensitive symptom data, since the prediction for the current time step can depend on patterns observed in earlier steps.
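The gate mechanism described above can be sketched as the forward pass of a single LSTM layer over a symptom time series (a NumPy sketch of the standard gate equations; the paper's experiments used an existing LSTM implementation, and the dimensions and random parameters below are assumptions):

```python
import numpy as np

def lstm_forward(x_seq, W, U, b, h0=None, c0=None):
    """Forward pass of one LSTM layer over a sequence.
    x_seq: (T, d) inputs; W: (4h, d), U: (4h, h), b: (4h,) hold the
    input, forget, candidate, and output gate parameters stacked together."""
    hidden = U.shape[1]
    h = np.zeros(hidden) if h0 is None else h0
    c = np.zeros(hidden) if c0 is None else c0
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    outputs = []
    for x in x_seq:
        z = W @ x + U @ h + b
        i = sigmoid(z[:hidden])                # input gate
        f = sigmoid(z[hidden:2 * hidden])      # forget gate
        g = np.tanh(z[2 * hidden:3 * hidden])  # candidate cell state
        o = sigmoid(z[3 * hidden:])            # output gate
        c = f * c + i * g                      # memory cell keeps long-term state
        h = o * np.tanh(c)                     # hidden state exposed each step
        outputs.append(h)
    return np.array(outputs), h, c
```

The cell state `c` carries information across time steps, which is what lets the model relate a symptom reading to earlier readings in the same sequence.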

Research Methodology
With the advantages of the IoT and machine learning technologies in data analysis, the spread of COVID-19 can be predicted. However, the assessment of several techniques in real-time remains an open question that needs to be addressed. Moreover, the detection of an optimal technique for COVID-19 data analysis is another important concern that has to be explored. This section introduces and addresses our planned IoT-based system that could be used in real-time to detect and classify (or predict) possible cases of coronaviruses. This equally significant structure may be used to forecast the therapeutic response of reported cases and understand the effect of the coronavirus. Table 2 shows some of the popular symptoms of COVID-19 patients in comparison to the West Nile and Japanese encephalitis viruses. The system comprises five key modules, including the compilation and transmission of symptom data, a quarantine center, a review repository, and a mobile application for doctors for visualization, in which each module is linked via a cloud platform. Specifically, the presented model uses an IoT-fog-cloud-based framework for the assessment of COVID-19 data. Fog computing acts as the intermediary for the real-time analysis of data segments for the prediction of disease spread. The visualization module is appended to display the results to relevant doctors or caregivers. Figure 4 presents a layered view of the proposed methodology. The detailed functionality is presented below.

Data Accumulation
The purpose of the data accumulation module is to gather data on symptoms in real-time through a series of wearable sensors on the body of the patient. In our earlier study, based on COVID-19 data instances, the most important symptoms of COVID-19 were identified, including sore throat, fever, cough, shortness of breath, and exhaustion. Many biosensors are available to detect these symptoms. For example, temperature-based sensors may be used for the identification of fever. Cough can be identified using voice-inspired devices together with aerodynamic and acoustic classification techniques tuned for various ages. For fatigue detection, movement-focused and heartbeat sensors have been utilized. An image-based classification method can be used to diagnose a sore throat. Finally, it is possible to use oxygen-based sensors to identify respiration rates. Additionally, users' travel history over the past 28 days can be gathered via mobile applications over ad-hoc networks.
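The mapping from raw sensor readings to the binary symptom indicators used by the classifiers can be sketched as simple thresholding (an illustrative Python sketch; the field names and threshold values are assumptions for demonstration, not clinical guidance and not values from the paper):

```python
def encode_symptoms(reading):
    """Map raw wearable-sensor readings to the binary symptom vector
    (1 = vulnerable, 0 = non-vulnerable) consumed by the classifiers.
    All thresholds below are illustrative assumptions."""
    return {
        "fever": 1 if reading["temperature_c"] >= 38.0 else 0,
        "cough": 1 if reading["coughs_per_hour"] >= 10 else 0,
        "fatigue": 1 if reading["activity_index"] <= 0.3 else 0,
        "sore_throat": reading.get("sore_throat_flag", 0),
        "low_respiration": 1 if reading["spo2_percent"] < 94 else 0,
    }
```

In practice each threshold would come from the corresponding biosensor's calibration rather than fixed constants.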

Quarantine Center
A quarantine center gathers information on patients in a healthcare center who have been quarantined or segregated. These records include both essential and non-essential details. The essential details of each record contain temporal patterns of the aforementioned symptom parameters, while the non-essential details include travel history over the past 28 days, chronic illnesses, gender, age, and related information such as illness in the family. Each record would ultimately also include the treatment response for the case.

Data Analytics
Data processing and machine learning algorithms are hosted by the data center. These techniques have been utilized to construct a framework for coronavirus detection and display the processed data with a real-time dashboard. Based on time-sensitive information gathered and submitted by users, the presented framework can then be used to rapidly classify or forecast future COVID-19 events. The model can also forecast the therapeutic response of a patient. Over time, valuable knowledge about the nature of the disease can be provided by the proposed framework, which is built from the acquired information. For data analysis, a fog computing platform has been incorporated. The detailed data flow over the fog computing platform is depicted in Figure 5. The various components interact in a coordinated manner with the fog computing node in the proposed paradigm. Initially, IoT data are collected continuously from sensors placed in the body-area network of the individual. Based on the acquired data, the parametric data are delivered to the linked Raspberry Pi (fog computing device). The fog device is equipped with an ARM Cortex-A53 quad-core CPU running at 2.1 GHz and with 2 GB LPDDR2 SDRAM; it is equipped with the Raspbian Stretch operating system and Apache HTTP server 2.4.34. The data are sent via the ZigBee protocol, which is based on the IEEE 802.15.4 data communication standard. The fog node performs real-time local computations on the obtained data and sends a warning signal to doctors for susceptible ambient factors that might affect healthcare. Furthermore, the fog computing node uses HTTP RESTful APIs to connect with cloud services. It uploads input data and downloads results using the HTTP POST method. The data transfer is carried out utilizing the IEEE 802.11 WiFi protocol, which is widely available and easy to implement. Furthermore, Microsoft Network Monitor 3.4 is used to track network bandwidth use.
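The fog-to-cloud upload step can be sketched as building a JSON payload and an HTTP POST request (illustrative Python using only the standard library; the endpoint URL and field names are hypothetical, since the paper does not fix a payload schema):

```python
import json
import time
import urllib.request

def build_payload(user_id, symptoms):
    """Serialize one fog-node observation for the HTTP POST upload.
    The schema below is an illustrative assumption."""
    return json.dumps({
        "user_id": user_id,
        "timestamp": int(time.time()),
        "symptoms": symptoms,
    }).encode("utf-8")

def make_upload_request(payload, url="http://cloud.example.org/api/upload"):
    """Build (but do not send) the POST request a fog node would issue.
    Sending it would be urllib.request.urlopen(request)."""
    return urllib.request.Request(
        url,
        data=payload,
        method="POST",
        headers={"Content-Type": "application/json"},
    )
```

Keeping payload construction separate from transport makes the upload easy to test offline, which matters on an intermittently connected fog node.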
Amazon EC2 cloud with 1vCPU, 2 GB RAM, 3 GB SSD, and Windows Server 2016 is used for cloud-based data processing. Monitoring authorities can conduct two key functions at the cloud level: first, regularized ambient environment monitoring may be done in real-time from remote locations to analyze data. Software APIs created with the OS-SDK supplied with the device can be used to create the visualization. Second, prediction is implemented over a large number of datasets, with an automatic assessment of healthcare.

Medical Doctors
Doctors can track suspicious data that are uploaded in real-time and suggest a potential infection by our proposed identification/prediction model focused on machine learning. By conducting the additional clinical examination required to validate the situation, the doctors would then be able to respond rapidly to these suspicious cases. This will help confirmed cases to be isolated and for proper health care to be provided.

Cloud Data Repository
The cloud technology is integrated through the Internet, which (1) allows each user to upload real-time symptom data, (2) holds personal health information, (3) communicates forecast reports, (4) communicates doctor advice, and (5) preserves data.

1. Via wearable devices and sensors, the framework seamlessly captures time-sensitive information. A sore throat, cough, fever, exhaustion, and a low respiration rate are the most significant symptoms. The users also submit information about living in (or commuting to) contaminated areas through a smartphone application, as well as their potential interactions with people infected with COVID-19. The quarantine center presents information daily from its segregated patients. The context of the information is identical to the data obtained by users in real-time;
2. Via the cloud infrastructure, intercepted COVID-19 data are submitted to the information processing module with the aid of smart devices. Via the cloud platform, automated documents from the hospitals can be periodically submitted to the data processing center. Deep learning techniques are used that constantly refine the models using the data obtained from the health care center. Based on the time-sensitive information acquired from each individual, the models are then used to classify possible events. The data are processed and presented on a time-sensitive dashboard. The dashboard can provide insights about the existence of the virus for doctors;
3. The relevant specialists are contacted to check up with the patient if a possible case is found. For medical examination with a Polymerase Chain Reaction test, used to detect positive cases, the patient will be advised to attend the medical care center. The patient will be separated if the case is confirmed, and all of the patient's contacts will be traced and quarantined.
The use of the same smartphone program to advise consumers is a parallel and integral part of this structure, providing practical knowledge about how users can mitigate disease and how to prevent exposure to the virus.

Visualization
For real-time decision modeling, this layer serves as a presentation layer. This layer's major role is to show the expected outcome on a user's hand-held device. The result is displayed to the user in real-time on a liquid crystal display (LCD) or a smartphone for efficiency. In addition, the Self-Organized Mapping (SOM) approach enhances the visualization with an alert warning rather than presenting the numeric value of the parameters. SOM is a useful technique for locating disease hotspots. The goal of this layer is to use Geographical Information System (GIS) tools to investigate the spatial distribution of disease-endemic regions and find COVID-19 hot-spots utilizing spatial cluster analysis techniques such as Getis-Ord Gi* and SOM. Mapping is done using ArcGIS 10.2 software and the SOM approach to enable a dynamic display based on the color-coding scheme to undertake a GIS-based study of the geographical distribution of COVID-19. A cell phone is used to register each user with the system. Upon registration, the user is assigned a unique user identification number that will be used for all future interactions. The system stores each COVID-19 diagnosis, as well as the categorization findings and medical data gathered. The findings and information recorded here are accessible to registered medical staff, users, hospitals, and healthcare providers. Users and approved agencies can access medical records at any time and from any location. Similarly, an alert message regarding a person's status is kept on cloud storage for expert analysis so that fast action and preventative measures may be taken.

Case Prediction Analysis
The deep/machine learning techniques used in the data processing module of the presented IoT-inspired framework are further explored in this section. Specifically, a simulation was carried out to explore the feasibility of incorporating machine learning techniques to easily classify (or predict) possible infections with COVID-19. This experimental setup is described in the remainder of this section.

Data Instances
In total, 15,324 data instances of confirmed COVID-19 cases were acquired from the CORD-19 repository. The data contained various forms of details about each case. The current research focuses on healthcare signs including fever, headache, cough, chills, and other symptoms concerning COVID-19 patients. The data were acquired in the form of (0,1), where 0 means non-vulnerable and 1 indicates vulnerable. However, for several of the instances documented in the database, some of this information was incomplete. The data were also not well organized for use by machine learning algorithms.

Data Pre-Processing
In the current work, the data were pre-processed and organized to be the best fit for machine learning. The acquired data culminated in an 80-symptom list. Many of these signs are redundant, so the number of symptoms was limited to 19. The synthesis of the COVID-19 parameters was carried out by two medical doctors in an ad-hoc fashion. For example, the symptoms "anorexia" and "loss of appetite" were merged. The relative value of these 19 signs was also calculated. The following separate feature selection algorithms were used to rank the 19 signs based on their respective significance: inter-quartile range, spectral measure, Pearson correlation, knowledge measure, and weighted variance-based feature selection. In addition to ranking the symptoms, weights were associated with each of them. The five most significant symptoms (ordered from most important to least important) were cough, fever, tiredness, sore throat, and respiration rate. This study used these five most significant signs. Moreover, two new attributes were included: sense and touch. The first attribute reflected whether or not the user had traveled to or passed through a potentially contaminated environment. The second attribute referred to whether or not the individual was considered to have been in contact with an infectious individual. This process culminated in the pre-processed dataset used in the experiments.
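One of the listed selectors, Pearson correlation, can be sketched as ranking each binary symptom column by the absolute value of its correlation with the class label (an illustrative NumPy sketch; the toy data and symptom names are assumptions, and the paper combined several selectors rather than this one alone):

```python
import numpy as np

def rank_features_by_correlation(X, y, names):
    """Rank symptom columns of X by |Pearson correlation| with label y,
    most significant first."""
    scores = []
    for j, name in enumerate(names):
        r = np.corrcoef(X[:, j], y)[0, 1]  # Pearson r for column j vs label
        scores.append((abs(r), name))
    return [name for _, name in sorted(scores, reverse=True)]
```

The absolute value is taken because a strongly negative correlation is just as informative for prediction as a strongly positive one.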

Performance Assessment
Four performance measurements were used to assess the effectiveness of the eight techniques: accuracy, root mean square error, F-measure, and ROC area. Confusion matrix and cross-validation approaches were used to compute these measures.

Confusion Matrix
By generating a two-by-two matrix, the confusion matrix was used to visualize the output of a binary (two-class) supervised learning problem. Each row in the matrix represented the instances in the expected (or computed) class, while the instances in the real class were presented in each column. Four values made up the resulting matrix.

1. True positive (TP): Total instances predicted as positive (by the model) that are actually positive;
2. False positive (FP): Total instances predicted as positive (by the model) that are actually negative;
3. False negative (FN): Total instances predicted as negative (by the model) that are actually positive;
4. True negative (TN): Total instances predicted as negative (by the model) that are actually negative.
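The four counts above can be computed directly from paired label lists; a minimal sketch with made-up labels (1 = positive):

```python
def confusion_counts(actual, predicted):
    """Compute TP, FP, FN, TN for a binary classification result (1 = positive)."""
    tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
    fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
    fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
    tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)
    return tp, fp, fn, tn

actual    = [1, 1, 0, 0, 1, 0]
predicted = [1, 0, 0, 1, 1, 0]
print(confusion_counts(actual, predicted))  # (2, 1, 1, 2)
```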

Cross-Validation
Cross-validation is a statistical approach used to assess the efficiency of learning and classification methods. This is achieved by separating the available labeled instances into k folds; in each iteration, one of those folds is used for testing and the remaining folds are used for training. Ten-fold cross-validation was used in this work: the data instances were split into 10 folds, and over 10 iterations each fold was used exactly once for testing while the other nine folds were used for training, such that every instance was tested once.

Accuracy
A classifier's accuracy was measured as the number of correctly classified instances out of the total number of instances:

Accuracy = (TP + TN) / (TP + TN + FP + FN)
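The 10-fold split described above can be sketched as follows; the fold construction is a minimal illustration, not the paper's implementation:

```python
def kfold_indices(n, k=10):
    """Split n instance indices into k roughly equal, disjoint folds."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cross_validation_splits(n, k=10):
    """In each of k iterations, one fold is held out for testing and the
    remaining k-1 folds are used for training."""
    folds = kfold_indices(n, k)
    splits = []
    for i in range(k):
        test = folds[i]
        train = [idx for j, f in enumerate(folds) if j != i for idx in f]
        splits.append((train, test))
    return splits

splits = cross_validation_splits(100, k=10)
print(len(splits))        # 10 train/test iterations
print(len(splits[0][1]))  # each test fold holds 10 of the 100 instances
```

In real use the instances would be shuffled (often with stratification by class) before folding; that step is omitted here for brevity.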

Root Mean Square Error
The root mean square error (RMSE) was measured as the square root of the average squared difference between the predicted and the actual classes (or labels).

F-Measure
The F-measure was determined by combining the precision and recall measures:

F-measure = (2 × Precision × Recall) / (Precision + Recall)
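A minimal sketch of these metrics, computed from confusion-matrix counts, with toy numbers chosen only for illustration:

```python
import math

def accuracy_and_f_measure(tp, tn, fp, fn):
    """Accuracy and F-measure from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    return accuracy, f_measure

def rmse(actual, predicted):
    """Square root of the mean squared difference between label sequences."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

acc, f1 = accuracy_and_f_measure(tp=45, tn=40, fp=5, fn=10)
print(round(acc, 2))                  # 0.85
print(rmse([1, 0, 1, 1], [1, 0, 0, 1]))  # 0.5
```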

ROC Curve
Another means of evaluating the output of a classifier was the receiver operating characteristic (ROC) curve. The true positive rate was plotted against the false positive rate to achieve this. Then, the area under the resulting ROC curve was used to gauge the classifier's accuracy. The closer the area was to 1, the more accurate the classifier was.
True Positive Rate = TP / (TP + FN) and False Positive Rate = FP / (FP + TN)
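A minimal sketch of how the ROC area can be computed: the true and false positive rates are evaluated at every decision threshold and the area is accumulated with the trapezoidal rule. The scores are synthetic and chosen to be perfectly separable:

```python
def tpr_fpr(actual, scores, threshold):
    """True/false positive rates at a given decision threshold."""
    pred = [1 if s >= threshold else 0 for s in scores]
    tp = sum(a == 1 and p == 1 for a, p in zip(actual, pred))
    fn = sum(a == 1 and p == 0 for a, p in zip(actual, pred))
    fp = sum(a == 0 and p == 1 for a, p in zip(actual, pred))
    tn = sum(a == 0 and p == 0 for a, p in zip(actual, pred))
    return tp / (tp + fn), fp / (fp + tn)

def roc_auc(actual, scores):
    """Area under the ROC curve via the trapezoidal rule over all thresholds."""
    rates = sorted(tpr_fpr(actual, scores, t) for t in set(scores))
    points = [(0.0, 0.0)] + [(f, t) for t, f in rates] + [(1.0, 1.0)]
    points.sort()
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2
    return area

actual = [1, 1, 0, 0]
scores = [0.9, 0.8, 0.3, 0.1]   # perfectly separable classifier scores
print(roc_auc(actual, scores))  # 1.0
```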

Confusion Matrix
The confusion matrices arising from the application of 10-fold cross-validation to the eight chosen classifiers are shown in Figure 6. In these matrices, large numbers in the upper-left and lower-right cells represent good scores, while large numbers in the lower-left and upper-right cells represent poor scores. Figure 7 displays the ROC curves that resulted from the eight chosen classifiers after 10-fold cross-validation and compares the algorithms' results. Each algorithm's accuracy, root mean square error, F-measure, and ROC area were measured using the well-known 10-fold cross-validation method. The findings of Table 3 and Figure 8 indicate that the models built using the SVM, Neural Network, Naïve Bayes, K-NN, and Decision Table algorithms were successful in predicting confirmed and possible COVID-19 cases. This indicates that a combination of these five strong models may be used by our suggested IoT-based system, which may be achieved by aggregating the outcomes of the five learned models according to plurality votes.
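The plurality-vote aggregation mentioned above can be sketched in a few lines; the per-model outputs shown are hypothetical, not results from the paper:

```python
from collections import Counter

def plurality_vote(predictions):
    """Aggregate per-model predictions for one instance by plurality vote."""
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical outputs of the five strongest classifiers for one user
# (SVM, Neural Network, Naive Bayes, K-NN, Decision Table)
model_outputs = ["positive", "positive", "negative", "positive", "negative"]
print(plurality_vote(model_outputs))  # positive
```

With five voters and a binary label, ties cannot occur, which is one practical reason to ensemble an odd number of models.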

Conclusions
An IoT-fog-cloud-based prediction and data analysis framework to reduce the spread of COVID-19 has been presented. The proposed framework incorporates machine and deep learning-based predictive models for COVID-19 monitoring, as well as providing doctors with knowledge about possible COVID-19 cases and the clinical data of reported COVID-19 cases. Moreover, the presented framework enables the real-time classification of infected patients for the detection of the spread of COVID-19. An experiment was performed on a real COVID-19 dataset to test eight machine/deep learning algorithms: (1) Support Vector Machine, (2) Neural Network, (3) Naive Bayes, (4) K-Nearest Neighbor (KNN), (5) Decision Table, (6) Dense Neural Network, (7) OneR, and (8) LSTM. The results show that, except for the Dense Neural Network, OneR, and LSTM, all algorithms obtained more than 90% accuracy. Thus, the efficient and reliable detection of possible cases of COVID-19 can be provided through the use of deep/machine learning techniques. With early case identification, the use of the proposed real-time fog platform could potentially reduce the effects of the spread of COVID-19 as well as mortality rates. In the future, research can be performed considering the security aspects of the proposed model.