End-To-End Deep Learning Framework for Coronavirus (COVID-19) Detection and Monitoring

Coronavirus (COVID-19) is a new virus that causes viral pneumonia. It can spread worldwide through person-to-person transmission. Although several medical companies provide cooperative monitoring healthcare systems, these solutions lack end-to-end management of the disease. The main objective of the proposed framework is to bridge the gap between current technologies and healthcare systems. The wireless body area network, cloud computing, fog computing, and clinical decision support system are integrated to provide a comprehensive and complete model for disease detection and monitoring. By monitoring a person with COVID-19 in real time, physicians can guide patients with the right decisions. The proposed framework has three main layers (i.e., a patient layer, cloud layer, and hospital layer). In the patient layer, the patient is tracked through a set of wearable sensors and a mobile app. In the cloud layer, a fog network architecture is proposed to solve the issues of storage and data transmission. In the hospital layer, we propose a convolutional neural network-based deep learning model for COVID-19 detection based on the patient's X-ray scan images and transfer learning. The proposed model achieved promising results compared to the state-of-the-art (i.e., accuracy of 97.95% and specificity of 98.85%). Our framework is a useful application, through which we expect a significant effect on limiting COVID-19 proliferation and a considerable lowering of healthcare expenses.


Introduction
Real-time monitoring systems have a substantial impact on developing healthcare-monitoring and therapeutic interventions. Patient-monitoring systems are considered to be the most important mobile health service that helps patients to perform their daily life activities, by monitoring their vital signs [1,2]. In the e-healthcare sector, several technologies, such as the Internet of Things (IoT), cloud computing, and fog computing, have recently been integrated to provide elastic real-time clinical solutions. These solutions not only enrich business by reducing costs, but also address various clinical challenges, by facilitating a set of operations such as the sensing, processing, and communication of physical and biomedical parameters [3]. The IoT-based wireless body area network (IoT-WBAN) is mainly constructed of a wireless sensor network of wearable embedded computing devices. IoT-WBAN offers new opportunities for real-time monitoring of patient health status and for promptly controlling patients' reactions. Manirabona et al. [4] introduced WBAN as a promising solution for patient monitoring in medical environments. They maximized the utilization of data by sharing the information generated by sensors placed on the patient's body or under the skin. These sensors are used to collect certain body parameters or vital signs (e.g., electrocardiogram (ECG), electroencephalogram (EEG), body movement, temperature, blood pressure, blood glucose, heartbeat, and respiration rate levels). The collected data are then used for different purposes, such as in-hospital monitoring, remote diagnosis, ambulatory patient monitoring, and triggering emergency services. Every WBAN node has an implementation method within or on the body and a role in the network, as defined by the IEEE 802.15.6 taxonomy.
WBAN could have a significant impact on the management of the spread of COVID-19 infection.
Coronavirus disease is considered one of the most critical viral diseases on earth. It can spread quickly among humans, as well as animals [5]. One Harvard professor stated that around 70% of the global population might be infected with COVID-19 during the coming years. The distribution of COVID-19 cases for the ten most infected countries between 22 January 2020 and 5 August 2020 is shown in Figure 1. As a result, the need for monitoring systems is highly increasing [6]. COVID-19 confirmed cases need an isolation room with continuous monitoring, and 6% of them need ICU beds to save their lives. Most developing countries suffer from a serious shortage of hospital rooms, as well as ICU beds. As a result, patients whose lives are at risk are being turned away from the ICU units. Four in five intensive care units have to send patients to other hospitals due to the shortage of care units [7]. On the other hand, high-risk patients outside hospitals need to be monitored frequently by checking their vital signs, such as temperature, blood pressure, respiratory rate, drip level, etc. The admission time to the ICU is a critical decision, and various studies demonstrated that the late identification of clinical deterioration leads to delayed therapeutic intervention and results in an increased mortality rate [8]. With the rapidly growing number of COVID-19 cases, it is impossible for non-expert physicians to make the most accurate decisions at the right time. Therefore, various studies have been conducted based on COVID-19 infection statistics and patients' metadata. Some studies tried to find a relation between the risk of being infected and other factors, such as age [9] and diseases (e.g., diabetes) [9]. Other studies used patients' chest X-ray images or CT scans to build deep learning models for detecting patients with sepsis [10,11].
In this paper, we propose a complete mobile clinical-decision-support system (CDSS) that could continuously monitor patients inside and outside hospitals. The system automatically collects patient data and provides an online communication link between a patient and healthcare professional. This will contribute to improving the quality of patient care. Our system perspective is based on a set of sensors, including temperature, heart rate, respiratory rate, SpO2 saturation, blood pressure, etc.
Smart devices, such as smart phones, remotely receive the information sensed by these sensors and act as bridges between patients and healthcare personnel [12]. Collected information is aggregated in an electronic health record (EHR) database stored on the cloud [13]. In addition, on the hospital side, a deep neural network based on a patient's chest X-ray images is proposed, to help non-expert physicians automatically detect COVID-19 cases. The contributions of this paper can be summarized as follows: (a) This is the first work that provides an end-to-end (E2E) deep-learning-assisted communication framework for COVID-19 disease management. (b) It provides a monitoring system that could limit the spread of infection through the continuous monitoring of all suspected and infected patients. (c) The proposed scheme is the first attempt in the COVID-19 domain that integrates both fog and cloud computing paradigms to solve the problems related to power consumption, transmission issues, data analysis, etc. (d) Unlike existing works, the present work focuses on building a complete picture from patients' data, to help clinicians in diagnosis. (e) The proposed classification model could detect and classify COVID-19 patients based on chest X-ray images, with promising results.
The remainder of this paper is organized as follows. Section 2 presents the related work. The challenges of a remote patient-monitoring system (RPMS) are discussed in Section 3. The proposed framework is presented in Section 4. Section 5 describes the dataset. The classification model is discussed in Section 6. Results are discussed in Section 7. Section 8 provides the limitations of our contribution. Finally, a conclusion is provided in Section 9.

COVID-19 Pandemic
Coronaviruses are non-segmented positive RNA viruses that belong to the Coronaviridae family, which is widely distributed among humans and other mammals. Although coronavirus infections in humans are mild, the epidemics of severe acute respiratory syndrome coronavirus (SARS-CoV) [14] and Middle East respiratory syndrome coronavirus (MERS-CoV) [15] have resulted in more than 10,000 cumulative cases in the past two decades (with mortality rates of 10% for SARS-CoV [16] and 37% for MERS-CoV [17]). In 2019, many unknown cases of pneumonia appeared in Wuhan, China. Deep-sequencing studies of respiratory tract samples revealed a novel coronavirus, which is called COVID-19. To date, more than 19,193,980 patients have been reported in 210 different countries and territories around the world. Therefore, various medical and non-medical studies have been conducted to understand the nature of the virus.
First, several studies have been conducted to find a correlation between COVID-19 progression and other factors, such as age, heart diseases, etc. In References [18][19][20][21], the authors reported that age is one of the main risk factors for disease complications. Among 44,000 infected cases in China, the fatality rate for the elderly (>80 years) was 14.8%; for those aged 70-79 years, 8.0%; and for those aged 60-69 years, 3.6%. The epidemiologic data in the USA also reported that the fatality rate among people aged over 85 years ranged between 10% and 27%, followed by people aged between 65 and 84, with 3% to 11%. Other studies reported a case fatality of 10.5% among patients with cardiovascular disease, 7% for patients with diabetes, and approximately 6% for patients with respiratory diseases [9,19,22]. Huang et al. [23] aggregated metadata of COVID-19 patients and used data-mining techniques to analyze these data, such as extracting relations between smoking and infection.
Second, many studies tried to forecast the number of infected cases based on the currently collected statistics. Id et al. [24] used time-series forecasting techniques to predict the number of deaths and recoveries. Li et al. [25] tried to find the propagation rule of the virus. First, they developed a dynamic model for infected cases; second, they built a statistical model based on time-series analysis and used data mining to find an epidemic law of infection. Jia et al. [26] used three machine learning (ML) models (the logistic model, Bertalanffy model, and Gompertz model) to analyze the collected number of infections and to predict the expected number of infections.
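To make the forecasting approach concrete, the sketch below fits a logistic growth model, the first of the three models used by Jia et al., to a series of cumulative case counts. The case numbers here are synthetic placeholders, not data from any of the cited studies.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Logistic growth: K = final epidemic size, r = growth rate, t0 = inflection day."""
    return K / (1.0 + np.exp(-r * (t - t0)))

# Hypothetical cumulative case counts over 10 consecutive days (illustrative only).
days = np.arange(10)
cases = np.array([40, 65, 110, 180, 300, 470, 650, 810, 920, 980], dtype=float)

# Fit the three parameters to the observed counts.
(K, r, t0), _ = curve_fit(logistic, days, cases, p0=[1000, 0.5, 5], maxfev=10000)

# Extrapolate the expected number of infections for the next day.
predicted_day_10 = logistic(10, K, r, t0)
print(f"Estimated final size K = {K:.0f}, predicted cases on day 10 = {predicted_day_10:.0f}")
```

The fitted parameter K directly estimates the expected final number of infections, which is why logistic-type models are a natural first choice for epidemic curves.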
Third, various studies based on ML and deep learning models have been used to predict, identify, and classify patients with COVID-19 [27]. For example, Ozturk et al. [28] provided a deep learning model based on DarkNet and a convolutional neural network (CNN), trained on 125 chest X-ray images; they achieved an accuracy of 87.02% for multiclass classification (COVID-19 vs. pneumonia vs. no finding), and an accuracy of 98.08% for the COVID-19 vs. no-finding problem. Similar approaches were adopted in References [29,30]. Barstugan et al. [31] utilized 150 CT images for the detection of COVID-19. First, they used the discrete wavelet transform (DWT) to extract features, and then they used a support vector machine (SVM) for classification. They achieved 99.68% classification accuracy with 10-fold cross-validation. Pathak et al. [10] used the ResNet (Residual Neural Network) transfer-learning technique to classify CT-scan images.
To the best of our knowledge, none of the existing studies focuses on the deep-learning-aided E2E communication system to limit the infection of COVID-19. Therefore, a complete tracking system is highly required, which is the main objective of this study.

Healthcare Monitoring Systems
In this subsection, we concentrate on the role of mobile monitoring systems in the management of different diseases, including heart diseases, disability diseases, etc. [32,33]. These systems aim to make a patient's life easier, by introducing various supportive services (e.g., telemedicine, teleconsultations, etc.). A comprehensive review of mobile health-monitoring systems (MHMs) is presented by El-Sappagh et al. [34] and Sruthi et al. [35]. Researchers in MHMs divide the development of monitoring systems into three main steps: (1) acquisition of the patient's data and vital signs, (2) data transmission across various network systems, and (3) the backend system (database system), usually located in the hospital or on the cloud. In this subsection, various studies are analyzed, at different levels of detail, to provide a complete picture of monitoring systems. Pandey et al. [36] proposed a monitoring system for cardiovascular diseases. They used wearable sensors to collect vital signs (e.g., blood pressure, galvanic skin response (GSR), and electrocardiogram (ECG)), and Bluetooth to transmit the WSN data to a mobile device. The aggregated data are stored on the patient's mobile device and then used to classify the patient's health status, determining whether the patient's condition is "continued risk" or "no longer risk", using a support vector machine (SVM). Sahoo et al. [37] proposed a mobile monitoring system that used a noncontact method to automatically measure a patient's biomedical signals. The idea of this research is to monitor the patient's ECG in various locations (e.g., office, car, etc.) through fabric sensor electrodes embedded in the patient's chair. Bluetooth is used to transmit signals from the sensor to the patient's mobile device, and the system provides feedback to the patient when any risk is detected.
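The SVM classification step in such systems can be sketched as below. The feature choice, training values, and threshold behavior here are illustrative assumptions, not the actual data or features of Pandey et al.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical training vectors: [systolic BP (mmHg), heart rate (bpm), GSR]
X = np.array([
    [120, 72, 0.3], [118, 70, 0.2], [125, 75, 0.4],    # stable patients
    [160, 110, 0.9], [155, 105, 0.8], [170, 118, 1.0], # at-risk patients
])
y = np.array([0, 0, 0, 1, 1, 1])  # 0 = "no longer risk", 1 = "continued risk"

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)

new_reading = np.array([[158, 108, 0.85]])  # incoming sensor reading
label = clf.predict(new_reading)[0]
print("continued risk" if label == 1 else "no longer risk")
```

In a deployed system, the classifier would be trained on clinician-labeled records, and the prediction would trigger the feedback path back to the patient.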
Such systems, which depend only on a mobile device in the patient-monitoring process, are not suitable for patients with chronic diseases, as they have many limitations relating to limited storage and reliance on incomplete patient personal and medical data. These limitations have prevented their acceptance by the medical community. Therefore, building effective and efficient RPM systems is highly required, especially with the health organizations' inability to provide health services due to the COVID-19 pandemic.
Using cloud services added many capabilities and solved many limitations of RPM. Cloud computing (CC) provides services that can be divided into three layers: infrastructure as a service (IaaS), which provides high-volume storage, as in the Amazon S3 storage service; platform as a service (PaaS), which offers both storage and services, such as Google App Engine; and software as a service (SaaS). Cloud computing combined with a WSN provides promising monitoring systems that enhance the quality of service. It allows real-time access to patient data at any time and from anywhere. It also permits processing, analyzing, and sharing data; moreover, it provides high-volume storage at a low cost and reduces the burden of in-hospital data management. Pandey et al. [38] proposed a system that integrates mobile computing and cloud computing to analyze ECG data. The authors provided a cloud environment to collect people's health data, such as ECG data, and disseminated them to a cloud-based information repository, which facilitated the analysis of the data by using software services hosted in the cloud. The limitations and challenges of cloud computing in RPM systems are confined to the power and time consumption, privacy, and security of the transmitted data. The proposed system tries to handle some RPM challenges and uses COVID-19 as a case study. Table A1 in Appendix A provides a summary of patient-monitoring systems.

The General Architecture of RPMS
Most patient-monitoring systems consist of three main layers: data acquisition layer, storage layer, and backend layer. Figure 2 depicts the general architecture of RPMs.
Electronics 2020, 9, x FOR PEER REVIEW 5 of 25

Data-Acquisition Layer
Sensors play a key role in RPMSs. A sensor acts as a bridge between the physical world and the digital domain [39]. An RPMS uses various sensors to collect data about a patient's health status (e.g., ECG, SpO2, HR, etc.) and context data (e.g., room oxygen level, gas leakage, room temperature, light level, etc.). Actuators can respond to feedback from decision support systems (for example, adjusting the insulin dose delivered to a patient based on collected sensor data). A WBAN is an IoT-based wireless network used for remote and real-time monitoring. In healthcare applications, wireless sensor nodes are classified into two main types, based on implementation technique: (1) implanted sensors, which are implanted either in the patient's body or under the skin, and (2) external sensors, which are directly attached to the patient's skin or separated from the patient by 2-5 cm [40,41]. These sensors continuously measure the patient's vital signs and transmit the collected data to a remote location, using a wireless communication protocol (e.g., LoRa, WLAN, WiMAX, LTE, and UMTS).
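The acquisition step above can be sketched as a small sensor-node loop: sample the vital signs, serialize them into a record, and hand the record to the wireless transmission layer. All field names, values, and the JSON encoding here are illustrative assumptions; a real node would read from actual sensor drivers and use the protocol's own framing.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class VitalSigns:
    patient_id: str
    timestamp: float
    temperature_c: float
    heart_rate_bpm: int
    spo2_pct: int
    resp_rate: int

def sample_sensors() -> VitalSigns:
    # Placeholder for real sensor drivers (ECG, SpO2, temperature, ...).
    return VitalSigns("patient-001", time.time(), 37.9, 96, 94, 22)

def to_packet(reading: VitalSigns) -> bytes:
    """Serialize one reading for wireless transmission (e.g., over BLE or LoRa)."""
    return json.dumps(asdict(reading)).encode("utf-8")

packet = to_packet(sample_sensors())
print(len(packet), "bytes ready to transmit")
```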
Wireless sensor nodes allow RPMSs to capture features and information about patients, such as aggregating real-time vital signs; tracking a patient's symptoms and the effect of the therapy plan; and monitoring patient communications and social activities. Various healthcare systems depend on wireless sensors, such as vital-sign-monitoring and health-assessment systems [42], reminder systems [43][44][45][46], and fall-detection-monitoring systems [47].

Cloud Computing
Cloud computing is a new paradigm that adds fruitful benefits to healthcare systems [48]. The benefits can be summarized in the following points: (1) It increases speed and efficiency; cloud monitoring systems usually use a group of servers to manage patient status, which increases system speed and efficiency. Ahnn et al. [49] showed that CC increased speed by 20 times and energy efficiency by 10 times. In addition, a hardware failure in one server does not dramatically affect the performance of the overall system. (2) It provides enormous amounts of storage for saving big healthcare data files (e.g., imaging data, time-series data, etc.). Moreover, it easily provides a means to share these files with other hospitals. (3) It helps in analyzing all patients' data (such as demographics, symptoms, therapy plans, and treatment) and makes all the data available, to support and improve decision-making processes [50,51]. (4) It provides servers that are more secure than local servers for storing patient data, as it uses security layers to protect cloud data from theft or hacking.
Cloud computing in conjunction with a WSN enables promising monitoring systems that can enhance the quality of service (QoS). The combination offers physicians the ability to monitor all patient data sensed with biosensors, regardless of the type of patient data. The limitations and challenges of cloud computing in RPMSs are confined to the power and time consumption, privacy, and security of the transmitted data. In the last decade, fog computing was developed to handle the challenges associated with CC.

Fog Computing
Fog computing extends the traditional cloud to the edge of the network. The point of using fog computing is to move the processing of some latency-sensitive applications to the edge (near the end device), while other processing can still be done in the cloud. Problems related to location awareness, reliability, latency, and many other challenges are resolved by using the fog-computing architecture [52]. Fog computing in RPMS is a new concept. It provides many advantages over the cloud:
• Data are processed and analyzed locally instead of being sent to the cloud; this consumes less bandwidth and decreases overall costs [53].
• Processing data locally decreases time latency during transmission, which helps to avoid problems, especially for time-sensitive applications (e.g., real-time monitoring, self-driving cars, etc.).
• It provides better privacy to users, as a patient's data can be analyzed locally instead of being sent to the cloud.
• Deploying fog servers in an RPMS decreases the bandwidth required for transmission and provides real-time data to doctors, without the need for an internet connection [54].
• It saves power consumption compared with continuously transmitting to cloud servers [55].
Fog IoT systems are divided into three main layers: the device layer, fog layer, and cloud layer. Dastjerdi et al. [56] provided a tutorial that discusses the differences between edge computing, fog computing, and cloud computing. Stojmenovic et al. [57] discussed the role of fog computing in various domains. Figure 3 shows the basic architecture of the fog computing model for RPMS. Table 1 provides a comparison between fog and cloud computing in terms of many factors. Note: The comparison concentrates on factors related to real-time monitoring.
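The fog-versus-cloud division of labor can be sketched as a simple routing rule: latency-critical vital-sign events are handled at the nearest fog node, while routine data is forwarded to the cloud for long-term storage. The thresholds below are illustrative assumptions, not clinical cutoffs from the paper.

```python
def route_reading(reading: dict) -> str:
    """Decide whether a vital-sign reading needs local (fog) or cloud handling."""
    critical = (
        reading.get("spo2_pct", 100) < 92
        or reading.get("heart_rate_bpm", 0) > 120
        or reading.get("temperature_c", 0.0) >= 39.0
    )
    # Critical events need an immediate local decision; everything else can
    # tolerate the cloud round-trip latency.
    return "fog" if critical else "cloud"

print(route_reading({"spo2_pct": 90, "heart_rate_bpm": 88}))   # latency-critical
print(route_reading({"spo2_pct": 97, "heart_rate_bpm": 75}))   # routine
```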

Backend Layer
This section focuses on the database layer (third layer) and its role in remote patient monitoring. Medical data vary in nature and characteristics, with uncertainty and a high rate of missing values [58]. The early developments of the artificial intelligence and medical informatics fields have culminated in what is known as the clinical-decision-support system (CDSS) [13]. The CDSS is considered to be the brain of a healthcare system, and it is used to assist healthcare teams in the decision-making process [59].
The literature indicates that the use of CDSSs has an important impact on monitoring systems. CDSSs could provide the following: (1) a comprehensive healthcare view of the patient's medical history; (2) help to non-expert physicians, by providing clinical guidelines, practice standards, and differential diagnoses; and (3) help to patients, by offering several assistive tools, such as drug-schedule reminders, drug prescriptions, and drug-dose interactions. All these factors increase the importance of using a CDSS in remote monitoring systems, especially in remote areas. Velickovski et al. [60] provided a CDSS for chronic obstructive pulmonary disease (COPD). CDSSs have also been used for other diseases, such as Alzheimer's [61] and diabetes [62]. To build an efficient CDSS, various components must be integrated to provide a complete picture of a patient's health status, such as the patient EHR, a knowledge base, clinical practice guidelines, etc. Furthermore, ML should continuously refresh the CDSS's knowledge base. For the convenience of the reader, Table A2 in Appendix B includes all terms with their abbreviations.
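A CDSS rule layer of the kind described above can be sketched as a set of functions that inspect a patient record and emit advisories for the care team. The rules, field names, and thresholds below are illustrative placeholders, not clinical guidance from the paper or the WHO.

```python
def check_rules(record: dict) -> list[str]:
    """Run simple, illustrative CDSS rules against one patient record."""
    advisories = []
    if record.get("age", 0) >= 65 and "influenza_vaccine" not in record.get("history", []):
        advisories.append("Consider influenza vaccination (age >= 65).")
    if record.get("crp_mg_l", 0) > 10:
        advisories.append("Elevated CRP: review for acute inflammation.")
    if {"warfarin", "aspirin"} <= set(record.get("medications", [])):
        advisories.append("Drug interaction: warfarin + aspirin (bleeding risk).")
    return advisories

patient = {"age": 70, "history": [], "crp_mg_l": 22,
           "medications": ["warfarin", "aspirin"]}
for advisory in check_rules(patient):
    print(advisory)
```

In the framework, this is the component whose knowledge base the ML models would continuously refresh, replacing hand-written thresholds with learned ones.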

The Proposed Framework
The main objective of our framework is to provide a complete system that limits the proliferation of COVID-19. It concentrates on early detection and isolation because these are the most important factors that could help in reducing the number of COVID-19 cases. There are many MHMs, but most of them focus on helping patients by monitoring their vital signs through wearable sensors. Other studies focus on other issues, including data transmission and integration. None of the previous research studies proposed an end-to-end solution. Therefore, in this study, a complete health-monitoring system is proposed. The system helps in early detection, continuous monitoring for suspected cases, and ensuring patient isolation for suspected and confirmed cases. The framework is divided into three main parts (i.e., patient side, cloud side, and hospital side). Figure 4 depicts the flow of data in the proposed framework.

Patient Side
The proposed system on this side depends on a lightweight mobile application that continuously tracks the patient's status. The following is the set of modules in our mobile system: (1) person identification, in which each person creates an account, using his/her identification number; and (2) patient tracking, in which all suspected and infected patients are tracked, using the GPS geolocation function of their mobile devices. Figure 5 shows the different stages of patients in our proposed system. In Figure 6, screens (1)-(4) show the mobile app's identification and tracking functions. All of the patient's data updates, including status, measurements, and movements, are frequently synchronized with the backend database. On the other hand, each building or public transport vehicle is assigned a unique QR code, which is added to print media (such as brochures or flyers) at its entrance. People are not permitted to pass unless they scan the QR code. Figure 6 (5) shows the QR screen. Once a person scans the QR code, the person's status (normal, infected, etc.) is retrieved from the backend database. Healthy cases are permitted to pass normally; suspected cases pass through a special path, to control the spread of infection; and infected cases are not permitted to pass. Tracking people's movements prevents all suspected and confirmed cases from walking around and spreading the infection. Additionally, if anyone is infected with COVID-19 at any time, all of his/her close contacts can be traced and followed, using these data.
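The gate-keeping decision at a QR checkpoint can be sketched as follows. The status values follow the paper's patient stages; the in-memory dictionary is only a stand-in for the backend database lookup.

```python
# Stand-in for the backend database keyed by identification number.
STATUS_DB = {"u1": "normal", "u2": "suspected", "u3": "confirmed"}

def gate_decision(person_id: str) -> str:
    """Map a scanned person's stored status to a checkpoint action."""
    status = STATUS_DB.get(person_id, "unknown")
    if status in ("normal", "recovered"):
        return "pass"
    if status in ("suspected", "highly suspected"):
        return "special path"   # controlled route to limit contact
    return "deny"               # confirmed cases and unknown IDs are not admitted

print(gate_decision("u1"), "|", gate_decision("u2"), "|", gate_decision("u3"))
```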
In our proposed framework, we treat patients at five stages (normal, suspected, highly suspected, confirmed, and recovered). For each of these stages, different steps are taken to ensure patient safety. The mobile application is used to manage cases at the early stages of the disease.
Normal cases: If anyone starts feeling any symptoms, such as fever, vomiting, infiltration, etc., he/she first computes an initial COVID-19 score. This score is based on the WHO guidelines [63]. We provide a graphical user interface (GUI) form through which the patient manually enters the list of symptoms that determine his/her initial state. In a study of 10,114 patients [64], fever was present as the first symptom in COVID-19 cases. In other studies, the authors reported that vomiting or diarrhea is sometimes present prior to fever [65,66]. Table 2 contains all symptoms and their scores. If the initial score is six or more, the patient's status is updated to suspected, and he/she is asked to take the lab tests.
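The scoring step can be sketched as below. The symptom weights are illustrative placeholders (the actual scores are defined in Table 2); only the threshold of six follows the text above.

```python
# Illustrative symptom weights -- the paper's Table 2 defines the real scores.
SYMPTOM_SCORES = {"fever": 3, "dry_cough": 2, "vomiting": 1,
                  "diarrhea": 1, "shortness_of_breath": 3}

def initial_score(symptoms: list[str]) -> int:
    """Sum the weights of the symptoms entered through the GUI form."""
    return sum(SYMPTOM_SCORES.get(s, 0) for s in symptoms)

def triage(symptoms: list[str]) -> str:
    # A score of six or more flags the person as suspected (lab tests required).
    return "suspected" if initial_score(symptoms) >= 6 else "normal"

print(triage(["fever", "dry_cough", "vomiting"]))  # score 6 -> suspected
print(triage(["vomiting", "diarrhea"]))            # score 2 -> normal
```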

Suspected case: According to the score resulting from the previous step, if the initial score exceeds six, the patient is asked to take the recommended plasma test. Due to the shortage of CT scan machines and the kits used in the swab test, the plasma test is considered a filtration step. According to Reference [67], the WHO stated that there is a correlation between lactate dehydrogenase (LDH) and white blood cell levels and COVID-19 infection. Several studies reported that lymphopenia is a common finding in more than 83% of hospitalized patients. Lymphopenia, neutrophilia, elevated aspartate aminotransferase, elevated lactate dehydrogenase, high C-reactive protein (CRP), and high ferritin levels are associated with high illness severity [19,61,64,68,69]. Table 3 includes all lab tests recommended by the WHO.
Highly suspected case: In this step, patients take a swab test and a CT scan, and all data from each of the previous steps are updated in the patient's record.
Confirmed case: In this step, patients are categorized into two categories. The first is severe disease, where patients require hospitalization to manage COVID-19 complications, such as hypoxemic respiratory failure (ARDS), pneumonia, sepsis, etc. [20,69-71]. The second is mild-to-moderate disease, where patients may not require hospitalization, and their health status can be managed at home through a continuous monitoring system. This type of patient is the core of our work. The decision on the monitoring mode (inpatient, outpatient) should be made case by case. It depends on several factors, including the results of the clinical examination, the potential risk factors, and the feasibility of home isolation. People at risk of severe disease should have close monitoring to avoid progression, especially in the second week of infection [18,23,69].
To enable home monitoring, sensors and actuators are deployed in the patient's home. These include wearable sensors, such as a temperature sensor and a heart rate sensor, and environmental sensors, such as a motion-detection sensor. The homeowner logs into the patient's mobile application installed on his/her device and authorizes access to the device data.
Recovered case: The incubation period for COVID-19 (the time between the initial infection and the onset of symptoms) is estimated to be up to 14 days, with an average time of four to five days [68,72]. Minaee et al. [73] reported that 97.5% of patients who develop COVID-19 symptoms will recover within 11.5 days. However, Ruiyun et al. [74] reported that 4% of COVID-19 patients may be able to spread the infection even after all symptoms have disappeared. Therefore, we depend on repeat swab tests to ensure patient safety and limit the spread of infection.

Frontend App
Users (patients, homeowners, and physicians) interact with the Android application through different profiles. A patient uses the app to register his/her basic information and to collect data from wearable devices. The physician monitors patient data and sets up personalized notifications. Figure 6 shows the main functions of the Android app.

Cloud Side
In this layer, we need what is called "real-time analysis". Most healthcare systems use cloud systems to store, analyze, and visualize all patients' data. Ideally, this scenario might be suitable for some healthcare systems, but during the current pandemic, where the general population is increasingly being affected, speed and quality of care are key. As a result, we need a layer that acts as a bridge between the cloud system and IoT devices; a layer that can analyze a patient's data quickly and is affected by neither internet connection status nor bandwidth. Fog computing can handle all of these challenges. As mentioned above, fog computing allows devices to perform critical analysis on their own, without the need for a cloud storage process. Therefore, using fog nodes could be the key to making the system truly useful for home-care patients. The information collected from the sensors is sent to the nearest fog node, using the LoRa network. LoRa is a remarkable protocol for short messages carried over a long range. It can send 12 bits every ten minutes, which is exactly what we need when patients are out of range of cell phone towers.
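Because LoRa payloads are so small, a vitals reading must be packed into a compact binary form before transmission. A minimal sketch using Python's standard `struct` module follows; the field layout (16-bit patient ID, temperature in 0.1 °C units, 8-bit heart rate) is our own illustrative choice, not part of the LoRa specification:

```python
import struct

# Illustrative compact payload for a vitals reading. The field layout
# (uint16 patient id, int16 temperature in 0.1 degC units, uint8 bpm)
# is an assumption for this sketch, not part of the LoRa specification.
VITALS_FORMAT = ">HhB"  # big-endian, 5 bytes total

def pack_vitals(patient_id, temp_c, heart_rate):
    """Encode one reading into a 5-byte payload small enough for a LoRa uplink."""
    return struct.pack(VITALS_FORMAT, patient_id, round(temp_c * 10), heart_rate)

def unpack_vitals(payload):
    """Decode a payload produced by pack_vitals back into its fields."""
    patient_id, temp_tenths, heart_rate = struct.unpack(VITALS_FORMAT, payload)
    return patient_id, temp_tenths / 10.0, heart_rate
```

Five bytes per reading keeps the message well within typical LoRa payload budgets, at the cost of fixing the precision of each field in advance.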

Fog Architecture
The goal of this experiment was to investigate the effectiveness of fog computing, to provide a better business solution that could improve the overall performance of our monitoring framework. The experiment was conducted on a personal computer (PC) with an Intel Core i5-4210U 1.70 GHz (Turbo Boost 2.70 GHz), 4 GB RAM, a 500 GB hard disk, and a 64-bit (AMD64) Ubuntu operating system (Intel, Santa Clara, CA, USA). Network Simulator version 2 (NS2.35) [75] was used as the prototype platform; being highly portable and open-source, it is designed to handle the events continuously generated by sensor networks.
The proposed scenario to integrate fog computing with the sensor network in NS2.35 was realized through three main steps: (i) initializing the network topology with the Tool Command Language (TCL), with the network size varying from 10 to 100 nodes; (ii) modifying a set of sensor nodes to act as fog nodes (this modification follows the Fog Hierarchical Deployment Model from the OpenFog Reference Architecture [76]); and (iii) leaving the rest of the sensor nodes to work as ordinary nodes, so they can exchange data among themselves. All experiment settings are listed in Table 4. In Figure 7, an application of the proposed scenario is introduced. The TCL file simulates the sensor network through a set of ordinary sensor nodes and two fog nodes. As mentioned before, the fog nodes are configured in NS2.35 according to the Fog Hierarchical Deployment Model from the OpenFog Reference Architecture. The code for these fog nodes was written in C++, to facilitate event exchange over the network. The X-dimension of the topography was 1800, and the Y-dimension was 840.
To determine the overall efficiency of the proposed scenario, a comparison was conducted between a system applying the fog computing approach and a system without it.
As seen in Figure 8, the proposed scenario with fog technology outperforms the traditional network without fog technology, due to its capability to reduce the total rate of data transmission; therefore, network bandwidth and power consumption are vastly improved.
Figure 8. A comparison between a system with fog technology and a system without it.

Backend Cloud Database
To enable the centralized storage of all clinical features, we developed a backend cloud database that utilizes Red Hat Linux 9, Apache, PHP, and MySQL. First, we set up Apache, MySQL, and PHP in the Linux environment, with Apache as the web server and MySQL as the database management and storage system. PHP is the language that allows the server to interact with the application. We built a database that revolves around the fog node and can easily engage with it. It contains four main tables, namely patients, physicians, devices, and the information extracted from patient data. Building a cloud database provides a dedicated storage cluster, which offers many advantages, including synchronous data replication, scalability in adding and removing replicas, and reduced complexity through the CLI and API.
A patient's data and status from different sources, such as medical presentation, wearable devices, scan tests, etc., are continuously updated in the patient's electronic health record (EHR), saved on the cloud, and sent to the analytics module, to determine whether the potential risk exceeds the predefined threshold. If a problem is detected, notifications are sent to both the patient and the physician.
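The analytics module's threshold check can be sketched as follows; the vital-sign names and acceptable ranges below are illustrative assumptions for this sketch, not clinical guidance:

```python
# Minimal sketch of the analytics module's threshold check.
# NOTE: the vital-sign names and limits are illustrative assumptions,
# not clinical guidance.
THRESHOLDS = {
    "temperature_c": (35.0, 38.0),   # (low, high) acceptable range
    "heart_rate_bpm": (50, 120),
    "spo2_percent": (94, 100),
}

def check_vitals(reading):
    """Return the vital signs whose values fall outside their acceptable range."""
    alerts = []
    for name, (low, high) in THRESHOLDS.items():
        value = reading.get(name)
        if value is not None and not (low <= value <= high):
            alerts.append(name)
    return alerts

def notify_if_needed(reading):
    """Decide whether notifications should be sent to the patient and physician."""
    alerts = check_vitals(reading)
    return {"notify": bool(alerts), "alerts": alerts}
```

A reading with a 39.2 °C temperature would trigger a notification, while one entirely within range would not.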

Hospital Side
For the in-hospital side, we concentrate on three main points: (1) building a classification model that classifies X-ray images as COVID-19 or normal cases; (2) building a complete cloud-integrated EHR system that aggregates data from patient sensors, medical presentation, lab tests, and X-ray tests; and (3) building a CDSS that depends on the distributed EHR, in addition to the WHO's clinical practice guidelines (CPGs).

Dataset Description
The COVID-19 dataset was collected from different regions of the world and is publicly available from References [77,78]. It consists of 622 cases (122 COVID-19 and 500 normal). Due to the shortage of training samples, and to balance the data, we performed data augmentation to increase the training sample size of the COVID-19 class. We ended up with 750 images (250 COVID-19 and 500 no-finding), and all images were resized to 192 × 192. Data were randomly split into 70% for training, 15% for validation, and 15% for testing.
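The 70/15/15 split just described can be sketched with only the Python standard library; the helper below is illustrative, with the test partition receiving whatever remains after the train and validation fractions:

```python
import random

def split_dataset(items, train_frac=0.70, val_frac=0.15, seed=42):
    """Shuffle and split items into train/validation/test partitions.

    The test partition receives whatever remains after the train and
    validation fractions, so the three parts always cover every item.
    """
    items = list(items)
    random.Random(seed).shuffle(items)  # deterministic shuffle for reproducibility
    n_train = int(len(items) * train_frac)
    n_val = int(len(items) * val_frac)
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]
    return train, val, test

# With the 750 augmented images (250 COVID-19 + 500 no-finding),
# this yields 525 training, 112 validation, and 113 test images.
train, val, test = split_dataset(range(750))
```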

Classification Model
According to the guidelines of the WHO, the diagnosis of COVID-19 can be confirmed with either a swab test, a CT scan, or a chest X-ray scan. In the current situation, most health organizations' identification procedures for COVID-19 patients are not quick enough to limit the risk of infection to the larger population. Therefore, an automatic identification tool is highly required.
Our classifier is based on a deep convolutional neural network (CNN/ConvNet) that can automatically identify COVID-19 cases based on chest X-ray images. A CNN is a deep learning algorithm that takes images as input and assigns weights to various aspects/characteristics in a way that distinguishes one class from another.
The following steps summarize the roles of the CNN layers. (1) To track and evaluate potential features, the CNN implements multiple convolution and pooling layers. (2) The max pooling layer is used to minimize the spatial size of the convolutional features and mitigates overfitting. It extracts the maximum of each region obtained from the previous convolutional layer. Figure 9a shows the max pooling layer, using stride 1 and a kernel size of 3. (3) The rectified linear unit (ReLU) is an activation function that passes positive inputs through unchanged and maps negative inputs to zero. Figure 9b shows the calculation method of the ReLU activation function. (4) Dropout layers are used to minimize overfitting on the training data. (5) Fully connected layers are used as a classifier that utilizes the extracted features to classify objects.
Unfortunately, one of the biggest challenges facing researchers in the analysis of medical data is the shortage of available datasets. Deep learning models mainly depend on the availability of large amounts of labeled data. Therefore, transfer learning is considered the most suitable technique for small datasets. Transfer learning is a method that allows a model trained on a small dataset to gain information (learned parameters) from a model pre-trained on a large dataset [79]. This technique has the benefits of decreasing the training time and the generalization error and increasing the overall performance of the model. Three popular models are used for transfer learning: VGG (VGG16 or VGG19), GoogLeNet (Inception V3), and Residual Networks (e.g., ResNet50) [80].
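The pooling and activation operations in steps (2) and (3) can be illustrated in a few lines of NumPy; this is a minimal 1-D sketch matching the stride-1, kernel-size-3 setting shown in Figure 9:

```python
import numpy as np

def relu(x):
    """ReLU: positive values pass through unchanged, negative values become zero."""
    return np.maximum(x, 0)

def max_pool_1d(x, kernel=3, stride=1):
    """1-D max pooling: slide a window over x and keep the maximum of each."""
    out = [x[i:i + kernel].max() for i in range(0, len(x) - kernel + 1, stride)]
    return np.array(out)

features = np.array([-2.0, 1.0, 3.0, -1.0, 0.5])
activated = relu(features)       # negatives zeroed: [0., 1., 3., 0., 0.5]
pooled = max_pool_1d(activated)  # window maxima:    [3., 3., 3.]
```

Real CNN layers apply the same two ideas over 2-D feature maps with many channels; the 1-D version keeps the mechanics visible.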
ResNet (Residual Neural Network) is an effective pre-trained model that was developed in 2015; it is an improved version of the CNN. It consists of a 50-layer network trained on the ImageNet dataset. ImageNet is an image dataset with 14 million images and 20 thousand categories [73]. The main advantage of ResNet-50 is the use of a new concept called the "skip connection", which helps to mitigate the problem of vanishing gradients. The skip connection (identity mapping) is depicted in Figure 10. This shortcut connection helps in understanding the global features [74]. It also allows the network to skip the disposable layers, thus resulting in optimal tuning and faster learning by increasing the network capacity [73]. Therefore, in our study, we used a ResNet-50 pre-trained model to obtain higher prediction accuracy with a small chest X-ray dataset. Figure 11 shows the block diagram of the pre-trained ResNet model used in this study. Mathematically, if we consider the input to be x, then the output, H(x), is defined as follows:
H(x) = F(x) + x (1)
The layers' weights learn a residual mapping that is given by the following:
F(x) = H(x) - x (2)
where F(x) denotes the stacked non-linear weight layers. ResNet-50 consists of several layers, as follows: (1) a 7 × 7 convolutional layer with 64 kernels, (2) a 3 × 3 max pooling layer followed by 16 residual blocks, (3) a 7 × 7 average pooling layer with stride 7, (4) a fully connected layer, and (5) finally, a SoftMax layer whose size is set to the number of classes in the dataset (40 in Reference [81]).
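The skip connection described above can be sketched directly: the block's output H(x) is the learned transform F(x) added to the untouched input x. This is a minimal NumPy sketch; the two-layer linear form of F is an illustrative stand-in for the convolutional stack:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0)

def residual_block(x, w1, w2):
    """Compute H(x) = F(x) + x, with F as a small two-layer transform.

    The skip connection adds the input x back to the learned mapping F(x),
    so even if F collapses to zero the block still passes x through intact,
    which is what mitigates vanishing gradients in deep networks.
    """
    f = relu(x @ w1) @ w2   # F(x): the stacked non-linear weight layers
    return f + x            # identity (skip) connection

rng = np.random.default_rng(0)
x = rng.standard_normal(4)
w1 = rng.standard_normal((4, 4))
w2 = rng.standard_normal((4, 4))
h = residual_block(x, w1, w2)

# With zero weights, F(x) = 0 and the block reduces to the identity: H(x) = x.
identity = residual_block(x, np.zeros((4, 4)), np.zeros((4, 4)))
```

The zero-weight case makes the point concrete: a residual block can do no worse than the identity, which is why very deep stacks of them remain trainable.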
Figure 11. The proposed classification model architecture.
The following steps summarize the procedure for building the COVID-19 classification model using the ResNet technique. (1) The fully connected layer of ResNet-50 was removed from the model, and the other convolutional layers were used as the base network for our classification model. (2) Batch normalization, dropout, and a fully connected layer were added to the base network. Batch normalization was added to provide rapid training of the base model. Dropout layers were added to avoid the overfitting problem.
(3) The SoftMax activation function was added in the last layer, which was used to classify images into the two classes (COVID-19 and normal cases). Our experiment was developed based on 622 X-ray images (122 COVID-19 and 500 normal cases). All images were resized to 192 × 192. Figure 10 shows the architecture of the classification model based on ResNet-50.
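The three steps above can be sketched with the Keras API. This is a minimal sketch, not the exact training code: `weights=None` keeps the snippet runnable offline (the actual model would load the ImageNet weights with `weights="imagenet"`), and the 128-unit dense layer size is an illustrative choice:

```python
# Sketch of the transfer-learning head described in steps (1)-(3).
# weights=None keeps the sketch runnable offline; the real model would
# use weights="imagenet". The 128-unit dense layer is illustrative.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50

# (1) ResNet-50 without its fully connected top, used as the base network.
base = ResNet50(weights=None, include_top=False, input_shape=(192, 192, 3))

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    # (2) Batch normalization, a fully connected layer, and dropout on top.
    layers.BatchNormalization(epsilon=1e-5, momentum=0.1),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.30),
    # (3) SoftMax output over the two classes (COVID-19, normal).
    layers.Dense(2, activation="softmax"),
])

model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Freezing the base layers during the first training epochs is a common refinement of this setup, though the text does not state whether the authors did so.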

Experiment Setup
The classification model was implemented using Python libraries (SKLearn, Keras, NumPy, and Pandas), under the Windows 10 operating system, on an Intel Core i7 with a 4 GB graphics card and 16 GB RAM. The dataset was randomly divided into training, validation, and testing data at 70%, 15%, and 15%, respectively. The adaptive learning method Adam was used to optimize the parameters during training. The learning rate was set to 10−2, and the dropout rate was set to 0.30. For batch normalization, 10−5 was chosen for epsilon and 0.1 for momentum.

Evaluation Metrics
We used five criteria to evaluate our transfer-learning model: accuracy, specificity, sensitivity (recall), precision, and F1 score:
Accuracy = (TP + TN)/(TP + TN + FP + FN)
Specificity = TN/(TN + FP)
Sensitivity (Recall) = TP/(TP + FN)
Precision = TP/(TP + FP)
F1 score = 2 × (Precision × Recall)/(Precision + Recall)
where true positive (TP) represents the positive samples (COVID-19 cases) that are correctly classified, and true negative (TN) represents the negative samples (normal cases) that are correctly classified. Samples that are COVID-19 cases but classified as normal represent the false negatives (FN), and samples that are negative but classified as COVID-19 cases represent the false positives (FP).
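The five criteria can be computed directly from the confusion-matrix counts; a minimal sketch assuming the standard definitions of accuracy, specificity, recall, precision, and F1 score:

```python
# The five evaluation metrics computed from confusion-matrix counts,
# using the standard definitions of each metric.
def evaluate(tp, tn, fp, fn):
    """Return the five evaluation criteria as a dict."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    specificity = tn / (tn + fp)
    recall = tp / (tp + fn)          # sensitivity
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "specificity": specificity,
            "recall": recall, "precision": precision, "f1": f1}
```

For example, a test run with 50 true positives, 40 true negatives, 10 false positives, and no false negatives yields an accuracy of 0.9 and a recall of 1.0.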

Discussion
In this study, we depended on chest X-ray images to predict coronavirus disease infection. A popular pre-trained model (i.e., ResNet50) was used to train and test our classification model. Dropout, batch normalization, and a fully connected layer were used after the ResNet model. Then, the final layer used the SoftMax activation to output the classification result (1 for the COVID-19 class, and 0 for no finding). As we can see in Table 5, the model was trained using 25 epochs. We chose to build the model with only 25 epochs to avoid overfitting in the training phase. The performance of the ConvNet is summarized in Table 6. The F1 score, precision, and recall are 0.972, 0.974, and 0.975, respectively. Table 6 shows the classification result using the test data. Although the model in Reference [28] achieved an accuracy of 98.00%, which outperforms our results, it was built on a very small dataset of 100 [50(+), 50(−)] patients. As a result, it is expected that such a model could achieve higher results. Accordingly, our model achieved good accuracy (97.78%) compared to the existing works, with a sample size of around 622 [122(+), 500(−)]. On the other hand, our model did not show much difference between training and testing accuracy, and it shows consistency between true positive and true negative values. Therefore, the proposed model can be considered more robust and reliable, can be used as an alternative to existing COVID-19 diagnosis testing, and can help radiologists confirm COVID-19 infection.

Limitations and Future Scope
Our study has the following limitations. The identification of COVID-19 cases based on the initial score may not be accurate in all cases; several studies [86-89] documented infections among patients who never suffered from any symptoms (asymptomatic). The power consumption of the wireless sensors is a big challenge in our monitoring system, as battery capacity is heavily consumed during sensing and transmission. Additionally, not all smart devices permit the automatic transmission of the aggregated vital signs without human intervention. Despite the advantages of using fog nodes (i.e., privacy, security, productivity, etc.), they may add more complexity to the network infrastructure and require more maintenance of the distributed storage nodes.
The work reported in this paper is part of an ongoing project, and we will be working to integrate the data-collection part within the framework. The paper primarily proposes an E2E framework for patient monitoring and diagnosis. The framework integrates the mobile monitoring process with the hospital EHR. First, we detected the infected patient, in real time, based on the sensor data. Second, we managed the infected patient by using a chest-image-processing module via deep learning. The framework thus puts more emphasis on the image-processing part, because collected data for real-time COVID-19 monitoring are lacking. As such, the proposed framework is complete in terms of the deep-learning-assisted E2E communication protocol; its real-world performance will, however, be investigated when the data become available.
Another question that may arise is why ResNet was favored over VGG or GoogLeNet. Note that ResNet is much more popular in the context of image processing and has achieved better results in many domains. We followed the guidelines [73,81,82] that asserted the high performance of this network architecture. A comparison with other networks in the context of our framework would also be worth investigating in the future.

Conclusions
The rapid increase in the COVID-19 spread ratio has strained healthcare sectors, due to the shortage of ICU units and medical staff. Therefore, most countries find that home isolation is the best practice to limit the spread of the virus. In this paper, we proposed an end-to-end framework for COVID-19 to enable real-time monitoring of patients at home and in the hospital. For the patient side, the framework aims to provide early detection and isolation of infected patients, tracking their contacts to ensure safety. For the cloud side, we proposed a fog network architecture to achieve the effectiveness of using both cloud and fog in the monitoring system. For the hospital side, the framework aims to provide a complete EHR system that can benefit from all patients' data and help non-expert physicians make the right decisions to save a patient's life. We proposed a deep learning model based on chest X-rays and transfer learning, to classify the patient as either infected or normal, and we achieved promising results.