Article

Blockchain-Federated and Deep-Learning-Based Ensembling of Capsule Network with Incremental Extreme Learning Machines for Classification of COVID-19 Using CT Scans

1
Department of Computer Science, University of Management and Technology, Lahore 54000, Pakistan
2
Department of Software Engineering, University of Management and Technology, Lahore 54000, Pakistan
3
Department of Unmanned Vehicle Engineering, Sejong University, Seoul 05006, Republic of Korea
4
School of Computing, Gachon University, Seongnam 13120, Republic of Korea
*
Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Bioengineering 2023, 10(2), 203; https://doi.org/10.3390/bioengineering10020203
Submission received: 16 January 2023 / Revised: 30 January 2023 / Accepted: 1 February 2023 / Published: 3 February 2023
(This article belongs to the Special Issue Artificial Intelligence in Biomedical Diagnosis and Prognosis)

Abstract:
Due to the rapid dissemination of SARS-CoV-2, an informed and effective strategy is needed to isolate COVID-19 cases. Among the most significant obstacles researchers face in identifying COVID-19 are the rapid propagation of the virus and the dearth of trustworthy testing models; this remains the most difficult problem for clinicians to deal with. The use of AI in image processing has made the formerly insurmountable challenge of detecting COVID-19 cases more manageable. In practice, a further problem must be handled: the difficulty of sharing data between hospitals while honoring the privacy concerns of the organizations. When training a global deep learning (DL) model, it is crucial to address fundamental concerns such as user privacy and collaborative model development. For this study, a novel framework is designed that compiles information from five different databases (several hospitals) and trains a global model using blockchain-based federated learning (FL). The data are validated through blockchain technology (BCT), and FL trains the model on a global scale while maintaining the confidentiality of the organizations. The proposed framework is divided into three parts. First, we provide a method of data normalization that can handle the diversity of data collected from five different sources using several computed tomography (CT) scanners. Second, to classify COVID-19 patients, we ensemble a capsule network (CapsNet) with incremental extreme learning machines (IELMs). Third, we provide a strategy for interactively training a global model using BCT and FL while maintaining anonymity. Extensive tests were undertaken using chest CT scans, comparing the classification performance of the proposed model with that of five DL algorithms for predicting COVID-19 while protecting the privacy of data for a variety of users.
Our findings indicate improved effectiveness in identifying COVID-19 patients, with an accuracy of 98.99%. Thus, our model provides substantial aid to medical practitioners in their diagnosis of COVID-19.

1. Introduction

It has been determined that the coronavirus (COVID-19) epidemic is one of the deadliest illnesses now affecting a large number of individuals, with a significant negative influence on their lives [1]. COVID-19, which causes severe acute respiratory syndrome (SARS), is affecting a significant portion of the world’s population [2,3]. The identification of COVID-19 remains one of our highest research priorities because it is highly contagious [4]. The most convenient form of diagnosis currently available is a nucleic acid test involving swabs taken from the pharynx [5] and the nasopharynx [6]. However, the accuracy of the diagnostic results is compromised by sampling inaccuracies and low viral load. Antigen tests, on the other hand, can be completed more quickly but are less sensitive. Imaging techniques such as chest CT and chest X-ray (CXR) can help diagnose infections in patients, in addition to the findings of pathological investigations. To improve detection accuracy, a significant number of DL models have been suggested; these models examine CT and CXR images to determine the type of infection present [7,8,9].
DL models are trained and enhanced using only the limited number of readily available infected samples as their data source. However, sensitivity and accuracy are still impaired since there is an insufficient amount of training data [10]. The apparent answer to this problem is to use FL in its traditional form. FL trains a global model collaboratively over a decentralized network by combining locally learned models from a variety of sources [11,12,13]. The transmission of personally identifiable information is not feasible, however, since healthcare organizations lack a technique that protects patients’ privacy [14,15,16]. Shokri and Shmatikov [17] established a distributed paradigm to handle this problem, within which users can exchange gradients while maintaining the data’s anonymity. Despite this, their method was susceptible to attack even by passive adversaries [18,19]. The study [20] developed a method that safeguards users’ privacy while allowing for the secure aggregation of gradients through the FL global model. To ensure the safety of the gradients, Zhang et al. [21] developed two security methods: homomorphic encryption (HE) and threshold secret sharing (TSS). These shared approaches, however, provide no guarantee about the credibility of their users. In other words, the lack of trust across a variety of sources continues to degrade the quality of the data, which results in poor training for the models.
The technology known as blockchain provides several different trust mechanisms [22,23] that can be used to address the problem of lack of trust in a decentralized network. The idea of privately shared data served as the foundation for the data source authentication system that was later developed. A method that ensures the authenticity of the data and is based on BCT was proposed by Qu et al. [24] for the aggregation of local DL models. These methods of data sharing increase the chance of data leakage since they neglect the confidentiality of gradients. In addition, these methods need a large number of rounds for each aggregation, which is incompatible with the decentralized network used by BCT.
This research is prompted by several significant underlying issues. COVID-19 spreads rapidly, and people infected with it exhibit a broad spectrum of symptoms. Hospitals can therefore exchange their data to aid in the accurate diagnosis of COVID-19 patients. However, securely exchanging data (without jeopardizing the privacy of the users) and developing a global model for the identification of positive cases is difficult, and existing studies cannot jointly exchange data to train the model successfully. An important barrier that must be overcome to build AI-based methods is the collection of data from a wide variety of sources. Making such personally identifiable information available is not possible [15,16,17,18], since there is no technique that healthcare facilities can use to protect patients’ confidentiality. A further challenge is posed by the necessity of training the DL model collaboratively across a public network.
The most recent study conducted by the World Health Organization (WHO) indicates that COVID-19 is an infectious illness that mostly affects the lungs, giving them a honeycomb-like appearance. Even after making a full recovery from COVID-19, some people are left with persistent lung impairment. The primary motivation of the present study is to locate tiny COVID-19-infected areas in the lungs; this identification can assist highly trained radiologists in more accurately locating diseased regions. The second motivation concerns data sharing: although data providers are understandably concerned about their personal information being exposed, sharing data makes it easier to construct a viable deep-learning-based model for automatically identifying COVID-19 patients.
To solve the challenges outlined above, we present an approach that creates a precise collaborative model by utilizing data from five different databases to identify CT scans of COVID-19 patients. The suggested blockchain-based FL architecture gathers data from many hospitals, each of which uses a different kind of CT scanner. Initially, we propose a data normalization process to standardize the various sources of data. Then, to find COVID-19 patterns in lung CT scans, we make use of five different DL models: VGG-16, VGG-19, ResNet-101, DenseNet-169, and DenseNet-201. For CT scan image segmentation, we used a convolutional–deconvolutional capsule network called SegCaps [25], and a CapsNet [26] combined with IELMs trained for improved generalization. Compared to the performance of the other DL models, we found that the performance of CapsNet with IELMs was significantly better. We eventually trained the global model using the FL technique, which allowed us to overcome the privacy challenge. The proposed architecture starts with a collection of data and then jointly trains an intelligent model; the model is then shared and distributed over the public network. By utilizing FL, the weights from the many local models may be integrated without compromising the hospitals’ right to the privacy of their patient information. Hospitals share only gradients with the blockchain network to maintain the secrecy of patient information. The blockchain-powered FL system compiles gradients and sends updated models to verified hospitals. The decentralized design of blockchain for the exchange of data between several hospitals allows for the safe sharing of data without compromising the hospitals’ right to personal privacy.
The key contributions of the present work are given below:
1. In this study, spatial and signal normalization techniques are used for data normalization, as the CT scan data are acquired from a variety of sources (i.e., hospitals and devices).
2. The suggested system can identify and classify COVID-19 patterns from CT scans obtained from a variety of sources by utilizing an ensemble of CapsNet with IELMs. The IELMs inside the capsule are used to enhance feature extraction and categorization.
3. This research describes the data collection and sharing unit driven by BCT. This unit collects data from a variety of sources and utilizes FL to protect the anonymity of data shared across organizations while providing high-precision global model training.
4. The superiority of the proposed technique is demonstrated by comparing its performance metrics to those of other existing DL algorithms using datasets derived from a variety of different sources.
The structure of the present study is as follows: In Section 2, relevant studies on the decentralized network are presented. Section 3 presents the methodology of the study. This section contains the CT scan image normalization techniques, dataset description, ensembling of CapsNet with IELMs, and performance evaluation metrics. Section 4 consists of results and discussions. Finally, this study is concluded in Section 5.

2. Literature Review

This section presents the recent literature on the classification of COVID-19 and other chest diseases using different DL, machine learning (ML), and FL techniques. To enable prompt diagnosis of the illness, Chattu et al. [27] assessed the most recent techniques in medical imaging analytics in terms of prediction, electronic therapy, stage classification, virtual monitoring, and data transmission. Supervised learning classifiers, such as SVM, DT, KNN, and ANN, are utilized in the field of medical imaging for the detection of chest diseases. In a similar vein, BCT is indispensable for public access to medical data as well as worldwide data transfer that makes use of a dispersed network of many blocks. The authors [27] concluded that the most recent technological developments benefiting the medical industry are those that control the transmission of modern medical images. In the study [28], a “blockchain federated learning” architecture, referred to as BlockFL, was designed for decentralized FL. The BlockFL design successfully avoided a single point of failure by expanding the scope of its federated reach. The outputs of local training are included in the verification process so that devices can obtain access to public networks. To address issues regarding data privacy and security, Martinez et al. [29] make use of a cryptocurrency and FL. A systematic library of off-chain recordings is proposed by the authors as a means of scaling gradient recording and reward. He et al. [30] provided experimental research on automated FL (AutoFL), which seeks to improve the quality and productivity of local ML models that link their model updates by utilizing a neural architecture search (NAS) and a federated neural architecture search (FedNAS). They discovered that the default configurations of local ML models did not work well in a federated context for clients with non-IID (non-independent and identically distributed) data.
FLchain, as presented in [31], is a decentralized, publicly verifiable FL ecosystem that is healthy in terms of trust and incentive. FLchain eliminates the requirement for a typical centralized FL coordinator by leveraging BCT.
When it comes to working with small datasets, Parnian et al. [32] developed a model called COVID-CAPS, built on capsule networks, as a solution to the limitations of CNN-based models. This model can identify COVID-19-positive cases in CXR images. After several adjustments to the model’s parameters, COVID-CAPS was found to perform better than the conventional network. The concept of channels makes it possible to study the architecture of a blockchain network using global models. The study [33] suggested a blockchain design built on channel-specific distributed ledgers: to adhere to the decentralized data maintenance strategy, each locally relevant model parameter is saved as a block in a distributed ledger particular to its channel. Xuan et al. [34] likewise proposed constructing the blockchain on channel-specific distributed ledgers as its underlying data structure.
Salam et al. [35] proposed a new FL approach for patients with COVID-19. The recommended neural network, pretrained using CXR images of abnormal patients, forecasts mortality severity and offers electronic therapy. Because of its limitations, the developed predictor is not suitable for use with big datasets. PriMIA is an open-source software framework for medical imaging developed by Kaissis et al. [36]. It is built on FL and works with many different sources of pediatric radiology with the goal of categorization. Utilizing a trained pediatric CXR image collection, the DCNN that is part of the proposed PriMIA is designed to classify the various phases of cardiac illness. A gradient-based model is used during the training of the DCNN so that it may detect chest infections at an earlier stage. In a study [37], an artificial intelligence (AI)-based model named EXR was designed for the classification of COVID-19 patients using CXR images. Additionally, FL was utilized to keep the user data secure. Their proposed model achieved an area under the curve (AUC) of 0.92. Dou et al. [38] developed a novel model with the help of FL configurations and a convolutional neural network (CNN) to detect lung abnormalities caused by COVID-19 using CT scans. They achieved a specificity of 95.27%. The availability of local annotations, as well as their level of quality, is often subject to a wide range of variation. This is mostly attributable to the fact that medical equipment and resources are distributed differently across the world. Because the major patterns differ in size, shape, and texture, this might have a substantial impact on the process of identifying COVID-19. In their paper, Yang et al. [39] present a solution to this problem by making use of FL and semi-supervised learning.
To study the performance gap that might occur when training a model on one dataset and then applying it to another, a multinational database was used. This database had 1704 scans from three different countries. Radiologists with years of experience manually highlighted 945 images for the COVID-19 findings. The proposed structure produces accurate results with an efficiency of 71%. Kumar et al. [40] provide a method that trains a global DL model with blockchain-based FL using a limited amount of data from a variety of sources (various hospitals). Their proposed model achieved 83% accuracy. Podder et al. [41] proposed an LDDNet model for the classification of COVID-19 and pneumonia using CXR and CT scan images. They used different optimizers, such as Adam, Nadam, and SGD; the LDDNet model achieved its best results using the Nadam optimizer. In the study [42], the authors proposed an automated tool for COVID-19 diagnosis called RADIC. The proposed RADIC model combines time–frequency data with information collected via multiple radiomics to improve diagnostic performance. El-Bana et al. [43] proposed a multi-task model used for the segmentation and classification of COVID-19 from CXR and CT scan images. Shah et al. [44] proposed a CNN-based model named CTnet-10 and achieved a diagnostic accuracy of 82.1% for COVID-19 and non-COVID-19 cases using CT scans. Attallah [45] provides a deep-learning-based pipeline called CoviWavNet that can automatically diagnose COVID-19. The OMNIAHCOV 3D multiview dataset is utilized by the CoviWavNet algorithm. First, the CT slices are analyzed using multilevel discrete wavelet decomposition (DWT), and the heatmaps of the different approximation levels are used to train three ResNet CNN models. Using the spectral and temporal information included in these images, the ResNets perform the categorization.
They then investigate whether merging the spatial information with the spectral–temporal information could improve the diagnostic accuracy of COVID-19. Transfer learning is used to extract deep spectral–temporal characteristics from these ResNets, which are then combined with deep spatial features extracted from the same ResNets trained on the initial CT slices. These integrated features are put through a feature selection step to reduce their dimension and are then input into three support vector machine (SVM) classifiers. SARS-COV-2-CT-Scan, a publicly available benchmark dataset, was used to further validate the performance of CoviWavNet. According to the results, training the ResNets using the spectral–temporal information of the DWT heatmap images is better than training them with the spatial information of the original CT scans. In the research [46], the authors describe a model for computer-assisted diagnosis founded on several DL and texture-based radiomics approaches. When training ResNet-18 (83.22%, 74.9%), ResNet-50 (80.94%, 78.39%), and ResNet-101 (80.54%, 77.99%), using texture-based radiomics images (gray-level covariance matrix, discrete wavelet transform) is more accurate than using the original CT scan images (70.34%, 76.51%, and 73.42% for ResNet-18, ResNet-50, and ResNet-101, respectively). The study [47] designed a DL model for the classification of COVID-19, pneumonia, and healthy cases using CT scan images. Shankar and Perumal [48] presented a one-of-a-kind fusion model, referred to as the FM-HCF-DLF model. This model was designed particularly with deep learning features in mind and was supplied for the diagnosis and categorization of COVID-19. The findings of the experiments demonstrated that the proposed model offered superior performance.
It achieved a maximum sensitivity of 93.61%, a specificity of 94.56%, a precision of 94.85%, an accuracy of 94.08%, an F-score of 93.2%, and a kappa value of 93.5%. A DL method was developed by Liu et al. [49] to screen COVID-19-positive chest CT patients. The primary goal of the study was to provide reliable and timely information that could be used to enhance disease monitoring in medical imaging. The proposed model takes advantage of both spatiotemporal and dimensional properties. In addition, a segmentation model was developed in conjunction with a fully connected CRF to enhance the ROI input. With a mean Dice score of 91.24%, the recommended method successfully segmented COVID-19 lesions. We summarize the recent work that uses FL and DL models for the identification of COVID-19 using CT scans and CXR in Table 1.

3. Materials and Methods

In this section, we provide detailed information on the COVID-19 datasets. Afterward, the proposed methodology is presented, which is further divided into two parts: (1) the local client model and (2) blockchain-based FL. The data normalization techniques used to normalize the CT scans are also explained, and the performance evaluation metrics are described.

3.1. Dataset Description and Data Augmentation

AI has always occupied a key position in the field of clinical care. In chaotic clinical environments, AI can aid medical professionals in confirming the technique for disease diagnosis. As a direct result, diagnostic techniques become more accurate, which ultimately saves a great number of lives. At the moment, the most significant barrier standing in the way of AI-based methods is the availability of relevant data; the advancement of AI is impossible without ready access to vast amounts of pertinent data. Diverse CT images with varied characteristics were gathered from the data sources utilized to train the suggested model. Datasets 1, 2, 3, 4, and 5 are used to describe the COVID-19 databases used for assessing the model.

3.1.1. Dataset-1

The first dataset consists of unenhanced chest CTs of patients with verified COVID-19 infections [56]. Patients most frequently described having hypertension, diabetes, and pneumonia or emphysema as co-occurring conditions in their medical histories. Between March 2020 and January 2021, patients who had a positive RT-PCR test result for COVID-19 and accompanying clinical symptoms were imaged in an inpatient setting. During the CT exams, which were carried out on a NeuViz 16-slice CT scanner in “Helical” mode, no intravenous contrast was administered. The dataset contains a total of 35,635 CT scan images, including 9367 CT scans of normal cases. Every image is in DICOM format and consists of grayscale images 512 pixels wide and 512 pixels high.

3.1.2. Dataset-2

The second dataset, known as the CC-19 dataset [57], has a total of 34,006 CT scan slices contributed by 89 individuals from three hospitals. Of these, 28,395 CT scan slices belonged to patients who tested positive for COVID-19. The data were acquired on three independent scanners (Brilliance ICT, Somatom Definition Edge, and Brilliance 16P CT). Of the 89 participants examined, 68 tested positive for the COVID-19 virus and the remaining 21 tested negative.

3.1.3. Dataset-3

The third dataset, SARS-CoV-2 CT scans, was collected from [57,58]. It contains a total of 2482 CT scans, 1252 of which are positive and 1230 of which are from patients not infected with SARS-CoV-2 [59]. The data were gathered from actual patients treated at hospitals in Sao Paulo, Brazil.

3.1.4. Dataset-4

The fourth database, Mosmed-1110 [60], contains 1110 three-dimensional CT scans of patients treated in hospitals in Moscow, Russia. Five distinct kinds of 3D CT volumes are included in this dataset, labeled CT0, CT1, CT2, CT3, and CT4. CT0 consists of 254 3D CT volumes, all of which are considered normal scans; a normal level means that the CT scan did not reveal any symptoms of pneumonia or COVID-19. CT1 comprises 684 3D CT scans in which up to 25% of the patient’s lungs are infected with COVID-19. CT2 consists of 125 3D CT scans indicating that 25% to 50% of the patient’s lungs were infected with COVID-19. CT3 consists of 45 3D CT scans in which 50% to 75% of the lungs were infected, and CT4 contains two 3D CT scans of lungs that are 75% to 100% infected with COVID-19. Levels of mild (CT1) or moderate (CT2) indicate that the patient does not require intensive care in a hospital and may, as a result, continue to live at home. In contrast, patients in the severe (CT3) and critical (CT4) phases are required to remain in the hospital throughout their treatment under intensive care. For this study, we considered only two categories, CT0 and CT2, for classifying normal and COVID-19-infected individuals.

3.1.5. Dataset-5

Dataset 5 [60] contains a total of 349 COVID-19 CT scans taken from 216 individuals, in addition to 463 non-COVID-19 CT images. During the epidemic, patients with SARS-CoV-2 verified by RT-PCR were recruited at the point of care to contribute to this dataset.
Table 2 presents the complete statistics of all five datasets, and a few samples of COVID-19-infected CT scans collected from these datasets are shown in Figure 1. Additionally, it has been observed that the number of images in each dataset is imbalanced. Therefore, the synthetic minority oversampling technique (SMOTE) [61] is utilized to balance the datasets.
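As a rough illustration of this balancing step, the sketch below is a minimal NumPy stand-in for the imbalanced-learn SMOTE implementation cited as [61] (the feature dimensions and sample counts here are invented): each synthetic minority sample is an interpolation between a minority point and one of its nearest minority neighbours.

```python
import numpy as np

def smote_oversample(X_min, n_new, k=5, rng=None):
    """Create n_new synthetic samples by interpolating minority neighbours."""
    if rng is None:
        rng = np.random.default_rng(0)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)   # distances to sample i
        neighbours = np.argsort(d)[1:k + 1]            # k nearest, skip itself
        j = rng.choice(neighbours)
        gap = rng.random()                             # interpolation factor in [0, 1)
        synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(synthetic)

rng = np.random.default_rng(0)
X_minority = rng.normal(size=(20, 64))   # 20 minority-class feature vectors (toy data)
X_new = smote_oversample(X_minority, n_new=80, rng=rng)
print(X_new.shape)                       # (80, 64): minority class grown to 100 total
```

In practice the library implementation would be applied per dataset, with the majority class left untouched.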

3.2. Overview of FL and Blockchain

For the present work, we disseminated the data to five hospitals. The data from each hospital are added to the global model, as shown in Figure 2. The major purpose of this study is to facilitate the sharing of data from a wide variety of sources and the collaborative training of a DL model. Because the data are obtained from a variety of sources, we create a normalization technique to handle the many different kinds of CT scanner data. Following the completion of the data normalization process, the CT scans were segmented, and then a model for the classification of COVID-19 suspects was trained using CapsNet in conjunction with IELMs. We make use of an FL architecture built on BCT to train and disseminate a collaborative model. FL’s design is intended to incorporate the weights of locally trained models; when all of the weights from the locally trained models have been combined, the global model is returned to the hospitals.
The right of the data providers to maintain their privacy carries the highest weight. The blockchain is used to regulate both the privacy of sensitive data and its exposure. As a result, we record two distinct types of transactions in the blockchain ledger: (A) data-sharing transactions and (B) data-retrieval transactions. To control data accessibility while maintaining data privacy, this research makes use of a permissioned blockchain. The fundamental advantage of a permissioned blockchain is its ability to track all transactions and provide access to data derived from a global model. The second objective is to coordinate efforts on the data. The proposed blockchain FL solution combines the weights of the local models and then updates its weights. We develop a local model for diverse or imbalanced data by using a process called spatial normalization.
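The two ledger transaction types can be sketched as follows. This is an illustrative toy, not the authors' implementation: the `PermissionedLedger` class, hospital names, and payload fields are our own stand-ins. It shows how a permissioned chain records data-sharing and data-retrieval transactions, links each block to its predecessor by hash (so tampering is detectable), and rejects unauthorized nodes.

```python
import hashlib
import json

class PermissionedLedger:
    """Toy permissioned blockchain recording two transaction types."""

    def __init__(self, authorized):
        self.authorized = set(authorized)   # hospitals allowed to transact
        self.chain = [{"index": 0, "prev": "0" * 64, "tx": "genesis"}]

    def _hash(self, block):
        # Deterministic SHA-256 digest of a block's canonical JSON form.
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def add_transaction(self, hospital, tx_type, payload):
        if hospital not in self.authorized:
            raise PermissionError(f"{hospital} is not a permissioned node")
        block = {
            "index": len(self.chain),
            "prev": self._hash(self.chain[-1]),   # chain to the previous block
            "tx": {"from": hospital, "type": tx_type, "payload": payload},
        }
        self.chain.append(block)
        return block

ledger = PermissionedLedger(authorized=["hospital_1", "hospital_2"])
ledger.add_transaction("hospital_1", "data_sharing", {"weights_digest": "ab12"})
ledger.add_transaction("hospital_2", "data_retrieval", {"model_version": 3})
print(len(ledger.chain))  # 3 blocks: genesis plus the two transactions
```

A production system would of course add consensus, signatures, and node identity management on top of this skeleton.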
The proposed model is made up of two parts: (1) the local client model and (2) the FL framework based on BCT. We start by finding a solution to the problem of the CT scan data being of varying types. Then, to segment the data, we make use of SegCaps [25] and train a local model to identify COVID-19 pattern sequences by using an ensemble of CapsNet with IELMs. In the last step, the blockchain network is provided with the weights of the local models so that they may be used to train the global model.
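The final aggregation step can be sketched as a FedAvg-style weighted average of local model weights. The hospital counts and layer shapes below are invented for illustration, and the exact weighting scheme is an assumption; the point is that only weights, never raw scans, cross the network.

```python
import numpy as np

def federated_average(local_weights, n_samples):
    """Average per-layer weights, weighting each hospital by its data size."""
    total = sum(n_samples)
    n_layers = len(local_weights[0])
    return [
        sum(w[k] * (n / total) for w, n in zip(local_weights, n_samples))
        for k in range(n_layers)
    ]

# Two hospitals, each holding a two-layer local model (matching shapes).
h1 = [np.ones((3, 3)), np.ones(3)]
h2 = [np.zeros((3, 3)), np.zeros(3)]
global_model = federated_average([h1, h2], n_samples=[300, 100])

print(global_model[0][0, 0])  # 0.75: h1 contributes 300/400 of the average
```

The aggregated layers are then broadcast back to the verified hospitals as the updated global model.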

3.3. Local Client Model

In this section, we discuss the normalization approaches, segmentation of the data, and the DL model i.e., CapsNet with IELMs.

3.3.1. Data Normalization

A significant challenge for FL is that it must deal with input data coming from a variety of sources and machines, each with its own unique parameters. The majority of currently available approaches are wholly inadequate to solve this problem for FL. To overcome this impediment, we have devised a normalization method that applies to a wide variety of CT scan modalities and aligns the images to the same level of quality. By performing this normalization phase, FL can deal with the unpredictability of the dataset and construct a more efficient learning model. The process of normalizing the COVID-19 CT scans is divided into two stages: the spatial normalization stage and the signal normalization stage. Spatial normalization takes the size and resolution of the CT scan into consideration. Signal normalization refers to the process by which CT scanners adjust the intensity of each voxel based on the lung window.
  • Spatial Normalization of COVID-19 CT scans
The sizes and resolutions of CT scan images serve as the primary determinants of the spatial normalization of the images. As discussed in the studies [40,61], we used a standardized volume of 332 × 332 × 512 mm3 for human lung CT scans. This technique of normalization is used for all of the obtained datasets, and all CT scan image formats are transformed into a common format [62] to make FL possible. As a consequence, both learning and performance are enhanced.
  • Signal Normalization of COVID-19 CT scans
During signal normalization, the intensity of each voxel is determined by computing it relative to the lung window. Window level (Wlevel) and window width (Wwidth) are two common types of windows that are used in medical procedures. This is because every CT scan requires Hounsfield units (HU). Using Equation (1), we can obtain the normalized value using this window size as the parameter.
$C_{normalized} = \dfrac{C_{original} - W_{level}}{W_{width}}$  (1)
The original CT scan image is denoted by Coriginal, while the intensity-normalized image is denoted by Cnormalized. In this experiment, we select a window such that the lower limit of the normalized values ranges between −0.05 and 0.5.
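As a concrete illustration, Equation (1) can be implemented in a few lines of NumPy. Note that the default window level and width below are typical lung-window values chosen for illustration only; they are not parameters reported in this paper.

```python
import numpy as np

def normalize_ct_signal(ct_hu, w_level=-600.0, w_width=1500.0):
    """Signal normalization of Equation (1):
    C_normalized = (C_original - W_level) / W_width.
    The default lung-window values (level -600 HU, width 1500 HU) are
    illustrative assumptions, not taken from the paper."""
    ct_hu = np.asarray(ct_hu, dtype=np.float64)
    return (ct_hu - w_level) / w_width

voxels = np.array([-1000.0, -600.0, 150.0])  # sample voxel intensities in HU
print(normalize_ct_signal(voxels))
```

Applying the same window to every dataset puts all scanners' intensities on one common scale before FL training.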

3.3.2. CT Scans Based Segmentation and Classification for COVID-19

As discussed in [25], we used SegCaps to segment COVID-19-infected CT scan images. The segmented CT scan images are then used throughout the training process for COVID-19 recognition by the CapsNet in combination with IELMs. For lung segmentation, the suggested approach takes 2D slices as input, and a volume with the dimensions 334 × 334 × 512 mm3 is employed. Every three-dimensional (3D) CT scan volume consists of three planes: CX, CY, and CZ. We define these planes so that lung infection may be distinguished more readily. Equation (2) is used to measure these planes of 3D CT scans.
$P_I = D\left(R_{C_X}(I),\ R_{C_Y}(I),\ R_{C_Z}(I)\right)$  (2)
where PI stands for the probability of infection and I refers to the site of the infection. D denotes the method used to combine the voxel views in 3D, and RCX(I), RCY(I), and RCZ(I) are the per-plane predictions for the voxel at site I. Because the traditional approach requires a lot of time, we have modified Equation (2) into Equations (3) and (4):
$\hat{P}_I = D\left(\hat{R}_{C_X}(I),\ \hat{R}_{C_Y}(I),\ \hat{R}_{C_Z}(I)\right)$  (3)
$\hat{P}_I = D\left(f_{C_X}(I)\,R_{C_X}(I),\ f_{C_Y}(I)\,R_{C_Y}(I),\ f_{C_Z}(I)\,R_{C_Z}(I)\right)$  (4)
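A minimal NumPy sketch of Equation (2): since the text does not spell out the fusion operator D beyond "a method to define voxel views in 3D", taking the mean of the per-plane scores here is purely an illustrative assumption.

```python
import numpy as np

def infection_probability(r_cx, r_cy, r_cz, fuse=np.mean):
    """Fuse the per-plane infection scores R_CX(I), R_CY(I), R_CZ(I) into a
    single probability P_I, as in Equation (2). The fusion operator D is not
    specified in the text, so the mean is an illustrative assumption."""
    planes = np.stack([np.asarray(r_cx), np.asarray(r_cy), np.asarray(r_cz)])
    return fuse(planes, axis=0)

# Per-plane scores for three candidate voxels
p = infection_probability([0.9, 0.1, 0.5], [0.8, 0.2, 0.4], [0.7, 0.3, 0.6])
print(p)
```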

3.3.3. Ensembling of CapsNet with IELMs

In recent years, the DL framework has grown in popularity because of the powerful feature extraction layers and classification procedures that it provides, and CNNs have seen increasing use for CT scan image classification [56,57,58,59,60,61]. However, the pooling layers of a CNN do not examine the spatial relationships between the features in an image [59], which leads to a large increase in computational complexity and can even degrade classifier performance. The suggested method combines the potent properties of CapsNet with IELMs to get around these issues and achieve improvements in classification accuracy and diagnosis. CapsNet is used to produce robust feature maps, and IELMs are substituted for conventional dense classification layers to improve COVID-19 prediction.
  • CapsNet Architecture
CapsNet, in contrast to Artificial Neural Networks (ANNs) [60], conducts computations on its inputs and then encapsulates the findings in a compact vector of highly informative outputs; it may be considered an alternative to ANNs in certain contexts. A comprehensive comparison between CapsNet and the traditional ANN is shown in Table 3.
The architecture of CapsNet includes a separate encoder and decoder for each of the network's tiers. The convolutional layer (ConvL), PrimaryCaps layer (PCL), and DigitCaps layer (DCL) make up the encoder, whereas the decoder is made up of three fully-connected layers (FCL) [60]. Figure 3 presents the proposed CapsNet with IELMs used for the classification of COVID-19. The initial ConvL of the proposed CapsNet is made up of 512 kernels of size 5 × 5 × 1 with a ReLU activation function and a stride of 1. This layer converts the intensity values of individual pixels into the activity levels of nearby feature detectors, and the results of this translation are sent to the PCL. The PCL is a convolutional layer with 32 channels of 8D convolutional capsules (each capsule contains 8 convolutional units with a 5 × 5 kernel and a stride of 2). The PCL is responsible for reconstructing the original CT scan images, for which it performs inverse graphics [63,64,65,66,67,68] and reverse engineering [69,70,71]. A 6 × 6 eight-dimensional output tensor is produced as a consequence of the capsule's application of eight 5 × 5 × 256 kernels to an input volume of 20 × 20 × 256; given the 32 channels of 8D capsules in the system, the output has the dimensions 6 × 6 × 8 × 32. Each of the 16D capsules that make up a class in the DCL receives its input from the capsule that comes before it in the stack. To perform an affine transformation on each 8D capsule, the 8 × 16 weight matrix M(a,b) is used. Additionally, Equation (5) is applied to encode (E(a,b)) the critical spatial link that exists between low- and high-level features within the CT scan image.
$E_{(a,b)} = M_{(a,b)}\, V_{(a,b)}\, I_b$  (5)
where Ib is the input vector, M(a,b) is the weight matrix, and V(a,b) is the component vector.
To obtain the current capsule “C”, we use Equation (6) to compute the weighted sum of the input vectors.
$G_k = \sum_{b} E_{(a,b)}\, C_{(p,q)}$  (6)
Equation (7) is derived by substituting Equation (5) into Equation (6) and gives the weighted sum of the input vectors.
$G_k = \sum_{b} \left(M_{(a,b)}\, V_{(a,b)}\, I_b\right) C_{(p,q)}$  (7)
where C(p,q) is a routing softmax function (RSF), measured by Equation (8).
$C_{(p,q)} = \dfrac{e^{\,b_{(p,q)}}}{\sum_{L} e^{\,b_{(p,L)}}}$  (8)
Equation (7) is further modified by substituting the value of C(p,q) from Equation (8). The resulting expression for the weighted sum of the input vectors is shown in Equation (9).
$G_k = \sum_{b} \left(M_{(a,b)}\, V_{(a,b)}\, I_b\right) \dfrac{e^{\,b_{(p,q)}}}{\sum_{L} e^{\,b_{(p,L)}}}$  (9)
Scaling the probability from 0 to 1 is accomplished by using a squashing function, which is computed by Equation (10).
$S_K = \dfrac{\|S_K\|^2}{1 + \|S_K\|^2} \cdot \dfrac{S_K}{\|S_K\|}$  (10)
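The squashing non-linearity of Equation (10) can be sketched as follows; the small epsilon guarding against division by zero is an implementation detail, not part of the equation.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Squashing function of Equation (10): scales a vector's length into
    [0, 1) while preserving its direction. eps avoids division by zero."""
    sq_norm = np.sum(np.square(s), axis=axis, keepdims=True)
    norm = np.sqrt(sq_norm + eps)
    return (sq_norm / (1.0 + sq_norm)) * (s / norm)

v = np.array([3.0, 4.0])       # length 5
out = squash(v)
print(np.linalg.norm(out))     # 25/26, approximately 0.9615; direction unchanged
```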
By employing Algorithm 1, we implement agreement-based routing in the CapsNet model. The routing algorithm runs for several iterations, during which the distribution of each low-level capsule's output to the high-level capsules is gradually adjusted based on the high-level capsules' outputs, until an optimal distribution is achieved.
Algorithm 1: Routing Algorithm for Processing all Capsules
Input Parameters: Capsule = C; Layers = L; Weighted Sum = Ws
Output: Distribution of the output of low-level to high-level capsules
1. Foreach C in L and C in (L + 1):
2.  do b(p,q) = 0    // initialize the routing logits
3. While K = 1 to Niterations:
4.  Foreach C in L:
5.   do C(p,q)    // see Equation (8)
6.  Foreach C in (L + 1):
7.   do GK, SK    // see Equations (7) and (10)
8.  Foreach C of j in (L + 1):
9.   do E(a,b)    // see Equation (5)
10.  Foreach C of K in L AND j in (L + 1):
11.   do b(p,q) ← b(p,q) + Â(j|k)
12. Return b(p,q)
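Algorithm 1 can be sketched in NumPy as a routing-by-agreement loop. The array shapes, the three routing iterations, and the random prediction vectors below are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Routing softmax of Equation (8)."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def squash(s, axis=-1, eps=1e-8):
    """Squashing function of Equation (10)."""
    n2 = np.sum(s ** 2, axis=axis, keepdims=True)
    return (n2 / (1.0 + n2)) * s / np.sqrt(n2 + eps)

def dynamic_routing(u_hat, iterations=3):
    """Routing-by-agreement loop of Algorithm 1.
    u_hat: prediction vectors, shape (num_low, num_high, dim_high)."""
    num_low, num_high, _ = u_hat.shape
    b = np.zeros((num_low, num_high))           # routing logits b_(p,q)
    for _ in range(iterations):
        c = softmax(b, axis=1)                  # coupling coefficients, Eq. (8)
        s = (c[..., None] * u_hat).sum(axis=0)  # weighted sum, Eq. (7)
        v = squash(s)                           # high-level output, Eq. (10)
        b += (u_hat * v[None]).sum(axis=-1)     # agreement update, step 11
    return v

rng = np.random.default_rng(0)
v = dynamic_routing(rng.normal(size=(6, 2, 4)))  # 6 low-level, 2 high-level capsules
print(v.shape)
```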
  • IELMs
The proposed research makes use of CapsNet for enhanced feature extraction, which assists in reaching the highest possible classification accuracy. These features are then passed to the IELMs, which are responsible for classifying the CT scan images. An IELM is a kind of neural network with just one hidden layer whose operation is based on the auto-tuning feature shown in Figure 3; the single hidden layer does not need any iterative adjustment. IELM makes use of the kernel function to obtain high accuracy and improve its overall performance. Because IELMs use auto-tuning of weights and biases in conjunction with non-zero activation functions, the key advantages they offer are enhanced approximation and reduced training error. The studies [72,73] provide an in-depth analysis of the IELM's operating mechanism.
The following Equations (11) and (12) are used to measure the output of IELMs:
$F_E = \sum_{i=1}^{E} B_i\, M_i(X)$  (11)
$F_E = \sum_{i=1}^{E} B_i\, M\left(W_i \cdot X_K + V_i\right), \quad K = 1, 2, 3, 4, \ldots, N$  (12)
where the variable X is the input vector, V is the bias vector, N is the total number of training samples, K indexes the training samples, and E and M represent the number of hidden units and the activation function, respectively. Additionally, W is the weight vector between the input and hidden layers. The structure is quite comparable to a regular ANN with backpropagation, but the weight matrix between the hidden layer and the output, referred to as Beta, is peculiar in this study because it is computed with a pseudo-inverse rather than learned iteratively. Thus, Equation (12) can be written as:
$G = H B$  (13)
Equations (14) and (15) define H, B, and G.
$H = \begin{bmatrix} M(W_1 \cdot X_1 + V_1) & \cdots & M(W_E \cdot X_1 + V_E) \\ \vdots & \ddots & \vdots \\ M(W_1 \cdot X_N + V_1) & \cdots & M(W_E \cdot X_N + V_E) \end{bmatrix}_{N \times E}$  (14)
$B = \begin{bmatrix} B_1^T \\ \vdots \\ B_E^T \end{bmatrix}_{E \times M}, \quad G = \begin{bmatrix} G_1^T \\ \vdots \\ G_N^T \end{bmatrix}_{N \times M}$  (15)
where H represents the hidden-layer output matrix, and M and G represent the number of outputs and the training-data target matrix, respectively. The infectious impact of COVID-19 on the lungs is characterized by using Equations (11)–(15). Algorithm 2 provides a step-by-step explanation of the proposed CapsNet with the IELM model.
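The closed-form output-weight solution implied by Equations (13)–(15), B = pinv(H)·G, can be sketched as follows. The hidden-layer width, the tanh activation, and the toy XOR task are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def elm_train(X, G, n_hidden=20, seed=0):
    """Train a single-hidden-layer ELM (Equations (12)-(15)).
    Random input weights W and biases V stay fixed; the output weights
    Beta are solved in closed form via the Moore-Penrose pseudo-inverse."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))  # input -> hidden weights
    V = rng.normal(size=n_hidden)                # hidden biases
    H = np.tanh(X @ W + V)                       # hidden-layer output matrix, Eq. (14)
    Beta = np.linalg.pinv(H) @ G                 # B = pinv(H) G, from Eq. (13)
    return W, V, Beta

def elm_predict(X, W, V, Beta):
    return np.tanh(X @ W + V) @ Beta

# Toy binary task (XOR) just to exercise the closed-form fit
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
G = np.array([[0], [1], [1], [0]], dtype=float)
W, V, Beta = elm_train(X, G)
pred = elm_predict(X, W, V, Beta)
print(np.round(pred).ravel())
```

Because Beta has a closed-form solution, there is no backpropagation through the hidden layer, which is what makes the training error small and the training itself fast.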
Algorithm 2: Pseudo Code of Proposed CapsNet with IELMs
Input Parameters: Input Image = Iinput; No. of iteration = Niterations; Capsule = C; Image Features = Fimage; Threshold = Tthreshold; IELMs = EIELM; Output Image = Ooutput; Disease = D.
Output: Classification of COVID-19-infected CT scans
1. For n = 1 to Niterations:
2.  Fimage = C (Iinput)
3. Execute Algorithm 1
4. Ooutput = EIELM (Fimage) // see Equation (12)
5. If Ooutput == Tthreshold
6.  D is True  // COVID-19 detected
7. Else
8.  D is False  // Normal
9. End

3.4. DL Classifiers

In this study, the classification accuracy of the proposed CapsNet with IELMs model was compared with five different DL methods: VGG-16 and VGG-19 [74,75], ResNet-101 [73], DenseNet-169, and DenseNet-201 [75]. All these models are pre-trained on ImageNet [76]. The VGG network is constructed with very small ConvL filters. VGG-16 [73] consists of 13 convolutional layers and three FCL. The VGG-19 model is identical to VGG-16 except that it has 19 weight layers; the “16” and “19” denote each model's number of weight layers, which means that VGG-19 has three more ConvL than VGG-16. The ResNet architecture is a kind of ANN that allows layers to be skipped without negatively impacting performance [73], and ResNet-101 [77] is 101 layers deep. Convolutional layers, max-pooling layers, dense blocks, and transition layers make up the architectures of DenseNet-169 and DenseNet-201. The ReLU activation function [66,67,68] and the SoftMax activation function [69,70,71] are both included in the overall design of the DenseNet-169 and DenseNet-201 models.

3.5. Blockchain-Based FL

In this part of the article, we consider a proposed scenario involving the decentralized exchange of data across five separate hospitals. Each hospital agrees to contribute its locally trained model (weights), and the proposed method hides user data while sharing the model over a decentralized network. In addition, FL is used to aggregate the net effects of the models shared by the five hospitals. Utilizing FL to exchange data across hospitals without compromising patient confidentiality is the major goal of this work.
It is necessary to encrypt patient data since these records are both private and take up a substantial amount of disc space. Placing data on the blockchain, which has only limited storage, is expensive in terms of both cost and processing resources. Therefore, the hospital is the repository for the actual CT scan data, and the blockchain serves to make it easier to obtain the trained model. When a new hospital contributes data, the block receives an update in the form of a transaction to verify that the contributor is the rightful owner of the data. The hospital's record comprises both the data type and the data size. Each transaction involved in exchanging and retrieving data is shown in Figure 4. The proposed paradigm takes into account the requirements for retrieving shared data: multiple hospitals may share data and train a collaborative model to improve their ability to predict the most favorable results possible through combined effort, and the retrieval technique does not in any way violate the confidentiality of the hospitals. We provide a multi-organization architecture that makes use of BCT, influenced by the work presented in the study [78]. The data for various categories are shared across all of the hospitals (H). Each category of H has a distinct community, and these communities are responsible for maintaining the log(n) routing table. The blockchain keeps a record of the unique ID of each hospital.
Equation (16) describes the process of retrieving data from the physical nodes and is also used to determine how far apart two nodes are located, with P standing for the data categories used to collect the information from the hospitals. We calculate the distance Di(Pi, Pj) between the two nodes used to retrieve the information. The entries of the weight matrices for the nodes Pi and Pj are denoted by $X_{ab}^{P_i}$ and $X_{ab}^{P_j}$, respectively. Each hospital establishes its unique identifier (UID) from this logic and the distance between the nodes.
$D_i(P_i, P_j) = \dfrac{\sum_{a,b \,\in\, (P_i \cup P_j) \setminus (P_i \cap P_j)} \left(X_{ab}^{P_i} + X_{ab}^{P_j}\right)}{\sum_{a,b \,\in\, P_i \cap P_j} \left(X_{ab}^{P_i} + X_{ab}^{P_j}\right) \log D_p(P_i, P_j)}$  (16)
Equation (17) measures the distance between the nodes (Pi, Pj) using their UIDs:
$D_i(P_i, P_j) = P_i^{UID} \oplus P_j^{UID}$  (17)
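If the operator in Equation (17) is read as bitwise XOR, which would be consistent with the Kademlia-style log(n) routing table mentioned above, the UID distance reduces to a one-line function. This reading is an assumption, not something the text states explicitly.

```python
def uid_distance(uid_i: int, uid_j: int) -> int:
    """XOR distance between two hospital node UIDs, one reading of Eq. (17).
    Treating the operator as bitwise XOR (Kademlia-style) is an assumption."""
    return uid_i ^ uid_j

print(uid_distance(0b1010, 0b0110))  # 0b1100 = 12
print(uid_distance(7, 7))            # 0: a node is at distance zero from itself
```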
To ensure that patient information is kept private while maintaining data decentralization, the randomization method for two hospital nodes is described in Equation (18).
$\Pr\left[V_N(Z) \in O\right] \le \exp(\varepsilon)\, \Pr\left[V_{N'}(Z) \in O\right]$  (18)
where $V_N(Z)$ represents the privacy of the data, N and N′ are neighboring datasets, and O represents the outcome of the data. Laplace noise (£), on the other hand, is used during local model training (Ti) to protect patients' privacy across the several hospitals. In our case, we share the data among five different hospitals. Thus, Equations (19)–(21) are utilized to keep the data private.
$\hat{T}_i = T_i + \pounds\left(s / \varepsilon\right)$  (19)
$s = \max_{P, P'}\, \left\| f(P) - f(P') \right\|$  (20)
Equation (21) is obtained by substituting the value of s into Equation (19).
$\hat{T}_i = T_i + \pounds\left(\dfrac{\max_{P, P'} \left\| f(P) - f(P') \right\|}{\varepsilon}\right)$  (21)
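The Laplace mechanism of Equations (19)–(21) can be sketched as follows; the function name and the explicit privacy budget epsilon are assumptions made for illustration.

```python
import numpy as np

def privatize_weights(weights, sensitivity, epsilon, rng=None):
    """Add Laplace noise to local model weights before sharing, following
    Equations (19)-(21). The noise scale s/epsilon is the standard Laplace
    mechanism; treating epsilon as the privacy budget is an assumption."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon,
                        size=np.shape(weights))
    return np.asarray(weights) + noise

w = np.zeros(1000)  # stand-in for a flattened local weight vector T_i
w_priv = privatize_weights(w, sensitivity=1.0, epsilon=2.0,
                           rng=np.random.default_rng(1))
print(round(float(np.mean(np.abs(w_priv))), 2))  # mean |noise| near s/eps = 0.5
```

A smaller epsilon injects more noise (stronger privacy) at the cost of a noisier shared model.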
The global model is trained using the consensus algorithm, which is carried out with the help of the local models. Since all of the nodes work together to train the model, we provide proof of work so that the data may be distributed across the nodes. The mean absolute error (MAE) is used as the accuracy measure during the training phase of the consensus method, which analyses the quality of the local models. Equations (22) and (23) describe the consensus algorithm process used by the hospitals.
$MAE(T_i) = \dfrac{1}{n} \sum_{i=1}^{n} \left| A_i - f(x_i) \right|$  (22)
$MAE(P_i) = \dfrac{1}{n} \sum_{i=1}^{n} \left( MAE(T_i) + G \cdot MAE(T_i) \right)$  (23)
The value MAE(Ti) from Equation (22) scores the locally trained model, and the value G in Equation (23) is the global model weight. Additionally, f(xi) is the model's prediction, while Ti and Ai represent the local model and the original data, respectively. All of the information held by hospitals is signed and encrypted using both public and private keys to maintain patient confidentiality. As a result, Ti computes all transactions and broadcasts each model transaction. If permission is obtained, each transaction is added to the distributed ledger and recorded there. The following steps are performed during the training of the consensus algorithm.
  • The local model Ti transaction is handed over from the Pi node to the Pj node.
  • Local model Ti is sent up to the leader via node Pj.
  • The leader sends out a broadcast to the Pi and Pj with the block node.
  • Check that the Pi and Pj are correct, then wait for authorization.
  • Finally, the blocks should be saved in the database of the retrieval blockchain.
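The aggregation step behind these transactions can be sketched as follows, scoring each local model by its MAE as in Equations (22) and (23). The inverse-MAE weighting used to combine the weights is an illustrative assumption, since the text only states that MAE measures the quality of the local models during consensus.

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error, Equation (22)."""
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def aggregate_models(local_weights, local_maes):
    """Combine local hospital models into a global model, weighting each
    contribution by the inverse of its local MAE so that more accurate
    models count more. The inverse-MAE weighting is an assumption."""
    inv = 1.0 / (np.asarray(local_maes) + 1e-12)
    alpha = inv / inv.sum()                       # normalized weights
    return sum(a * np.asarray(w) for a, w in zip(alpha, local_weights))

w_global = aggregate_models(
    local_weights=[np.array([1.0, 1.0]), np.array([3.0, 3.0])],
    local_maes=[0.1, 0.3],  # the first hospital's model is more accurate
)
print(w_global)  # pulled toward the more accurate model's weights
```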

3.6. Performance Evaluation

Different evaluation metrics, such as accuracy (ACU), recall (REC), specificity (SPF), F1-score, dice similarity coefficient (DSC), and precision (PRE), are used to measure the classification performance of the proposed model. These metrics are also utilized to calculate the segmentation performance of the SegCaps model. The receiver operating characteristic (ROC) and the area under (AU) the ROC curve of the proposed model and the five DL models are also measured. The relationship between REC [79] and SPF [80] is inverse: as REC grows, SPF tends to decrease, and vice versa. High-REC tests will provide positive results for people with an illness, whereas high-SPF tests will indicate that patients with negative results do not have the condition [81]. The following Equations (24)–(31) are used to measure these metrics.
$ACU = \dfrac{TP + TN}{TP + TN + FP + FN}$  (24)
$REC = \dfrac{TP}{TP + FN}$  (25)
$SPF = \dfrac{TN}{TN + FP}$  (26)
$PRE = \dfrac{TP}{TP + FP}$  (27)
$F1\text{-}score = \dfrac{2 \cdot PRE \cdot REC}{PRE + REC}$  (28)
$DSC(A, B) = \dfrac{2\,|A \cap B|}{|A| + |B|}$  (29)
$TPR = \dfrac{TP}{TP + FN}$  (30)
$FPR = \dfrac{FP}{TN + FP}$  (31)
where A and B represent the target regions and DSC measures the spatial overlap between two segmentations; TP and TN represent true positives and true negatives, respectively. TPR represents the true positive rate, FPR the false positive rate, and FP and FN denote false positives and false negatives, respectively.
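Equations (24)–(28), (30), and (31) can be computed directly from confusion-matrix counts. In the example call, the TP/FN counts follow the proposed model's COVID-19 results reported later (530 of 540 correct), while the normal-class counts are hypothetical values chosen for illustration.

```python
def classification_metrics(tp, tn, fp, fn):
    """Evaluation metrics of Equations (24)-(28), (30), and (31),
    computed from confusion-matrix counts."""
    acu = (tp + tn) / (tp + tn + fp + fn)
    rec = tp / (tp + fn)            # REC, identical to TPR in Eq. (30)
    spf = tn / (tn + fp)
    pre = tp / (tp + fp)
    f1 = 2 * pre * rec / (pre + rec)
    fpr = fp / (tn + fp)            # Eq. (31)
    return {"ACU": acu, "REC": rec, "SPF": spf, "PRE": pre,
            "F1": f1, "FPR": fpr}

# TP/FN follow the proposed model's COVID-19 row (530 of 540 correct);
# the normal-class counts (tn, fp) here are hypothetical.
m = classification_metrics(tp=530, tn=533, fp=7, fn=10)
print({k: round(v, 4) for k, v in m.items()})
```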

4. Results and Discussions

In this section, we present comprehensive results obtained by using the proposed blockchain-based FL model and five different DL models.

4.1. Experimental Setup

The proposed model was built using the open-source TensorFlow (TF) Federated version (v) 2.1.0, whilst the five DL models were implemented with TF v1.8. The Keras library was utilized as the backend for their respective implementations, and Python was used to develop the processes unconnected to the convolutional networks. The DL models were optimized using the Adam optimizer and fine-tuned with a learning rate (LR) of 0.0001, a batch size of 32, a momentum of 0.9, a dropout of 0.2, the ReLU activation function, and 100 epochs. The experiment was carried out on a workstation running the Windows operating system with 32 GB of RAM and an 11 GB NVIDIA GPU. We split each of the five CT scan image datasets into 70% for training, 20% for validation, and 10% for testing. The 70% training portion of the five datasets is used to train the CapsNet with the IELMs model using TF Federated v2.1.0, and the same training ratio is also used to train the five DL models. On the FL blockchain, these models are implemented using Python 3.9.5.
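The 70/20/10 split described above can be sketched as follows; the random seed and the shuffling scheme are assumptions, as the paper does not state them.

```python
import numpy as np

def split_indices(n, seed=42):
    """Shuffle n sample indices and split them 70/20/10 into
    train/validation/test, mirroring the experimental setup.
    The seed and the use of a plain shuffle are assumptions."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_tr, n_va = int(0.7 * n), int(0.2 * n)
    return idx[:n_tr], idx[n_tr:n_tr + n_va], idx[n_tr + n_va:]

tr, va, te = split_indices(1000)
print(len(tr), len(va), len(te))  # 700 200 100
```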

4.2. Comparison of the Proposed Model with DL Models

To test the efficacy of the proposed model and the five DL models, many different performance metrics were examined for the accurate diagnosis of COVID-19 using CT scan images. The confusion matrices for the proposed CapsNet with IELMs model and the different DL models are shown in Figure 5. The test set contained 540 COVID-19 images and 540 normal CT scan images. In the confusion matrices, actual occurrences are grouped along rows and predicted cases along columns. Among the 540 COVID-19 instances, the proposed model correctly classified 530 cases and incorrectly categorized 10 cases as normal. The VGG-16 model correctly classified 503 COVID-19 cases and misclassified 33 cases as normal. Additionally, VGG-19, ResNet-101, DenseNet-169, and DenseNet-201 correctly classified 508, 513, 517, and 522 COVID-19 cases, respectively. The detailed results are presented in Figure 5.
We carried out a comprehensive set of tests using the proposed model and five different DL models (VGG-16, VGG-19, ResNet-101, DenseNet-169, and DenseNet-201). DL models consisting of multiple layers were used for the analysis of the COVID-19 CT scan image dataset, and the results of these evaluations are shown in Table 4.
From Table 4, it can be observed that the proposed CapsNet with IELMs achieves the highest accuracy of 98.99%, a PRE of 98.96%, a REC of 98.97%, an SPF of 98.95%, and an F1-score of 98.96%. The highest REC (98.97%) among the five DL models indicates that the proposed model is more effective and accurate in classifying COVID-19-infected CT scans, while the significant SPF (98.95%) shows that it is also effective in classifying normal cases. The second-highest results were obtained by DenseNet-201, which attains an ACU of 95.45%, a PRE of 95.54%, a REC of 95.47%, and an F1-score of 95.48%. The VGG-16 model shows the poorest results in classifying COVID-19 cases using CT scan images. Figure 6 presents the performance of the proposed model in comparison to the five alternative DL models at each level of classification using a ROC curve. The two parameters plotted on the ROC curve are defined in Equations (30) and (31).
The higher the AU(ROC), the more efficient the model is considered to be for medical diagnosis. The AU(ROC) curves of our classifiers are plotted in Figure 7. For the proposed model, the AUC was 0.997 and 0.993 for the COVID-19 and normal classes, respectively. For VGG-16 and VGG-19, the AUC for the COVID-19 class was 0.959 and 0.978, respectively, and for ResNet-101 it was 0.969. Finally, DenseNet-169 and DenseNet-201 achieved AUCs of 0.985 and 0.988 for the COVID-19 class, respectively. The AUC results reveal that the proposed model could contribute efficiently to classifying COVID-19 cases from CT scan images.
A total of 540 CT scan slices from each hospital were used to evaluate the proposed model, in addition to five other DL models, using five test lists. Figure 8 presents a segmentation of the COVID-19 infection, which demonstrates that our technique works better than the baseline methods. The output of the proposed approach is quite close to the ground truth. The detailed results are presented in Table 5.
From Table 5, it can be observed that UNET achieves a DSC of 84.19% and an ACU of 84.15%. Similarly, UNET++ achieves a DSC of 86.01%. The proposed model achieves a DSC of 95.50%, which is 9.49% and 11.31% superior to UNET++ and UNET, respectively.

4.3. Analysis of FL Security

The data came from a wide variety of places, including hospitals equipped with many different sorts of CT scan devices. To conduct an accurate performance analysis of FL, the datasets were split among five different hospitals; under this agreement, the hospitals can share data and take advantage of FL. The performance of our proposed blockchain-based FL model as the number of hospitals or providers is varied is shown in Figure 9a; using a wide variety of data sources produces improved results. The F1-score achieved by the proposed model after distributing the data across the five hospitals is shown in Figure 9b, the convergence of the model loss in Figure 9c, and the running time of the models using the FL framework in Figure 9d. Because the samples obtained from the different hospitals are not the same, the accuracy does not fluctuate gradually, as illustrated in Figure 9a; the number of patients or CT scan slices has a direct bearing on the level of accuracy achieved, and the same applies to the model loss. The local models are trained using standardized data from each hospital and are then combined into one global model, so the success of the collaborative approach is affected by the total number of hospitals involved. Table 4 compares the FL model with the local model: the local model is trained using the whole dataset, whereas the FL model is instructed by the local models. Figure 9a,b indicate that as the number of data sources increases, performance improves considerably. Nonetheless, FL does not harm accuracy, and it accomplishes data sharing while maintaining anonymity.
In addition, differential privacy analysis refers to a methodical process that enables hospitals to gain insight from the vast majority of the data while ensuring that these findings do not enable any individual to differentiate or re-identify the data; in other words, hospitals learn from the data while the privacy of individuals is protected. Equation (18) is used to guarantee that the data's integrity is maintained by extracting the value contained inside the data. The decentralized trust system provided by BCT makes it possible for everything to work automatically according to a preset program, which results in increased data security. The decentralized BCT employs a stringent collection of algorithms, which allows it to give assurance that the data is authentic, accurate, transparent, traceable, and tamper-proof. The suppliers of the data are responsible for their information and can make any required adjustments. The database that the blockchain stores is then updated with real data, together with the digital signature of the owner. Using the smart contract, the owner of the data has the power to administer and modify the policy of the database. A large number of cryptographic protocols are implemented inside the blockchain to guarantee that the data of its users remains private.

4.4. Comparison with State-of-the-Art Methods

COVID-19 has been the focus of several studies, such as [10,11,12,13,14,20,25]; however, these approaches do not take data sharing into consideration to train a better prediction model. Certain algorithms combine GAN with data augmentation (DA) to produce synthetic images, but the reliability of such methods cannot be guaranteed for medical images, and analysis is difficult when only a small quantity of patient information is available [70]. Our proposed method gathers a significant number of CT scans from five different databases and uses this information to develop a more accurate prediction model. To begin, we examine the current state-of-the-art research on the DL models shown in Table 6.
In addition, we examine FL alongside pre-trained models such as VGG-16 and VGG-19, ResNet, MobileNet, DenseNet, and the Capsule Network, amongst others. According to the findings, the accuracy is comparable regardless of whether the local model is trained with the entire dataset or the data is divided among hospitals and the model weights are combined using blockchain-based FL. In the last part of this article, we contrast our method with data-sharing tactics based on BCT. A DL- and blockchain-based solution for sharing medical images was presented in the study [73]; the fundamental shortcoming of that model is that it is not based on FL and does not aggregate neural network weights across the blockchain. In addition, [82,83] present frameworks for FL, but they only investigate the sharing of vehicle data. In contrast, our proposed framework trains the global model on data collected from a variety of hospitals and operates as a collaborative global model.

4.5. Computational Cost

A comparison is made between the various local DL models and the proposed blockchain-based FL (CapsNet with IELMs). Blockchain-based federated architectures provide a high level of both security and privacy. Figure 10a indicates that the communication load increases as the number of hospitals or transactions increases, which in turn causes the operational costs to increase. Furthermore, as shown in Figure 10b, the computational cost of our proposed system is less than that of the various traditional DL models.

4.6. Discussions

According to the findings of this research, it is feasible to aggregate COVID-19 data utilizing blockchain-based FL across participating centers while still ensuring patient anonymity. When there is little time to set up intricate data-sharing agreements across organizations or even nations, this decentralized training strategy might be an essential facilitator for scaling AI-based solutions during a pandemic. However, the medical imaging scenario is more complicated and entails unique challenges (such as high-dimensional data [89,90,91,92] and imbalanced cohort sizes [91]), which exert unexplored influences on currently used FL techniques [81]. Recently, the use of FL has been shown in other fields such as edge computing [84,93] (for example, digital devices [81]). To the best of our knowledge, this is the first research that demonstrates both the practicability and effectiveness of blockchain-based FL for COVID-19 image interpretation, where collaborative effort is especially advantageous in times of global crisis. Our observations suggest that blockchain-based FL improved generalization performance overall compared with single-site models and their ensemble, demonstrating successful decentralized optimization with diverse training data distributions.
In this study, we have applied the blockchain-based FL technique to our proposed CapsNet with IELMs model. The proposed model is compared with five different DL models in terms of different metrics such as ACU, REC, SPF, F1-score, and PRE, using CT scan images to diagnose COVID-19. We collected the CT scan images from five different publicly available sources [56,57,58,59,60,61]. Each CT scan dataset was assigned to one of five clients, with each client representing a hospital. Different CT scans have different resolutions because different CT scanners (such as the Brilliance iCT, Somatom Definition Edge, and Brilliance 16P CT) were used to create the images. Two techniques, signal normalization and spatial normalization, were used to normalize the data. Afterward, we used SegCaps to segment the COVID-19-infected portions of the CT scan images. The segmented portions were used to train the CapsNet with IELMs and five other models: VGG-16, VGG-19, ResNet-101, DenseNet-169, and DenseNet-201. The proposed model achieved 98.99% accuracy, which is higher than the other DL models. The proposed model operates within a blockchain-based FL framework distributed across different hospitals, as represented in Figure 4. We have also compared the proposed model with state-of-the-art classifiers, as shown in Table 6. The analysis of the experimental results against modern state-of-the-art methodology demonstrates that the suggested model for identifying COVID-19 using CT scans contributes significantly to assisting clinical experts. An ensemble of 3D-ResNet networks with a channel-based attention module was developed by Kuzdeuov et al. [82], who obtained a classification accuracy of 93.30% when identifying COVID-19. When categorizing COVID-19 CT scan images, Wang et al. [83] constructed a unique CNN model ensembled with other ML classifiers to obtain optimal results, achieving an 82.92% precision rate. A 2D-CNN model was created by Jin et al. [84] to categorize COVID-19, pneumonia (including viral and bacterial infections), and normal images; their model was accurate 94.41% of the time. The study by Chen et al. [91] produced a classification accuracy of 95.20% for the UNET++ with CNN model when applied to CT scan images of several chest illnesses. Based on chest CT images, Li et al. [92] built a CNN-based pre-trained model called ResNet-50, capable of automatically diagnosing COVID-19 as well as pneumonia; they achieved a 90% success rate in making appropriate diagnoses.

5. Conclusions and Future Work

In this study, a system was built that can make use of existing data to improve the identification of COVID-19 in CT scans and facilitate the interchange of data across hospitals while preserving patient anonymity. The problem of data heterogeneity is tackled directly through data normalization. COVID-19 patients are identified using CapsNet with IELMs-based segmentation and classification, together with a method for collaboratively training a global model through the application of BCT and FL. Extensive experiments were performed on a large number of publicly available COVID-19 CT scan datasets to train and evaluate the proposed model and five DL models. The CapsNet combined with IELMs achieved the highest accuracy, 98.99%. The proposed model is adaptable, since it can learn from data collected through conventional hospital sources. Furthermore, if hospitals continue to contribute their locally held confidential data to the training of a more accurate global model, the proposed model could assist in the identification of COVID-19 patients via lung screening. A limitation of this study is that the proposed model is not suitable for sonography and CXR images in its current state. In the future, we will combine blockchain and FL with a deep attention module for the classification of different chest diseases, including COVID-19, lung cancer, and tuberculosis, using different medical imaging modalities such as CXR, CT scans, and sonography.
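The collaborative global-model training summarized above can be illustrated with a size-weighted federated-averaging step. This is a sketch under our own assumptions (plain Python lists standing in for model weights, five hospital clients, dataset-size weighting), not the paper's exact aggregation or blockchain logic:

```python
# Illustrative federated averaging (assumption: size-weighted mean of
# client weight vectors; the paper's exact aggregation may differ).

def federated_average(client_weights, client_sizes):
    """Aggregate per-client weight vectors into one global weight vector,
    weighting each client by its local dataset size."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    global_w = [0.0] * n_params
    for weights, size in zip(client_weights, client_sizes):
        for i in range(n_params):
            global_w[i] += (size / total) * weights[i]
    return global_w

# Two toy "hospitals" with 3-parameter models and dataset sizes 100 and 300.
w_global = federated_average([[1.0, 2.0, 3.0], [5.0, 6.0, 7.0]], [100, 300])
print(w_global)  # size-weighted average: [4.0, 5.0, 6.0]
```

In the blockchain-based setting, each client's update would additionally be recorded and validated on the ledger before the server applies this averaging step.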

Author Contributions

Conceptualization, T.A., A.N. and H.M.; methodology, A.N. and H.M.; validation, R.A.N., T.A., W.-K.L. and H.M.; formal analysis, A.N. and W.-K.L.; investigation, A.N. and R.A.N.; resources, A.N., T.A. and H.M.; data curation, R.A.N.; writing—original draft preparation, A.N. and H.M.; writing—review and editing, A.N. and H.M.; visualization, A.N. and H.M.; supervision, T.A., R.A.N. and W.-K.L.; funding acquisition, W.-K.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Research Foundation (NRF) grant funded by the Ministry of Science and ICT (MSIT), South Korea (No. NRF2021R1A2C1014432 and NRF2022R1G1A1010226).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wang, G.; Liu, X.; Li, C.; Xu, Z.; Ruan, J.; Zhu, H.; Meng, T.; Li, K.; Huang, N.; Zhang, S. A noise-robust framework for automatic segmentation of COVID-19 pneumonia lesions from CT images. IEEE Trans. Med. Imaging 2020, 39, 2653–2663. [Google Scholar] [CrossRef] [PubMed]
  2. Wang, X.; Deng, X.; Fu, Q.; Zhou, Q.; Feng, J.; Ma, H.; Liu, W.; Zheng, C. A weakly-supervised framework for COVID-19 classification and lesion localization from chest CT. IEEE Trans. Med. Imaging 2020, 39, 2615–2625. [Google Scholar] [CrossRef] [PubMed]
  3. Han, Z.; Wei, B.; Hong, Y.; Li, T.; Cong, J.; Zhu, X.; Wei, H.; Zhang, W. Accurate screening of COVID-19 using attention-based deep 3D multiple instance learning. IEEE Trans. Med. Imaging 2020, 39, 2584–2594. [Google Scholar] [CrossRef] [PubMed]
  4. Ouyang, X.; Huo, J.; Xia, L.; Shan, F.; Liu, J.; Mo, Z.; Yan, F.; Ding, Z.; Yang, Q.; Song, B.; et al. Dual-sampling attention network for diagnosis of COVID-19 from community acquired pneumonia. IEEE Trans. Med. Imaging 2020, 39, 2595–2605. [Google Scholar] [CrossRef]
  5. Kang, H.; Xia, L.; Yan, F.; Wan, Z.; Shi, F.; Yuan, H.; Jiang, H.; Wu, D.; Sui, H.; Zhang, C.; et al. Diagnosis of coronavirus disease 2019 (COVID-19) with structured latent multi-view representation learning. IEEE Trans. Med. Imaging 2020, 39, 2606–2614. [Google Scholar] [CrossRef]
  6. Wang, J.; Bao, Y.; Wen, Y.; Lu, H.; Luo, H.; Xiang, Y.; Li, X.; Liu, C.; Qian, D. Prior-attention residual learning for more discriminative COVID-19 screening in CT images. IEEE Trans. Med. Imaging 2020, 39, 2572–2583. [Google Scholar] [CrossRef]
  7. Xu, G.; Li, H.; Liu, S.; Yang, K.; Lin, X. Verifynet: Secure and verifiable federated learning. IEEE Trans. Inf. Secur. 2020, 15, 911–926. [Google Scholar] [CrossRef]
  8. Lu, Y.; Huang, X.; Dai, Y.; Maharjan, S.; Zhang, Y. Blockchain and federated learning for privacy-preserved data sharing in industrial IoT. IEEE Trans. Ind. Inform. 2019, 16, 4177–4186. [Google Scholar] [CrossRef]
  9. Voulodimos, A.; Protopapadakis, E.; Katsamenis, I.; Doulamis, A.; Doulamis, N. Deep learning models for COVID-19 infected area segmentation in CT images. In Proceedings of the 14th PErvasive Technologies Related to Assistive Environments Conference, Corfu, Greece, 29 June–2 July 2021; pp. 404–411. [Google Scholar]
  10. Loddo, A.; Pili, F.; Di Ruberto, C. Deep learning for COVID-19 diagnosis from ct images. Appl. Sci. 2021, 11, 8227. [Google Scholar] [CrossRef]
  11. Nair, R.; Alhudhaif, A.; Koundal, D.; Doewes, R.I.; Sharma, P. Deep learning-based COVID-19 detection system using pulmonary CT scans. Turk. J. Electr. Eng. Comput. Sci. 2021, 29, 2716–2727. [Google Scholar] [CrossRef]
  12. Fusco, R.; Grassi, R.; Granata, V.; Setola, S.V.; Grassi, F.; Cozzi, D.; Pecori, B.; Izzo, F.; Petrillo, A. Artificial intelligence and COVID-19 using chest CT scan and chest X-ray images: Machine learning and deep learning approaches for diagnosis and treatment. J. Pers. Med. 2021, 11, 993. [Google Scholar] [CrossRef]
  13. Majid, A.; Khan, M.A.; Nam, Y.; Tariq, U.; Roy, S.; Mostafa, R.R.; Sakr, R.H. COVID19 classification using CT images via ensembles of deep learning models. Comput. Mater. Contin. 2021, 69, 319–337. [Google Scholar] [CrossRef]
  14. Fouladi, S.; Ebadi, M.J.; Safaei, A.A.; Bajuri, M.Y.; Ahmadian, A. Efficient deep neural networks for classification of COVID-19 based on CT images: Virtualization via software defined radio. Comput. Commun. 2021, 176, 234–248. [Google Scholar] [CrossRef]
  15. Saood, A.; Hatem, I. COVID-19 lung CT image segmentation using deep learning methods: U-Net versus SegNet. BMC Med. Imaging 2021, 21, 19. [Google Scholar] [CrossRef]
  16. Li, Z.; Zhao, S.; Chen, Y.; Luo, F.; Kang, Z.; Cai, S.; Zhao, W.; Liu, J.; Zhao, D.; Li, Y. A deep-learning-based framework for severity assessment of COVID-19 with CT images. Expert Syst. Appl. 2021, 185, 115616. [Google Scholar] [CrossRef]
  17. Shokri, R.; Shmatikov, V. Privacy-preserving deep learning. In Proceedings of the 22nd ACM SIGSAC Conference On Computer and Communications Security, Denver, CO, USA, 12–16 October 2015; pp. 1310–1321. [Google Scholar]
  18. Song, F.; Qin, Z.; Xue, L.; Zhang, J.; Lin, X.; Shen, X. Privacy-preserving keyword similarity search over encrypted spatial data in cloud computing. IEEE Internet Things J. 2021, 9, 6184–6198. [Google Scholar] [CrossRef]
  19. Kong, L.; Liu, X.-Y.; Sheng, H.; Zeng, P.; Chen, G. Federated tensor mining for secure industrial internet of things. IEEE Trans. Ind. Inform. 2019, 16, 2144–2153. [Google Scholar] [CrossRef]
  20. Bonawitz, K.; Ivanov, V.; Kreuter, B.; Marcedone, A.; McMahan, H.B.; Patel, S.; Ramage, D.; Segal, A.; Seth, K. Practical secure aggregation for privacy-preserving machine learning. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, Dallas, TX, USA, 30 October–3 November 2017; pp. 1175–1191. [Google Scholar]
  21. Zhang, X.; Ji, S.; Wang, H.; Wang, T. Private, yet practical, multiparty deep learning. In Proceedings of the 2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS), Atlanta, GA, USA, 5–8 June 2017; pp. 1442–1452. [Google Scholar]
  22. Lu, Y.; Huang, X.; Zhang, K.; Maharjan, S.; Zhang, Y. Blockchain empowered asynchronous federated learning for secure data sharing in internet of vehicles. IEEE Trans. Veh. Technol. 2020, 69, 4298–4311. [Google Scholar] [CrossRef]
  23. Liu, Y.; Yu, J.J.Q.; Kang, J.; Niyato, D.; Zhang, S. Privacy-preserving traffic flow prediction: A federated learning approach. IEEE Internet Things J. 2020, 7, 7751–7763. [Google Scholar] [CrossRef]
  24. Qu, Y.; Gao, L.; Luan, T.H.; Xiang, Y.; Yu, S.; Li, B.; Zheng, G. Decentralized privacy using blockchain-enabled federated learning in fog computing. IEEE Internet Things J. 2020, 7, 5171–5183. [Google Scholar] [CrossRef]
  25. LaLonde, R.; Bagci, U. Capsules for object segmentation. arXiv 2018, arXiv:1804.04241. [Google Scholar]
  26. Sabour, S.; Frosst, N.; Hinton, G.E. Dynamic routing between capsules. In Proceedings of the Advances in Neural Information Processing Systems 30, Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
  27. Chattu, V.K. A review of artificial intelligence, big data, and blockchain technology applications in medicine and global health. Big Data Cogn. Comput. 2021, 5, 41. [Google Scholar]
  28. Kim, H.; Park, J.; Bennis, M.; Kim, S.-L. Blockchained on-device federated learning. IEEE Commun. Lett. 2019, 24, 1279–1283. [Google Scholar] [CrossRef]
  29. Martinez, I.; Francis, S.; Hafid, A.S. Record and reward federated learning contributions with blockchain. In Proceedings of the 2019 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery (CyberC), Guilin, China, 17–19 October 2019; pp. 50–57. [Google Scholar]
  30. He, C.; Mushtaq, E.; Ding, J.; Avestimehr, S. Fednas: Federated deep learning via neural architecture search. CVPR Workshop 2021, 3618. [Google Scholar]
  31. Bao, X.; Su, C.; Xiong, Y.; Huang, W.; Hu, Y. Flchain: A blockchain for auditable federated learning with trust and incentive. In Proceedings of the 2019 5th International Conference on Big Data Computing and Communications (BIGCOM), QingDao, China, 9–11 August 2019; pp. 151–159. [Google Scholar]
  32. Afshar, P.; Heidarian, S.; Naderkhani, F.; Oikonomou, A.; Plataniotis, K.N.; Mohammadi, A. Covid-caps: A capsule network-based framework for identification of COVID-19 cases from x-ray images. Pattern Recognit. Lett. 2020, 138, 638–643. [Google Scholar] [CrossRef]
  33. Umer, M.; Hong, C.S. FLchain: Federated learning via MEC-enabled blockchain network. In Proceedings of the 2019 20th Asia-Pacific Network Operations and Management Symposium (APNOMS), Matsue, Japan, 18–20 September 2019; pp. 1–4. [Google Scholar]
  34. Xuan, S.; Jin, M.; Li, X.; Yao, Z.; Yang, W.; Man, D. DAM-SE: A Blockchain-Based Optimized Solution for the Counterattacks in the Internet of Federated Learning Systems. Secur. Commun. Netw. 2021, 2021, 9965157. [Google Scholar] [CrossRef]
  35. Salman, F.M.; Abu-Naser, S.S.; Alajrami, E.; Abu-Nasser, B.S.; Alashqar, M.A.M. COVID-19 detection using artificial intelligence. Int. J. Acad. Eng. Res. 2020, 4, 18–25. [Google Scholar]
  36. Kaissis, G.; Ziller, A.; Passerat-Palmbach, J.; Ryffel, T.; Usynin, D.; Trask, A.; Lima, I.; Mancuso, J.; Jungmann, F.; Steinborn, M.-M.; et al. End-to-end privacy preserving deep learning on multi-institutional medical imaging. Nat. Mach. Intell. 2021, 3, 473–484. [Google Scholar] [CrossRef]
  37. Flores, M.; Dayan, I.; Roth, H.; Zhong, A.; Harouni, A.; Gentili, A.; Abidin, A.; Liu, A.; Costa, A.; Wood, B.; et al. Federated Learning used for predicting outcomes in SARS-COV-2 patients. Res. Sq. 2021. [Google Scholar] [CrossRef]
  38. Dou, Q.; So, T.Y.; Jiang, M.; Liu, Q.; Vardhanabhuti, V.; Kaissis, G.; Li, Z.; Si, W.; Lee, H.H.C.; Yu, K.; et al. Federated deep learning for detecting COVID-19 lung abnormalities in CT: A privacy-preserving multinational validation study. NPJ Digit. Med. 2021, 4, 60. [Google Scholar] [CrossRef]
  39. Yang, D.; Xu, Z.; Li, W.; Myronenko, A.; Roth, H.R.; Harmon, S.; Xu, S.; Turkbey, B.; Turkbey, E.; Wang, X.; et al. Federated semi-supervised learning for COVID region segmentation in chest CT using multi-national data from China, Italy, Japan. Med. Image Anal. 2021, 70, 101992. [Google Scholar] [CrossRef]
  40. Kumar, R.; Khan, A.A.; Kumar, J.; Zakria, A.; Golilarz, N.A.; Zhang, S.; Ting, Y.; Zheng, C.; Wang, W. Blockchain-federated-learning and deep learning models for COVID-19 detection using ct imaging. IEEE Sens. J. 2021, 21, 16301–16314. [Google Scholar] [CrossRef]
  41. Podder, P.; Das, S.R.; Mondal, M.R.H.; Bharati, S.; Maliha, A.; Hasan, J.; Piltan, F. LDDNet: A Deep Learning Framework for the Diagnosis of Infectious Lung Diseases. Sensors 2023, 23, 480. [Google Scholar] [CrossRef]
  42. Attallah, O. RADIC: A tool for diagnosing COVID-19 from chest CT and X-ray scans using deep learning and quad-radiomics. Chemom. Intell. Lab. Syst. 2023, 233, 104750. [Google Scholar] [CrossRef]
  43. El-Bana, S.; Al-Kabbany, A.; Sharkas, M. A multi-task pipeline with specialized streams for classification and segmentation of infection manifestations in COVID-19 scans. PeerJ Comput. Sci. 2020, 6, e303. [Google Scholar] [CrossRef]
  44. Shah, V.; Keniya, R.; Shridharani, A.; Punjabi, M.; Shah, J.; Mehendale, N. Diagnosis of COVID-19 using CT scan images and deep learning techniques. Emerg. Radiol. 2021, 28, 497–505. [Google Scholar] [CrossRef]
  45. Attallah, O.; Samir, A. A wavelet-based deep learning pipeline for efficient COVID-19 diagnosis via CT slices. Appl. Soft Comput. 2022, 128, 109401. [Google Scholar] [CrossRef]
  46. Attallah, O. A computer-aided diagnostic framework for coronavirus diagnosis using texture-based radiomics images. Digit. Health 2022, 8, 20552076221092543. [Google Scholar] [CrossRef]
  47. Zhao, W.; Jiang, W.; Qiu, X. Deep learning for COVID-19 detection based on CT images. Sci. Rep. 2021, 11, 14353. [Google Scholar] [CrossRef]
  48. Shankar, K.; Perumal, E. A novel hand-crafted with deep learning features based fusion model for COVID-19 diagnosis and classification using chest X-ray images. Complex Intell. Syst. 2021, 7, 1277–1293. [Google Scholar] [CrossRef]
  49. Jingxin, L.; Mengchao, Z.; Yuchen, L.; Jinglei, C.; Yutong, Z.; Zhong, Z.; Lihui, Z. COVID-19 lesion detection and segmentation–A deep learning method. Methods 2022, 202, 62–69. [Google Scholar] [CrossRef] [PubMed]
  50. Nguyen, D.C.; Ding, M.; Pathirana, P.N.; Seneviratne, A.; Zomaya, A.Y. Federated learning for COVID-19 detection with generative adversarial networks in edge cloud computing. IEEE Internet Things J. 2021, 9, 10257–10271. [Google Scholar] [CrossRef]
  51. Aswathy, A.L.; Vinod, C.S.S. Cascaded 3D UNet architecture for segmenting the COVID-19 infection from lung CT volume. Sci. Rep. 2022, 12, 3090. [Google Scholar]
  52. Singh, P.; Kaur, R. Implementation of the QoS framework using fog computing to predict COVID-19 disease at early stage. World J. Eng. 2021, 19, 80–89. [Google Scholar] [CrossRef]
  53. Kochgaven, C.; Mishra, P.; Shitole, S. Detecting Presence of COVID-19 with ResNet-18 using PyTorch. In Proceedings of the 2021 International Conference on Communication information and Computing Technology (ICCICT), Mumbai, India, 25–27 June 2021; pp. 1–6. [Google Scholar]
  54. Chen, X.; Shao, Y.; Xue, Z.; Yu, Z. Multi-Modal COVID-19 Discovery With Collaborative Federated Learning. In Proceedings of the 2021 IEEE 7th International Conference on Cloud Computing and Intelligent Systems (CCIS), Xi’an, China, 7–8 November 2021; pp. 52–56. [Google Scholar]
  55. Le Dinh, T.; Lee, S.-H.; Kwon, S.-G.; Kwon, K.-R. COVID-19 Chest X-ray Classification and Severity Assessment Using Convolutional and Transformer Neural Networks. Appl. Sci. 2022, 12, 4861. [Google Scholar] [CrossRef]
  56. Yang, X.; He, X.; Zhao, J.; Zhang, Y.; Zhang, S.; Xie, P. COVID-CT-dataset: A CT scan dataset about COVID-19. arXiv 2020, arXiv:2003.13865. [Google Scholar]
  57. Rahimzadeh, M.; Attar, A.; Sakhaei, S.M. A fully automated deep learning-based network for detecting COVID-19 from a new and large lung ct scan dataset. Biomed. Signal Process. Control 2021, 68, 102588. [Google Scholar] [CrossRef]
  58. Qorib, M.; Oladunni, T.; Denis, M.; Ososanya, E.; Cotae, P. COVID-19 vaccine hesitancy: Text mining, sentiment analysis and machine learning on COVID-19 vaccination twitter dataset. Expert Syst. Appl. 2023, 212, 118715. [Google Scholar] [CrossRef]
  59. Angelov, P.; Soares, E. Towards explainable deep neural networks (xDNN). Neural Netw. 2020, 130, 185–194. [Google Scholar] [CrossRef]
  60. Morozov, S.P.; Andreychenko, A.E.; Pavlov, N.A.; Vladzymyrskyy, A.V.; Ledikhova, N.V.; Gombolevskiy, V.A.; Blokhin, I.A.; Gelezhe, P.B.; Gonchar, A.V.; Chernina, V.Y. Mosmeddata: Chest ct scans with COVID-19 related findings dataset. arXiv 2020, arXiv:2005.06465. [Google Scholar]
  61. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
  62. Wang, Y.; Huang, L.; Jiang, S.; Zou, J.; Fu, H.; Yang, S.; Wang, Y. Capsule networks showed excellent performance in the classification of hERG blockers/nonblockers. Front. Pharmacol. 2020, 10, 1631. [Google Scholar] [CrossRef]
  63. Malik, H.; Farooq, M.S.; Khelifi, A.; Abid, A.; Qureshi, J.N.; Hussain, M. A comparison of transfer learning performance versus health experts in disease diagnosis from medical imaging. IEEE Access 2020, 8, 139367–139386. [Google Scholar] [CrossRef]
  64. Malik, H.; Anees, T.; Din, M.Z. BDCNet: Multi-classification convolutional neural network model for classification of COVID-19, pneumonia, and lung cancer from chest radiographs. Multimed. Syst. 2022, 28, 815–829. [Google Scholar] [CrossRef]
  65. Naeem, A.; Anees, T.; Fiza, M.; Naqvi, R.A.; Lee, S.-W. SCDNet: A Deep Learning-Based Framework for the Multiclassification of Skin Cancer Using Dermoscopy Images. Sensors 2022, 22, 5652. [Google Scholar] [CrossRef]
  66. Saeed, H.; Malik, H.; Bashir, U.; Ahmad, A.; Riaz, S.; Ilyas, M.; Bukhari, W.A.; Khan, M.I.A. Blockchain technology in healthcare: A systematic review. PLoS ONE 2022, 17, e0266462. [Google Scholar] [CrossRef]
  67. Malik, H.; Bashir, U.; Ahmad, A. Multi-classification neural network model for detection of abnormal heartbeat audio signals. Biomed. Eng. Adv. 2022, 4, 100048. [Google Scholar] [CrossRef]
  68. Naeem, A.; Anees, T.; Naqvi, R.A.; Loh, W.-K. A Comprehensive Analysis of Recent Deep and Federated-Learning-Based Methodologies for Brain Tumor Diagnosis. J. Pers. Med. 2022, 12, 275. [Google Scholar] [CrossRef]
  69. Komal, A.; Malik, H. Transfer Learning Method with Deep Residual Network for COVID-19 Diagnosis Using Chest Radiographs Images. In Proceedings of the International Conference on Information Technology and Applications; Springer: Singapore, 2022; pp. 145–159. [Google Scholar]
  70. Malik, H.; Anees, T.; Din, M.; Naeem, A. CDC_Net: Multi-classification convolutional neural network model for detection of COVID-19, pneumothorax, pneumonia, lung Cancer, and tuberculosis using chest X-rays. Multimed. Tools Appl. 2022, 1–26. [Google Scholar] [CrossRef]
  71. Liang, N.-Y.; Huang, G.-B.; Saratchandran, P.; Sundararajan, N. A fast and accurate online sequential learning algorithm for feedforward networks. IEEE Trans. Neural Netw. 2006, 17, 1411–1423. [Google Scholar] [CrossRef]
  72. Wang, B.; Huang, S.; Qiu, J.; Liu, Y.; Wang, G. Parallel online sequential extreme learning machine based on MapReduce. Neurocomputing 2015, 149, 224–232. [Google Scholar] [CrossRef]
  73. Mascarenhas, S.; Agarwal, M. A comparison between VGG16, VGG19 and ResNet50 architecture frameworks for Image Classification. In Proceedings of the 2021 International Conference on Disruptive Technologies for Multi-Disciplinary Research and Applications (CENTCON), Bengaluru, India, 19–21 November 2021; Volume 1, pp. 96–99. [Google Scholar]
  74. Sitaula, C.; Hossain, M.B. Attention-based VGG-16 model for COVID-19 chest X-ray image classification. Appl. Intell. 2021, 51, 2850–2863. [Google Scholar] [CrossRef] [PubMed]
  75. Wang, S.-H.; Zhang, Y.-D. DenseNet-201-based deep neural network with composite learning factor and precomputation for multiple sclerosis classification. ACM Trans. Multimed. Comput. Commun. Appl. 2020, 16, 1–19. [Google Scholar] [CrossRef]
  76. Deng, J. A large-scale hierarchical image database. In Proceedings of the IEEE Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009. [Google Scholar]
  77. Targ, S.; Almeida, D.; Lyman, K. Resnet in resnet: Generalizing residual architectures. arXiv 2016, arXiv:1603.08029. [Google Scholar]
  78. Maymounkov, P.; Mazieres, D. Kademlia: A peer-to-peer information system based on the xor metric. In International Workshop on Peer-to-Peer Systems; Springer: Berlin/Heidelberg, Germany, 2002; pp. 53–65. [Google Scholar]
  79. Cintrón-Arias, A.; Banks, H.T.; Capaldi, A.; Lloyd, A.L. A sensitivity matrix based methodology for inverse problem formulation. J. Inverse Ill-Posed Probl. 2009, 17, 545–564. [Google Scholar] [CrossRef]
  80. Lund, O.; Nielsen, M.; Kesmir, C.; Pedersen, A.G.; Justesen, S.; Lundegaard, C.; Worning, P.; Sylvester-Hvid, C.; Lamberth, K. Definition of supertypes for HLA molecules using clustering of specificity matrices. Immunogenetics 2004, 55, 797–810. [Google Scholar] [CrossRef]
  81. Pang, J.; Huang, Y.; Xie, Z.; Han, Q.; Cai, Z. Realizing the heterogeneity: A self-organized federated learning framework for IoT. IEEE Internet Things J. 2020, 8, 3088–3098. [Google Scholar] [CrossRef]
  82. Kuzdeuov, A.; Baimukashev, D.; Karabay, A.; Ibragimov, B.; Mirzakhmetov, A.; Nurpeiissov, M.; Lewis, M.; Varol, H.A. A network-based stochastic epidemic simulator: Controlling COVID-19 with region-specific policies. IEEE J. Biomed. Health Inform. 2020, 24, 2743–2754. [Google Scholar] [CrossRef]
  83. Wang, S.; Kang, B.; Ma, J.; Zeng, X.; Xiao, M.; Guo, J.; Cai, M.; Yang, J.; Li, Y.; Meng, X.; et al. A deep learning algorithm using CT images to screen for Corona Virus Disease (COVID-19). Eur. Radiol. 2021, 31, 6096–6104. [Google Scholar] [CrossRef]
  84. Jin, C.; Chen, W.; Cao, Y.; Xu, Z.; Tan, Z.; Zhang, X.; Deng, L.; Zheng, C.; Zhou, J.; Shi, H.; et al. Development and evaluation of an artificial intelligence system for COVID-19 diagnosis. Nat. Commun. 2020, 11, 5088. [Google Scholar] [CrossRef]
  85. Xu, X.; Jiang, X.; Ma, C.; Du, P.; Li, X.; Lv, S.; Yu, L.; Chen, Y.; Su, J.; Lang, G. A deep learning system to screen novel coronavirus disease 2019 pneumonia. Engineering 2020, 6, 1122–1129. [Google Scholar] [CrossRef]
  86. Song, Y.; Zheng, S.; Li, L.; Zhang, X.; Zhang, X.; Huang, Z.; Chen, J.; Zhao, H.; Jie, Y.; Wang, R. Deep learning enables accurate diagnosis of novel coronavirus (COVID-19) with CT images. IEEE/ACM Trans. Comput. Biol. Bioinform. 2021, 18, 2775–2780. [Google Scholar] [CrossRef]
  87. Wang, B.; Jin, S.; Yan, Q.; Xu, H.; Luo, C.; Wei, L.; Zhao, W.; Hou, X.; Ma, W.; Xu, Z.; et al. AI-assisted CT imaging analysis for COVID-19 screening: Building and deploying a medical AI system. Appl. Soft Comput. 2021, 98, 106897. [Google Scholar] [CrossRef]
  88. Durga, R.; Poovammal, E. FLED-Block: Federated Learning Ensembled Deep Learning Blockchain Model for COVID-19 Prediction. Front. Public Health 2022, 10, 892499. [Google Scholar] [CrossRef]
  89. Zheng, C.; Deng, X.; Fu, Q.; Zhou, Q.; Feng, J.; Ma, H.; Liu, W.; Wang, X. Deep learning-based detection for COVID-19 from chest CT using weak label. MedRxiv 2020. [Google Scholar] [CrossRef]
  90. Shi, F.; Xia, L.; Shan, F.; Song, B.; Wu, D.; Wei, Y.; Yuan, H.; Jiang, H.; He, Y.; Gao, Y.; et al. Large-scale screening to distinguish between COVID-19 and community-acquired pneumonia using infection size-aware classification. Phys. Med. Biol. 2021, 66, 065031. [Google Scholar] [CrossRef]
  91. Chen, J.; Wu, L.; Zhang, J.; Zhang, L.; Gong, D.; Zhao, Y.; Chen, Q.; Huang, S.; Yang, M.; Yang, X.; et al. Deep learning-based model for detecting 2019 novel coronavirus pneumonia on high-resolution computed tomography. Sci. Rep. 2020, 10, 19196. [Google Scholar] [CrossRef]
  92. Li, L.; Qin, L.; Xu, Z.; Yin, Y.; Wang, X.; Kong, B.; Bai, J.; Lu, Y.; Fang, Z.; Song, Q.; et al. Artificial intelligence distinguishes COVID-19 from community acquired pneumonia on chest CT. Radiology 2020, 200905. [Google Scholar]
  93. Naeem, A.; Anees, T.; Ahmed, K.T.; Naqvi, R.A.; Ahmad, S.; Whangbo, T. Deep learned vectors’ formation using auto-correlation, scaling, and derivations with CNN for complex and huge image retrieval. Complex Intell. Syst. 2022, 1–23. [Google Scholar] [CrossRef]
Figure 1. Samples of CT scan images of COVID-19 were collected from Datasets 1 to 5.
Figure 2. Proposed blockchain-based FL framework used for the identification of COVID-19 through CT scans.
Figure 3. Proposed CapsNet with IELMs architecture.
Figure 4. Procedure of keeping medical records on blockchain across five distinct hospitals.
Figure 5. Confusion matrices; (a) proposed model, (b) VGG-16, (c) VGG-19, (d) ResNet-101, (e) DenseNet-169, and (f) DenseNet-201.
Figure 6. ROC curves; (a) proposed model, (b) VGG-16, (c) VGG-19, (d) ResNet-101, (e) DenseNet-169, and (f) DenseNet-201.
Figure 7. AU(ROC) curves; (a) proposed model, (b) VGG-16, (c) VGG-19, (d) ResNet-101, (e) DenseNet-169, and (f) DenseNet-201.
Figure 8. Segmentation of a COVID-19-infected CT scan. The first row shows the ground truth for the COVID-19-infected regions, the second row shows the infected areas predicted by the proposed model, and the third and fourth rows show the results produced by UNET++ and UNET, respectively.
Figure 9. Results achieved by the proposed blockchain-based FL model. (a) Accuracy of the proposed model on the COVID-19 CT scan dataset for five different hospitals; (b) F1-score of the proposed model; (c) model loss on the COVID-19 CT scan dataset for five different hospitals; (d) training time on the COVID-19 CT scan dataset for five different hospitals using FL.
Figure 10. Comparison of the computational cost of the models. (a) Cost of the proposed blockchain-based FL; (b) computational cost comparison with five DL models and FL.
Table 1. Summary of related work for the identification of COVID-19 by using FL and DL models.
Ref | Models | Objective | Image type (CXR/CT), FL | Outcomes
[37] | EMR CXR AI | To identify the COVID-19-infected patients. | × | AUC = 0.92
[38] | FL + CNN | To detect lung abnormalities that occur due to COVID-19. | × | Specificity = 95.27%
[39] | FL + semi-supervised learning | To segment the COVID-19-infected regions of the lungs. | × | Accuracy = 0.71
[40] | Capsule network | To find the COVID-19 infection in the lungs. | × | Precision = 0.83
[50] | FED-GAN | To identify the COVID-19-infected individuals. | × | Precision = 96.6%
[51] | 3D UNET | To segment the infected areas of the lungs. | × | Accuracy = 98.07%
[52] | RF | To predict COVID-19 virus activity. | × | Accuracy = 81.80%
[53] | ResNet-18 | To find evidence of COVID-19's existence in the lungs. | | Accuracy = 97.78%
[54] | AlexNet | To identify COVID-19 from pneumonia. | × | Accuracy = 96.29%
[55] | Multiple CNN models | To forecast the COVID-19 illness. | ×× | Accuracy = 94.00%
Table 2. Summary of the five datasets of CT scans of COVID-19 and normal cases.
Datasets | No. of Patients | COVID-19 CT Scans | Normal CT Scans | Image Format | Type of CT Scans | Total Number of CT Scans
Dataset 1 [56] | 1000 | 35,635 | 9367 | DICOM | 2D | 45,002
Dataset 2 [57] | 89 | 28,395 | 5611 | PNG | 3D | 34,006
Dataset 3 [58] | NA | 1252 | 1230 | PNG | 2D | 2482
Dataset 4 [59] | 1110 | 125 | 254 | 3D CT scans | 3D | 379
Dataset 5 [60] | 216 | 349 | 463 | PNG | 2D | 812
Table 3. Comparison between CapsNet and traditional ANNs.
Steps | Process | CapsNet | Traditional ANN
1 | Output | Vector ($V_j$) | Scalar ($S_j$)
2 | Affine transformation | $\hat{A}_{u|h} = T_{u,h}\,A_u$ | NA
3 | Weighted sum | $S_a = \sum_i M_{i,a}\,\hat{A}_{u|h}$ | $S_a = \sum_{i=1}^{3} M_i X_i + C$
4 | Activation function | $V_K = \dfrac{\|S_K\|^2}{1 + \|S_K\|^2} \cdot \dfrac{S_K}{\|S_K\|}$ | $A = F(X_j)$
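The CapsNet activation listed in Table 3 is the capsule squashing function of Sabour et al. [26], which scales a capsule's output vector so that short vectors shrink toward zero and long vectors approach unit length, letting the vector's norm act as a probability. A minimal sketch with hypothetical input values:

```python
import math

def squash(s):
    """Capsule squashing: v = (|s|^2 / (1 + |s|^2)) * (s / |s|)."""
    norm_sq = sum(x * x for x in s)
    norm = math.sqrt(norm_sq)
    scale = norm_sq / (1.0 + norm_sq) / norm if norm > 0 else 0.0
    return [scale * x for x in s]

# A long input vector keeps its direction but is squashed to length < 1.
v = squash([3.0, 4.0])       # |s| = 5
print(math.hypot(*v))        # 25/26, approximately 0.9615
```

The squashed norm $\|V_K\| = \|S_K\|^2 / (1 + \|S_K\|^2)$ is always strictly below 1, which is what allows it to be read as the probability that the entity represented by the capsule is present.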
Table 4. Comparison of the proposed model with five DL models.
Models | Nodes | Pre-Trained | ACU | PRE | REC | SPF | F1-Score
VGG-16 | MLP | ImageNet | 91.75% | 91.42% | 91.47% | 91.29% | 91.22%
VGG-19 | MLP | ImageNet | 92.21% | 92.09% | 92.12% | 92.08% | 92.14%
ResNet-101 | MLP | ImageNet | 94.81% | 94.21% | 94.32% | 94.27% | 94.22%
DenseNet-169 | MLP | ImageNet | 94.99% | 95.01% | 94.96% | 94.97% | 95.00%
DenseNet-201 | MLP | ImageNet | 95.45% | 95.54% | 95.47% | 95.43% | 95.48%
Proposed model | Capsule network and IELMs | - | 98.99% | 98.96% | 98.97% | 98.95% | 98.96%
Table 5. Comparison of the proposed model with UNET and UNET++ for the segmentation process.
Models | ACU | PRE | REC | SPF | F1-Score | DSC
UNET | 84.15% | 84.42% | 84.99% | 84.75% | 84.22% | 84.19%
UNET++ | 86.10% | 86.99% | 86.09% | 86.99% | 86.02% | 86.01%
Proposed model | 95.81% | 95.49% | 95.32% | 95.57% | 95.51% | 95.50%
Table 6. Performance comparison of the proposed model with recent state-of-the-art methods.
Ref | Models | Disease Classification | Accuracy (%) | Data Sharing | BCT Privacy Protection
[82] | 3D-ResNet + attention | COVID-19, pneumonia, and normal | 93.30 | No | No
[83] | CNN | COVID-19 and normal | 82.90 | No | No
[40] | Capsule network | COVID-19 and normal | 98.00 | Yes | Yes
[84] | 2D-CNN | COVID-19, pneumonia, and normal | 94.41 | No | No
[85] | 2D-CNN | COVID-19, influenza, and normal | 86.70 | No | No
[86] | ResNet-50 | COVID-19, pneumonia, and normal | 86.00 | No | No
[87] | UNET++ | COVID-19, pneumonia, and normal | 97.40 | No | No
[88] | Deep learning | COVID-19 | 98.20 | Yes | Yes
[89] | UNET and CNN | COVID-19, pneumonia, and normal | 90.70 | No | No
[90] | RF | COVID-19, pneumonia (bacterial and viral), and normal | 87.90 | No | No
[91] | UNET++ and CNN | COVID-19, pneumonia, and normal | 95.20 | No | No
[92] | ResNet-50 | COVID-19, pneumonia, and normal | 90.00 | No | No
Ours | CapsNet with IELMs | COVID-19 and normal | 98.99 | Yes | Yes
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Malik, H.; Anees, T.; Naeem, A.; Naqvi, R.A.; Loh, W.-K. Blockchain-Federated and Deep-Learning-Based Ensembling of Capsule Network with Incremental Extreme Learning Machines for Classification of COVID-19 Using CT Scans. Bioengineering 2023, 10, 203. https://doi.org/10.3390/bioengineering10020203
