Article

Federated Learning Based on an Internet of Medical Things Framework for a Secure Brain Tumor Diagnostic System: A Capsule Networks Application

by
Roman Rodriguez-Aguilar
1,*,
Jose-Antonio Marmolejo-Saucedo
2,3 and
Utku Köse
4
1
Facultad de Ciencias Economicas y Empresariales, Universidad Panamericana, Ciudad de Mexico 03920, Mexico
2
Romway Machinery Manufacturing Co., Ltd., No. 16 Julong Road, Huangze Industrial Park, Shengzhou 312400, China
3
Centro de Investigación en Ciencias Fisico-Matematicas, Universidad Autonoma de Nuevo Leon, San Nicolas de los Garza 66450, Mexico
4
Faculty of Engineering and Natural Sciences, Suleyman Demirel University, Isparta 32260, Turkey
*
Author to whom correspondence should be addressed.
Mathematics 2025, 13(15), 2393; https://doi.org/10.3390/math13152393
Submission received: 12 June 2025 / Revised: 16 July 2025 / Accepted: 22 July 2025 / Published: 25 July 2025
(This article belongs to the Special Issue Innovations in Optimization and Operations Research)

Abstract

Artificial intelligence (AI) has already played a significant role in the healthcare sector, particularly in image-based medical diagnosis. Deep learning models have produced satisfactory and useful results for accurate decision-making. Among the various types of medical images, magnetic resonance imaging (MRI) is frequently utilized in deep learning applications to analyze detailed structures and organs in the body, using advanced intelligent software. However, challenges related to performance and data privacy often arise when using medical data from patients and healthcare institutions. To address these issues, new approaches have emerged, such as federated learning. This technique ensures the secure exchange of sensitive patient and institutional data. It enables machine learning or deep learning algorithms to establish a client–server relationship, whereby specific parameters are securely shared between models while maintaining the integrity of the learning tasks being executed. Federated learning has been successfully applied in medical settings, including diagnostic applications involving medical images such as MRI data. This research introduces an analytical intelligence system based on an Internet of Medical Things (IoMT) framework that employs federated learning to provide a safe and effective diagnostic solution for brain tumor identification. By utilizing specific brain MRI datasets, the model enables multiple local capsule networks (CapsNet) to achieve improved classification results. The average accuracy rate of the CapsNet model exceeds 97%. The precision rate indicates that the CapsNet model performs well in accurately predicting true classes. Additionally, the recall findings suggest that this model is effective in detecting the target classes of meningiomas, pituitary tumors, and gliomas. The integration of these components into an analytical intelligence system that supports the work of healthcare personnel is the main contribution of this work. 
Evaluations have shown that this approach is effective for diagnosing brain tumors while ensuring data privacy and security. Moreover, it represents a valuable tool for enhancing the efficiency of the medical diagnostic process.

1. Introduction

Technological advancements have significantly influenced modern life in the 21st century, largely due to the foundational effects of information and communication technologies that emerged in the previous century. Today, many of these advanced technologies build upon past developments and are impacting various fields by enhancing existing application methods. In this context, artificial intelligence (AI) plays a crucial role in creating flexible systems that are capable of learning about specific problems and providing effective solutions. Among the family of learning algorithms, machine learning (ML) models have taken a leading role in developing data-driven computational tools that assist in solving real-world challenges [1,2,3]. These tools are widely utilized across numerous fields, with healthcare being the most prominent due to its long-standing relationship with ML-based methodologies. From medical diagnosis to treatment planning, and from drug discovery to robotics in healthcare, ML has been employed successfully for various cutting-edge medical applications [4,5,6,7]. As we move further into the new century, ML has evolved into deep learning (DL), which features more advanced models. These DL models are adept at handling complex, detailed, and large amounts of data to yield improved solutions. In medical applications, DL models have demonstrated significant success in analyzing various types of data. Notably, medical image data from technologies such as MRI, CT, and X-rays have been extensively utilized to develop automated decision-making tools powered by DL [8,9,10].
Magnetic resonance imaging (MRI) is one of the most effective medical imaging technologies, as it allows for the detailed screening of structures and organs within the body [11]. Because of this capability, MRI is commonly used for routine check-ups and the diagnosis of various diseases. In particular, its role in cancer research is crucial, as different types of cancer can be diagnosed through MRI. The literature showcases numerous successful applications of deep learning (DL) models in cancer diagnosis [12,13,14]. While different types of cancer or medical images may require various pre-processing steps, deep learning has emerged as a powerful tool, enabling detailed analyses that lead to early diagnoses and often outperforming expert evaluations [15,16,17]. Research focusing on deep learning and cancer applications continues to enhance the use of MRI, facilitating the development of diverse application designs that improve accuracy and performance. However, there is still a need for advancements in decision-making tools to ensure that they are reliable when analyzing sensitive patient data. Furthermore, the increasing demand for data sharing and communication highlights the necessity for more advanced solutions that integrate both software and hardware components within the same platform. Therefore, methods for cancer diagnosis can be further expanded through the sustainable integration of multiple components with the aim of enhancing user interaction and improving decision-making strategies.
The main goal of this study is to introduce a framework for the Internet of Medical Things (IoMT) that leverages federated learning (FL) to provide a secure and effective solution for cancer diagnosis. Specifically, this framework supports IoMT to create a robust ecosystem wherein users can perform MRI diagnoses using the FL infrastructure. This infrastructure not only ensures patient data privacy but also enhances deep learning (DL) tasks. The primary focus of the study is on brain tumor diagnosis through MRI, as this area significantly contributes to the relationship between DL and cancer research. Capsule networks (CapsNet) within the DL structure were applied, as they have demonstrated superior performance compared to other models like convolutional neural networks (CNN). By employing specific datasets of brain MRI images, the IoMT-FL-oriented framework enables multiple local capsule networks (CapsNet) to produce improved classification outcomes for the globally coordinated model. Once the framework is established, this study will assess the effectiveness of brain tumor diagnosis applications through both technical evaluations and user contributions.
In relation to the overall scope of the study, the remaining content is organized as follows. The next section gives a general overview of cancer research oriented toward deep learning (DL) and federated learning (FL). It specifically addresses the open problems targeted in this study. Following that, the third section describes the DL and FL infrastructure being utilized, as well as the designed Internet of Medical Things (IoMT) framework. The fourth section is devoted to the applications and findings derived from various evaluations. The fifth section provides a comprehensive discussion of these findings, while also addressing any limitations identified during the process. Lastly, the concluding section synthesizes the key insights and outlines the proposed directions for future work.

2. Literature Review

Deep learning (DL) has a wide range of applications in cancer research. While early uses of artificial intelligence (AI) were often linked to traditional machine learning (ML) solutions, the current trend primarily focuses on utilizing DL models to enhance the outcomes of cancer diagnosis, treatment, and decision-making processes. Cancer research encompasses various areas, including pathology, screening, diagnosis, treatment, genetic studies, and precision medicine. Since these areas require extensive data analysis, DL plays a crucial role in automating tasks and improving outcomes in terms of their effectiveness and efficiency [13,14,16,18,19,20,21,22,23,24]. Research interests within these studies have targeted different types of cancer, as the processes of examination, diagnosis, and treatment can vary significantly. Additionally, there is a growing interest in integrating medical imaging with cancer research and DL to promote the integration of these fields. This literature review will specifically focus on the literature related to DL applications in cancer diagnosis.
Previous research has actively focused on developing deep learning (DL) models for diagnostic purposes. Notably, certain cancer types have received more research attention to enhance success rates through DL methods, particularly in the realm of medical imaging. For instance, Sun et al. diagnosed lung cancer using CT images, employing various DL models such as convolutional neural networks (CNN), deep belief networks (DBN), and stacked denoising autoencoders (SDAE) [25]. Their research compared DL approaches with traditional computer-aided diagnosis and found that CNNs and DBNs yielded better outcomes, achieving accuracy rates of 80% and 81%, respectively [25]. Adila et al. developed a 3D DL approach to enhance lung cancer diagnosis using CT data, yielding an area under the curve (AUC) rate of 94.4% across a total of 6716 cases [26]. Another study by Lakshmanaprabu employed deep neural networks (DNNs) for lung cancer diagnosis, reporting a sensitivity of 95.26%, a specificity of 96.2%, and an overall accuracy of 96.2% [27].
CT images have gained remarkable popularity in the field of DL applications for lung cancer diagnosis [28,29,30]. A recent review by Wang explored various DL-based cancer diagnosis applications, highlighting that most data are derived from CT, with different CNN architectures being employed (such as U-Net, VGG-16, YOLO, and 3D CNN) to achieve accuracy rates ranging from 80% to 95% [31]. Additionally, MRI has also been utilized in alternative research on DL and lung cancer diagnosis or screening [32,33,34].
Deep learning (DL) has been extensively applied in the diagnosis of breast cancer. A study by Zheng et al. applied a specifically designed convolutional neural network (CNN) model to diagnose breast cancer using various types of images, including MRI. This CNN-based model achieved an accuracy of 97.2%, a sensitivity of 98.3%, and a specificity of 96.5% [15]. Another research study conducted by Shen et al. focused on mammography images; their CNN model improved the area under the curve (AUC) to 0.91, demonstrating a sensitivity of 86.1% and a specificity of 80.1% [16]. Hu et al. employed multiparametric MRI (mpMRI), combined with transfer learning, to enhance the performance of a CNN model for breast cancer diagnosis. Their findings indicated improved outcomes thanks to the use of mpMRI and transfer learning [35]. Witowski et al. used a 3D CNN to diagnose breast cancer based on MRI data, showing that such a system can compete with the performance of radiologists [36]. Additionally, another study using a CNN to interpret breast cancer MRI data reported success rates with an accuracy of 92.8%, a sensitivity of 89.5%, and a specificity of 94.3% [37].
While substantial research efforts have concentrated on lung and breast cancer, the existing literature also encompasses a comprehensive examination of various other cancer types. Although the intensity of research studies and outcomes may vary across different cancer types, significant advancements have been made to enhance the relationship between deep learning (DL) and automated diagnosis tasks. For instance, prostate cancer has been diagnosed using MRI data and various DL models, particularly convolutional neural networks (CNNs) [38,39,40,41]. Similarly, colorectal cancer diagnoses have relied on DL models being applied to MRI data [42,43,44]. Liver cancer diagnosis has also benefited from DL applications, with MRI data, clinical information, and specific DL models like CNNs producing noteworthy diagnostic outcomes [45,46,47,48].
Brain tumors have become a crucial topic in DL literature, as DL models have led to improved diagnostic results. A review study by Işık et al. showed successful image segmentation tasks in MRI-based brain tumor analysis, owing to the efficacy of DL models [49]. Additionally, a study by Sajid et al. utilized CNNs, along with a post-processing step, on MRI brain samples, achieving notable rates such as a Dice score of 86%, a sensitivity of 86%, and a specificity of 91% [50]. Furthermore, research by Huq et al. employed two different types of CNN models, achieving accuracy rates of 97.3% and 96.5% for classifying brain tumor types [51]. Lastly, a comparative study by Paul et al. assessed fully connected neural networks and CNNs for diagnosis applications using MRI data from 191 patients; the study concluded that the CNN model achieved the better outcomes [52].
Ranjbarzadeh et al. developed the cascade convolutional neural network (C-ConvNet/C-CNN) and integrated it with a distance-wise attention (DWA) mechanism to enhance brain tumor diagnosis. Their study achieved mean Dice scores of 92% for the whole tumor, 91% for the enhancing tumor, and 87% for the tumor core [53]. In a noteworthy study, Hashemzehi et al. introduced a hybrid model combining convolutional neural networks (CNN) and neural autoregressive distribution estimation (NADE) for MRI diagnostics. The CNN-NADE model demonstrated superior performance, achieving accuracy of around 95%, sensitivity of 95%, specificity of 97%, precision of 95%, and an F1-score of 95% [54]. In a more recent study, Aamir et al. utilized the EfficientNet-B0 and ResNet50 models to diagnose brain tumors, comparing their results with various competing models, mostly other CNN architectures. Their model achieved an impressive overall accuracy of 98.95% [55]. For 2D MRI data, Ottom et al. employed the Znet, which is based on deep neural network (DNN) data augmentation, to improve brain tumor segmentation outcomes [56]. Additionally, Chattopadhyay and Maitra used CNN for 2D MRI data and achieved an accuracy rate of 99.74%, which is reported to be superior compared to other models in the literature [57]. Many other studies in the literature focus on deep learning models for brain tumor diagnosis from MRI, and several review articles provide the latest insights on advancements in deep learning applications in this field [58,59,60,61,62].
This study focuses on the diagnosis of brain tumors using MRI data, as highlighted in the existing literature. While automating and improving the diagnosis of all cancer types through deep learning (DL) is critical, brain tumors hold a unique position due to their potential to cause ongoing cognitive, psychological, and physical negative effects on patients’ lives [63]. Therefore, it is important to explore alternative research methods to enhance the existing literature. One significant challenge is the sharing of patient data, including imaging data, due to increasing concerns about data privacy and the regulations surrounding the sharing and editing of personal information. Additionally, cybersecurity issues have adversely affected computational systems, including AI-based platforms. Consequently, ensuring data security while maintaining an appropriate level of privacy for medical purposes has become a vital task.
To balance these requirements, federated learning (FL) appears to be an effective solution. This approach allows distributed clients to run machine learning (ML) and DL models locally while sharing specific training parameters to achieve a globally learned ML/DL model [64]. This makes FL a promising strategy for developing secure and privacy-preserving systems for brain tumor diagnosis. The review of the literature reveals that FL has been applied in cancer research; however, there is still considerable possibility for expanding its applications specifically for brain tumor diagnosis [65]. Recent studies have predominantly focused on image segmentation solutions. For instance, Yi et al. developed the SU-Net, an enhanced version of U-Net that incorporates FL, which reportedly outperforms similar studies in the literature [66]. Additionally, a study by Islam et al. introduced a CNN-based FL model that achieved an accuracy rate of 91.05% for brain tumor diagnosis using MRI data [67]. Similarly, Sheller et al. conducted a multi-institutional data-sharing project and developed an FL-based solution for brain tumor segmentation, achieving a Dice score of 85% [68]. In summary, the literature indicates a growing interest in FL for brain tumor research, as this approach provides advantages that can enhance the related outcomes [69].
Based on the general literature review, there is a growing interest in applying deep learning (DL) to brain tumor diagnosis. Convolutional neural networks (CNNs) are currently the most popular DL models in this area, as evidenced by numerous research studies. Although some studies have explored various DL models, there is still the potential for the use of alternative DL architectures to enhance the findings reported thus far. One such alternative is capsule networks (CapsNet), which appear to be underutilized in the context of brain tumor diagnosis. This represents an opportunity to leverage CapsNet and contribute to cancer research focused on brain tumors. Additionally, federated learning (FL) is emerging as a valuable approach in DL-based cancer research, particularly in relation to CapsNet applications. The implementation of FL offers several advantages, including the preservation of data privacy, optimization of DL performance, and enhancement of cybersecurity in ecosystems involving multi-institutional collaborations.
Furthermore, it is essential to note that the development of an FL-based ecosystem can positively impact MRI data sharing. To address these open problems and foster innovative research, establishing an FL ecosystem within an Internet of Medical Things (IoMT) architecture can be highly effective. This study aims to utilize a multi-institutional IoMT framework in which MRI data can be shared in a structured manner, facilitating DL-based brain tumor diagnosis within a CapsNet-driven federated learning framework.

3. Federated Learning-Based IoMT for Brain Tumor Analysis by Capsule Networks

This study focused on improving the diagnosis of brain tumors using federated learning (FL) and an Internet of Medical Things (IoMT) setup, allowing for secure and private communication among multiple institutions. Given the current technological landscape, which emphasizes the need to minimize risks in communication processes, an IoMT approach integrated with FL has emerged as an essential methodology for addressing this issue. The following sections provide detailed information about each component of the proposed solution in this study.

3.1. IoMT Ecosystem

The Internet of Things (IoT) has proven to be an effective technological model for enhancing data processing and communication capabilities. From smartwatches to smart cameras and robotic devices, IoT technology enables the creation of an ecosystem in which different devices can communicate with each other. This connectivity allows for AI-based data analysis and inference, leading to improved outcomes [70]. Consequently, the IoT has impacted various fields, including healthcare. This led to the development of the term “Internet of Medical Things” (IoMT), which refers specifically to smart solutions that enhance medical applications through the communication of devices. Examples of the IoMT include diagnostic applications and advanced communication platforms that support activities such as multi-institutional emergency solutions and health tourism initiatives [71,72,73]. Given the significant advantages that these technologies offer, this study focuses on an IoMT framework that facilitates data collection and processing through a cloud infrastructure. A general overview of this ecosystem is presented in Figure 1.
Considering the general structure of the Internet of Medical Things (IoMT), several mechanisms exist within the ecosystem to support brain tumor diagnosis from MRI data:
  • In this IoMT framework, radiologists, doctors, patients, and healthcare staff actively participate in data usage and communication.
  • MRI data from patients are traditionally collected using screening devices. However, these data are automatically shared with data storage and processing nodes, which operate independently across different healthcare institutions.
  • The data storage and processing nodes perform pre-processing tasks on the MRI data to produce a dataset suitable for the deep learning (DL) model known as CapsNet. Additionally, the original MRI data can be shared with radiologists, enabling them to write reports if needed.
  • In this ecosystem scenario, multiple healthcare institutions can connect to the IoMT infrastructure, allowing for the sharing of MRI data, along with any supplementary metadata or reports, among all relevant actors. This facilitates communication among all parties involved—except for patients—and contributes to the data storage and management capability of the system. To support this, a software platform may be utilized, and simple interaction methods (such as Bluetooth beacons, QR codes, or NFC components) can be employed to help individuals access data within healthcare institutions. It is important to note that the entire data-sharing mechanism is restricted with a federated learning (FL) approach.
  • Each healthcare institution hosts local nodes, which receive processed MRI data from the data storage and processing nodes. These local nodes are integrated with the FL framework, ensuring data privacy. By default, the FL framework eliminates any patient data while enhancing the overall performance of deep learning within the ecosystem. Nonetheless, original MRI data can be shared among multiple institutions if necessary; however, this falls outside the preferred FL-based IoMT approach presented in this study.
  • Local nodes execute their CapsNet-based deep learning processes and share local data with the global server node of both the IoMT ecosystem and the FL framework, which is referred to as the central node. The central node is responsible for finalizing the trained CapsNet model, which can diagnose brain tumors from newly acquired MRI data. It also shares the results with authorized radiologists, doctors, and healthcare staff.
  • The mechanisms described above are designed within a cloud infrastructure, leveraging the advantages of cloud computing, such as scalability, flexibility, load balancing, and performance optimization, in the IoMT ecosystem. It is also noteworthy that both the FL and cloud solutions contribute to the cybersecurity aspects of the developed system.
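The privacy boundary described in the mechanisms above can be illustrated in miniature: the data storage and processing node normalizes a scan and strips the patient identifier before anything enters the FL-integrated local nodes. The class, function, and field names below are hypothetical, a sketch of the idea rather than the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class MRIRecord:
    """A raw scan as produced by a screening device (hypothetical fields)."""
    patient_id: str
    pixels: list

def preprocess(record):
    """Data storage/processing node: normalize pixel intensities to [0, 1]
    and strip the patient identifier before the slice enters the FL pipeline."""
    lo, hi = min(record.pixels), max(record.pixels)
    span = (hi - lo) or 1  # guard against constant images
    normalized = [(p - lo) / span for p in record.pixels]
    return {"pixels": normalized}  # note: no patient_id leaves this node

raw = MRIRecord(patient_id="P-001", pixels=[0, 128, 255])
sample = preprocess(raw)
print("patient_id" in sample)  # → False
```

Downstream local nodes then train on such de-identified samples only, in line with the FL restriction on data sharing noted above.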

3.2. Federated Learning

Federated learning (FL) is a leading approach designed to preserve data privacy in machine learning (ML) and deep learning (DL) systems. Utilizing FL frameworks allows for the enhancement of overall learning performance, particularly when dealing with large and complex datasets. As advancements in AI-driven data analysis continue, the demand for more data in medical applications has become essential. However, this necessity brings challenges related to data management and usage. In response to the growing risks of cybersecurity breaches, the past decade has seen the introduction of various national and global regulations aimed at imposing restrictions on the use of personal data. These regulations emphasize the personal right of individuals to own their data and to control how it is processed by third parties [74]. Therefore, to strike a balance between the effective use of patient data in robust deep learning applications and the protection of data privacy rights, federated learning has proven to be an effective tool for adjusting AI learning processes within a distributed framework [74,75].
In a typical federated learning scenario, interactions occur between the clients and a central server. Local machines perform their own training tasks using their data, which then join the federated learning framework. Once the training processes are completed, the resulting local parameters or data are shared with a global machine, acting as a server. This server aggregates the parameters from all clients to construct the final trained model [76].
Considering N local clients C_1, C_2, C_3, …, C_N and local datasets (partitions) D_1, D_2, D_3, …, D_N, the local training processes are followed by aggregation on the global side. This mechanism can be defined as follows (see Equation (1)):

w_global^(t+1) = Aggregation(w_i^(t+1); i ∈ [1, …, N])        (1)

where w corresponds to the model parameters (represented as weights), t indexes the training round, Aggregation is the chosen aggregation function, and i is the index of the local client.
The first approach to federated learning (FL) was introduced by Google [77]. Since then, FL has undergone various modifications, leading to the development of different types of FL architectures appearing in the literature. Figure 2 illustrates a typical client–server architecture, which consists of a local–global model of FL.
In addition to the client–server architecture, there are other architectures that utilize peer-to-peer communication or various methods of data sharing and aggregation. Depending on how the training data are partitioned, federated learning (FL) approaches can also be structured vertically or hierarchically [74,78,79,80]. This study adopted the client–server architecture to implement standard processing mechanisms within the Internet of Medical Things (IoMT) ecosystem. Furthermore, the FL framework employed in this study follows a hierarchical flow for data processing. The chosen algorithm for model aggregation is federated averaging (FedAvg), where the parameters of each client are weighted and averaged to create a global model. In the FedAvg algorithm, the weight factor is determined by the volume of data held by each specific client [76]. Although different machine learning (ML) or deep learning (DL) models can be utilized in local nodes, this study required that all nodes use the CapsNet model for diagnostic applications.
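As a concrete illustration of FedAvg, the data-volume-weighted average can be written in a few lines of plain Python. The function name and toy weight vectors below are illustrative choices, not the authors' implementation.

```python
def fedavg(client_weights, client_sizes):
    """FedAvg aggregation: average client parameter vectors, weighting
    each client by its share of the total training data."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    global_w = [0.0] * dim
    for w, n in zip(client_weights, client_sizes):
        for j in range(dim):
            global_w[j] += (n / total) * w[j]
    return global_w

# Three simulated clients; the third holds twice as much data,
# so the global model is pulled toward its parameters.
clients = [[1.0, 1.0], [3.0, 3.0], [5.0, 5.0]]
sizes = [100, 100, 200]
print(fedavg(clients, sizes))  # → [3.5, 3.5]
```

Here the third client contributes half of the total weight (200/400), which is why the aggregate lands closer to its parameters than a plain unweighted mean would.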

3.3. CapsNet-Based Deep Learning

In deep learning applications, convolutional neural networks (CNNs) have become a popular model for analyzing image data. CNNs are a specific type of neural network architecture that includes fully connected neurons, which are fed by specially designed layers. These layers process the input data by filtering it to capture important features and can also reduce the dimensionality or flatten the data, making it easier for the fully connected neurons to work effectively [81,82]. The layers involved in these processes are known as convolutional layers, pooling layers, and flattening layers, respectively. In addition to these standard layers, there are several specialized layers designed to enhance data processing in various ways. By combining these CNN layers, image-based data can be effectively analyzed, and classification tasks can be performed with improved accuracy. However, CNN models face challenges in terms of recognizing changing conditions in images, such as rotations, deformations, and variations in texture. They also struggle to capture all features of the image data during pooling operations [83,84]. These limitations have prompted researchers to introduce capsule networks (CapsNet) as a new deep learning model.
CapsNet is a neural network model that utilizes capsule layers, which are composed of groups of neurons that accept and output vector data. Unlike the scalar values that are commonly found in traditional convolutional neural networks (CNNs), this grouping within capsules allows for a more comprehensive representation of the different properties associated with the same feature [83,85]. As a result, a CapsNet model can better recognize additional details in image data. As shown in Figure 3, a typical CapsNet model enhances the outputs of convolutional layers by processing them thoroughly through capsule layers [84]. In the literature, the features of CapsNet have been further developed with various modifications such as dynamic routing, which includes primary capsule layers and class capsule layers. These modifications help to calculate additional parameters, such as the likelihood of the existence of specific features [84].
To enhance the performance of brain tumor diagnosis in this study, CapsNet was utilized as the deep learning (DL) component. Compared to traditional convolutional neural networks (CNNs), CapsNet is a preferred choice for analyzing medical image data because it effectively captures spatial hierarchies. This allows it to represent complex anatomical components, such as tumors, and detect structures with varying appearances thanks to its capsule layers. Additionally, CapsNet has fewer parameters than typical CNNs, which reduces the risk of overfitting [83,84,85]. In this study, a federated learning (FL) framework was established by integrating CapsNet models within both local nodes and a global server. Within this framework, training data from the local nodes were aggregated at the server side (global aggregator) to create a final trained CapsNet model for the diagnosis application.
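As an illustration of the capsule idea, implementations of CapsNet with dynamic routing commonly use a "squash" nonlinearity that shrinks a capsule's output vector to a length in [0, 1) while preserving its direction, so the length can serve as the likelihood that the feature the capsule encodes is present. The stand-alone sketch below is a minimal version of that function, not the authors' implementation.

```python
import math

def squash(s, eps=1e-9):
    """Capsule 'squash' nonlinearity: rescales vector s so its length
    lies in [0, 1) while its orientation is unchanged."""
    norm_sq = sum(x * x for x in s)
    norm = math.sqrt(norm_sq) + eps  # eps guards against division by zero
    scale = norm_sq / (1.0 + norm_sq)
    return [scale * x / norm for x in s]

v = squash([3.0, 4.0])  # input vector of length 5
length = math.sqrt(sum(x * x for x in v))
print(round(length, 4))  # → 0.9615 (i.e., 25/26)
```

Long input vectors are squashed to lengths just below 1 and short ones to lengths near 0, which is what lets the network read a capsule's output length as a feature-existence probability.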

3.4. Brain Tumor Diagnosis Using MRI Data

This study focused on MRI data for diagnosing brain tumors. MRI was selected due to its advantages in tumor detection and its widespread use by radiologists and physicians. A significant challenge in this area is the need for sufficient data with labeled classes to accurately distinguish different brain tumor types. To address this, the dataset provided by Cheng et al. was utilized in this study [86]. This dataset contains T1-weighted contrast-enhanced MRI images collected from two hospitals in China: Nanfang Hospital in Guangzhou and the General Hospital of Tianjin Medical University, covering data from 2005 to 2010. It includes a total of 3064 slices, each with dimensions of 512 × 512 pixels, a pixel size of 0.49 × 0.49 mm, and a slice thickness of 6 mm, with a gap of 1 mm between the slices. These slices are associated with 233 patients and were classified by three radiologists into three categories: 708 meningiomas, 930 pituitary tumors, and 1426 gliomas [86] (see Figure 4). The dataset also contains MATLAB .mat files that include specific metadata such as tumor type, patient ID, and tumor location with coordinate values.
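Because the 3064 slices come from only 233 patients, a common precaution with such datasets is to keep all slices of a given patient in the same split so that patient-level information does not leak between training and evaluation. The sketch below uses synthetic patient IDs and counts; it illustrates the precaution, not the authors' actual protocol.

```python
# Illustrative slice metadata: (slice_id, patient_id, tumor_class).
# Patient IDs and counts are synthetic, not taken from the real dataset.
slices = [
    (0, "P1", "meningioma"), (1, "P1", "meningioma"),
    (2, "P2", "glioma"),     (3, "P2", "glioma"),
    (4, "P3", "pituitary"),  (5, "P3", "pituitary"),
]

def split_by_patient(records, test_patients):
    """Route every slice of a patient to the same split, avoiding
    patient-level leakage between training and test sets."""
    train, test = [], []
    for rec in records:
        (test if rec[1] in test_patients else train).append(rec)
    return train, test

train_set, test_set = split_by_patient(slices, {"P3"})
print(len(train_set), len(test_set))  # → 4 2
```

A slice-level random split would likely place scans of the same tumor on both sides of the boundary and inflate the measured accuracy.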
The study organized the components of the Internet of Medical Things (IoMT) ecosystem needed for an advanced setup aimed at providing an effective, sustainable, and secure diagnostic system. Specifically, the privacy of MRI data and patient information was protected by implementing a federated learning (FL) framework. The CapsNet architecture was employed to enhance overall diagnostic performance, thereby supporting radiologists, doctors, and healthcare staff through a deep learning approach. The developed IoMT and FL-based solution was applied through various applications, and several evaluation methodologies were utilized to better understand the system’s performance across multiple criteria. The next section provides details about these applications and the results that were obtained.

4. Results

The designed FL-based IoMT approach was evaluated through diagnosis applications to determine how successfully the proposed solution detected brain tumors within the structured technological setup. In this context, the following paragraphs detail the general application setup, the parameter adjustments for the CapsNet model, and the findings from several evaluation perspectives, covering both technical performance and users’ perceptions of the system.

4.1. Federated Learning-Based IoMT Setup

The system developed in this study has been examined within a simulation environment, where several healthcare institutions (local nodes) were able to share MRI data using a federated learning (FL) framework. This FL environment was established using the Python Flower library [87], and the entire simulation infrastructure was coded in Python 3.13.0.
A high-performance workstation (Lenovo, Beijing, China) running Ubuntu 22.04 LTS, equipped with an Intel Core i9-12900K CPU (featuring 16 cores and 24 threads), 64 GB of DDR4 RAM, and an Nvidia RTX 3090 GPU (with 24 GB of VRAM), was utilized for simulation flow applications. The simulation considered varying numbers of healthcare institutions: 5, 10, and 15.
Since a real Internet of Medical Things (IoMT) environment includes screening devices as well as other communicating smart devices, the process of sharing MRI data with the CapsNet model was designed to include delays of a few seconds, meaning that local nodes joined the ongoing FL process with some latency. The simulation setup was designed to ensure that the FL framework remained active, allowing new healthcare institutions to continuously contribute new MRI data so that the deep learning process could be repeated accordingly.
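The latency-affected joining of local nodes can be emulated by giving each client a random start-up delay, as in this simplified sketch (the client names and the uniform delay model are illustrative assumptions, not the paper's exact simulation code):

```python
import random

def simulate_round(num_clients, max_delay_s=3, seed=42):
    """Assign each client a random IoMT-style latency and return the
    order in which clients would join the federated round."""
    rng = random.Random(seed)
    delays = {f"client_{i}": rng.uniform(0, max_delay_s)
              for i in range(num_clients)}
    # Clients join the round in order of increasing latency
    join_order = sorted(delays, key=delays.get)
    # The round can only close once the slowest client has reported in
    round_latency = max(delays.values())
    return join_order, round_latency

order, latency = simulate_round(num_clients=5)
```

In a synchronous FL round, the slowest client's delay bounds the round duration, which is why the delay settings directly affect the training times reported later.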
The MRI dataset used in this study [86] was divided into different data groups, enabling the local nodes (healthcare institutions) to receive varying amounts of MRI data throughout the FL process. To ensure the CapsNet model was robust enough, hyperparameter optimization was performed before proceeding with the specific applications. The CapsNet model used in this system was identified through a hyperparameter optimization process conducted across the entire MRI dataset. This optimization utilized the grid search methodology, focusing on specific parameters of the designed CapsNet architecture [88]. The resulting CapsNet model, derived from this hyperparameter optimization, was further applied to the diagnosis of brain tumors within a simulated FL-IoMT environment. Table 1 presents those parameters that were considered for optimization, along with their corresponding values. These parameter values were established based on relevant publications in the field.
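The division of the dataset into differently sized shards for the local nodes can be sketched with a simple partitioning helper (a hypothetical function; the actual shard sizes used in the simulation are not specified beyond being varied):

```python
import random

def partition(indices, num_institutions, seed=0):
    """Randomly split sample indices into uneven shards, one per
    healthcare institution (local node)."""
    rng = random.Random(seed)
    shuffled = indices[:]
    rng.shuffle(shuffled)
    # Draw uneven cut points so shard sizes differ between nodes
    cuts = sorted(rng.sample(range(1, len(shuffled)), num_institutions - 1))
    shards, prev = [], 0
    for c in cuts + [len(shuffled)]:
        shards.append(shuffled[prev:c])
        prev = c
    return shards

# 3064 slice indices distributed across 5 simulated institutions
shards = partition(list(range(3064)), num_institutions=5)
```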
Before the hyperparameter optimization, candidate values for the relevant parameters—the number of filters, learning rate, dropout value, and epoch number—were determined based on preliminary experiments conducted on the CapsNet model. The grid search then established the optimal number of filters as 64 from the available candidates {32, 64, 128}. The learning rate was set at 0.001, within a logarithmic scale ranging from 1 × 10⁻⁵ to 1 × 10⁻², and the dropout rate was chosen to be 0.4 from the available options {0.2, 0.4, 0.5, 0.6, 0.8}. The optimum epoch number, determined using an early stopping criterion, was found to be 500.
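A grid search over the candidate values above can be sketched as an exhaustive loop. The `validation_score` function below is a toy placeholder for training and validating one CapsNet configuration; its only purpose is to make the loop runnable:

```python
from itertools import product

filters_grid = [32, 64, 128]
dropout_grid = [0.2, 0.4, 0.5, 0.6, 0.8]
lr_grid = [1e-5, 1e-4, 1e-3, 1e-2]   # logarithmic scale

def validation_score(filters, dropout, lr):
    """Placeholder: in the real pipeline this would train a CapsNet with
    the given hyperparameters and return validation accuracy. A toy
    scoring rule is used here so the loop is executable."""
    return -abs(filters - 64) - abs(dropout - 0.4) - abs(lr - 1e-3)

# Exhaustively evaluate every configuration and keep the best one
best = max(product(filters_grid, dropout_grid, lr_grid),
           key=lambda cfg: validation_score(*cfg))
```

With the toy scoring rule, the loop recovers (64, 0.4, 0.001), matching the values reported in the text.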
In the hyperparameter optimization process, the CapsNet models used the Adam optimizer with a default learning rate of 0.001; this same value was also applied in subsequent applications. Ultimately, the optimized CapsNet model consisted of five convolutional layers and five capsule layers, with the kernel initializer set to uniform. Additionally, the batch size was established at 10, the routing at 5, and the activation function was chosen to be ReLU.
In terms of overall architecture, the CapsNet model began with two convolutional layers, after which a batch normalization layer and an average pooling layer were added. A third convolutional layer followed the average pooling layer, itself followed by another average pooling layer. This pattern continued through the remaining two convolutional layers, and the architecture concluded with the five capsule layers.

4.2. Applications and Findings for Brain Tumor Diagnosis

In the context of brain tumor diagnosis, the MRI dataset was divided based on the varying number of healthcare institutions involved. For each institution, 65% of the MRI data samples were allocated for training, while the remaining 35% were used for testing. This separation was conducted in such a way that slices from the same patient did not appear in both the training and testing sets, thereby eliminating any potential risk of information leakage. A small code function was utilized to perform a pre-check on the dataset for this purpose. To address any potential bias resulting from imbalanced class distributions (with the following class counts: meningioma: 708, pituitary: 930, and glioma: 1426), we implemented a class-weighted loss function, along with data augmentation techniques such as rotations and flips for the minority classes. Images with standard orientation were considered, so the rotation angles were small, ranging from ±5 to ±15 degrees. Greater rotations could create medically unrealistic images. A horizontal flip was considered because left-right symmetry is common in many organs, and the appearance of a tumor does not fundamentally change if it is flipped horizontally. A vertical flip was not considered because flipping an image of this nature vertically can change the anatomy in an unrealistic way or confuse the spatial relationship. All image slices were resized to 224 × 224 pixels to improve processing efficiency. Since the tumor regions are clearly visible in the slices, no additional background removal was necessary. Similarly, intensity normalization was not performed, as the slices already possessed contrast-enhanced intensities suitable for the classification task.
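The patient-level split and the class weighting described above can be sketched as follows. The helper names are hypothetical, and the inverse-frequency weighting shown is a common choice; the paper does not specify the exact weighting formula used:

```python
import random
from collections import Counter

def patient_level_split(patient_ids, train_frac=0.65, seed=1):
    """Split slice indices so that no patient appears in both sets."""
    rng = random.Random(seed)
    patients = sorted(set(patient_ids))
    rng.shuffle(patients)
    cut = int(len(patients) * train_frac)
    train_patients = set(patients[:cut])
    train_idx = [i for i, p in enumerate(patient_ids) if p in train_patients]
    test_idx = [i for i, p in enumerate(patient_ids) if p not in train_patients]
    return train_idx, test_idx

def class_weights(labels):
    """Inverse-frequency weights: rarer classes get larger loss weights."""
    counts = Counter(labels)
    total = len(labels)
    return {c: total / (len(counts) * n) for c, n in counts.items()}

# Toy check with the paper's class counts
labels = ["meningioma"] * 708 + ["pituitary"] * 930 + ["glioma"] * 1426
w = class_weights(labels)
```

Under this scheme, the minority meningioma class receives the largest loss weight and the majority glioma class the smallest, counteracting the imbalance during training.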
The CapsNet model underwent 500 epochs of training while considering varying delay rates for the Internet of Medical Things (IoMT) effect. To evaluate the stability and sustainability of the system, the training and testing phases were executed 30 times. Given that the diagnosis problem consists of three classes, the standard classification metrics and confusion matrices were assessed using the average rates over the 30 runs. Additionally, the simulated numbers of healthcare institutions were set to 5, 10, and 15, respectively, with findings evaluated across these three scenarios. The overall training time of the federated learning (FL) framework was also analyzed in relation to the changing delay rates in each scenario. Table 2 details the training time performance, including the highest training time values observed at the same delay rates (e.g., for a 2 s delay).
In the applied scenarios, the CapsNet model size was 5.6 MB. Each communication round involved both an upload and a download step, resulting in total communication traffic of 56 MB for 5 institutions, 112 MB for 10 institutions, and 168 MB for 15 institutions. Over 100 rounds, this translated to totals of 5.6 GB, 11.2 GB, and 16.8 GB, respectively. During these scenarios, aggregation occurred in each round, with the server receiving model updates and performing averaging after each client completed its local epoch. Importantly, the bandwidth was not specifically limited (for example, via throughput throttling).
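The traffic figures follow directly from the model size, the two transfer directions per round, and the client count; the bookkeeping can be reproduced as:

```python
MODEL_MB = 5.6   # serialized CapsNet size reported in the text
DIRECTIONS = 2   # each round: download of global model + upload of update

def traffic_mb(num_clients, rounds=1):
    """Total FL communication volume in MB for the given setup."""
    return MODEL_MB * DIRECTIONS * num_clients * rounds

per_round = [traffic_mb(n) for n in (5, 10, 15)]                 # MB per round
hundred_rounds_gb = [traffic_mb(n, rounds=100) / 1000 for n in (5, 10, 15)]
```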
All clients used a fixed batch size of 10 to ensure uniformity and address the potential variance resulting from mini-batch effects in diverse datasets. However, different batch sizes could be explored in future research. To maintain simplicity, a homogeneous hardware setup was used for the institutions (clients) involved in the scenarios. Future research may include evaluations involving clients possessing varying hardware resources.
A sensitivity analysis highlighted that the marginal increase in training time for every additional second of delay becomes less significant as the number of clients increases. This indicates that as more clients participate, the system exhibits some degree of delay tolerance. The results demonstrated a nonlinear relationship between system performance and client count, with a higher number of clients contributing greater robustness and redundancy, which helped balance communication latency. In other words, increasing the number of healthcare institutions (local nodes) tends to improve resistance against time delays caused by the Internet of Medical Things (IoMT). Therefore, contributions from more healthcare institutions can help balance overall system performance, provided there are no additional technical issues.
Table 2 shows that the training time includes both local computation (training the model on each client) and communication between the server and clients. A detailed analysis of this parameter indicated that 15–20% of the training time was dedicated to communication, and this percentage increased with higher delay settings. Furthermore, greater time efficiency in larger federated setups was achievable through parallel execution, which kept the local computation time per client largely consistent. Finally, it was observed that even though the global synchronization time exhibited cumulative delays during aggregation, the training time per client decreased in scenarios involving more healthcare institutions, thanks to the increased parallelism.
Based on the best-performing scenarios under changing delay conditions, Table 3 presents the average findings for three different scenarios, considering Accuracy, Precision, Recall, and F1-Score.
The results presented are based on the test data, which consist of a total of 1072 samples. Following this, Figure 5, Figure 6 and Figure 7 illustrate the resulting confusion matrices for each scenario. Additionally, each figure is accompanied by a table that outlines the per-class metrics, including Precision (PPV), Recall, F1-Score, AUC-ROC, and AUC-PRC.
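For reference, the per-class Precision, Recall, and F1-Score can be computed directly from a confusion matrix, as in the sketch below. The matrix shown is a toy example whose totals happen to match the 1072 test samples; it is not the paper's actual result:

```python
def per_class_metrics(cm, classes):
    """cm[i][j] = count of samples with true class i predicted as class j."""
    metrics = {}
    for k, name in enumerate(classes):
        tp = cm[k][k]
        fp = sum(cm[i][k] for i in range(len(classes)) if i != k)
        fn = sum(cm[k][j] for j in range(len(classes)) if j != k)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        metrics[name] = {"precision": precision, "recall": recall, "f1": f1}
    return metrics

cm = [[240, 5, 3],     # true meningioma
      [4, 310, 6],     # true pituitary
      [2, 8, 494]]     # true glioma
m = per_class_metrics(cm, ["meningioma", "pituitary", "glioma"])
```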
As shown in Table 3 and the accompanying figures, the most favorable results were recorded with the largest number of healthcare institutions (15) participating in the Federated Learning–Internet of Medical Things (FL-IoMT) system. The presence of more local nodes enhanced the deep learning (DL) flow within the federated learning framework. Upon detailed examination, the average accuracy rate of the CapsNet model exceeded 97%. The precision rate indicates that the CapsNet model performs well in accurately predicting the true classes. Additionally, the recall findings suggest that the model is effective at detecting the target classes of meningiomas, pituitary tumors, and gliomas.
Based on the findings presented in Table 4, Table 5 and Table 6, it is evident that all tumor types were uniformly classified with high accuracy, due to the class balancing techniques employed in this study. The CapsNet model effectively mitigated the effects of class imbalance, as demonstrated by the consistent AUC-ROC and AUC-PRC values across all scenarios, with particularly significant improvements in AUC-PRC. Moreover, the performance remained stable or showed slight improvement with an increase in the number of participating healthcare institutions, especially in terms of Recall and F1-Score for the Pituitary and Meningioma classes. Additionally, both the AUC-ROC and AUC-PRC results exhibited a minor enhancement, suggesting that a greater number of participating institutions contributed to improving generalization, which is a crucial indicator of model performance.

4.3. Findings for the Comparative Evaluation

The average accuracy rates for the three-scenario flow obtained from the CapsNet model were compared with those of several deep learning (DL) competitors. These competitors were selected from alternative DL models discussed in recent studies related to brain tumor diagnosis. In general, the same application and simulation workflow used for the CapsNet model was also applied to the competitors.
The CapsNet model was compared with the following models: EfficientNet-B0, ResNet50 (as referenced in [55]), the Znet (DNN) model from Ref. [56], the CNN model from Ref. [57], the BTSCNet model from Ref. [89], and the JGate-AttResUNet model introduced in Ref. [90]. The parameters for these competing models were configured to match the values given in their original sources, with some necessary adjustments and limited hyperparameter optimization performed. This optimization considered parameters such as learning rate, number of layers, batch size, and dropout rates to ensure a bias-free comparison and compatibility with the FL-IoMT infrastructure. Additionally, the same data split used for the CapsNet model was applied to the competitors. Table 7 presents the accuracy findings for each scenario.
According to the findings presented in Table 7, the CapsNet model proposed in this study achieved the highest accuracy rates in two scenarios. The closest competitors were BTSCNet and JGate-AttResUNet. In the first scenario, involving five healthcare institutions, the JGate-AttResUNet model attained the highest accuracy; however, the CapsNet model from this study achieved a commendable accuracy rate of 0.9748. These results indicate that CapsNet is quite effective for brain tumor diagnosis, especially in comparative analyses. Furthermore, the findings suggest that a simpler deep learning (DL) model like CapsNet can successfully address diagnostic challenges using MRI data. Overall, the results from all the compared models demonstrate the advanced capabilities of deep learning in medical image analysis, with all accuracy rates exceeding 90%.

4.4. Findings for User Evaluation

In addition to the technical evaluations, the entire FL-based IoMT approach underwent a user evaluation involving 30 participants: 10 doctors, 10 radiologists, and 10 healthcare staff. These participants used the system simulation for a period of three months. At the end of the usage period, all participants were asked to provide feedback through a survey prepared with specific statements. Responses were evaluated using a five-point Likert scale (1: totally disagree, 2: disagree, 3: no opinion, 4: agree, 5: totally agree).
Doctors and radiologists were specifically queried about the system’s performance in terms of diagnosis and data communication, while healthcare staff focused primarily on its usability. All users were also asked for their general opinions regarding the potential use of the system in healthcare institutions. The survey statements and the findings obtained from them are presented in Table 8, Table 9 and Table 10, respectively.
The findings from user evaluations indicate that the overall perception of the system is positive, particularly regarding its diagnostic capabilities, performance, and usability. Users expressed excitement about the potential of deep learning (DL) in brain tumor diagnosis. Additionally, there was a notably positive attitude toward the use of federated learning (FL) for data privacy. Doctors and radiologists saw potential in the system for applications beyond just cancer diagnosis. Overall, all users believed that the system can be adapted for various healthcare applications outside the scope of diagnosis.
A total of 30 participants took part in the user evaluation. The positive findings from this evaluation were viewed from an exploratory perspective and provided valuable preliminary feedback for the developed solution. It is also important to note that the survey adhered to established usability and perception metrics. Additionally, the responses were collected anonymously to eliminate any potential social desirability bias that could arise from self-reported measures.

4.5. Threat Analysis

Federated learning (FL) does not automatically protect against all types of privacy attacks. While it aims to safeguard data privacy by keeping raw data on clients localized, adversaries may still attempt to determine whether specific instances were used in training. They can exploit methods such as membership inference attacks, gradient inversion attacks, and model poisoning. In model poisoning, clients intentionally alter their updates to degrade or distort the resulting global model [91,92,93,94,95,96].
In this study, key privacy-preserving techniques, such as secure aggregation and differential privacy, were not integrated into the FL setup. Secure aggregation ensures that the server can only access encrypted, aggregated updates from clients, while differential privacy introduces controlled noise to gradient updates, thereby reducing the risk of information leakage.
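Although differential privacy was not integrated into the present setup, its core mechanism—clipping each update's norm and adding calibrated noise—can be illustrated as follows (a simplified sketch; real DP-SGD implementations also track a privacy budget across rounds):

```python
import math
import random

def dp_sanitize(gradient, clip_norm=1.0, noise_std=0.5, seed=7):
    """Clip a gradient vector to a maximum L2 norm, then add Gaussian
    noise -- the two core steps of a differentially private update."""
    rng = random.Random(seed)
    norm = math.sqrt(sum(g * g for g in gradient))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [g * scale for g in gradient]
    return [g + rng.gauss(0.0, noise_std) for g in clipped]

# A gradient with L2 norm 5.0 is scaled down to norm 1.0 before noising
noisy = dp_sanitize([3.0, 4.0])
```

Clipping bounds any single client's influence on the aggregate, and the added noise masks the contribution of individual training samples.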
Future research will explore the implementation of these and other privacy-enhancing strategies to conduct empirical attack tests, further assessing the system’s resilience.

5. Discussion and Limitations

As a result of the research and developments described in this study, several significant outcomes were achieved. Along with the technical findings and evaluations, some general discussion points can be highlighted:
  • The modern world requires the distributed use of technology for wide-area communication, especially in healthcare. The findings emphasize the role of the Internet of Medical Things (IoMT) in supporting diagnostic needs.
  • AI components, particularly deep learning (DL) models, have shown effectiveness in cancer applications. CapsNet was applied successfully to brain tumor diagnosis in a three-class problem, outperforming other DL models. This study contributes to the literature by addressing multi-class challenges in AI and healthcare while highlighting the importance of effective data feeding and model balance.
  • Medical imaging is a prominent application of AI, and this study utilized MRI data on brain tumors, encouraging further research in this area.
  • Despite advancements in healthcare technology, the demand for more data raises privacy concerns. This study demonstrated that federated learning (FL) within an IoMT system can effectively ensure data privacy: local healthcare institutions do not need to share raw MRI data while a global CapsNet model is being trained.
  • User evaluations indicated that the developed FL-IoMT system was both usable and effective in cancer research. Healthcare professionals, including radiologists and doctors, provided positive feedback on its diagnostic potential and user-friendliness.
  • Overall, the study highlights the collaborative role of IoT and AI in enhancing healthcare applications and decision support, paving the way for a better future in healthcare.

Limitations

The current study has some limitations that should be considered for future research and alternative work plans. First, the federated learning (FL) and Internet of Medical Things (IoMT) components of the system were evaluated solely in a simulated environment. Future studies could explore real-world applications, using alternative datasets from local hospitals and adapting to changing conditions to better assess the effectiveness of the proposed approach. This would also contribute to evaluating the generalization of the designed solution. Additionally, the study employed a single deep learning (DL) model; therefore, future research may benefit from experimenting with different DL models or hybrid approaches that combine machine learning (ML) and DL techniques, as well as integrating image processing methods to analyze MRI data in greater detail.
Regarding the MRI data, this study was based on a specific dataset; thus, further studies should consider utilizing alternative MRI data. Expanding the data scope to include various imaging data related to brain tumor diagnosis could yield more comprehensive insights. Furthermore, since the current study focused exclusively on image data, future research could incorporate multi-modal data to enhance the analysis. It is also crucial to conduct further user evaluations involving a larger number of participants and control groups to gain a better understanding of the generalizability of the developed system and solution. As mentioned earlier, the study is limited in its defensive techniques against privacy attacks, highlighting the need for further research examining attack scenarios. Finally, the current study concentrated solely on cancer diagnosis. Based on user feedback, there is potential for extending the application scope to include other types of medical problems.

6. Conclusions

This study presents a Federated Learning (FL)-based Internet of Medical Things (IoMT) system designed to ensure an effective and privacy-preserving method for brain tumor diagnosis. The system incorporates IoMT components that facilitate collaboration among healthcare institutions and the involved stakeholders. The FL framework employs a client–server (local–global model) structure, which allows patient data to remain securely stored at host institutions while enabling the training of deep learning (DL) models for diagnostic applications. The DL infrastructure is supported by a capsule network (CapsNet) model, and brain tumor diagnosis is performed using MRI data. After developing the system, simulations were conducted to evaluate its performance by varying the number of participating healthcare institutions and introducing delay rates that simulated realistic IoMT communication flows. According to the evaluation results, the CapsNet model achieved successful classification results for tumor diagnosis and outperformed several competing models in the literature.
In the context of the FL framework, an increased number of healthcare institutions, even with reasonable delay rates, confirmed a balanced performance. The system aims to serve radiologists, doctors, and healthcare staff, and so user-oriented feedback was collected accordingly. Evaluations indicated that the developed system was effective in diagnosing brain tumors and had a positive impact on the target audience. Users expressed satisfaction with the privacy-preserving capabilities of the FL mechanism included in the system. Based on these results, it can be concluded that the system is suitable for implementation in healthcare institutions that utilize the IoMT infrastructure to adopt a collaborative approach with this AI-based solution. Furthermore, it is believed that the system can be further expanded to create a broad (potentially global) network of healthcare institutions and can be applied to alternative diagnostic cases.
With these positive outcomes achieved, the authors are motivated to take further steps related to the developed system and the research conducted. In the future, it is planned that real-world applications should be implemented in collaboration with healthcare institutions. These applications will provide additional insights and opportunities to enhance the system’s capabilities, leading to its official use in healthcare settings. Furthermore, future work will involve additional research on the system’s internal components. Plans include exploring different types of federated learning (FL) architectures, enhancing the mechanisms of the Internet of Medical Things (IoMT) infrastructure, and experimenting with various deep learning models, including alternate variations of CapsNet. This approach aims to identify potential areas for further improvement of the current findings. Another important aspect of future work will focus on incorporating defensive techniques against privacy attacks. Finally, the authors hope to broaden the system’s application to encompass other types of cancers and diseases.
The use of artificial intelligence tools in the healthcare sector has significant potential for improving the productivity and efficiency of healthcare services. The gradual integration of these tools and their use by healthcare personnel should be considered. This paper is a first approach to achieving this goal.

Author Contributions

Conceptualization, R.R.-A., J.-A.M.-S. and U.K.; data curation, U.K.; formal analysis, R.R.-A., J.-A.M.-S. and U.K.; methodology, R.R.-A., J.-A.M.-S. and U.K.; writing—original draft, R.R.-A. and U.K.; writing—review and editing, R.R.-A., J.-A.M.-S. and U.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original data presented in the study are openly available on FigShare at https://figshare.com/articles/dataset/brain_tumor_dataset/1512427, accessed on 23 January 2024.

Acknowledgments

We would like to thank the anonymous reviewers for their constructive comments.

Conflicts of Interest

Author Jose-Antonio Marmolejo-Saucedo was employed by the company Romway Machinery Manufacturing Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Murphy, K.P. Machine Learning: A Probabilistic Perspective; MIT Press: Cambridge, MA, USA, 2012.
  2. Bonaccorso, G. Machine Learning Algorithms; Packt Publishing Ltd.: Birmingham, UK, 2017.
  3. Sarker, I.H. Machine learning: Algorithms, real-world applications and research directions. SN Comput. Sci. 2021, 2, 160.
  4. Kononenko, I. Machine learning for medical diagnosis: History, state of the art and perspective. Artif. Intell. Med. 2001, 23, 89–109.
  5. Garg, A.; Mago, V. Role of machine learning in medical research: A survey. Comput. Sci. Rev. 2021, 40, 100370.
  6. Shailaja, K.; Seetharamulu, B.; Jabbar, M.A. Machine learning in healthcare: A review. In Proceedings of the 2018 Second International Conference on Electronics, Communication and Aerospace Technology (ICECA), Coimbatore, India, 29–31 March 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 910–914.
  7. Mohanty, S.N.; Nalinipriya, G.; Jena, O.P.; Sarkar, A. (Eds.) Machine Learning for Healthcare Applications; John Wiley & Sons: Hoboken, NJ, USA, 2021.
  8. Lundervold, A.S.; Lundervold, A. An overview of deep learning in medical imaging focusing on MRI. Z. Med. Phys. 2019, 29, 102–127.
  9. Domingues, I.; Pereira, G.; Martins, P.; Duarte, H.; Santos, J.; Abreu, P.H. Using deep learning techniques in medical imaging: A systematic review of applications on CT and PET. Artif. Intell. Rev. 2020, 53, 4093–4160.
  10. Meedeniya, D.; Kumarasinghe, H.; Kolonne, S.; Fernando, C.; De la Torre Díez, I.; Marques, G. Chest X-ray analysis empowered with deep learning: A systematic review. Appl. Soft Comput. 2022, 126, 109319.
  11. Westbrook, C.; Talbot, J. MRI in Practice; John Wiley & Sons: Hoboken, NJ, USA, 2018.
  12. Debelee, T.G.; Kebede, S.R.; Schwenker, F.; Shewarega, Z.M. Deep learning in selected cancers’ image analysis—A survey. J. Imaging 2020, 6, 121.
  13. Munir, K.; Elahi, H.; Ayub, A.; Frezza, F.; Rizzi, A. Cancer diagnosis using deep learning: A bibliographic review. Cancers 2019, 11, 1235.
  14. Boldrini, L.; Bibault, J.E.; Masciocchi, C.; Shen, Y.; Bittner, M.I. Deep learning: A review for the radiation oncologist. Front. Oncol. 2019, 9, 977.
  15. Zheng, J.; Lin, D.; Gao, Z.; Wang, S.; He, M.; Fan, J. Deep learning assisted efficient AdaBoost algorithm for breast cancer detection and early diagnosis. IEEE Access 2020, 8, 96946–96954.
  16. Shen, L.; Margolies, L.R.; Rothstein, J.H.; Fluder, E.; McBride, R.; Sieh, W. Deep learning to improve breast cancer detection on screening mammography. Sci. Rep. 2019, 9, 12495.
  17. Levine, A.B.; Schlosser, C.; Grewal, J.; Coope, R.; Jones, S.J.; Yip, S. Rise of the machines: Advances in deep learning for cancer diagnosis. Trends Cancer 2019, 5, 157–169.
  18. Kourou, K.; Exarchos, T.P.; Exarchos, K.P.; Karamouzis, M.V.; Fotiadis, D.I. Machine learning applications in cancer prognosis and prediction. Comput. Struct. Biotechnol. J. 2015, 13, 8–17.
  19. Vellido, A.; Lisboa, P.J. Neural networks and other machine learning methods in cancer research. In Proceedings of the International Work-Conference on Artificial Neural Networks, San Sebastián, Spain, 20–22 June 2007; Springer: Berlin/Heidelberg, Germany, 2007; pp. 964–971.
  20. McCarthy, J.F.; Marx, K.A.; Hoffman, P.E.; Gee, A.G.; O’Neil, P.; Ujwal, M.L.; Hotchkiss, J. Applications of machine learning and high-dimensional visualization in cancer detection, diagnosis, and management. Ann. N. Y. Acad. Sci. 2004, 1020, 239–262.
  21. Sattlecker, M.; Stone, N.; Bessant, C. Current trends in machine-learning methods applied to spectroscopic cancer diagnosis. TrAC Trends Anal. Chem. 2014, 59, 17–25.
  22. Echle, A.; Rindtorff, N.T.; Brinker, T.J.; Luedde, T.; Pearson, A.T.; Kather, J.N. Deep learning in cancer pathology: A new generation of clinical biomarkers. Br. J. Cancer 2021, 124, 686–696.
  23. Tran, K.A.; Kondrashova, O.; Bradley, A.; Williams, E.D.; Pearson, J.V.; Waddell, N. Deep learning in cancer diagnosis, prognosis and treatment selection. Genome Med. 2021, 13, 152.
  24. Zeng, Z.; Mao, C.; Vo, A.; Li, X.; Nugent, J.O.; Khan, S.A.; Clare, S.E.; Luo, Y. Deep learning for cancer type classification and driver gene identification. BMC Bioinform. 2021, 22, 491.
  25. Sun, W.; Zheng, B.; Qian, W. Computer aided lung cancer diagnosis with deep learning algorithms. In Proceedings of the Medical Imaging 2016: Computer-Aided Diagnosis, San Diego, CA, USA, 27 February–3 March 2016; SPIE: Bellingham, WA, USA, 2016; Volume 9785, pp. 241–248.
  26. Ardila, D.; Kiraly, A.P.; Bharadwaj, S.; Choi, B.; Reicher, J.J.; Peng, L.; Tse, D.; Etemadi, M.; Ye, W.; Corrado, G.; et al. End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography. Nat. Med. 2019, 25, 954–961.
  27. Lakshmanaprabu, S.K.; Mohanty, S.N.; Shankar, K.; Arunkumar, N.; Ramirez, G. Optimal deep learning model for classification of lung cancer on CT images. Future Gener. Comput. Syst. 2019, 92, 374–382.
  28. Riquelme, D.; Akhloufi, M.A. Deep learning for lung cancer nodules detection and classification in CT scans. AI 2020, 1, 28–67.
  29. Rossetto, A.M.; Zhou, W. Deep learning for categorization of lung cancer CT images. In Proceedings of the 2017 IEEE/ACM International Conference on Connected Health: Applications, Systems and Engineering Technologies (CHASE), Philadelphia, PA, USA, 17–19 July 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 272–273.
  30. Chaunzwa, T.L.; Hosny, A.; Xu, Y.; Shafer, A.; Diao, N.; Lanuti, M.; Christiani, D.C.; Mak, R.H.; Aerts, H.J. Deep learning classification of lung cancer histology using CT images. Sci. Rep. 2021, 11, 5471.
  31. Wang, L. Deep learning techniques to diagnose lung cancer. Cancers 2022, 14, 5569.
  32. Jiang, J.; Hu, Y.C.; Tyagi, N.; Zhang, P.; Rimner, A.; Deasy, J.O.; Veeraraghavan, H. Cross-modality (CT-MRI) prior augmented deep learning for robust lung tumor segmentation from small MR datasets. Med. Phys. 2019, 46, 4392–4404.
  33. Bębas, E.; Borowska, M.; Derlatka, M.; Oczeretko, E.; Hładuński, M.; Szumowski, P.; Mojsak, M. Machine-learning-based classification of the histological subtype of non-small-cell lung cancer using MRI texture analysis. Biomed. Signal Process. Control 2021, 66, 102446.
  34. Wang, C.; Rimner, A.; Hu, Y.C.; Tyagi, N.; Jiang, J.; Yorke, E.; Riyahi, S.; Mageras, G.; Deasy, J.O.; Zhang, P. Toward predicting the evolution of lung tumors during radiotherapy observed on a longitudinal MR imaging study via a deep learning algorithm. Med. Phys. 2019, 46, 4699–4707.
  35. Hu, Q.; Whitney, H.M.; Giger, M.L. A deep learning methodology for improved breast cancer diagnosis using multiparametric MRI. Sci. Rep. 2020, 10, 10536.
  36. Witowski, J.; Heacock, L.; Reig, B.; Kang, S.K.; Lewin, A.; Pysarenko, K.; Patel, S.; Samreen, N.; Rudnicki, W.; Łuczyńska, E.; et al. Improving breast cancer diagnostics with deep learning for MRI. Sci. Transl. Med. 2022, 14, eabo4802.
  37. Eskreis-Winkler, S.; Onishi, N.; Pinker, K.; Reiner, J.S.; Kaplan, J.; Morris, E.A.; Sutton, E.J. Using deep learning to improve nonsystematic viewing of breast cancer on MRI. J. Breast Imaging 2021, 3, 201–207.
  38. Schelb, P.; Kohl, S.; Radtke, J.P.; Wiesenfarth, M.; Kickingereder, P.; Bickelhaupt, S.; Kuder, T.A.; Stenzinger, A.; Hohenfellner, M.; Schlemmer, H.P.; et al. Classification of cancer at prostate MRI: Deep learning versus clinical PI-RADS assessment. Radiology 2019, 293, 607–617.
  39. Liu, S.; Zheng, H.; Feng, Y.; Li, W. Prostate cancer diagnosis using deep learning with 3D multiparametric MRI. In Proceedings of the Medical Imaging 2017: Computer-Aided Diagnosis, Orlando, FL, USA, 11–16 February 2017; SPIE: Bellingham, WA, USA, 2017; Volume 10134, pp. 581–584. [Google Scholar]
  40. Wang, X.; Yang, W.; Weinreb, J.; Han, J.; Li, Q.; Kong, X.; Yan, Y.; Ke, Z.; Luo, B.; Liu, T.; et al. Searching for prostate cancer by fully automated magnetic resonance imaging classification: Deep learning versus non-deep learning. Sci. Rep. 2017, 7, 15415. [Google Scholar] [CrossRef]
  41. De Vente, C.; Vos, P.; Hosseinzadeh, M.; Pluim, J.; Veta, M. Deep learning regression for prostate cancer detection and grading in bi-parametric MRI. IEEE Trans. Biomed. Eng. 2020, 68, 374–383. [Google Scholar] [CrossRef] [PubMed]
  42. Zhang, W.; Yin, H.; Huang, Z.; Zhao, J.; Zheng, H.; He, D.; Li, M.; Tan, W.; Tian, S.; Song, B. Development and validation of MRI-based deep learning models for prediction of microsatellite instability in rectal cancer. Cancer Med. 2021, 10, 4164–4173. [Google Scholar] [CrossRef] [PubMed]
  43. Soomro, M.H.; De Cola, G.; Conforto, S.; Schmid, M.; Giunta, G.; Guidi, E.; Neri, E.; Caruso, D.; Ciolina, M.; Laghi, A. Automatic segmentation of colorectal cancer in 3D MRI by combining deep learning and 3D level-set algorithm-a preliminary study. In Proceedings of the 2018 IEEE 4th Middle East Conference on Biomedical Engineering (MECBME), Tunis, Tunisia, 28–30 March 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 198–203. [Google Scholar]
  44. Yang, T.; Liang, N.; Li, J.; Yang, Y.; Li, Y.; Huang, Q.; Li, R.; He, X.; Zhang, H. Intelligent imaging technology in diagnosis of colorectal cancer using deep learning. IEEE Access 2019, 7, 178839–178847. [Google Scholar] [CrossRef]
  45. Zhen, S.H.; Cheng, M.; Tao, Y.B.; Wang, Y.F.; Juengpanich, S.; Jiang, Z.Y.; Jiang, Y.K.; Yan, Y.Y.; Lu, W.; Lue, J.M.; et al. Deep learning for accurate diagnosis of liver tumor based on magnetic resonance imaging and clinical data. Front. Oncol. 2020, 10, 680. [Google Scholar] [CrossRef]
  46. Velichko, Y.S.; Gennaro, N.; Karri, M.; Antalek, M.; Bagci, U. A Comprehensive Review of Deep Learning Approaches for Magnetic Resonance Imaging Liver Tumor Analysis. Adv. Clin. Radiol. 2023, 5, 1–15. [Google Scholar] [CrossRef]
  47. Trivizakis, E.; Manikis, G.C.; Nikiforaki, K.; Drevelegas, K.; Constantinides, M.; Drevelegas, A.; Marias, K. Extending 2-D convolutional neural networks to 3-D for advancing deep learning cancer classification with application to MRI liver tumor differentiation. IEEE J. Biomed. Health Inform. 2018, 23, 923–930. [Google Scholar] [CrossRef]
  48. Hamm, C.A.; Wang, C.J.; Savic, L.J.; Ferrante, M.; Schobert, I.; Schlachter, T.; Lin, M.; Duncan, J.S.; Weinreb, J.C.; Chapiro, J.; et al. Deep learning for liver tumor diagnosis part I: Development of a convolutional neural network classifier for multi-phasic MRI. Eur. Radiol. 2019, 29, 3338–3347. [Google Scholar] [CrossRef]
  49. Işın, A.; Direkoğlu, C.; Şah, M. Review of MRI-based brain tumor image segmentation using deep learning methods. Procedia Comput. Sci. 2016, 102, 317–324. [Google Scholar] [CrossRef]
  50. Sajid, S.; Hussain, S.; Sarwar, A. Brain tumor detection and segmentation in MR images using deep learning. Arab. J. Sci. Eng. 2019, 44, 9249–9261. [Google Scholar] [CrossRef]
  51. Haq, E.U.; Jianjun, H.; Li, K.; Haq, H.U.; Zhang, T. An MRI-based deep learning approach for efficient classification of brain tumors. J. Ambient Intell. Humaniz. Comput. 2021, 14, 6697–6718. [Google Scholar] [CrossRef]
  52. Paul, J.S.; Plassard, A.J.; Landman, B.A.; Fabbri, D. Deep learning for brain tumor classification. In Proceedings of the Medical Imaging 2017: Biomedical Applications in Molecular, Structural, and Functional Imaging, Orlando, FL, USA, 12–14 February 2017; SPIE: Bellingham, WA, USA, 2017; Volume 10137, pp. 253–268. [Google Scholar]
  53. Ranjbarzadeh, R.; Bagherian Kasgari, A.; Jafarzadeh Ghoushchi, S.; Anari, S.; Naseri, M.; Bendechache, M. Brain tumor segmentation based on deep learning and an attention mechanism using MRI multi-modalities brain images. Sci. Rep. 2021, 11, 10930. [Google Scholar] [CrossRef]
  54. Hashemzehi, R.; Mahdavi, S.J.S.; Kheirabadi, M.; Kamel, S.R. Detection of brain tumors from MRI images base on deep learning using hybrid model CNN and NADE. Biocybern. Biomed. Eng. 2020, 40, 1225–1232. [Google Scholar] [CrossRef]
  55. Aamir, M.; Rahman, Z.; Dayo, Z.A.; Abro, W.A.; Uddin, M.I.; Khan, I.; Imran, A.S.; Ali, Z.; Ishfaq, M.; Guan, Y.; et al. A deep learning approach for brain tumor classification using MRI images. Comput. Electr. Eng. 2022, 101, 108105. [Google Scholar] [CrossRef]
  56. Ottom, M.A.; Rahman, H.A.; Dinov, I.D. Znet: Deep learning approach for 2D MRI brain tumor segmentation. IEEE J. Transl. Eng. Health Med. 2022, 10, 1800508. [Google Scholar] [CrossRef] [PubMed]
  57. Chattopadhyay, A.; Maitra, M. MRI-based brain tumour image detection using CNN based deep learning method. Neurosci. Inform. 2022, 2, 100060. [Google Scholar] [CrossRef]
  58. Nazir, M.; Shakil, S.; Khurshid, K. Role of deep learning in brain tumor detection and classification (2015 to 2020): A review. Comput. Med. Imaging Graph. 2021, 91, 101940. [Google Scholar] [CrossRef]
  59. Jyothi, P.; Singh, A.R. Deep learning models and traditional automated techniques for brain tumor segmentation in MRI: A review. Artif. Intell. Rev. 2023, 56, 2923–2969. [Google Scholar] [CrossRef]
  60. Akinyelu, A.A.; Zaccagna, F.; Grist, J.T.; Castelli, M.; Rundo, L. Brain tumor diagnosis using machine learning, convolutional neural networks, capsule neural networks and vision transformers, applied to MRI: A survey. J. Imaging 2022, 8, 205. [Google Scholar] [CrossRef]
  61. Arabahmadi, M.; Farahbakhsh, R.; Rezazadeh, J. Deep learning for smart Healthcare—A survey on brain tumor detection from medical imaging. Sensors 2022, 22, 1960. [Google Scholar] [CrossRef]
  62. Taha, A.M.; Ariffin, D.S.B.B.; Abu-Naser, S.S. A Systematic Literature Review of Deep and Machine Learning Algorithms in Brain Tumor and Meta-Analysis. J. Theor. Appl. Inf. Technol. 2023, 101, 21–36. [Google Scholar]
  63. Dubey, A.; Agrawal, S.; Agrawal, V.; Dubey, T.; Jaiswal, A. Breast Cancer and the Brain: A Comprehensive Review of Neurological Complications. Cureus 2023, 15, e48941. [Google Scholar] [CrossRef] [PubMed]
  64. Yang, Q.; Liu, Y.; Chen, T.; Tong, Y. Federated machine learning: Concept and applications. ACM Trans. Intell. Syst. Technol. (TIST) 2019, 10, 12. [Google Scholar] [CrossRef]
  65. Chowdhury, A.; Kassem, H.; Padoy, N.; Umeton, R.; Karargyris, A. A review of medical federated learning: Applications in oncology and cancer research. In Proceedings of the International MICCAI Brainlesion Workshop, Virtual Event, 27 September 2021; Springer International Publishing: Cham, Switzerland, 2021; pp. 3–24. [Google Scholar]
  66. Yi, L.; Zhang, J.; Zhang, R.; Shi, J.; Wang, G.; Liu, X. SU-Net: An efficient encoder-decoder model of federated learning for brain tumor segmentation. In Proceedings of the International Conference on Artificial Neural Networks, Bratislava, Slovakia, 15–18 September 2020; Springer International Publishing: Cham, Switzerland, 2020; pp. 761–773. [Google Scholar]
  67. Islam, M.; Reza, M.T.; Kaosar, M.; Parvez, M.Z. Effectiveness of federated learning and CNN ensemble architectures for identifying brain tumors using MRI images. Neural Process. Lett. 2023, 55, 3779–3809. [Google Scholar] [CrossRef]
  68. Sheller, M.J.; Reina, G.A.; Edwards, B.; Martin, J.; Bakas, S. Multi-institutional deep learning modeling without sharing patient data: A feasibility study on brain tumor segmentation. In Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries: 4th International Workshop, BrainLes 2018, Held in Conjunction with MICCAI 2018, Granada, Spain, 16 September 2018; Revised Selected Papers, Part I 4; Springer International Publishing: Cham, Switzerland, 2019; pp. 92–104. [Google Scholar]
  69. Naeem, A.; Anees, T.; Naqvi, R.A.; Loh, W.K. A comprehensive analysis of recent deep and federated-learning-based methodologies for brain tumor diagnosis. J. Pers. Med. 2022, 12, 275. [Google Scholar] [CrossRef]
  70. Rose, K.; Eldridge, S.; Chapin, L. The Internet of Things: An Overview; The Internet Soc. (ISOC): Reston, VA, USA, 2015; Volume 80, pp. 1–53. [Google Scholar]
  71. Vishnu, S.; Ramson, S.J.; Jegan, R. Internet of medical things (IoMT)-An overview. In Proceedings of the 2020 5th International Conference on Devices, Circuits and Systems (ICDCS), Coimbatore, India, 5–6 March 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 101–104. [Google Scholar]
  72. Wei, K.; Zhang, L.; Guo, Y.; Jiang, X. Health monitoring based on internet of medical things: Architecture, enabling technologies, and applications. IEEE Access 2020, 8, 27468–27478. [Google Scholar] [CrossRef]
  73. Kose, G.; Colakoglu, O.E. Health Tourism with Data Mining: Present State and Future Potentials. Int. J. Inf. Commun. Technol. Digit. Converg. 2023, 8, 23–33. [Google Scholar]
  74. Li, L.; Fan, Y.; Tse, M.; Lin, K.Y. A review of applications in federated learning. Comput. Ind. Eng. 2020, 149, 106854. [Google Scholar] [CrossRef]
  75. Bonawitz, K.; Eichner, H.; Grieskamp, W.; Huba, D.; Ingerman, A.; Ivanov, V.; Kiddon, C.; Konečný, J.; Mazzocchi, S.; McMahan, B.; et al. Towards federated learning at scale: System design. Proc. Mach. Learn. Syst. 2019, 1, 374–388. [Google Scholar]
  76. Qi, P.; Chiaro, D.; Guzzo, A.; Ianni, M.; Fortino, G.; Piccialli, F. Model aggregation techniques in federated learning: A comprehensive survey. Future Gener. Comput. Syst. 2023, 150, 272–293. [Google Scholar] [CrossRef]
  77. McMahan, H.B.; Ramage, D.; Talwar, K.; Zhang, L. Learning differentially private recurrent language models. arXiv 2017, arXiv:1710.06963. [Google Scholar]
  78. Wink, T.; Nochta, Z. An approach for peer-to-peer federated learning. In Proceedings of the 2021 51st Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W), Taipei, Taiwan, 21–24 June 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 150–157. [Google Scholar]
  79. Li, H.; Li, C.; Wang, J.; Yang, A.; Ma, Z.; Zhang, Z.; Hua, D. Review on security of federated learning and its application in healthcare. Future Gener. Comput. Syst. 2023, 144, 271–290. [Google Scholar] [CrossRef]
  80. Rieke, N.; Hancox, J.; Li, W.; Milletari, F.; Roth, H.R.; Albarqouni, S.; Bakas, S.; Galtier, M.N.; Landman, B.A.; Maier-Hein, K.; et al. The future of digital health with federated learning. npj Digit. Med. 2020, 3, 119. [Google Scholar] [CrossRef] [PubMed]
  81. Albawi, S.; Mohammed, T.A.; Al-Zawi, S. Understanding of a convolutional neural network. In Proceedings of the 2017 International Conference on Engineering and Technology (ICET), Antalya, Turkey, 21–23 August 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1–6. [Google Scholar]
  82. Sewak, M.; Karim, M.R.; Pujari, P. Practical Convolutional Neural Networks: Implement Advanced Deep Learning Models Using Python; Packt Publishing Ltd.: Birmingham, UK, 2018. [Google Scholar]
  83. Patrick, M.K.; Adekoya, A.F.; Mighty, A.A.; Edward, B.Y. Capsule networks–a survey. J. King Saud Univ.-Comput. Inf. Sci. 2022, 34, 1295–1310. [Google Scholar] [CrossRef]
  84. Sabour, S.; Frosst, N.; Hinton, G.E. Dynamic routing between capsules. In Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017; Volume 30. [Google Scholar]
  85. Hinton, G.E.; Krizhevsky, A.; Wang, S.D. Transforming auto-encoders. In Artificial Neural Networks and Machine Learning–ICANN 2011, Proceedings of the 21st International Conference on Artificial Neural Networks, Espoo, Finland, 14–17 June 2011; Proceedings, Part I 21; Springer: Berlin/Heidelberg, Germany, 2011; pp. 44–51. [Google Scholar]
  86. Cheng, J.; Huang, W.; Cao, S.; Yang, R.; Yang, W.; Yun, Z.; Wang, Z.; Feng, Q. Enhanced performance of brain tumor classification via tumor region augmentation and partition. PLoS ONE 2015, 10, e0140381. [Google Scholar] [CrossRef] [PubMed]
  87. Beutel, D.J.; Topal, T.; Mathur, A.; Qiu, X.; Fernandez-Marques, J.; Gao, Y.; Sani, L.; Li, K.H.; Parcollet, T.; De Gusmão, P.P.; et al. Flower: A friendly federated learning research framework. arXiv 2020, arXiv:2007.14390. [Google Scholar]
  88. Shekar, B.H.; Dagnew, G. Grid search-based hyperparameter tuning and classification of microarray cancer data. In Proceedings of the 2019 Second International Conference on Advanced Computational and Communication Paradigms (ICACCP), Gangtok, India, 25–28 February 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–8. [Google Scholar]
  89. Chaki, J.; Woźniak, M. A deep learning based four-fold approach to classify brain MRI: BTSCNet. Biomed. Signal Process. Control 2023, 85, 104902. [Google Scholar] [CrossRef]
  90. Ruba, T.; Tamilselvi, R.; Beham, M.P. Brain tumor segmentation using JGate-AttResUNet–A novel deep learning approach. Biomed. Signal Process. Control 2023, 84, 104926. [Google Scholar] [CrossRef]
  91. Bai, L.; Hu, H.; Ye, Q.; Li, H.; Wang, L.; Xu, J. Membership Inference Attacks and Defenses in Federated Learning: A Survey. ACM Comput. Surv. 2024, 57, 89. [Google Scholar] [CrossRef]
  92. Xia, G.; Chen, J.; Yu, C.; Ma, J. Poisoning attacks in federated learning: A survey. IEEE Access 2023, 11, 10708–10722. [Google Scholar] [CrossRef]
  93. Huang, Y.; Gupta, S.; Song, Z.; Li, K.; Arora, S. Evaluating gradient inversion attacks and defenses in federated learning. Adv. Neural Inf. Process. Syst. 2021, 34, 7232–7241. [Google Scholar]
  94. Chen, W.N.; Choquette-Choo, C.A.; Kairouz, P. Communication efficient federated learning with secure aggregation and differential privacy. In Proceedings of the NeurIPS 2021 Workshop Privacy in Machine Learning, Turku, Finland, 6–14 December 2021. [Google Scholar]
  95. Boenisch, F.; Dziedzic, A.; Schuster, R.; Shamsabadi, A.S.; Shumailov, I.; Papernot, N. Reconstructing individual data points in federated learning hardened with differential privacy and secure aggregation. In Proceedings of the 2023 IEEE 8th European Symposium on Security and Privacy (EuroS & P), Delft, The Netherlands, 3–7 July 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 241–257. [Google Scholar]
  96. Adnan, M.; Kalra, S.; Cresswell, J.C.; Taylor, G.W.; Tizhoosh, H.R. Federated learning and differential privacy for medical image analysis. Sci. Rep. 2022, 12, 1953. [Google Scholar] [CrossRef]
Figure 1. Overview of the IoMT ecosystem proposed.
Figure 2. Client–server (local–global model) architecture for federated learning.
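The client–server architecture of Figure 2 trains a model locally at each institution and aggregates only the resulting parameters on the server. As an illustrative sketch only (the function name and the dataset-size weighting are our assumptions, not the authors' implementation), a FedAvg-style weighted parameter average can be written as:

```python
# Hypothetical sketch of FedAvg-style aggregation: each client's parameter
# vector is averaged on the server, weighted by its local dataset size.
def federated_average(client_weights, client_sizes):
    """Average each parameter across clients, weighted by local dataset size."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# Three hypothetical clients, each holding a 2-parameter model
clients = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [100, 100, 200]
global_model = federated_average(clients, sizes)  # -> [3.5, 4.5]
```

Only these aggregated parameters cross the network; the raw MRI data never leaves the institution, which is the privacy argument behind the framework.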
Figure 3. Typical CapsNet architecture for image-based applications [84].
Figure 4. Sample MRI scans: (a) meningioma, (b) pituitary, and (c) glioma brain tumors [86].
Figure 5. Findings of the performed brain tumor applications in Scenario 1: 5 healthcare institutions. The green color represents correct classifications, and the red color represents errors.
Figure 6. Findings for the performed brain tumor applications for Scenario 2: 10 healthcare institutions. The green color represents correct classifications, and the red color represents errors.
Figure 7. Findings for the performed brain tumor applications for Scenario 3: 15 healthcare institutions. The green color represents correct classifications, and the red color represents errors.
Table 1. Hyperparameters and the values considered in the grid search methodology.
Parameter | Values
Number of Convolutional Layers | {3, 4, 5, 7}
Number of Capsule Layers | {3, 4, 5, 6, 7}
Activation | {ReLU, tanh, softmax}
Kernel Size | {3 × 3, 4 × 4, 5 × 5, 7 × 7, 8 × 8, 9 × 9}
Kernel Initializer | {uniform, normal}
Stride | {1, 2, 3}
Routing | {1, 3, 5}
Batch Size | {1, 10, 50, 75}
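The grid in Table 1 is enumerated exhaustively during tuning. The snippet below is a hypothetical sketch of that enumeration (the parameter names are illustrative, not taken from the authors' code); it also makes the size of the search space explicit:

```python
from itertools import product

# Illustrative reproduction of the Table 1 search space; key names are
# assumptions for this sketch, not the study's actual configuration keys.
grid = {
    "conv_layers": [3, 4, 5, 7],
    "capsule_layers": [3, 4, 5, 6, 7],
    "activation": ["relu", "tanh", "softmax"],
    "kernel_size": [3, 4, 5, 7, 8, 9],
    "kernel_initializer": ["uniform", "normal"],
    "stride": [1, 2, 3],
    "routing": [1, 3, 5],
    "batch": [1, 10, 50, 75],
}

def candidate_configs(grid):
    """Yield every hyperparameter combination as a dict."""
    keys = list(grid)
    for values in product(*(grid[k] for k in keys)):
        yield dict(zip(keys, values))

# 4 * 5 * 3 * 6 * 2 * 3 * 3 * 4 = 25,920 candidate configurations
n_configs = sum(1 for _ in candidate_configs(grid))
```

Each candidate would then be trained and scored (e.g., by validation accuracy) to select the final CapsNet configuration; the 25,920-point space shows why grid search dominates tuning cost.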
Table 2. Training time performances of the FL framework for three different scenarios with changing delays in IoMT.
Scenario | Delay Rate (s) | Training Time (min)
5 healthcare institutions | 1.00 | 124
5 healthcare institutions | 1.50 | 141
5 healthcare institutions | 2.00 | 168
5 healthcare institutions | 2.50 | 196
10 healthcare institutions | 2.00 | 137
10 healthcare institutions | 2.50 | 157
10 healthcare institutions | 3.00 | 179
10 healthcare institutions | 3.50 | 212
15 healthcare institutions | 2.50 | 133
15 healthcare institutions | 3.00 | 156
15 healthcare institutions | 3.50 | 183
15 healthcare institutions | 4.00 | 207
Table 3. Average findings for the chosen classification metrics.
Scenario | Accuracy | Precision | Recall | F1-Score
5 healthcare institutions | 0.9748 | 0.9612 | 0.9688 | 0.9642
10 healthcare institutions | 0.9813 | 0.9705 | 0.9648 | 0.9671
15 healthcare institutions | 0.9888 | 0.9791 | 0.9819 | 0.9762
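The per-class precision, recall, and F1 values reported in Tables 4–6 follow from the standard confusion-matrix definitions. The sketch below computes them for an illustrative 3 × 3 confusion matrix over the three tumor classes (the counts are hypothetical, not the study's data):

```python
# Per-class precision/recall/F1 from a confusion matrix; cm[i][j] is the
# number of samples with true class i predicted as class j.
def per_class_metrics(cm):
    n = len(cm)
    out = []
    for k in range(n):
        tp = cm[k][k]                                  # true positives for class k
        fp = sum(cm[i][k] for i in range(n)) - tp      # column sum minus diagonal
        fn = sum(cm[k][j] for j in range(n)) - tp      # row sum minus diagonal
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        out.append({"precision": precision, "recall": recall, "f1": f1})
    return out

# Hypothetical counts for meningioma, pituitary, and glioma (rows = true class)
cm = [[95, 3, 2],
      [1, 97, 2],
      [2, 2, 96]]
metrics = per_class_metrics(cm)
```

Macro-averaging these per-class values over the three tumor types gives the scenario-level precision, recall, and F1 figures of the kind shown in Table 3.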
Table 4. Findings regarding the per-class metrics in Scenario 1: 5 healthcare institutions.
Class | Precision | Recall | F1-Score | AUC-ROC | AUC-PRC
Meningiomas | 0.9732 | 0.9732 | 0.9732 | 0.9812 | 0.9709
Pituitary | 0.9716 | 0.9856 | 0.9785 | 0.9910 | 0.9817
Gliomas | 0.9819 | 0.9611 | 0.9714 | 0.9808 | 0.9713
Table 5. Findings regarding the per-class metrics in Scenario 2: 10 healthcare institutions.
Class | Precision | Recall | F1-Score | AUC-ROC | AUC-PRC
Meningiomas | 0.9812 | 0.9786 | 0.9799 | 0.9731 | 0.9784
Pituitary | 0.9787 | 0.9904 | 0.9845 | 0.9750 | 0.9837
Gliomas | 0.9856 | 0.9715 | 0.9785 | 0.9733 | 0.9761
Table 6. Findings regarding the per-class metrics in Scenario 3: 15 healthcare institutions.
Class | Precision | Recall | F1-Score | AUC-ROC | AUC-PRC
Meningiomas | 0.9793 | 0.9919 | 0.9906 | 0.9771 | 0.9893
Pituitary | 0.9882 | 0.9882 | 0.9882 | 0.9760 | 0.9861
Gliomas | 0.9892 | 0.9856 | 0.9874 | 0.9763 | 0.9855
Table 7. Comparative accuracy findings for the different scenarios.
Scenario | EfficientNet-B0, ResNet50 [55] | Znet (DNN) [56] | CNN [57] | BTSCNet [89] | JGate-AttResUNet [90] | CapsNet (This Study)
5 healthcare institutions | 0.9621 | 0.9478 | 0.9312 | 0.9643 | 0.9811 | 0.9748
10 healthcare institutions | 0.9432 | 0.9317 | 0.9356 | 0.9614 | 0.9674 | 0.9813
15 healthcare institutions | 0.9682 | 0.9625 | 0.9187 | 0.9726 | 0.9772 | 0.9888
Table 8. Statements and findings for the doctors’ survey.
No | Statement | 1 | 2 | 3 | 4 | 5 | Average
1 | “I found this system useful.” | 0 | 0 | 1 | 2 | 7 | 4.6
2 | “The system is successful enough in diagnosing brain tumors.” | 0 | 0 | 2 | 1 | 7 | 4.5
3 | “The IoMT infrastructure of the system allows a collaborative application among different institutions.” | 0 | 1 | 1 | 2 | 6 | 4.3
4 | “I found the usage period boring.” | 7 | 2 | 1 | 0 | 0 | 1.4
5 | “I think the system can be expanded to alternative healthcare applications apart from diagnosis.” | 0 | 0 | 0 | 2 | 8 | 4.8
6 | “The DL in the system is effective in MRI analysis.” | 0 | 0 | 2 | 2 | 6 | 4.4
7 | “The AI solution can be used for decision support on cancer research.” | 0 | 0 | 0 | 2 | 8 | 4.8
8 | “The system is successful enough in ensuring data privacy.” | 0 | 0 | 1 | 1 | 8 | 4.7
9 | “The whole system has a good performance in communication and diagnosis flow.” | 0 | 0 | 1 | 3 | 6 | 4.5
10 | “This system can be used in real applications within healthcare institutions.” | 0 | 0 | 0 | 3 | 7 | 4.7
Responses are counts on the 5-point Likert scale (1 = strongly disagree, 5 = strongly agree).
Table 9. Statements and findings for the radiologists’ survey.
No | Statement | 1 | 2 | 3 | 4 | 5 | Average
1 | “The IoMT infrastructure of the system allows a collaborative application among different institutions.” | 0 | 0 | 2 | 1 | 7 | 4.5
2 | “The system is successful enough in diagnosing brain tumors.” | 0 | 1 | 1 | 1 | 7 | 4.4
3 | “I found this system useful.” | 0 | 0 | 0 | 1 | 9 | 4.9
4 | “I think the system can be expanded to alternative healthcare applications apart from diagnosis.” | 0 | 0 | 0 | 3 | 7 | 4.7
5 | “The DL in the system is effective in MRI analysis.” | 0 | 0 | 1 | 2 | 7 | 4.6
6 | “I think this system cannot support cancer research.” | 7 | 1 | 2 | 0 | 0 | 1.5
7 | “This system can help me with efficiency and better report writing.” | 0 | 0 | 1 | 1 | 8 | 4.7
8 | “The system is successful enough in ensuring data privacy.” | 0 | 0 | 0 | 2 | 8 | 4.8
9 | “This system can be used in real applications within healthcare institutions.” | 0 | 1 | 1 | 2 | 6 | 4.3
10 | “This system can help me in improving efficiency for screening.” | 0 | 1 | 1 | 1 | 7 | 4.4
11 | “This system can help me in better report writing.” | 0 | 1 | 0 | 1 | 8 | 4.6
Responses are counts on the 5-point Likert scale (1 = strongly disagree, 5 = strongly agree).
Table 10. Statements and findings for the healthcare staff survey.
No | Statement | 1 | 2 | 3 | 4 | 5 | Average
1 | “The IoMT infrastructure of the system allows a collaborative application among different institutions.” | 0 | 0 | 1 | 1 | 8 | 4.7
2 | “I found the usage period boring.” | 9 | 1 | 0 | 0 | 0 | 1.1
3 | “I found this system useful.” | 1 | 1 | 2 | 0 | 8 | 4.9
4 | “This system can be used in real applications within healthcare institutions.” | 0 | 0 | 1 | 1 | 8 | 4.7
5 | “I think this system cannot be used effectively by healthcare staff.” | 8 | 1 | 1 | 0 | 0 | 1.3
6 | “The system is successful enough in ensuring data privacy.” | 0 | 0 | 1 | 0 | 9 | 4.8
7 | “The whole system has a good performance in communication and diagnosis flow.” | 0 | 1 | 2 | 1 | 6 | 4.2
8 | “The system can optimize my tasks regarding patient records.” | 0 | 0 | 1 | 2 | 7 | 4.6
Responses are counts on the 5-point Likert scale (1 = strongly disagree, 5 = strongly agree).
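The per-statement averages in Tables 8–10 are weighted means over the five Likert points. A minimal sketch of that computation (the helper name is ours, for illustration only):

```python
# Weighted mean over a 1-5 Likert scale; counts[i] is the number of
# respondents who chose point i + 1.
def likert_average(counts):
    total = sum(counts)
    weighted = sum((point + 1) * c for point, c in enumerate(counts))
    return round(weighted / total, 1)

# Statement 1 of the doctors' survey (Table 8): responses 0, 0, 1, 2, 7
avg = likert_average([0, 0, 1, 2, 7])  # -> 4.6
```

Note that negatively worded statements (e.g., “I found the usage period boring.”) are expected to score low, so their averages near 1 corroborate, rather than contradict, the favorable responses to the positively worded items.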