Search Results (2,032)

Search Parameters:
Keywords = privacy issues

25 pages, 540 KB  
Article
Pricing Incentive Mechanisms for Medical Data Sharing in the Internet of Things: A Three-Party Stackelberg Game Approach
by Dexin Zhu, Zhiqiang Zhou, Huanjie Zhang, Yang Chen, Yuanbo Li and Jun Zheng
Sensors 2026, 26(2), 488; https://doi.org/10.3390/s26020488 - 12 Jan 2026
Abstract
In the context of the rapid growth of the Internet of Things and mobile health services, sensors and smart wearable devices are continuously collecting and uploading dynamic health data. Together with the long-term accumulated electronic medical records and multi-source heterogeneous clinical data from healthcare institutions, these data form the cornerstone of intelligent healthcare. In the context of medical data sharing, previous studies have mainly focused on privacy protection and secure data transmission, while relatively few have addressed the issue of incentive mechanisms. However, relying solely on technical means is insufficient to solve the problem of individuals’ willingness to share their data. To address this challenge, this paper proposes a three-party Stackelberg-game-based incentive mechanism for medical data sharing. The mechanism captures the hierarchical interactions among the intermediator, electronic device users, and data consumers. In this framework, the intermediator acts as the leader, setting the transaction fee; electronic device users serve as the first-level followers, determining the data price; and data consumers function as the second-level followers, deciding on the purchase volume. A social network externality is incorporated into the model to reflect the diffusion effect of data demand, and the optimal strategies and system equilibrium are derived through backward induction. Theoretical analysis and numerical experiments demonstrate that the proposed mechanism effectively enhances users’ willingness to share data and improves the overall system utility, achieving a balanced benefit among the cloud platform, electronic device users, and data consumers. This study not only enriches the game-theoretic modeling approaches to medical data sharing but also provides practical insights for designing incentive mechanisms in IoT-based healthcare systems.
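The leader-follower structure solved by backward induction can be illustrated with a deliberately simplified sketch. The linear-demand utilities below are hypothetical stand-ins, not the paper's actual model: each level is solved from the bottom up, with the leader anticipating both followers' best responses.

```python
# Illustrative backward induction for a three-level Stackelberg game
# (hypothetical linear-demand toy model, not the paper's formulation).

def consumer_best_response(price, a=10.0):
    """Second-level follower: purchase volume maximizing a*q - q^2/2 - price*q."""
    return max(a - price, 0.0)

def user_best_price(fee, a=10.0):
    """First-level follower: price maximizing (price - fee) * q(price)."""
    # Analytic optimum of (p - fee)(a - p) is p* = (a + fee) / 2.
    return (a + fee) / 2.0

def intermediator_best_fee(a=10.0, steps=10_000):
    """Leader: grid-search the transaction fee, anticipating both followers."""
    best_fee, best_rev = 0.0, float("-inf")
    for i in range(steps + 1):
        f = a * i / steps
        q = consumer_best_response(user_best_price(f, a), a)
        if f * q > best_rev:
            best_fee, best_rev = f, f * q
    return best_fee

fee = intermediator_best_fee()          # analytic optimum is a/2 = 5.0
price = user_best_price(fee)            # then p* = 7.5
volume = consumer_best_response(price)  # then q* = 2.5
```

The grid search recovers the analytic equilibrium of this toy model; in richer models (e.g., with the network externality mentioned in the abstract), the same bottom-up solution order applies.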
(This article belongs to the Section Biomedical Sensors)

41 pages, 3213 KB  
Review
Generative Adversarial Networks for Modeling Bio-Electric Fields in Medicine: A Review of EEG, ECG, EMG, and EOG Applications
by Jiaqi Liang, Yuheng Zhou, Kai Ma, Yifan Jia, Yadan Zhang, Bangcheng Han and Min Xiang
Bioengineering 2026, 13(1), 84; https://doi.org/10.3390/bioengineering13010084 - 12 Jan 2026
Abstract
Bio-electric fields—manifested as Electroencephalogram (EEG), Electrocardiogram (ECG), Electromyogram (EMG), and Electrooculogram (EOG)—are fundamental to modern medical diagnostics but often suffer from severe data imbalance, scarcity, and environmental noise. Generative Adversarial Networks (GANs) offer a powerful, nonlinear solution to these modeling hurdles. This review presents a comprehensive survey of GAN methodologies specifically tailored for bio-electric signal processing. We first establish a theoretical foundation by detailing GAN principles, training mechanisms, and critical structural variants, including advancements in loss functions and conditional architectures. Subsequently, the paper extensively analyzes applications ranging from high-fidelity signal synthesis and noise reduction to multi-class classification. Special attention is given to clinical anomaly detection, specifically covering epilepsy, arrhythmia, depression, and sleep apnea. Furthermore, we explore emerging applications such as modal transformation, Brain–Computer Interfaces (BCI), de-identification for privacy, and signal reconstruction. Finally, we critically evaluate the computational trade-offs and stability issues inherent in current models. The study concludes by delineating prospective research avenues, emphasizing the necessity of interdisciplinary synergy to advance personalized medicine and intelligent diagnostic systems.

64 pages, 13395 KB  
Review
Low-Cost Malware Detection with Artificial Intelligence on Single Board Computers
by Phil Steadman, Paul Jenkins, Rajkumar Singh Rathore and Chaminda Hewage
Future Internet 2026, 18(1), 46; https://doi.org/10.3390/fi18010046 - 12 Jan 2026
Abstract
The proliferation of Internet of Things (IoT) devices has significantly expanded the threat landscape for malicious software (malware), rendering traditional signature-based detection methods increasingly ineffective in coping with the volume and evolving nature of modern threats. In response, researchers are utilising artificial intelligence (AI) for more dynamic and robust malware detection. An innovative AI-based approach focuses on image classification techniques to detect malware on resource-constrained Single-Board Computers (SBCs) such as the Raspberry Pi. In this method, malware binaries are converted into 2D images that can be analysed by deep learning models such as convolutional neural networks (CNNs) and classified as benign or malicious. The results show that the image-based approach demonstrates high efficacy, with many studies reporting detection accuracy rates exceeding 98%. That said, deploying these demanding models on devices with limited processing power and memory remains a significant challenge, in particular where both computational and time complexity are involved. Overcoming this issue requires critical model optimisation strategies. Successful approaches include lightweight CNN architectures and federated learning, which can preserve privacy while models are trained on decentralised data. This hybrid workflow, in which models are trained on powerful servers before the learnt models are deployed on SBCs, is an emerging approach attracting significant interest in cybersecurity. This paper synthesises the current state of the art, performance compromises, and optimisation techniques, contributing to the understanding of how AI and image representation can enable effective low-cost malware detection on resource-constrained systems.

44 pages, 2806 KB  
Systematic Review
Machine Learning and Deep Learning in Lung Cancer Diagnostics: A Systematic Review of Technical Breakthroughs, Clinical Barriers, and Ethical Imperatives
by Mobarak Abumohsen, Enrique Costa-Montenegro, Silvia García-Méndez, Amani Yousef Owda and Majdi Owda
AI 2026, 7(1), 23; https://doi.org/10.3390/ai7010023 - 11 Jan 2026
Abstract
The use of machine learning (ML) and deep learning (DL) in lung cancer detection and classification offers great promise for improving early diagnosis and reducing death rates. Despite major advances in research, there is still a significant gap between successful model development and clinical use. This review identifies the main obstacles preventing ML/DL tools from being adopted in real healthcare settings and suggests practical advice to tackle them. Using PRISMA guidelines, we examined over 100 studies published between 2022 and 2024, focusing on technical accuracy, clinical relevance, and ethical aspects. Most of the reviewed studies rely on computed tomography (CT) imaging, reflecting its dominant role in current lung cancer screening workflows. While many models achieve high performance on public datasets (e.g., >95% sensitivity on LUNA16), they often perform poorly on real clinical data due to issues like domain shift and bias, especially toward underrepresented groups. Promising solutions include federated learning for data privacy, synthetic data to support rare subtypes, and explainable AI to build trust. We also present a checklist to guide the development of clinically applicable tools, emphasizing generalizability, transparency, and workflow integration. The study recommends early collaboration between developers, clinicians, and policymakers to ensure practical adoption. Ultimately, for ML/DL solutions to gain clinical acceptance, they must be designed with healthcare professionals from the beginning.
28 pages, 1344 KB  
Article
Tiny Language Model Guided Flow Q Learning for Optimal Task Scheduling in Fog Computing
by Bhargavi K and Sajjan G. Shiva
Algorithms 2026, 19(1), 60; https://doi.org/10.3390/a19010060 - 10 Jan 2026
Abstract
Fog computing is one of the most rapidly growing platforms, with an exponentially increasing demand for real-time data processing. The fog computing market is expected to reach USD 8358 million by the year 2030, with a compound annual growth rate of 50%. The wide adoption of fog computing by industries worldwide is due to advantages such as reduced latency, high operational efficiency, and high-level data privacy. However, the highly distributed and heterogeneous nature of fog computing leads to significant challenges related to resource management, data security, task scheduling, data privacy, and interoperability. A task typically represents a job generated by an IoT device, and an action indicates the scheduler's decision on how to execute it. Task scheduling, one of the prominent issues in fog computing, is the process of distributing tasks among fog devices so as to utilize resources effectively and meet the Quality of Service (QoS) requirements of applications. Improper task scheduling leads to increased execution time, overutilization of resources, data loss, and poor scalability. Hence, proper task scheduling is needed to make optimal task distribution decisions in a highly dynamic, resource-constrained, heterogeneous fog computing environment. Flow Q learning (FQL) is a promising form of reinforcement learning that uses a flow-matching policy for action distribution. It can handle complex forms of data and multimodal action distributions, making it suitable for the highly volatile fog computing environment. However, flow Q learning struggles to achieve a proper trade-off between the expressiveness of the flow model and the quality of the Q-function estimate, as it relies on a one-step optimization policy that introduces bias into the estimated Q-function value. The Tiny Language Model (TLM) is a significantly smaller form of a Large Language Model (LLM), designed to operate in device-constrained environments. It can provide fair and systematic guidance to disproportionately biased deep learning models. In this paper, a novel TLM-guided flow Q learning framework is designed to address the task scheduling problem in fog computing. The neutrality and fine-tuning capability of the TLM are combined with the fast generation capability of the FQL algorithm. The framework is simulated using the Simcan2Fog simulator, considering the dynamic nature of the fog environment under finite and infinite resources. The performance is found to be good with respect to parameters such as execution time, accuracy, response time, and latency. Further, the results are validated using the expected-value analysis method and found to be satisfactory.
27 pages, 4646 KB  
Article
Early Tuberculosis Detection via Privacy-Preserving, Adaptive-Weighted Deep Models
by Karim Gasmi, Afrah Alanazi, Najib Ben Aoun, Mohamed O. Altaieb, Alameen E. M. Abdalrahman, Omer Hamid, Sahar Almenwer, Lassaad Ben Ammar, Samia Yahyaoui and Manel Mrabet
Diagnostics 2026, 16(2), 204; https://doi.org/10.3390/diagnostics16020204 - 8 Jan 2026
Abstract
Background: Tuberculosis (TB) is a significant global health issue, particularly in resource-limited regions where radiological expertise is constrained. This project aims to develop a scalable deep learning system that safeguards privacy and achieves high accuracy in the early identification of tuberculosis from chest X-ray images. The objective is to implement federated learning with an adaptive-weighted ensemble optimised by a Genetic Algorithm (GA) to address the challenges of centralised training and single-model approaches. Method: We developed an ensemble learning method that combines multiple locally trained models to improve diagnostic consistency and reduce individual-model bias. An optimisation system that autonomously selected the optimal ensemble weights determined each model’s contribution to the final decision. A controlled augmentation process was employed to enhance the model’s robustness and reduce the likelihood of overfitting by introducing realistic alterations to appearance, geometry, and acquisition conditions. Federated learning facilitated collaboration among universities during training while ensuring data privacy was maintained during the establishment of the optimal ensemble at each location. In this system, only model parameters were transmitted, never patient images, enabling the secure amalgamation of global knowledge without revealing sensitive clinical information. Standard diagnostic metrics, including accuracy, sensitivity, precision, F1 score, AUC, and confusion matrices, were employed to evaluate the model’s performance. Results: The proposed federated, GA-optimized ensemble demonstrated superior performance compared with individual models and fixed-weight ensembles. The system achieved 98% accuracy, a 97% F1 score, and 0.999 AUC, indicating highly reliable discrimination between TB-positive and normal cases. Federated learning preserved model robustness across heterogeneous data sources while ensuring complete patient privacy. Conclusions: The proposed federated, GA-optimized ensemble achieves highly accurate and robust early tuberculosis detection while preserving patient privacy across distributed clinical sites. This scalable framework demonstrates strong potential for reliable AI-assisted TB screening in resource-limited healthcare settings.
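The idea of a GA selecting ensemble weights can be sketched generically. Everything below (the accuracy fitness on held-out predictions, averaging crossover, Gaussian mutation, population sizes) is an illustrative assumption, not the paper's implementation:

```python
# Toy genetic-algorithm search for ensemble weights (illustrative only;
# the paper's GA operators and fitness function are not specified here).
import random

def fitness(weights, probs, labels):
    """Accuracy of the weighted-average ensemble on a validation set.
    probs[sample][model][class] holds each model's class probabilities."""
    correct = 0
    for sample_probs, y in zip(probs, labels):
        score = [sum(w * p[c] for w, p in zip(weights, sample_probs))
                 for c in range(len(sample_probs[0]))]
        correct += score.index(max(score)) == y
    return correct / len(labels)

def ga_weights(probs, labels, n_models, pop=20, gens=30, seed=0):
    """Evolve a normalized weight vector maximizing ensemble accuracy."""
    rng = random.Random(seed)
    def norm(w):
        s = sum(w)
        return [x / s for x in w]
    population = [norm([rng.random() for _ in range(n_models)])
                  for _ in range(pop)]
    for _ in range(gens):
        # Elitist selection: keep the better half as parents.
        population.sort(key=lambda w: -fitness(w, probs, labels))
        parents = population[: pop // 2]
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            # Averaging crossover plus small Gaussian mutation.
            child = [(x + y) / 2 + rng.gauss(0, 0.05) for x, y in zip(a, b)]
            children.append(norm([max(x, 1e-6) for x in child]))
        population = parents + children
    return population[0]
```

On a toy validation set where one model is reliable and another is not, the search assigns most of the weight to the reliable model; real systems would use calibrated probabilities from the locally trained networks.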
(This article belongs to the Special Issue Tuberculosis Detection and Diagnosis 2025)

23 pages, 3127 KB  
Article
Heterogeneous Federated Learning via Knowledge Transfer Guided by Global Pseudo Proxy Data
by Wenhao Sun, Xiaoxuan Guo, Wenjun Liu and Fang Sun
Future Internet 2026, 18(1), 36; https://doi.org/10.3390/fi18010036 - 8 Jan 2026
Abstract
Federated learning with data-free knowledge distillation enables effective and privacy-preserving knowledge aggregation by employing generators to produce local pseudo samples during client-side model migration. However, in practical applications, data distributions across different institutions are often non-independent and identically distributed (Non-IID), which introduces bias in local models and consequently impedes the effective transfer of knowledge to the global model. In addition, insufficient local training can further exacerbate model bias, undermining overall performance. To address these challenges, we propose a heterogeneous federated learning framework that enhances knowledge transfer through guidance from global proxy data. Specifically, a noise filter is incorporated into the training of local generators to mitigate the negative impact of low-quality pseudo proxy samples on local knowledge distillation. Furthermore, a global generator is introduced to produce global pseudo proxy samples, which, together with local pseudo proxy data, are used to construct a cross-attention matrix. This design effectively alleviates overfitting and underfitting issues in local models caused by data heterogeneity. Extensive experiments on publicly available datasets with heterogeneous data distributions demonstrate the superiority of the proposed framework. Results show that when the Dirichlet distribution coefficient is 0.05, our method achieves an average accuracy improvement of 5.77% over popular baselines; when the coefficient is 0.1, the improvement reaches 6.54%. Even under uniformly distributed sample classes, our model still achieves an average accuracy improvement of 7.07% compared to other methods.

20 pages, 498 KB  
Article
Defending Against Backdoor Attacks in Federated Learning: A Triple-Phase Client-Side Approach
by Yunran Chen and Boyuan Li
Electronics 2026, 15(2), 273; https://doi.org/10.3390/electronics15020273 - 7 Jan 2026
Abstract
Federated learning effectively addresses the issues of data privacy and communication overhead in traditional deep learning through distributed local training. However, its open architecture is seriously threatened by backdoor attacks, where malicious clients can implant triggers to control the global model. To address these issues, this paper proposes a novel three-stage defense mechanism based on local clients. First, through text readability analysis, each client’s local data is independently evaluated to construct a global scoring distribution model, and a dynamic threshold is used to precisely locate and remove suspicious samples with low readability. Second, frequency analysis and perturbation are performed on the remaining data to identify and disrupt triggers based on specific words while preserving the basic semantics of the text. Third, n-gram distribution analysis is employed to detect and remove samples containing abnormally high-frequency word sequences, which may correspond to complex backdoor attack patterns. Experimental results show that this method can effectively defend against various backdoor attacks with minimal impact on model accuracy, providing a new solution for the security of federated learning.
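The third stage's n-gram screening admits a simple generic sketch. The document-frequency statistic and the mean-plus-k-standard-deviations cutoff below are illustrative assumptions, not the paper's exact method:

```python
# Toy sketch of n-gram-based backdoor screening: flag samples containing
# an n-gram whose document frequency is anomalously high. The cutoff
# (mean + k * std of n-gram frequencies) is an assumed rule.
from collections import Counter

def ngrams(tokens, n=2):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def flag_suspicious(samples, n=2, k=2.0):
    """Return indices of samples containing an over-frequent n-gram."""
    # Document frequency: count each n-gram once per sample.
    counts = Counter(g for s in samples for g in set(ngrams(s.split(), n)))
    if not counts:
        return []
    freqs = list(counts.values())
    mean = sum(freqs) / len(freqs)
    var = sum((f - mean) ** 2 for f in freqs) / len(freqs)
    cutoff = mean + k * var ** 0.5
    bad = {g for g, f in counts.items() if f > cutoff}
    return [i for i, s in enumerate(samples)
            if any(g in bad for g in ngrams(s.split(), n))]
```

If a trigger phrase is stamped into many poisoned samples, its n-grams dominate the frequency distribution and the carrying samples are flagged, while ordinary text (whose n-grams occur rarely) passes through.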
(This article belongs to the Special Issue Empowering IoT with AI: AIoT for Smart and Autonomous Systems)

38 pages, 4718 KB  
Review
Mass Spectrometry-Based Metabolomics in Pediatric Health and Disease
by Debasis Sahu, Andrei M. Matusa, Alicia DiBattista, Bradley L. Urquhart and Douglas D. Fraser
Metabolites 2026, 16(1), 49; https://doi.org/10.3390/metabo16010049 - 6 Jan 2026
Abstract
Mass spectrometry-based metabolomics is a valuable tool for advancing pediatric health research. Along with nuclear magnetic resonance, it enables detailed biochemical analysis from minimal sample volumes, a critical feature for pediatric diagnosis. Metabolomics supports early detection of inherited metabolic disorders, monitors metabolic changes during growth, and identifies disease markers for a range of conditions, including metabolic, neurodevelopmental, oncological, and infectious diseases. Integrating metabolomic data with genomic, proteomic (i.e., multi-omics approaches), and clinical information enables more precise and preventive care by enhancing risk assessment and informing targeted treatments. However, routine clinical use faces several challenges, including establishing age- and sex-specific reference ranges, standardizing sample collection and processing, ensuring consistency across platforms and laboratories, expanding reference databases, and improving data comparability. Ethical and regulatory issues, including informed consent, data privacy, and equitable access, also require careful consideration. Advances in high-resolution and single-cell metabolomics, artificial intelligence for data analysis, and cost-effective testing are expected to address these barriers and support broader clinical adoption. As standards and data-sharing initiatives grow, metabolomics will play an increasingly important role in pediatric diagnostics and personalized care, enabling earlier disease detection, improved treatment monitoring, and better long-term outcomes for children.

39 pages, 2066 KB  
Review
Mapping the Ischemic Continuum: Dynamic Multi-Omic Biomarker and AI for Personalized Stroke Care
by Valentin Titus Grigorean, Cosmin Pantu, Alexandru Breazu, Stefan Oprea, Octavian Munteanu, Mugurel Petrinel Radoi, Carmen Giuglea and Andrei Marin
Int. J. Mol. Sci. 2026, 27(1), 502; https://doi.org/10.3390/ijms27010502 - 3 Jan 2026
Abstract
Although there have been advancements in stroke reperfusion therapy, many individuals continue to experience only partial recovery and ongoing neurological decline after a stroke, and a primary barrier to providing precise care to stroke patients remains the inability to capture changes in molecular and cellular programs over time and across biological compartments. This review synthesizes evidence spanning the entire ischemic continuum, beginning with acute metabolic failure and excitotoxicity and ending with the immune response in the nervous system, reprogramming of glial cells, remodeling of vessels, and plasticity at the level of networks, and organizes this evidence in a temporal framework that includes three biological compartments: central nervous system tissue, cerebrospinal fluid, and peripheral blood. Additionally, this review discusses new technologies that enable researchers to discover biomarkers at extremely high resolution, including single-cell and spatial multi-omics, profiling of extracellular vesicles, proteoform-resolved proteomics, and glymphatic imaging, as well as new computational methods and machine-learning algorithms to integrate data from multiple modalities and predict trajectories of disease progression. The final section provides an overview of translationally and ethically relevant issues regarding the deployment of predictive biomarkers, such as privacy, access, equity, and fairness, and emphasizes the importance of global coordination of research efforts to ensure the clinical applicability and global equity of biomarker-based diagnostics and treatments.
(This article belongs to the Special Issue Stroke: Novel Molecular Mechanisms and Therapeutic Approaches)

13 pages, 607 KB  
Article
A Secure and Efficient Authentication Scheme with Privacy Protection for Internet of Medical Things
by Feihong Xu, Jianbo Wu, Qing An and Rahman Ziaur
Sensors 2026, 26(1), 313; https://doi.org/10.3390/s26010313 - 3 Jan 2026
Abstract
The Internet of Medical Things represents a pivotal application of Internet of Things technology in Healthcare 4.0, offering substantial practical benefits in enhancing medical quality, reducing costs, and minimizing errors. Over the years, researchers have proposed numerous privacy-preserving authentication schemes to safeguard Internet of Medical Things applications. Nevertheless, due to design shortcomings, existing solutions still encounter significant security and performance challenges, rendering them impractical for real-world use. To resolve these issues, this work introduces a novel, practical Internet of Medical Things-based smart healthcare system leveraging a pairing-free certificateless signature scheme and a hash-based message authentication code. Through formal security proofs under standard cryptographic assumptions and a performance analysis, our scheme demonstrates enhanced security while maintaining desirable computational and communication efficiency.

26 pages, 1919 KB  
Systematic Review
Federated Learning for Histopathology Image Classification: A Systematic Review
by Meriem Touhami, Mohammad Faizal Ahmad Fauzi, Zaka Ur Rehman and Sarina Mansor
Diagnostics 2026, 16(1), 137; https://doi.org/10.3390/diagnostics16010137 - 1 Jan 2026
Abstract
Background/Objective: The integration of machine learning (ML) and deep learning (DL) has significantly enhanced medical image classification, especially in histopathology, by improving diagnostic accuracy and aiding clinical decision making. However, data privacy concerns and restrictions on sharing patient data limit the development of effective DL models. Federated learning (FL) offers a promising solution by enabling collaborative model training across institutions without exposing sensitive data. This systematic review aims to comprehensively evaluate the current state of FL applications in histopathological image classification by identifying prevailing methodologies, datasets, and performance metrics and highlighting existing challenges and future research directions. Methods: Following PRISMA guidelines, 24 studies published between 2020 and 2025 were analyzed. The literature was retrieved from ScienceDirect, IEEE Xplore, MDPI, Springer Nature Link, PubMed, and arXiv. Eligible studies focused on FL-based deep learning models for histopathology image classification with reported performance metrics. Studies unrelated to FL in histopathology or lacking accessible full texts were excluded. Results: The included studies utilized 10 datasets (8 public, 1 private, and 1 unspecified) and reported classification accuracies ranging from 69.37% to 99.72%. FedAvg was the most commonly used aggregation algorithm (14 studies), followed by FedProx, FedDropoutAvg, and custom approaches. Only two studies reported their FL frameworks (Flower and OpenFL). Frequently employed model architectures included VGG, ResNet, DenseNet, and EfficientNet. Performance was typically evaluated using accuracy, precision, recall, and F1-score. Federated learning demonstrates strong potential for privacy-preserving digital pathology applications. However, key challenges remain, including communication overhead, computational demands, and inconsistent reporting standards. Addressing these issues is essential for broader clinical adoption. Conclusions: Future work should prioritize standardized evaluation protocols, efficient aggregation methods, model personalization, robustness, and interpretability, with validation across multi-institutional clinical environments to fully realize the benefits of FL in histopathological image classification.
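FedAvg, the aggregation algorithm most commonly used in the reviewed studies, simply averages client model parameters weighted by local dataset size. A minimal sketch, with plain Python lists standing in for the parameter tensors a real framework would use:

```python
# Minimal FedAvg aggregation sketch: the server averages client model
# parameters weighted by local dataset size (plain lists stand in for
# real tensors; production FL frameworks such as Flower do the same
# arithmetic over model weight arrays).

def fedavg(client_weights, client_sizes):
    """Dataset-size-weighted average of per-client parameter vectors."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[j] * n for w, n in zip(client_weights, client_sizes)) / total
        for j in range(dim)
    ]

# Two clients: one trained on 300 samples, one on 100.
global_w = fedavg([[1.0, 2.0], [5.0, 6.0]], [300, 100])
```

The larger client dominates the average, which is exactly why heterogeneous (Non-IID) data can bias the global model and why variants such as FedProx add correction terms.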
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)

47 pages, 1535 KB  
Review
Navigating the Future of Education: A Review on Telecommunications and AI Technologies, Ethical Implications, and Equity Challenges
by Christos Koukaras, Stavros G. Stavrinides, Euripides Hatzikraniotis, Maria Mitsiaki, Paraskevas Koukaras and Christos Tjortjis
Telecom 2026, 7(1), 2; https://doi.org/10.3390/telecom7010002 - 1 Jan 2026
Abstract
The increasing integration of Artificial Intelligence (AI) in education (AIEd) and its dependence on contemporary communication infrastructures (5G/6G, the Internet of Things (IoT), and Multi-Access Edge Computing (MEC)) has prompted a surge of research into applications, infrastructural dependencies, and deployment constraints. This is [...] Read more.
The increasing integration of Artificial Intelligence (AI) in education (AIEd) and its dependence on contemporary communication infrastructures (5G/6G, the Internet of Things (IoT), and Multi-Access Edge Computing (MEC)) have prompted a surge of research into applications, infrastructural dependencies, and deployment constraints, giving rise to a new paradigm termed AI-Enabled Telecommunication-Based Education (AITE). This review synthesises the recent literature (2022–2025) to examine how telecommunications and AI technologies converge to enhance educational ecosystems through adaptive learning systems, intelligent tutoring systems, AI-driven assessment, and administration. The findings reveal that low-latency, high-bandwidth connectivity, combined with edge-deployed analytics, enables real-time personalisation, continuous feedback, and scalable learning models that extend beyond traditional classrooms. Persistent critical challenges are also reported, including ethical governance, data privacy, algorithmic fairness, and uneven access to digital infrastructure, all of which affect equitable adoption. By linking pedagogical transformation with telecom performance metrics—namely, latency, Quality of Service (QoS), and device interconnectivity—this work outlines a unified cross-layer framework for AITE. The review concludes by identifying future research avenues in ethical AI deployment, resilient architectures, and inclusive policy design to ensure transparent, secure, and human-centred educational transformation. Full article

39 pages, 2012 KB  
Systematic Review
Blockchain Technology and Maritime Logistics: A Systematic Literature Review
by Christian Muñoz-Sánchez, Jesica Menéndez-García, Jorge Alejandro Silva, Jose Arturo Garza-Reyes, Dulce María Monroy-Becerril and Eugene Hakizimana
Logistics 2026, 10(1), 12; https://doi.org/10.3390/logistics10010012 - 31 Dec 2025
Abstract
Background: Blockchain has been extensively discussed as a means of enhancing transparency, traceability, and trust; however, empirical evidence on its use in maritime logistics remains fragmented. The objective is to integrate and categorize peer-reviewed publications concerning applications of blockchain in maritime logistics and related supply chain domains. Methods: A systematic literature review following PRISMA 2020 was performed in the Scopus database; after screening and eligibility assessment, a total of 78 journal articles, published mainly from September 2024, were included. Descriptive and bibliometric analyses were conducted, and VOSviewer-based bibliographic coupling was employed to visualize the thematic structure. Results: The review identifies seven research priorities for blockchain in maritime logistics: Technological Interoperability, Economic and Operational Impact, Cybersecurity and Privacy, Adoption and Scalability, Decision-Making and Trust, Environmental Sustainability, and Standardization and Regulatory Frameworks. Blockchain’s primary advantages are enhanced data integrity and visibility, whereas key challenges include interoperability, legal/regulatory uncertainty (e.g., e-doc recognition), high costs, scalability ceilings, integration with legacy systems, and data governance concerns. Conclusions: The application of blockchain in maritime logistics depends on combined technical and institutional enabling conditions; an Integrated Blockchain Adoption Framework (IBAF) is proposed, providing practical guidance on standardization, legal convergence, and hybrid governance modes. Full article

26 pages, 2440 KB  
Article
Robust Aggregation in Over-the-Air Computation with Federated Learning: A Semantic Anti-Interference Approach
by Jun-Cheng Ji, Chan-Tong Lam, Ke Wang and Benjamin K. Ng
Mathematics 2026, 14(1), 124; https://doi.org/10.3390/math14010124 - 29 Dec 2025
Abstract
Over-the-air federated learning (AirFL) enables distributed model training across wireless edge devices, preserving data privacy and minimizing bandwidth usage. However, challenges such as channel noise, non-identically distributed data, limited computational resources, and small local datasets lead to distorted model updates, inconsistent global models, increased training latency, and overfitting, all of which reduce accuracy and efficiency. To address these issues, we propose the Semantic Anti-Interference Aggregation (SAIA) framework, which integrates a semantic autoencoder, component-wise median aggregation, validation accuracy weighting, and data augmentation. First, a semantic autoencoder compresses model parameters into low-dimensional vectors, maintaining high signal quality and reducing communication costs. Second, component-wise median aggregation minimizes the impact of noise and outliers, making it well suited to AirFL: it avoids both the noise sensitivity of mean-based aggregation and the computational cost of more complex robust methods. Third, validation accuracy weighting aligns updates from non-identically distributed data to ensure consistent global models. Fourth, data augmentation doubles dataset sizes, mitigating overfitting and reducing variance. Experiments on MNIST demonstrate that SAIA achieves an accuracy of approximately 96% and a loss of 0.16, improving accuracy by 3.3% and reducing loss by 39% compared to conventional federated learning approaches. With reduced computational and communication overhead, SAIA ensures efficient training on resource-constrained IoT devices. Full article
(This article belongs to the Special Issue Federated Learning Strategies for Machine Learning)
