
Search Results (170)

Search Parameters:
Keywords = communication-efficient FL

20 pages, 2673 KB  
Article
TAFL-UWSN: A Trust-Aware Federated Learning Framework for Securing Underwater Sensor Networks
by Raja Waseem Anwar, Mohammad Abrar, Abdu Salam and Faizan Ullah
Network 2026, 6(1), 18; https://doi.org/10.3390/network6010018 - 19 Mar 2026
Viewed by 142
Abstract
Underwater Acoustic Sensor Networks (UASNs) are pivotal for environmental monitoring, surveillance, and marine data collection. However, their open and largely unattended operational settings, constrained communication capabilities, limited energy resources, and susceptibility to insider attacks make it difficult to achieve safe, secure, and efficient collaborative learning. Federated learning (FL) offers a privacy-preserving method for decentralized model training but is inherently vulnerable to Byzantine threats and malicious participants. This paper proposes TAFL-UWSN, a trust-aware FL framework for underwater sensor networks, designed to improve security, reliability, and energy efficiency in UASNs by incorporating trust evaluation directly into the FL process. The goal is to mitigate the impact of adversarial nodes while maintaining model performance in low-resource underwater environments. TAFL-UWSN integrates continuous trust scoring based on packet forwarding reliability, sensing consistency, and model deviation. Trust scores are used to weight or filter model updates both at the node level and the edge layer, where Autonomous Underwater Vehicles (AUVs) act as mobile aggregators. A trust-aware federated averaging algorithm is implemented, and extensive simulations are conducted in a custom Python-based environment, comparing TAFL-UWSN to standard FedAvg and Byzantine-resilient FL approaches under various attack conditions. TAFL-UWSN achieved a model accuracy exceeding 92% with up to 30% malicious nodes while maintaining a false positive rate below 5.5%. Communication overhead was reduced by 28%, and energy usage per node dropped by 33% compared to baseline methods. The TAFL-UWSN framework demonstrates that integrating trust into FL enables secure, efficient, and resilient underwater intelligence, validating its potential for broader application in distributed, resource-constrained environments.
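The trust-weighted federated averaging the abstract describes can be sketched as below. This is a minimal illustration under stated assumptions, not the paper's implementation: the function name, the trust threshold, and the normalization scheme are all hypothetical.

```python
# Sketch of trust-weighted federated averaging: low-trust clients are
# filtered out, and the rest are weighted by normalized trust score.
# Names and the 0.5 threshold are illustrative assumptions.

def trust_weighted_fedavg(updates, trust_scores, trust_threshold=0.5):
    """Aggregate client model updates (lists of floats, one per client),
    dropping clients below trust_threshold and weighting the survivors
    by their trust score."""
    kept = [(u, t) for u, t in zip(updates, trust_scores) if t >= trust_threshold]
    if not kept:
        raise ValueError("no trusted clients this round")
    total_trust = sum(t for _, t in kept)
    dim = len(kept[0][0])
    # Trust-normalized weighted average of the surviving updates.
    return [sum(t * u[i] for u, t in kept) / total_trust for i in range(dim)]
```

With this sketch, a poisoned update from a client whose trust has decayed below the threshold simply never enters the average.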

25 pages, 389 KB  
Article
FedQuAD: Fast-Converging Curvature-Aware Federated Learning for Credit Default Prediction from Private Accounting Data
by Dingwen Bai, MuGa WaEr and Qichun Wu
Mathematics 2026, 14(6), 1012; https://doi.org/10.3390/math14061012 - 17 Mar 2026
Viewed by 228
Abstract
Credit default prediction from firm-level accounting statements is central to risk management, yet the underlying financial data are highly sensitive and often siloed across banks, auditors, and platforms. Federated learning (FL) offers a practical route to collaborative modeling without centralizing raw records, but standard FL optimization can converge slowly under severe client heterogeneity, heavy-tailed accounting features, and label imbalance typical of default events. This paper proposes FedQuAD, a novel fast-converging FL algorithm that couples (i) quasi-Newton curvature aggregation on the server with a lightweight limited-memory update to accelerate global progress, (ii) a proximal variance-reduced local solver that stabilizes client drift under non-IID accounting distributions, and (iii) federated robust standardization of tabular financial ratios via secure aggregated quantile statistics to mitigate scale instability and outliers. FedQuAD is communication-efficient by design: It transmits compact gradient and curvature sketches and adapts local computation to each client’s stochasticity and drift. We provide convergence guarantees for strongly convex default-risk objectives (logistic and calibrated GLM losses) under bounded heterogeneity, and extend the analysis to nonconvex deep tabular models via expected stationarity bounds. Experiments on public credit-risk benchmarks with simulated cross-silo (institutional) partitions demonstrate that FedQuAD reaches target AUC and calibration error with substantially fewer communication rounds than representative baselines while maintaining privacy constraints compatible with secure aggregation and optional client-level differential privacy accounting. Full article
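Component (ii), the proximal local solver, resembles a FedProx-style update; a minimal sketch is shown below, with the variance-reduction and server-side curvature terms omitted. The function name and the `mu` coefficient are illustrative assumptions, not FedQuAD's actual interface.

```python
# Sketch of a proximal local gradient step: the mu term pulls the local
# model back toward the current global model, limiting client drift
# under non-IID data. Variance reduction is omitted for brevity.

def proximal_local_step(w, w_global, grad, lr=0.1, mu=0.01):
    """One proximal step on local weights w against global weights
    w_global, given the local stochastic gradient grad."""
    return [wi - lr * (gi + mu * (wi - wgi))
            for wi, wgi, gi in zip(w, w_global, grad)]
```

The larger `mu` is, the more strongly each client is anchored to the global model between communication rounds.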
(This article belongs to the Special Issue Applied Mathematics, Computing, and Machine Learning)

27 pages, 3391 KB  
Article
A Hybrid Federated–Incremental Learning Framework for Continuous Authentication in Zero-Trust Networks
by Jie Ji, Shi Qiu, Shengpeng Ye and Xin Liu
Future Internet 2026, 18(3), 154; https://doi.org/10.3390/fi18030154 - 16 Mar 2026
Viewed by 141
Abstract
Zero-trust architecture (ZTA) requires continuous and adaptive identity authentication to maintain security in dynamic environments. However, current federated learning (FL)-based authentication models often struggle to incorporate evolving attack patterns without experiencing catastrophic forgetting. Moreover, non-independent and identically distributed (non-IID) client data and concept drift frequently lead to degraded model robustness and personalization. To address these issues, this paper presents a hybrid learning framework that integrates federated learning with incremental learning (IL) for sustainable authentication. A Dynamic Weighted Federated Aggregation (DWFA) algorithm is developed to mitigate concept drift by adjusting aggregation weights in real time, ensuring that the global model adapts to changing data distributions. This approach enables continuous learning from distributed threat data while maintaining privacy and eliminating the need for historical data retention. Experimental results on real-world traffic datasets indicate that the proposed framework outperforms conventional FL baselines, reducing the overall error rate by approximately 56% and improving the detection rate for novel attack types by over 17.8%. Furthermore, the framework remains stable against performance decay while maintaining efficient communication overhead. This study provides an adaptive, privacy-preserving solution for identity authentication in zero-trust systems. Full article
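The DWFA weight adjustment could take many forms; one minimal sketch, assuming a per-client drift signal (for example, a distribution distance or recent loss increase) and a softmax-of-negative-drift weighting, is shown below. Both the signal and the weighting rule are assumptions, not the paper's published formula.

```python
import math

# Illustrative dynamic aggregation weights: clients whose updates exhibit
# larger concept drift receive exponentially smaller weight. The softmax
# form and the drift signal are assumptions for this sketch.

def dwfa_weights(client_drifts, temperature=1.0):
    """Return aggregation weights (summing to 1) that down-weight
    high-drift clients; temperature controls how sharply."""
    scores = [math.exp(-d / temperature) for d in client_drifts]
    z = sum(scores)
    return [s / z for s in scores]
```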
(This article belongs to the Special Issue Cybersecurity in the Age of AI, IoT, and Edge Computing)

17 pages, 484 KB  
Article
A Federated Learning-Based Network Intrusion Detection System for 5G and IoT Using Mixture of Experts
by Loukas Ilias, George Doukas, Vangelis Lamprou, Spiros Mouzakitis, Christos Ntanos and Dimitris Askounis
Electronics 2026, 15(5), 1057; https://doi.org/10.3390/electronics15051057 - 3 Mar 2026
Viewed by 389
Abstract
Fifth generation (5G) networks have significantly enhanced connectivity, speed, and reliability, transforming industries with faster and more efficient communication. The Internet of Things (IoT) has introduced unprecedented convenience and automation, revolutionizing sectors such as healthcare, finance, and smart infrastructure. However, both 5G networks and IoT environments are experiencing a high frequency of attacks. Intrusion detection systems (IDSs) built on federated learning (FL) are being proposed to boost data privacy and security. However, these IDSs inherit the drawbacks of FL, namely non-independently and identically distributed (non-IID) features and machine learning model complexity. To address these limitations, we present a study that integrates a Mixture of Experts (MoE) into an FL setting for the task of intrusion detection. Specifically, to mitigate the issue of model complexity within the FL setting, we use a sparsely gated MoE layer consisting of a router/gating network and a set of experts. Only a subset of experts is selected by applying noisy top-k gating. To alleviate the issue of non-IID data, we adopt the Label-based Dirichlet Partition method, utilizing Dirichlet sampling with a hyperparameter α to simulate a label-based non-IID data distribution. Four FL strategies are employed. We perform our experiments on the 5G-NIDD and BoT-IoT datasets. Findings show that the proposed approach achieves competitive performance across both datasets under heterogeneous federated settings.
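Noisy top-k gating is a standard mechanism from the sparsely gated MoE literature: Gaussian noise is added to the router logits, only the k highest-scoring experts are kept, and the softmax is taken over those k. The sketch below illustrates the idea on plain lists; the paper's router architecture and noise schedule are not reproduced.

```python
import math, random

def noisy_top_k_gating(logits, k, noise_std=1.0, rng=None):
    """Noisy top-k gating sketch: perturb the router logits with Gaussian
    noise, keep the top-k experts, and softmax over them; every other
    expert gets gate weight 0 (and is never evaluated)."""
    rng = rng or random.Random(0)
    noisy = [x + rng.gauss(0.0, noise_std) for x in logits]
    top = sorted(range(len(noisy)), key=lambda i: noisy[i], reverse=True)[:k]
    m = max(noisy[i] for i in top)  # subtract max for numerical stability
    exp = {i: math.exp(noisy[i] - m) for i in top}
    z = sum(exp.values())
    return [exp[i] / z if i in exp else 0.0 for i in range(len(logits))]
```

Because only k experts receive nonzero gates, per-example compute scales with k rather than with the total number of experts, which is how the sparse MoE layer tames model complexity in the FL setting.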
(This article belongs to the Special Issue Advances in 5G and Beyond Mobile Communication)

26 pages, 706 KB  
Article
Efficient Federated Learning Method FedLayerPrune Based on Layer Adaptive Pruning
by Wenlong He, Hui Cao, Jisai Zhang and Decao Yang
Electronics 2026, 15(5), 1049; https://doi.org/10.3390/electronics15051049 - 2 Mar 2026
Viewed by 291
Abstract
As a privacy-preserving distributed machine learning paradigm, federated learning (FL) faces serious communication bottlenecks in practical deployment. In this paper, we propose FedLayerPrune, a communication-efficient federated learning method that integrates three synergistic components: (i) a layer-adaptive pruning strategy that dynamically allocates pruning rates based on layer sensitivity and network depth; (ii) a heterogeneity-aware aggregation mechanism that combines sample-size weighted averaging with mask consensus voting to enhance robustness under non-IID data distributions; and (iii) a dynamic pruning rate scheduler that progressively increases compression intensity across training rounds. Unlike existing approaches that apply uniform pruning or consider these techniques in isolation, FedLayerPrune achieves a principled coordination among layer-wise importance evaluation, temporal pruning scheduling, and heterogeneous model aggregation. Extensive experiments on CIFAR-10, MNIST, and Fashion-MNIST demonstrate that FedLayerPrune reduces communication costs by up to 68.3% compared with standard FedAvg, while maintaining model accuracy within a 2% margin. Moreover, our method exhibits stronger robustness and faster convergence under severe non-IID data distributions. These results suggest that FedLayerPrune provides a practical and effective solution for deploying federated learning in resource-constrained edge computing environments. Full article
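Component (i), layer-adaptive pruning-rate allocation, combined with ordinary magnitude pruning, can be sketched as below. The inverse-sensitivity allocation rule and both function names are assumptions for illustration; the paper's exact sensitivity metric and depth weighting are not reproduced.

```python
# Sketch of layer-adaptive pruning: sensitive layers are pruned less,
# and each layer is pruned by zeroing its smallest-magnitude weights.
# The allocation rule here is an illustrative assumption.

def layer_pruning_rates(sensitivities, base_rate=0.5):
    """Allocate a pruning rate per layer, scaled down as the layer's
    sensitivity approaches the maximum across layers."""
    max_s = max(sensitivities)
    return [base_rate * (1.0 - s / max_s) for s in sensitivities]

def magnitude_prune(weights, rate):
    """Zero out (at least) the smallest-magnitude fraction `rate` of a
    layer's weights; ties at the threshold are also zeroed."""
    n_prune = int(len(weights) * rate)
    if n_prune == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[n_prune - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]
```

Clients would then transmit only the surviving weights (plus a mask), which is where the communication saving comes from.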
(This article belongs to the Section Computer Science & Engineering)

52 pages, 2937 KB  
Review
Federated Learning: A Survey of Core Challenges, Current Methods, and Opportunities
by Madan Baduwal, Priyanka Paudel and Vini Chaudhary
Computers 2026, 15(3), 155; https://doi.org/10.3390/computers15030155 - 2 Mar 2026
Viewed by 1171
Abstract
Federated learning (FL) has emerged as a transformative distributed learning paradigm that enables collaborative model training without sharing raw data, thereby preserving privacy across large, diverse, and geographically dispersed clients. Despite its rapid adoption in mobile networks, Internet of Things (IoT) systems, healthcare, finance, and edge intelligence, FL continues to face several persistent and interdependent challenges that hinder its scalability, efficiency, and real-world deployment. In this survey, we present a systematic examination of six core challenges in federated learning: heterogeneity, computation overhead, communication bottlenecks, client selection, aggregation and optimization, and privacy preservation. We analyze how these challenges manifest across the full FL pipeline, from local training and client participation to global model aggregation and distribution, and examine their impact on model performance, convergence behavior, fairness, and system reliability. Furthermore, we synthesize representative state-of-the-art approaches proposed to address each challenge and discuss their underlying assumptions, trade-offs, and limitations in practical deployments. Finally, we identify open research problems and outline promising directions for developing more robust, scalable, and efficient federated learning systems. This survey aims to serve as a comprehensive reference for researchers and practitioners seeking a unified understanding of the fundamental challenges shaping modern federated learning. Full article

32 pages, 4314 KB  
Article
A Hardware-Aware Federated Meta-Learning Framework for Intraday Return Prediction Under Data Scarcity and Edge Constraints
by Zhe Wen, Xin Cheng, Ruixin Xue, Jinao Ye, Zhongfeng Wang and Meiqi Wang
Appl. Sci. 2026, 16(5), 2319; https://doi.org/10.3390/app16052319 - 27 Feb 2026
Viewed by 298
Abstract
Although deep learning has achieved remarkable success in time-series prediction, intraday algorithmic trading is characterized by frequent regime shifts (concept drift), which can rapidly render models trained on historical data obsolete in real applications. This motivates on-device adaptation at edge trading terminals. However, practical deployment is constrained by a tripartite bottleneck: real-time samples are scarce, hardware resources at the edge are limited, and communication overhead between cloud and edge must be kept low to satisfy stringent latency requirements. To address these challenges, we develop a hardware-aware edge learning framework that combines federated learning (FL) and meta-learning to enable rapid few-shot personalization without exposing local data. Importantly, the framework incorporates our proposed Sleep Node Algorithm (SNA), which turns the "FL + meta-learning" combination into a practical and efficient edge solution. Specifically, SNA dynamically deactivates "inertial" (insensitive) network components during adaptation: it provides a structural regularizer that stabilizes few-shot updates and mitigates overfitting under concept drift, while inducing sparsity that reduces both on-device computation and cloud-edge communication. To efficiently leverage these unstructured zero nodes introduced by SNA, we further design a dedicated accelerator, EPAST (Energy-efficient Pipelined Accelerator for Sparse Training). EPAST adopts a heterogeneous architecture and introduces a dedicated Backward Pipeline (BPIP) dataflow that overlaps backpropagation stages, thereby improving hardware utilization under irregular sparse workloads. Experimental results demonstrate that our system consistently outperforms strong baselines, including DQN, GARCH-XGBoost, and LRU, in terms of Pearson IC. A 55 nm CMOS ASIC implementation further validates robust learning under an extreme 5-shot setting (IC = 0.1176), achieving an end-to-end training speed-up of 11.35× and an energy efficiency of 45.78 TOPS/W.
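The node-deactivation step of SNA could be sketched as below, under the assumption that each node carries a sensitivity score and the least-sensitive fraction is masked out during adaptation. The selection rule, the fixed sleep fraction, and the function name are all hypothetical; the paper's dynamic criterion is not reproduced.

```python
# Illustrative sleep-node selection: put the least-sensitive fraction of
# nodes to sleep (mask 0) during few-shot adaptation, keeping the rest
# active (mask 1). The fixed fraction is an assumption for this sketch.

def sleep_node_mask(sensitivities, sleep_fraction=0.5):
    """Return a 0/1 mask over nodes; the sleep_fraction of nodes with the
    lowest sensitivity are deactivated."""
    n_sleep = int(len(sensitivities) * sleep_fraction)
    order = sorted(range(len(sensitivities)), key=lambda i: sensitivities[i])
    asleep = set(order[:n_sleep])
    return [0 if i in asleep else 1 for i in range(len(sensitivities))]
```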
(This article belongs to the Special Issue Applications of Artificial Intelligence in Industrial Engineering)

51 pages, 3178 KB  
Review
Federated Learning in Edge Computing: Vulnerabilities, Attacks, and Defenses—A Survey
by Sahar Alhawas and Murad A. Rassam
Sensors 2026, 26(4), 1275; https://doi.org/10.3390/s26041275 - 15 Feb 2026
Viewed by 737
Abstract
Federated Learning (FL), a distributed machine learning framework, enables collaborative model training across multiple devices without sharing raw data, thereby preserving privacy and reducing communication costs. When combined with Edge Computing (EC), FL brings computations closer to data sources, enabling low-latency, real-time decision-making in resource-constrained environments. However, this decentralization introduces several vulnerabilities, including data poisoning, backdoor attacks, inference leaks, and Byzantine behaviors, which are worsened by the heterogeneity of edge devices and their intermittent connectivity. This survey presents a comprehensive review of the intersection of FL and EC, focusing on vulnerabilities, attack vectors, and defense mechanisms. We analyze existing methods for robust aggregation, anomaly detection, differential privacy, and secure aggregation, with a focus on their feasibility within edge environments. Additionally, we identify open research challenges, such as scalability, resilience to heterogeneity, and energy-efficient defenses, and provide insights into the evolving landscape of FL in edge computing. This review aims to inform future research on enhancing the security, privacy, and efficiency of FL systems deployed in real-world edge environments. Full article
(This article belongs to the Section Internet of Things)

21 pages, 956 KB  
Article
Trust-Aware Federated Graph Learning for Secure and Energy-Efficient IoT Ecosystems
by Manuel J. C. S. Reis
Computers 2026, 15(2), 121; https://doi.org/10.3390/computers15020121 - 11 Feb 2026
Viewed by 396
Abstract
The integration of Federated Learning (FL) and Graph Neural Networks (GNNs) has emerged as a promising paradigm for distributed intelligence in Internet of Things (IoT) environments. However, challenges related to trust, device heterogeneity, and energy efficiency continue to hinder scalable deployment in real-world settings. This paper presents Trust-FedGNN, a trust-aware federated graph learning framework that jointly addresses reliability, robustness, and sustainability in IoT ecosystems. The framework combines reliability-based reputation modeling, energy-aware client scheduling, and dynamic graph pruning to reduce communication overhead and energy consumption during collaborative training, while mitigating the influence of unreliable or malicious participants. Trust evaluation is explicitly decoupled from energy availability, ensuring that honest but resource-constrained devices are not penalized during aggregation. Experimental results on benchmark IoT datasets demonstrate up to 5.8% higher accuracy, 3.1% higher F1-score, and approximately 22% lower energy consumption compared with State-of-the-Art federated baselines, while maintaining robustness under partial adversarial participation. These results confirm the effectiveness of Trust-FedGNN as a secure, robust, and energy-efficient federated graph learning solution for heterogeneous IoT networks, in a proof-of-concept evaluation across 10 federated clients.
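The decoupling of trust from energy that the abstract emphasizes can be sketched as a two-stage client scheduler: trust alone decides eligibility, and energy only orders the eligible set. The function name, threshold, and ordering rule are illustrative assumptions.

```python
# Sketch of energy-aware scheduling with trust decoupled from energy:
# eligibility is decided by trust alone, then the eligible clients with
# the most remaining energy are scheduled this round. An honest but
# low-battery device stays eligible and is merely deferred, never
# treated as untrustworthy.

def schedule_clients(trust, energy, n_select, trust_min=0.5):
    """Return indices of the n_select scheduled clients: trusted first,
    then ranked by available energy."""
    eligible = [i for i, t in enumerate(trust) if t >= trust_min]
    return sorted(eligible, key=lambda i: energy[i], reverse=True)[:n_select]
```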
(This article belongs to the Special Issue Intelligent Edge: When AI Meets Edge Computing)

21 pages, 790 KB  
Article
CEVPDS: A Cooperative Emergency Vehicle Priority Driving Scheme for Improving Travel Efficiency Through V2X Communications
by Yanchi Wang, Mu Wang, Mei Yang, Chunxiao Li, Jiajun Shen, Haoyu Wang and Wasinee Noonpakdee
Symmetry 2026, 18(2), 331; https://doi.org/10.3390/sym18020331 - 11 Feb 2026
Viewed by 265
Abstract
Emergency vehicles (EmVs) play a crucial role in providing prompt services to public rescue activities. However, due to the impacts of ordinary vehicles (OVs), EmVs are often blocked and cannot reach the rescue site in time. Therefore, a cooperative emergency vehicle priority driving scheme (CEVPDS) is proposed to ensure the EmVs' travel efficiency. The proposed scheme is used for urban express roads under a vehicle-to-everything communications (V2X) environment and consists of two steps to ensure high-priority passage for the EmVs to reach the rescue accident site. The first step involves designing the EmV trajectory in advance. The second step requires the OVs to dynamically give way to the EmV by lane changing according to the EmV's pre-planned trajectory established in the first step. Once the EmV trajectory is predefined, the relevant trajectory information is transmitted to surrounding OVs via V2X communication. OVs ahead of the EmV are then scheduled to provide adequate road space by lane changing, while the OVs behind it are prohibited from overtaking it. We conducted simulations using the SUMO platform. Compared with the Fixed-Lane Strategy (FLS), the proposed scheme achieves improvements in multiple aspects: it drastically shortens the EmVs' response time, significantly mitigates the impacts on OVs, and requires fewer lane changes for both EmVs and OVs. As a result, the scheme not only enhances the travel efficiency of EmVs but also guarantees the safety, symmetry, and efficiency of the overall urban traffic system.
(This article belongs to the Section Engineering and Materials)

28 pages, 4040 KB  
Article
BE-DPFL: A Blockchain-Enhanced Privacy-Preserving Federated Learning Framework for Secure Edge Network Collaboration
by Wangjing Jia and Tao Xie
Appl. Sci. 2026, 16(4), 1791; https://doi.org/10.3390/app16041791 - 11 Feb 2026
Viewed by 240
Abstract
Amid the deep integration of digital transformation and AI, cross-institutional collaborative modeling hinges on efficient data circulation, yet data silos and privacy regulations hinder traditional centralized training. Federated Learning (FL) keeps data local but faces issues like weak centralized trust, inadequate privacy protection, and poor robustness in edge networks. Existing improvements, including via differential privacy (DP) and blockchain, among others, still suffer from centralized budget allocation, low consensus efficiency, or unresolved single points of failure, failing to jointly optimize trust, performance, and privacy. These limitations are exacerbated in high-frequency, resource-constrained edge environments. To tackle these challenges, this paper proposes BE-DPFL, a blockchain-enhanced differentially private FL framework that integrates on-chain trusted supervision and off-chain efficient training. It builds a lightweight blockchain trust layer with FL-PBFT consensus and smart contracts, introduces Random Projection–ADMM optimization, and designs a multi-objective adaptive gradient clipping/noise injection strategy. Experiments on CIFAR-10 and ChestX-ray14 demonstrate that BE-DPFL outperforms mainstream methods in consensus efficiency, communication overhead, privacy-accuracy balance, and robustness. It reduces communication costs by over 97%, achieves 100% privacy compliance, and maintains stable performance even under high disturbances. Ablation studies confirm the significant contributions of core components.
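The gradient clipping/noise injection building block is the standard DP-SGD-style sanitization step, sketched below. The paper's multi-objective adaptive strategy is not reproduced; the fixed clipping bound, noise multiplier, and function name are assumptions.

```python
import math, random

# DP-SGD-style update sanitization sketch: clip the gradient to an L2
# norm bound, then add Gaussian noise scaled to that bound. The fixed
# clip_norm and noise_multiplier here stand in for the paper's adaptive
# multi-objective schedule.

def clip_and_noise(grad, clip_norm, noise_multiplier, rng=None):
    """Return the clipped-and-noised gradient vector."""
    rng = rng or random.Random(0)
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [g * scale for g in grad]
    sigma = noise_multiplier * clip_norm  # noise calibrated to the bound
    return [c + rng.gauss(0.0, sigma) for c in clipped]
```

Clipping bounds each client's influence on the aggregate, and the Gaussian noise is what makes a differential-privacy accounting possible.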

19 pages, 1407 KB  
Article
Privacy Protection Optimization Method for Cloud Platforms Based on Federated Learning and Homomorphic Encryption
by Jing Wang and Yun Wang
Sensors 2026, 26(3), 890; https://doi.org/10.3390/s26030890 - 29 Jan 2026
Viewed by 376
Abstract
With the wide application of cloud computing in multi-tenant, heterogeneous-node, and high-concurrency environments, model parameters frequently interact during distributed training, which easily leads to privacy leakage, communication redundancy, and decreased aggregation efficiency. To realize the collaborative optimization of privacy protection and computing performance, this study proposes the Heterogeneous Federated Homomorphic Encryption Cloud (HFHE-Cloud) model, which integrates federated learning (FL) and homomorphic encryption and constructs a secure and efficient collaborative learning framework for cloud platforms. Under the condition of not exposing the original data, the model effectively reduces the performance bottleneck caused by encryption calculation and communication delay through hierarchical key mapping and a dynamic scheduling mechanism for heterogeneous nodes. The experimental results show that HFHE-Cloud is significantly superior in comprehensive performance to five baseline models: Federated Averaging (FedAvg), Federated Proximal (FedProx), Federated Personalization (FedPer), Federated Normalized Averaging (FedNova), and Homomorphically Encrypted Federated Averaging (HE-FedAvg). In the dimension of privacy protection, the global accuracy is up to 94.25%, and the Loss is stable within 0.09. In terms of computing performance, the encryption and decryption time is shortened by about one third, and the encryption overhead is controlled at 13%. In terms of distributed training efficiency, the number of communication rounds is reduced by about one fifth, and the node participation rate is stable at over 90%. The results verify the model's ability to achieve high security and high scalability in multi-tenant environments. This study aims to provide cloud service providers and enterprise data holders with a technical solution of high-intensity privacy protection and efficient collaborative training that can be deployed in real cloud platforms.
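The property that makes homomorphic encryption useful for FL aggregation is that the server can add encrypted updates without decrypting any individual one. The abstract does not name a scheme; the toy below uses Paillier (a standard additively homomorphic cryptosystem) with deliberately tiny fixed primes, purely to illustrate the property. It is not the paper's construction and is nowhere near deployment strength.

```python
import math, random

# Toy Paillier cryptosystem: Enc(m1) * Enc(m2) mod n^2 decrypts to
# m1 + m2, so a server can sum encrypted client updates blindly.
# The tiny fixed primes are for illustration only; real deployments
# use moduli of 2048 bits or more.

def keygen(p=293, q=433):
    """Return (public key n, secret key (n, lam, mu)) with g = n + 1."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)  # valid because g = n + 1
    return n, (n, lam, mu)

def encrypt(n, m, rng=None):
    """Enc(m) = (1 + n)^m * r^n mod n^2 for random r coprime to n."""
    rng = rng or random.Random(0)
    n2 = n * n
    r = rng.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = rng.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(sk, c):
    n, lam, mu = sk
    n2 = n * n
    return ((pow(c, lam, n2) - 1) // n) * mu % n

def add_encrypted(n, c1, c2):
    """Additive homomorphism: multiplying ciphertexts adds plaintexts."""
    return (c1 * c2) % (n * n)
```

In an HFHE-style setting, each client would encrypt its (quantized) model update, the cloud would combine ciphertexts with `add_encrypted`, and only the key holder could decrypt the aggregate.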
(This article belongs to the Section Sensor Networks)

24 pages, 924 KB  
Article
SeqFAL: A Federated Active Learning Framework for Private and Efficient Labeling of Security Requirements
by Waad Alhoshan
Appl. Sci. 2026, 16(2), 914; https://doi.org/10.3390/app16020914 - 15 Jan 2026
Viewed by 251
Abstract
Security requirements play a critical role in ensuring the trustworthiness and resilience of software systems; however, their automatic classification remains challenging due to limited labeled data, confidentiality constraints, and the heterogeneous nature of requirements across organizations. Existing approaches typically assume centralized access to training data and rely on costly manual annotation, making them unsuitable for distributed industrial settings. To address these challenges, we propose SeqFAL, a communication-efficient and privacy-preserving Federated Active Learning framework for natural language–based security requirements classification. SeqFAL integrates frozen pre-trained sentence embeddings, margin-based active learning, and lightweight federated aggregation of linear classifiers, enabling collaborative model training without sharing raw requirement text. We evaluate SeqFAL on a combined dataset comprising the SeqReq and PROMISE-NFR datasets under varying federation sizes, query budgets, and communication rounds, and compare it against three baselines: centralized learning, active learning without federated aggregation, and federated learning without active querying. In addition to the proposed margin-based sampling strategy, we investigate alternative query strategies, including least-confidence and random sampling, as well as multiple linear classifiers such as LinearSVC and SGD-based classifiers with logistic and hinge losses. Results show that SeqFAL consistently outperforms FL-only and achieves performance comparable to AL-only centralized baselines, while approaching the optimal upper bound using significantly fewer labeled samples. These findings demonstrate that the joint integration of federated learning and active learning provides an effective and privacy-preserving strategy for security requirements classification in distributed software engineering environments.
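Margin-based sampling is a standard active learning query strategy: pick the unlabeled examples whose top two predicted class probabilities are closest, since those are the ones the current classifier is least sure about. A minimal sketch (the function name and plain-list interface are assumptions):

```python
# Margin sampling sketch: given per-example class-probability rows,
# query the `budget` examples with the smallest gap between the top
# two classes, i.e. the most ambiguous ones.

def margin_sample(prob_rows, budget):
    """Return indices of the `budget` examples with the smallest
    top-1 minus top-2 probability margin."""
    margins = []
    for i, probs in enumerate(prob_rows):
        top = sorted(probs, reverse=True)
        margins.append((top[0] - top[1], i))
    return [i for _, i in sorted(margins)[:budget]]
```

In a federated active learning round, each client would run this locally on its own unlabeled pool, label the selected requirements, retrain its linear classifier, and only share classifier weights for aggregation.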
(This article belongs to the Section Computing and Artificial Intelligence)

26 pages, 1919 KB  
Systematic Review
Federated Learning for Histopathology Image Classification: A Systematic Review
by Meriem Touhami, Mohammad Faizal Ahmad Fauzi, Zaka Ur Rehman and Sarina Mansor
Diagnostics 2026, 16(1), 137; https://doi.org/10.3390/diagnostics16010137 - 1 Jan 2026
Viewed by 971
Abstract
Background/Objective: The integration of machine learning (ML) and deep learning (DL) has significantly enhanced medical image classification, especially in histopathology, by improving diagnostic accuracy and aiding clinical decision making. However, data privacy concerns and restrictions on sharing patient data limit the development of effective DL models. Federated learning (FL) offers a promising solution by enabling collaborative model training across institutions without exposing sensitive data. This systematic review aims to comprehensively evaluate the current state of FL applications in histopathological image classification by identifying prevailing methodologies, datasets, and performance metrics and highlighting existing challenges and future research directions. Methods: Following PRISMA guidelines, 24 studies published between 2020 and 2025 were analyzed. The literature was retrieved from ScienceDirect, IEEE Xplore, MDPI, Springer Nature Link, PubMed, and arXiv. Eligible studies focused on FL-based deep learning models for histopathology image classification with reported performance metrics. Studies unrelated to FL in histopathology or lacking accessible full texts were excluded. Results: The included studies utilized 10 datasets (8 public, 1 private, and 1 unspecified) and reported classification accuracies ranging from 69.37% to 99.72%. FedAvg was the most commonly used aggregation algorithm (14 studies), followed by FedProx, FedDropoutAvg, and custom approaches. Only two studies reported their FL frameworks (Flower and OpenFL). Frequently employed model architectures included VGG, ResNet, DenseNet, and EfficientNet. Performance was typically evaluated using accuracy, precision, recall, and F1-score. Federated learning demonstrates strong potential for privacy-preserving digital pathology applications. However, key challenges remain, including communication overhead, computational demands, and inconsistent reporting standards. Addressing these issues is essential for broader clinical adoption. Conclusions: Future work should prioritize standardized evaluation protocols, efficient aggregation methods, model personalization, robustness, and interpretability, with validation across multi-institutional clinical environments to fully realize the benefits of FL in histopathological image classification. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
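FedAvg, the aggregation algorithm most often used in the reviewed studies, averages client model parameters weighted by local dataset size. A minimal sketch (an illustration of the general algorithm, not any reviewed study's code; the toy weights and sizes are assumptions):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg: average client parameters, weighting each client by the
    number of local training samples it contributed."""
    total = sum(client_sizes)
    coeffs = [n / total for n in client_sizes]
    # Each client's model is a list of numpy arrays (one array per layer).
    n_layers = len(client_weights[0])
    return [
        sum(c * w[layer] for c, w in zip(coeffs, client_weights))
        for layer in range(n_layers)
    ]

# Toy example: two hospitals, one "layer" each.
w_a = [np.array([1.0, 3.0])]   # client with 100 local samples
w_b = [np.array([3.0, 5.0])]   # client with 300 local samples
global_w = fedavg([w_a, w_b], client_sizes=[100, 300])
print(global_w[0])  # weighted average: [2.5 4.5]
```

Variants such as FedProx and FedDropoutAvg modify the local objective or the set of averaged parameters but keep this weighted-averaging step at their core.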

26 pages, 2440 KB  
Article
Robust Aggregation in Over-the-Air Computation with Federated Learning: A Semantic Anti-Interference Approach
by Jun-Cheng Ji, Chan-Tong Lam, Ke Wang and Benjamin K. Ng
Mathematics 2026, 14(1), 124; https://doi.org/10.3390/math14010124 - 29 Dec 2025
Viewed by 486
Abstract
Over-the-air federated learning (AirFL) enables distributed model training across wireless edge devices, preserving data privacy and minimizing bandwidth usage. However, challenges such as channel noise, non-identically distributed data, limited computational resources, and small local datasets lead to distorted model updates, inconsistent global models, increased training latency, and overfitting, all of which reduce accuracy and efficiency. To address these issues, we propose the Semantic Anti-Interference Aggregation (SAIA) framework, which integrates a semantic autoencoder, component-wise median aggregation, validation-accuracy weighting, and data augmentation. First, a semantic autoencoder compresses model parameters into low-dimensional vectors, maintaining high signal quality and reducing communication costs. Second, component-wise median aggregation minimizes the impact of noise and outliers; it suits AirFL well because it avoids both the noise sensitivity of mean-based aggregation and the heavy computation of more complex robust methods. Third, validation-accuracy weighting aligns updates from non-identically distributed data to ensure consistent global models. Fourth, data augmentation doubles dataset sizes, mitigating overfitting and reducing variance. Experiments on MNIST demonstrate that SAIA achieves an accuracy of approximately 96% and a loss of 0.16, improving accuracy by 3.3% and reducing loss by 39% compared to conventional federated learning approaches. With reduced computational and communication overhead, SAIA ensures efficient training on resource-constrained IoT devices. Full article
(This article belongs to the Special Issue Federated Learning Strategies for Machine Learning)
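Component-wise median aggregation, as described in the abstract, takes the median of each parameter coordinate across client updates so a single noisy or adversarial update cannot drag the aggregate far. A minimal sketch (a generic illustration, not the SAIA implementation; the toy update values are assumptions):

```python
import numpy as np

def median_aggregate(updates):
    """Component-wise median across client updates: each coordinate of the
    aggregate is the median of that coordinate over all clients, which
    damps outliers far better than a plain mean."""
    return np.median(np.stack(updates), axis=0)

# Three honest updates plus one corrupted by heavy channel noise.
updates = [
    np.array([0.10, -0.20]),
    np.array([0.12, -0.18]),
    np.array([0.11, -0.22]),
    np.array([9.00, -8.00]),   # outlier: the mean would be dominated by it
]
agg = median_aggregate(updates)
print(agg)  # stays close to the honest updates despite the outlier
```

A mean over the same updates would land near 2.3 in the first coordinate; the median stays near 0.11, which is the robustness property the abstract attributes to this aggregation rule.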
