
Search Results (125)

Search Parameters:
Keywords = privacy-preserving inference

42 pages, 1539 KB  
Article
SplitML: A Unified Privacy-Preserving Architecture for Federated Split-Learning in Heterogeneous Environments
by Devharsh Trivedi, Aymen Boudguiga, Nesrine Kaaniche and Nikos Triandopoulos
Electronics 2026, 15(2), 267; https://doi.org/10.3390/electronics15020267 - 7 Jan 2026
Abstract
While Federated Learning (FL) and Split Learning (SL) aim to uphold data confidentiality through localized training, they remain susceptible to adversarial threats such as model poisoning and sophisticated inference attacks. To mitigate these vulnerabilities, we propose SplitML, a secure and privacy-preserving framework for Federated Split Learning (FSL). By integrating IND-CPA-D-secure Fully Homomorphic Encryption (FHE) with Differential Privacy (DP), SplitML establishes a defense-in-depth strategy that minimizes information leakage and thwarts reconstructive inference attempts. The framework accommodates heterogeneous model architectures by allowing clients to collaboratively train only the common top layers while keeping their bottom layers exclusive to each participant. This partitioning strategy ensures that the layers closest to the sensitive input data are never exposed to the centralized server. During the training phase, participants utilize multi-key CKKS FHE to facilitate secure weight aggregation, which ensures that no single entity can access individual updates in plaintext. For collaborative inference, clients exchange activations protected by single-key CKKS FHE to achieve a consensus derived from Total Labels (TL) or Total Predictions (TP). This consensus mechanism enhances decision reliability by aggregating decentralized insights while obfuscating soft-label confidence scores that could be exploited by attackers. Our empirical evaluation demonstrates that SplitML provides substantial defense against Membership Inference (MI) attacks, reduces temporal training costs compared to standard encrypted FL, and improves inference precision via its consensus mechanism, all while maintaining a negligible impact on federation overhead. Full article
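The TL/TP consensus rules described in the abstract reduce to a majority vote over labels and an averaged soft-prediction arg-max. The sketch below illustrates only those aggregation rules in plaintext; function names are hypothetical, and in SplitML itself the exchanged labels and activations would travel as CKKS ciphertexts, which this illustration omits.

```python
from collections import Counter

def total_label_consensus(client_labels):
    """Total Labels (TL): majority vote over each client's hard label."""
    votes = Counter(client_labels)
    label, _ = votes.most_common(1)[0]
    return label

def total_prediction_consensus(client_probs):
    """Total Predictions (TP): average soft predictions across clients,
    then take the arg-max class."""
    n_clients = len(client_probs)
    n_classes = len(client_probs[0])
    avg = [sum(p[c] for p in client_probs) / n_clients
           for c in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__)
```

Note how TP uses the full confidence vectors while TL exposes only hard labels, which is why the paper can obfuscate soft-label confidence scores under TL-style consensus.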

23 pages, 1101 KB  
Article
A Reinforcement Learning-Based Optimization Strategy for Noise Budget Management in Homomorphically Encrypted Deep Network Inference
by Chi Zhang, Fenhua Bai, Jinhua Wan and Yu Chen
Electronics 2026, 15(2), 275; https://doi.org/10.3390/electronics15020275 - 7 Jan 2026
Abstract
Homomorphic encryption provides a powerful cryptographic solution for privacy-preserving deep neural network inference, enabling computation on encrypted data. However, the practical application of homomorphic encryption is fundamentally constrained by the noise budget, a core component of homomorphic encryption schemes. The substantial multiplicative depth of modern deep neural networks rapidly consumes this budget, necessitating frequent, computationally expensive bootstrapping operations to refresh the noise. This bootstrapping process has emerged as the primary performance bottleneck. Current noise management strategies are predominantly static, triggering bootstrapping at pre-defined, fixed intervals. This approach is sub-optimal for deep, complex architectures, leading to excessive computational overhead and potential accuracy degradation due to cumulative precision loss. To address this challenge, we propose a Deep Network-aware Adaptive Noise-budget Management mechanism, which formulates noise budget allocation as a sequential decision problem optimized via reinforcement learning. The core of the proposed mechanism comprises two components. First, we construct a layer-aware noise consumption prediction model to accurately estimate the heterogeneous computational costs and noise accumulation across different network layers. Second, we design a Deep Q-Network-driven optimization algorithm. This Deep Q-Network agent is trained to derive a globally optimal policy, dynamically determining the optimal timing and network location for executing bootstrapping operations, based on the real-time output of the noise predictor and the current network state. This approach shifts from a static, pre-defined strategy to an adaptive, globally optimized one.
Experimental validation on several typical deep neural network architectures demonstrates that the proposed mechanism significantly outperforms state-of-the-art fixed strategies, markedly reducing redundant bootstrapping overhead while maintaining model performance. Full article
(This article belongs to the Special Issue Security and Privacy in Artificial Intelligence Systems)
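The gap between fixed-interval and adaptive bootstrapping can be illustrated with a toy noise-budget simulation. This is a greedy threshold baseline under assumed per-layer noise costs, not the paper's DQN policy; all names and numbers are illustrative.

```python
def bootstraps_fixed(layer_costs, interval):
    """Static policy: bootstrap after every `interval` layers,
    regardless of how much noise budget actually remains."""
    return len(layer_costs) // interval

def bootstraps_adaptive(layer_costs, budget):
    """Adaptive policy: bootstrap only when the next layer's predicted
    noise cost would exceed the remaining budget (assumes every
    single-layer cost fits within a fresh budget)."""
    remaining, count = budget, 0
    for cost in layer_costs:
        if cost > remaining:
            count += 1          # refresh the ciphertext noise
            remaining = budget
        remaining -= cost
    return count
```

With heterogeneous costs such as `[1, 1, 1, 5, 1, 1, 1, 5]` and a budget of 8, the fixed policy at interval 2 bootstraps four times while the budget-aware policy bootstraps once, which is the kind of redundancy the learned policy targets.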

36 pages, 968 KB  
Review
Applications of Artificial Intelligence in Fisheries: From Data to Decisions
by Syed Ariful Haque and Saud M. Al Jufaili
Big Data Cogn. Comput. 2026, 10(1), 19; https://doi.org/10.3390/bdcc10010019 - 5 Jan 2026
Abstract
AI enhances aquatic resource management by automating species detection, optimizing feed, forecasting water quality, protecting species interactions, and strengthening the detection of illegal, unreported, and unregulated fishing activities. However, these advancements are inconsistently employed, subject to domain shifts, limited by the availability of labeled data, and poorly benchmarked across operational contexts. Recent developments in technology and applications in fisheries genetics and monitoring, precision aquaculture, management, and sensing infrastructure are summarized in this paper. We studied automated species recognition, genomic trait inference, environmental DNA metabarcoding, acoustic analysis, and trait-based population modeling in fisheries genetics and monitoring. We used digital-twin frameworks for supervised learning in feed optimization, reinforcement learning for water quality control, vision-based welfare monitoring, and harvest forecasting in aquaculture. We explored automatic identification system trajectory analysis for illicit fishing detection, global effort mapping, electronic bycatch monitoring, protected species tracking, and multi-sensor vessel surveillance in fisheries management. Acoustic echogram automation, convolutional neural network-based fish detection, edge-computing architectures, and marine-domain foundation models are foundational developments in sensing infrastructure. Implementation challenges include performance degradation across habitat and seasonal transitions, insufficient standardized multi-region datasets for rare and protected taxa, inadequate incorporation of model uncertainty into management decisions, and structural inequalities in data access and technology adoption among smallholder producers. 
Standardized multi-region benchmarks with rare-taxa coverage, calibrated uncertainty quantification in assessment and control systems, domain-robust energy-efficient algorithms, and privacy-preserving data partnerships are our priorities. These integrated priorities enable transition from experimental prototypes to a reliable, collaborative infrastructure for sustainable wild capture and farmed aquatic systems. Full article

18 pages, 3445 KB  
Article
Effective Target Privacy Protection Against Dynamic-Link-Prediction-Based Attacks via Adversarial Learning
by Mengdi Sun and Minghu Tang
Mathematics 2026, 14(1), 113; https://doi.org/10.3390/math14010113 - 28 Dec 2025
Abstract
Graph data mining has emerged as a prominent area of research in both academic and industrial domains. Dynamic link prediction, a critical subfield within graph data mining, offers a more realistic representation of real-world networks compared to static link prediction, making dynamic link prediction attacks particularly threatening to privacy. While privacy protection in dynamic networks can be achieved by removing certain sensitive links, attackers can still infer hidden sensitive connections from observable network data. Moreover, existing studies seldom address target-level defense against dynamic link prediction attacks. To address these challenges, this paper proposes a Target-Level Privacy protection method against Dynamic Link Prediction attacks (TP-DLP). The method leverages temporal information in dynamic networks to implement targeted protection based on link gradient information, operating within a perturbation range that preserves the inherent characteristics of dynamic networks. Using dynamic social networks as a case study, the approach distinguishes the privacy levels of dynamic links to achieve target-level privacy protection. Extensive experimental results demonstrate that TP-DLP significantly enhances privacy protection while preserving network utility, making it well-suited for targeted defense against dynamic network link prediction. It can be concluded that our method achieves a balanced trade-off between privacy protection effectiveness and network structural fidelity. Full article

46 pages, 1279 KB  
Article
Privacy-Preserving Machine Learning Techniques: Cryptographic Approaches, Challenges, and Future Directions
by Elif Nur Kucur, Tolga Buyuktanir, Muharrem Ugurelli and Kazim Yildiz
Appl. Sci. 2026, 16(1), 277; https://doi.org/10.3390/app16010277 - 26 Dec 2025
Abstract
Privacy-preserving machine learning (PPML) constitutes a core element of responsible AI by supporting model training and inference without exposing sensitive information. This survey presents a comprehensive examination of the major cryptographic PPML techniques and introduces a unified taxonomy covering technical models, verification criteria, and evaluation dimensions. The study consolidates findings from both survey and experimental works using structured comparison tables and emphasizes that recent research increasingly adopts hybrid and verifiable PPML designs. In addition, we map PPML applications across domains such as healthcare, finance, Internet of Things (IoT), and edge systems, indicating that cryptographic approaches are progressively transitioning from theoretical constructs to deployable solutions. Finally, the survey outlines emerging trends—including the growth of zero-knowledge proof (ZKP)-based verification and domain-specific hybrid architectures—and identifies practical considerations that shape PPML adoption in real systems. Full article

13 pages, 2083 KB  
Article
Adaptive Privacy-Preserving Insider Threat Detection Using Generative Sequence Models
by Fatmah Bamashmoos
Future Internet 2026, 18(1), 11; https://doi.org/10.3390/fi18010011 - 26 Dec 2025
Abstract
Insider threats remain one of the most challenging security risks in modern enterprises due to their subtle behavioral patterns and the difficulty of distinguishing malicious intent from legitimate activity. This paper presents a unified and adaptive generative framework for insider threat detection that integrates Variational Autoencoders (VAEs) and Transformer Autoencoder architectures to learn personalized behavioral baselines from sequential user event logs. Anomalies are identified as significant deviations from these learned baseline distributions. The proposed framework incorporates an adaptive learning mechanism to address both cold-start scenarios and concept drift, enabling continuous model refinement as user behavior evolves. In addition, we introduce a privacy-preserving latent-space design and evaluate the framework under formal privacy attacks, including membership inference and reconstruction attacks, demonstrating strong resilience against data leakage. Experiments performed on the CERT Insider Threat Dataset (v5.2) show that our approach outperforms traditional and deep learning baselines, with the Transformer Autoencoder achieving an F1-score of 0.66 and an AUPRC of 0.59. The results highlight the effectiveness of generative sequence models for privacy-conscious and adaptive insider threat detection in enterprise environments, providing a comparative analysis of two powerful architectures for practical implementation. Full article
(This article belongs to the Special Issue Generative Artificial Intelligence (AI) for Cybersecurity)
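Detecting anomalies as "significant deviations from a learned baseline distribution" typically reduces to thresholding a reconstruction-error score. A minimal sketch under stated assumptions: per-user reconstruction errors on known-normal behavior are available, and a mean + k·std threshold stands in for the learned distribution (the paper's VAE/Transformer models would supply the errors).

```python
import math

def anomaly_flags(errors, baseline_errors, k=3.0):
    """Flag reconstruction errors that deviate from a user's baseline.

    `baseline_errors`: errors observed on known-normal event sequences.
    An event is anomalous if its error exceeds mean + k * std of the
    baseline (an illustrative stand-in for the learned distribution).
    """
    n = len(baseline_errors)
    mean = sum(baseline_errors) / n
    var = sum((e - mean) ** 2 for e in baseline_errors) / n
    threshold = mean + k * math.sqrt(var)
    return [e > threshold for e in errors]
```

Per-user thresholds are what make the baselines "personalized": the same absolute error can be normal for one user and anomalous for another.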

24 pages, 20297 KB  
Review
Artificial Intelligence-Aided Microfluidic Cell Culture Systems
by Muhammad Sohail Ibrahim and Minseok Kim
Biosensors 2026, 16(1), 16; https://doi.org/10.3390/bios16010016 - 24 Dec 2025
Abstract
Microfluidic cell culture systems and organ-on-a-chip platforms provide powerful tools for modeling physiological processes, disease progression, and drug responses under controlled microenvironmental conditions. These technologies rely on diverse cell culture methodologies, including 2D and 3D culture formats, spheroids, scaffold-based systems, hydrogels, and organoid models, to recapitulate tissue-level functions and generate rich, multiparametric datasets through high-resolution imaging, integrated sensors, and biochemical assays. The heterogeneity and volume of these data introduce substantial challenges in pre-processing, feature extraction, multimodal integration, and biological interpretation. Artificial intelligence (AI), particularly machine learning and deep learning, offers solutions to these analytical bottlenecks by enabling automated phenotyping, predictive modeling, and real-time control of microfluidic environments. Recent advances also highlight the importance of technical frameworks such as dimensionality reduction, explainable feature selection, spectral pre-processing, lightweight on-chip inference models, and privacy-preserving approaches that support robust and deployable AI–microfluidic workflows. AI-enabled microfluidic and organ-on-a-chip systems now span a broad application spectrum, including cancer biology, drug screening, toxicity testing, microbial and environmental monitoring, pathogen detection, angiogenesis studies, nerve-on-a-chip models, and exosome-based diagnostics. These platforms also hold increasing potential for precision medicine, where AI can support individualized therapeutic prediction using patient-derived cells and organoids. As the field moves toward more interpretable and autonomous systems, explainable AI will be essential for ensuring transparency, regulatory acceptance, and biological insight. 
Recent AI-enabled applications in cancer modeling, drug screening, etc., highlight how deep learning can enable precise detection of phenotypic shifts, classify therapeutic responses with high accuracy, and support closed-loop regulation of microfluidic environments. These studies demonstrate that AI can transform microfluidic systems from static culture platforms into adaptive, data-driven experimental tools capable of enhancing assay reproducibility, accelerating drug discovery, and supporting personalized therapeutic decision-making. This narrative review synthesizes current progress, technical challenges, and future opportunities at the intersection of AI, microfluidic cell culture platforms, and advanced organ-on-a-chip systems, highlighting their emerging role in precision health and next-generation biomedical research. Full article
(This article belongs to the Collection Microsystems for Cell Cultures)

27 pages, 2013 KB  
Article
An LLM-Powered Framework for Privacy-Preserving and Scalable Labor Market Analysis
by Wei Ji and Zuobin Ying
Mathematics 2026, 14(1), 53; https://doi.org/10.3390/math14010053 - 23 Dec 2025
Abstract
Timely and reliable labor market intelligence is crucial for evidence-based policymaking, workforce planning, and economic forecasting. However, traditional data collection and centralized analytics raise growing concerns about privacy, scalability, and institutional data governance. This paper presents a large language model (LLM)-powered framework for privacy-preserving and scalable labor market analysis, designed to extract, structure, and interpret occupation, skill, and salary information from distributed textual sources. Our framework integrates domain-adapted LLMs with federated learning (FL) and differential privacy (DP) to enable collaborative model training across organizations without exposing sensitive data. The architecture employs secure aggregation and privacy budgets to prevent information leakage during parameter exchange, while maintaining analytical accuracy and interpretability. The system performs multi-task inference—including job classification, skill extraction, and salary estimation—and aligns outputs to standardized taxonomies (e.g., SOC, ISCO, ESCO). Empirical evaluations on both public and semi-private datasets demonstrate that our approach achieves superior performance compared to centralized baselines, while ensuring compliance with privacy and data-sharing regulations. Expert review further confirms that the generated trend analyses are accurate, explainable, and actionable for policy and research. Our results illustrate a practical pathway toward decentralized, privacy-conscious, and large-scale labor market intelligence. Full article
(This article belongs to the Special Issue Privacy-Preserving Machine Learning in Large Language Models (LLMs))
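The clip-and-noise step behind differentially private parameter exchange can be sketched as follows. This is a generic Gaussian mechanism on a flat update vector, not the paper's exact calibration; in practice `sigma` is derived from the privacy budget and composition accounting.

```python
import random

def dp_noisy_update(update, clip_norm, sigma, rng=random):
    """Clip an update to L2 norm `clip_norm`, then add Gaussian noise
    scaled to the clipping bound (the Gaussian mechanism)."""
    norm = sum(x * x for x in update) ** 0.5
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    return [x * scale + rng.gauss(0.0, sigma * clip_norm) for x in update]
```

Clipping bounds each participant's influence on the aggregate, which is what lets the added noise translate into a formal privacy guarantee during parameter exchange.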

16 pages, 1100 KB  
Article
Federated Learning-Based Intrusion Detection in Industrial IoT Networks
by George Dominic Pecherle, Robert Ștefan Győrödi and Cornelia Aurora Győrödi
Future Internet 2026, 18(1), 2; https://doi.org/10.3390/fi18010002 - 19 Dec 2025
Abstract
Federated learning (FL) is a promising privacy-preserving paradigm for machine learning in distributed environments. Although FL reduces communication overhead, it does not itself provide low-latency guarantees. In IIoT environments, real-time responsiveness is primarily enabled by edge computing and local inference, while FL contributes indirectly by minimizing the need to transmit raw data across the network. This paper explores the use of FL for intrusion detection in IIoT networks and compares its performance with traditional centralized machine learning approaches. A simulated IIoT environment was developed in which each node locally trains a model on synthetic normal and attack traffic data, sharing only model parameters with a central server. The Flower framework was employed to coordinate training and model aggregation across multiple clients without exposing raw data. Experimental results show that FL achieves detection accuracy comparable to centralized models while significantly reducing privacy risks and network transmission overhead. These results demonstrate the feasibility of FL as a secure and scalable solution for IIoT intrusion detection. Future work will validate the approach on real-world datasets and heterogeneous edge devices to further assess its robustness and effectiveness. Full article
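The aggregation step that Flower coordinates is, at its core, FedAvg: a dataset-size-weighted average of client parameter vectors, with no raw traffic data leaving the nodes. A plain-Python sketch of that rule (flattened parameter lists are an assumption for illustration):

```python
def fed_avg(client_weights, client_sizes):
    """FedAvg: average client parameter vectors, weighting each client
    by the number of local training samples it contributed."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]
```

Weighting by local dataset size keeps the aggregate equivalent (in expectation) to training on the pooled data, which is why FL can match centralized accuracy here.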

32 pages, 1365 KB  
Article
Risk-Aware Privacy-Preserving Federated Learning for Remote Patient Monitoring: A Multi-Layer Adaptive Security Framework
by Fatiha Benabderrahmane, Elhillali Kerkouche and Nardjes Bouchemal
Appl. Sci. 2026, 16(1), 29; https://doi.org/10.3390/app16010029 - 19 Dec 2025
Abstract
The integration of artificial intelligence into remote patient monitoring (RPM) offers significant benefits for proactive and continuous healthcare, but also raises critical concerns regarding privacy, integrity, and robustness. Federated Learning (FL) provides a decentralized approach to model training that preserves data locality, yet most existing solutions address only isolated security aspects and lack contextual adaptability for clinical use. This paper presents MedGuard-FL, a context-aware FL framework tailored to e-healthcare environments. Spanning device, edge, and cloud layers, it integrates encryption, adaptive differential privacy, anomaly detection, and Byzantine-resilient aggregation. At its core, a policy engine dynamically adjusts privacy and robustness parameters based on the patient’s status and the system’s risk. Evaluations on real-world clinical datasets show MedGuard-FL maintains high model accuracy while neutralizing various adversarial attacks (e.g., label-flip, poisoning, backdoor, membership inference), all with manageable latency. Compared to static defenses, it offers improved trade-offs between privacy, utility, and responsiveness. Additional edge-level privacy analyses confirm its resilience, with attack effectiveness near random. By embedding clinical risk awareness into adaptive defense mechanisms, MedGuard-FL lays a foundation for secure, real-time federated intelligence in RPM. Full article
(This article belongs to the Special Issue Applications in Neural and Symbolic Artificial Intelligence)

27 pages, 519 KB  
Article
Dual-Algorithm Framework for Privacy-Preserving Task Scheduling Under Historical Inference Attacks
by Exiang Chen, Ayong Ye and Huina Deng
Computers 2025, 14(12), 558; https://doi.org/10.3390/computers14120558 - 16 Dec 2025
Abstract
Historical inference attacks pose a critical privacy threat in mobile edge computing (MEC), where adversaries exploit long-term task and location patterns to infer users’ sensitive information. To address this challenge, we propose a privacy-preserving task scheduling framework that adaptively balances privacy protection and system performance under dynamic vehicular environments. First, we introduce a dynamic privacy-aware adaptation mechanism that adjusts privacy levels in real time according to vehicle mobility and network dynamics. Second, we design a dual-algorithm framework composed of two complementary solutions: a Markov Approximation-Based Online Algorithm (MAOA) that achieves near-optimal scheduling with provable convergence, and a Privacy-Aware Deep Q-Network (PAT-DQN) algorithm that leverages deep reinforcement learning to enhance adaptability and long-term decision-making. Extensive simulations demonstrate that our proposed methods effectively mitigate privacy leakage while maintaining high task completion rates and low energy consumption. In particular, PAT-DQN achieves up to 14.2% lower privacy loss and 19% fewer handovers than MAOA in high-mobility scenarios, showing superior adaptability and convergence performance. Full article

47 pages, 12434 KB  
Article
AI-Driven Blockchain and Federated Learning for Secure Electronic Health Records Sharing
by Muhammad Saeed Javed, Ali Hennache, Muhammad Imran and Muhammad Kamran Khan
Electronics 2025, 14(23), 4774; https://doi.org/10.3390/electronics14234774 - 4 Dec 2025
Abstract
The proliferation of electronic health records necessitates secure and privacy-preserving data sharing frameworks to combat escalating cybersecurity threats in healthcare. Current systems face critical limitations including centralized data repositories vulnerable to breaches, static consent mechanisms, and inadequate audit capabilities. This paper introduces an integrated blockchain and federated learning framework that enables privacy-preserving collaborative AI across healthcare institutions without centralized data pooling. The proposed approach combines federated distillation for heterogeneous model collaboration with dynamic differential privacy that adapts noise injection to data sensitivity levels. A novel threshold key-sharing protocol ensures decentralized access control, while a dual-layer Quorum blockchain establishes immutable audit trails for all data sharing transactions. Experimental evaluation on clinical datasets (Mortality Prediction and Clinical Deterioration from eICU-CRD) demonstrates that our framework maintains diagnostic accuracy within 3.6% of centralized approaches while reducing communication overhead by 71% and providing formal privacy guarantees. For Clinical Deterioration prediction, the framework achieves 96.9% absolute accuracy on the Clinical Deterioration task with FD-DP at ϵ = 1.0, representing only 0.14% degradation from centralized performance. The solution supports HIPAA-aligned technical safeguards, mitigates inference and membership attacks, and enables secure cross-institutional data sharing with real-time auditability. This work establishes a new paradigm for privacy-preserving healthcare AI that balances data utility, regulatory requirements, and protection against emerging threats in distributed clinical environments. Full article

28 pages, 5315 KB  
Article
An Explainable Framework for Mental Health Monitoring Using Lightweight and Privacy-Preserving Federated Facial Emotion Recognition
by Dina Shehada, Hissam Tawfik, Ahmed Bouridane and Abir Hussain
Sensors 2025, 25(23), 7320; https://doi.org/10.3390/s25237320 - 2 Dec 2025
Abstract
The continuous analysis of emotional cues through facial emotion recognition (FER) systems can support mental health evaluation and psychological well-being monitoring systems. Most FER systems face privacy and trust concerns due to their centralized data approaches and lack of transparency, making potential deployment difficult. To address these concerns, a federated, explainability-driven FER framework designed to provide trustworthy and privacy-preserving emotion recognition with potential applications in mental health monitoring is proposed in this paper. The proposed lightweight Convolutional Neural Network (CNN) enables real-time inference while preserving high accuracy. Comprehensive evaluations on RAF-DB, ExpW, and FER2013 datasets show that the proposed model demonstrates improved cross-dataset generalization compared to related works, achieving average accuracies of 75.5% and 74.3% in centralized and federated settings, respectively. Quantitative perturbation-based metrics, including Insertion and Deletion Area Under Curve (IAUC and DAUC), Average Drop (AD), Increase in Confidence (IC), Average Drop in Accuracy (ADA), and Active Pixel Ratio, were employed to objectively evaluate the quality and reliability of the model's Grad-CAM++ explanations. The results confirm that model explainability enhances transparency and is directly associated with improved model performance. Full article
(This article belongs to the Section Sensing and Imaging)
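Two of the cited explanation metrics, Average Drop (AD) and Increase in Confidence (IC), compare the model's class confidence on the full image (y) against its confidence on the image masked to the explanation's salient regions (o). A sketch of the standard definitions, assuming the two score lists are already computed:

```python
def average_drop(full_scores, masked_scores):
    """Average Drop (%): mean relative confidence lost when the input is
    restricted to the explanation-highlighted regions. Lower is better."""
    drops = [max(0.0, y - o) / y
             for y, o in zip(full_scores, masked_scores)]
    return 100.0 * sum(drops) / len(drops)

def increase_in_confidence(full_scores, masked_scores):
    """Increase in Confidence (%): share of samples whose confidence
    rises under the explanation mask. Higher is better."""
    rises = sum(1 for y, o in zip(full_scores, masked_scores) if o > y)
    return 100.0 * rises / len(full_scores)
```

A faithful saliency map should retain the evidence the model actually uses, so masking to it costs little confidence (low AD) and sometimes removes distractors entirely (high IC).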

22 pages, 3756 KB  
Article
Browser-Based Multi-Cancer Classification Framework Using Depthwise Separable Convolutions for Precision Diagnostics
by Divine Sebukpor, Ikenna Odezuligbo, Maimuna Nagey, Michael Chukwuka, Oluwamayowa Akinsuyi and Blessing Ndubuisi
Diagnostics 2025, 15(23), 3066; https://doi.org/10.3390/diagnostics15233066 - 1 Dec 2025
Abstract
Background: Early and accurate cancer detection remains a critical challenge in global healthcare. Deep learning has shown strong diagnostic potential, yet widespread adoption is limited by dependence on high-performance hardware, centralized servers, and data-privacy risks. Methods: This study introduces a browser-based multi-cancer classification framework that performs real-time, client-side inference using TensorFlow.js, eliminating the need for external servers or specialized GPUs. The proposed model fine-tunes the Xception architecture, which leverages depthwise separable convolutions for efficient feature extraction, on a large multi-cancer dataset of over 130,000 histopathological and cytological images spanning 26 cancer types. It was benchmarked against VGG16, ResNet50, EfficientNet-B0, and Vision Transformer. Results: The model achieved a Top-1 accuracy of 99.85% and a Top-5 accuracy of 100%, surpassing all comparators while maintaining lightweight computational requirements. Grad-CAM visualizations confirmed that predictions were guided by histopathologically relevant regions, reinforcing interpretability and clinical trust. Conclusions: This work represents the first fully browser-deployable, privacy-preserving deep learning framework for multi-cancer diagnosis, demonstrating that high-accuracy AI can be achieved without infrastructure overhead. It establishes a practical pathway for equitable, cost-effective global deployment of medical AI tools. Full article
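The efficiency argument behind the Xception backbone can be made concrete with a parameter count: a depthwise separable convolution factors a standard convolution into a per-channel (depthwise) filter plus a 1x1 (pointwise) channel mix, shrinking parameters by roughly 1/C_out + 1/k^2. The layer sizes below are illustrative, not taken from the paper.

```python
# Parameter-count comparison (bias terms omitted) between a standard
# convolution and its depthwise separable factorization, the building
# block Xception uses for lightweight feature extraction.

def standard_conv_params(k, c_in, c_out):
    # One k x k kernel over all input channels per output channel.
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    depthwise = k * k * c_in   # one k x k filter per input channel
    pointwise = c_in * c_out   # 1x1 convolution to mix channels
    return depthwise + pointwise

k, c_in, c_out = 3, 128, 256          # hypothetical mid-network layer
std = standard_conv_params(k, c_in, c_out)   # -> 294912
sep = separable_conv_params(k, c_in, c_out)  # -> 33920
print(f"standard: {std}, separable: {sep}, ratio: {sep / std:.3f}")
```

For this hypothetical layer the separable form needs under 12% of the parameters, which is what makes in-browser, client-side inference without specialized GPUs plausible.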
(This article belongs to the Special Issue Artificial Intelligence-Driven Radiomics in Medical Diagnosis)
49 pages, 1583 KB  
Review
Federated Learning for Smart Cities: A Thematic Review of Challenges and Approaches
by Laila Alterkawi and Fadi K. Dib
Future Internet 2025, 17(12), 545; https://doi.org/10.3390/fi17120545 - 28 Nov 2025
Abstract
Federated Learning (FL) offers a promising way to train machine learning models collaboratively on decentralized edge devices, addressing key privacy, communication, and regulatory challenges in smart city environments. This survey adopts a narrative approach, guided by systematic review principles such as PRISMA and Kitchenham, to synthesize current FL research in urban contexts. Unlike prior domain-focused surveys, this work introduces a challenge-oriented taxonomy and integrates an explicit analysis of reproducibility, including datasets and deployment artifacts, to assess real-world readiness. The review begins by examining how FL supports the privacy-preserving analysis of environmental and mobility data. It then explores strategies for resource optimization, including load balancing, model compression, and hierarchical aggregation. Applications in anomaly and event detection across power grids, water infrastructure, and surveillance systems are also discussed. In the energy sector, the survey emphasizes the role of FL in demand forecasting, renewable integration, and sustainable logistics. Particular attention is given to security issues, including defenses against poisoning attacks, Byzantine faults, and inference threats. The study identifies ongoing challenges such as data heterogeneity, scalability, resource limitations at the edge, privacy–utility trade-offs, and a lack of standardization. Finally, it outlines a structured roadmap to guide the development of reliable, scalable, and sustainable FL solutions for smart cities. Full article
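The aggregation step at the heart of most FL systems this survey covers can be sketched as FedAvg: the server averages client model updates weighted by local dataset size, so raw data never leaves the device. Parameters are flat lists of floats for simplicity; all names and values here are illustrative.

```python
# Minimal FedAvg sketch: server-side weighted averaging of client
# parameter vectors, weighted by each client's local sample count.

def fed_avg(client_weights, client_sizes):
    """Return the sample-count-weighted average of client parameters."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    agg = [0.0] * dim
    for w, n in zip(client_weights, client_sizes):
        for i in range(dim):
            agg[i] += (n / total) * w[i]
    return agg

# Three hypothetical edge devices with unequal data volumes.
clients = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [100, 100, 200]
print(fed_avg(clients, sizes))  # -> [3.5, 4.5]
```

Weighting by sample count keeps the global model faithful to the overall data distribution, though under the non-IID data heterogeneity the survey highlights, plain FedAvg is often only a starting point.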
(This article belongs to the Special Issue Distributed Machine Learning and Federated Edge Computing for IoT)