Search Results (213)

Search Parameters:
Keywords = privacy-aware learning

18 pages, 5351 KB  
Article
Dual-Factor Adaptive Robust Aggregation for Secure Federated Learning in IoT Networks
by Zuan Song, Wuzheng Tan, Hailong Wang, Guilong Zhang and Jian Weng
Future Internet 2026, 18(4), 201; https://doi.org/10.3390/fi18040201 - 10 Apr 2026
Abstract
Federated Learning (FL) has been widely adopted in privacy-sensitive and distributed environments. However, training stability becomes significantly challenged when differential privacy (DP) noise and Byzantine client behaviors coexist, as these heterogeneous perturbations jointly introduce time-varying distortions to model updates. Existing approaches typically address privacy and robustness in isolation. Under DP constraints, noise injection increases gradient variance and obscures the distinction between benign and adversarial updates, causing many robust aggregation methods to misclassify normal clients or fail to detect malicious ones. As a result, their effectiveness degrades substantially in practical IoT environments where noise and attacks interact. In this work, we propose a dual-factor adaptive and robust aggregation framework (DARA) to improve the stability of FL under such combined disturbances. DARA adjusts the differential privacy noise scale by jointly considering local update magnitudes and training-round dynamics, aiming to mitigate noise-induced bias under a fixed privacy budget. Meanwhile, a direction-aware weighted aggregation scheme assigns continuous trust weights based on cosine similarity between updates, thereby suppressing the influence of potentially anomalous or adversarial clients. We conduct extensive experiments on multiple benchmark datasets to evaluate DARA under differential privacy constraints and Byzantine attack scenarios. The results indicate that DARA achieves favorable robustness and convergence behavior compared with representative aggregation baselines, while maintaining competitive model accuracy. Full article
(This article belongs to the Special Issue Federated Learning: Challenges, Methods, and Future Directions)
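The direction-aware weighting described in this abstract can be sketched as below. The mean-update reference direction, the softmax temperature, and all function names are illustrative assumptions for this sketch, not the paper's implementation.

```python
import numpy as np

def cosine_trust_aggregate(updates, temperature=5.0):
    # Stack client updates: shape (n_clients, dim)
    U = np.stack(updates)
    # Reference direction: the plain mean update (an assumption; the
    # paper may use a different reference)
    ref = U.mean(axis=0)
    # Cosine similarity of each update to the reference
    norms = np.linalg.norm(U, axis=1) * np.linalg.norm(ref)
    cos = (U @ ref) / np.clip(norms, 1e-12, None)
    # Continuous trust weights: softmax over similarities, so
    # direction-flipped (adversarial) updates receive near-zero weight
    w = np.exp(temperature * cos)
    w = w / w.sum()
    return w @ U

# One Byzantine client sends a sign-flipped update; its trust weight
# collapses and the aggregate stays close to the benign direction.
updates = [np.array([1.0, 1.0]), np.array([0.9, 1.1]), np.array([-1.0, -1.0])]
agg = cosine_trust_aggregate(updates)
```

With the sign-flipped client down-weighted, the aggregate lands near the benign mean of roughly (0.95, 1.05) rather than being dragged toward the origin as plain averaging would be.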

21 pages, 1405 KB  
Article
Trust-Aware and Energy-Efficient Federated Learning for Secure Sensor Networks at the Edge
by Manuel J. C. S. Reis
Sensors 2026, 26(8), 2307; https://doi.org/10.3390/s26082307 - 9 Apr 2026
Abstract
The widespread adoption of large-scale sensor networks in privacy-sensitive and safety-critical applications has intensified the demand for secure, trustworthy, and energy-efficient learning mechanisms at the network edge. Federated learning has emerged as a promising paradigm for privacy preservation by enabling collaborative model training without sharing raw sensor data. However, most existing federated approaches inadequately address trust management, communication efficiency, and energy constraints, which are critical in real-world sensor-based systems. This paper proposes a trust-aware and energy-efficient federated learning framework specifically designed for secure sensor networks operating in resource-constrained edge environments. The proposed approach integrates lightweight trust metrics, trust-driven model aggregation, and adaptive communication scheduling to mitigate the impact of unreliable or malicious nodes while reducing unnecessary energy expenditure. By dynamically weighting client contributions based on trust and participation efficiency, the framework enhances robustness and learning stability under heterogeneous sensing conditions. Experimental results show that the proposed method maintains significantly higher accuracy under adversarial participation while reducing communication overhead and cumulative energy consumption. In particular, the framework improves model accuracy by up to 3.2% under heterogeneous conditions, reduces communication overhead by 28%, and decreases cumulative energy consumption by 31% compared with conventional federated learning approaches. Full article
(This article belongs to the Special Issue Sensor Security and Beyond)

32 pages, 722 KB  
Article
Adaptive Sensitivity-Aware Differential Privacy Accounting for Federated Smart-Meter Theft Detection
by Diego Labate, Dipanwita Thakur and Giancarlo Fortino
Big Data Cogn. Comput. 2026, 10(4), 113; https://doi.org/10.3390/bdcc10040113 - 8 Apr 2026
Abstract
Smart-meter theft detection requires learning from fine-grained electricity consumption data, whose centralized processing poses significant privacy risks. Federated learning (FL) mitigates these risks by decentralizing training, but providing rigorous user-level differential privacy (DP) under non-IID data and heterogeneous client behavior remains challenging. Existing DP-FL approaches rely on fixed global clipping bounds for client updates, which substantially overestimate sensitivity when privacy loss is composed using Rényi Differential Privacy (RDP), zero-Concentrated DP (zCDP), or Moments Accountant (MA) frameworks, leading to excessive noise and degraded utility. This work proposes an adaptive clipping-based RDP accountant that incorporates empirical, round-wise update magnitudes into privacy accounting by rescaling each round’s RDP contribution according to the observed clipping ratio. The method is optimizer-agnostic and is evaluated with FedAvg, FedProx, and SCAFFOLD on the SGCC smart-meter theft dataset under IID and Dirichlet non-IID partitions. Experimental results show consistently tighter privacy bounds and improved model utility compared to classical DP accountants, demonstrating the effectiveness of sensitivity-aware privacy accounting for practical differentially private FL. Full article
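The rescaling idea, discounting each round's RDP cost when observed update norms fall below the clipping bound, can be sketched as below. The quadratic scaling law and the sensitivity-1 Gaussian-mechanism RDP formula are assumptions for illustration, not the paper's exact accountant.

```python
import math

def rdp_gaussian(alpha, noise_multiplier):
    # RDP of the Gaussian mechanism at order alpha for a sensitivity-1
    # query (updates clipped to norm C, noise std = C * noise_multiplier)
    return alpha / (2.0 * noise_multiplier ** 2)

def adaptive_rdp_total(alpha, noise_multiplier, observed_norms, clip_bound):
    # Sensitivity-aware accounting: scale each round's RDP contribution
    # by the squared observed clipping ratio (assumed scaling law)
    total = 0.0
    for g in observed_norms:
        ratio = min(g / clip_bound, 1.0)
        total += (ratio ** 2) * rdp_gaussian(alpha, noise_multiplier)
    return total

def rdp_to_dp(rdp, alpha, delta):
    # Standard conversion from RDP at order alpha to (eps, delta)-DP
    return rdp + math.log(1.0 / delta) / (alpha - 1.0)

# Fixed-bound accounting charges the full cost every round; the
# adaptive variant is never looser.
norms = [0.4, 0.7, 1.3, 0.2, 0.9]
fixed = len(norms) * rdp_gaussian(32, 1.1)
adaptive = adaptive_rdp_total(32, 1.1, norms, clip_bound=1.0)
eps = rdp_to_dp(adaptive, 32, delta=1e-5)
```

Because most rounds stay well under the bound, the adaptive total here is roughly half the fixed-bound total, which is the tighter-bound effect the abstract reports.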

52 pages, 14386 KB  
Review
Trustworthy Intelligence: Split Learning–Embedded Large Language Models for Smart IoT Healthcare Systems
by Mahbuba Ferdowsi, Nour Moustafa, Marwa Keshk and Benjamin Turnbull
Electronics 2026, 15(7), 1519; https://doi.org/10.3390/electronics15071519 - 4 Apr 2026
Abstract
The Internet of Things (IoT) plays an increasingly central role in healthcare by enabling continuous patient monitoring, remote diagnosis, and data-driven clinical decision-making through interconnected medical devices and sensing infrastructures. Despite these advances, IoT healthcare systems remain constrained by persistent challenges related to data privacy, computational efficiency, scalability, and regulatory compliance. Federated learning (FL) reduces reliance on centralised data aggregation but remains vulnerable to inference-based privacy risks, while edge-oriented approaches are limited by device heterogeneity and restricted computational and energy resources; the deployment of large language models (LLMs) further exacerbates concerns surrounding privacy exposure, communication overhead, and practical feasibility. This study introduces Trustworthy Intelligence (TI) as a guiding framework for privacy-preserving distributed intelligence in IoT healthcare, explicitly integrating predictive performance, privacy protection, and deployment-oriented system design. Within this framework, split learning (SL) is examined as a core architectural mechanism and extended to support split-aware LLM integration across heterogeneous devices, supported by a structured taxonomy spanning architectural configurations, system adaptation strategies, and evaluation considerations. The study establishes a systematic mapping between SL design choices and representative healthcare scenarios, including wearable monitoring, multi-modal data fusion, clinical text analytics, and cross-institutional collaboration, and analyses key technical challenges such as activation-level privacy leakage, early-round vulnerability, reconstruction risks, and communication–computation trade-offs. An energy- and resource-aware adaptive cut layer selection strategy is outlined to support efficient deployment across devices with varying capabilities. 
A proof-of-concept experimental evaluation compares the proposed SL–LLM framework with centralised learning (CL), federated learning (FL), and conventional SL in terms of training latency, communication overhead, model accuracy, and privacy exposure under realistic IoT constraints, providing system-level evidence for the applicability of the TI framework in distributed healthcare environments and outlining directions for clinically viable and regulation-aligned IoT healthcare intelligence. Full article
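The energy- and resource-aware cut-layer selection outlined above amounts to a cost minimization over candidate split points. A minimal sketch, assuming a simple additive model of on-device compute time plus activation transfer time (the cost model and all parameter names are illustrative):

```python
def select_cut_layer(layer_flops, activation_bytes,
                     device_flops_per_s, uplink_bytes_per_s):
    # Cost of cutting after layer i: time to run layers 0..i on the
    # device plus time to ship layer i's activations to the server.
    best_layer, best_cost = 0, float("inf")
    device_work = 0.0
    for i, flops in enumerate(layer_flops):
        device_work += flops
        cost = (device_work / device_flops_per_s
                + activation_bytes[i] / uplink_bytes_per_s)
        if cost < best_cost:
            best_layer, best_cost = i, cost
    return best_layer

# A bulky early activation pushes the cut deeper into the network,
# even though that means more on-device compute.
cut = select_cut_layer(
    layer_flops=[1e9, 2e9, 2e9, 1e9],
    activation_bytes=[40e6, 8e6, 2e6, 1e6],
    device_flops_per_s=5e9,
    uplink_bytes_per_s=10e6,
)
```

A richer variant would fold per-layer energy and privacy-leakage terms into the same cost, which is the trade-off the survey's taxonomy discusses.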

13 pages, 1960 KB  
Article
Federated Graph Representation Learning for Online Student Performance Analysis
by Rasool Seyghaly, Jordi Garcia and Xavi Masip-Bruin
Electronics 2026, 15(7), 1495; https://doi.org/10.3390/electronics15071495 - 2 Apr 2026
Abstract
The rapid growth of online learning platforms has intensified the need for privacy-aware methods that can analyze learner behavior without centralizing sensitive activity logs. This study presents a Federated Learning-Based Graph Representation Learning (FL-GRL) framework for online student performance analysis in distributed learning environments. Each learner is represented through a local Student Learning Knowledge Graph (SLKG) that captures typed interactions with courses, lessons, webinars, challenges, and forum activities. Graph Neural Networks (GNNs) are used to derive relation-aware embeddings from these local graphs, while federated learning supports collaborative model optimization without sharing raw data. A federated clustering stage is then used to identify soft learner groups with partially overlapping behavioral patterns that may support exploratory personalization and confidence-aware educational follow-up. The current experiments focus on the feasibility of privacy-aware graph-based analysis rather than on a complete supervised prediction benchmark. Results across the evaluated graph-based variants indicate that the proposed framework is operationally viable, preserves relational structure better than flat-feature formulations, and provides an interpretable basis for learner-group discovery in privacy-sensitive online education settings. Full article
(This article belongs to the Special Issue Deep Learning and Data Analytics Applications in Social Networks)

33 pages, 6064 KB  
Article
Federated Gastrointestinal Lesion Classification with Clinical-Entropy Guided Quantum-Inspired Token Pruning in Vision Transformers
by Muhammad Awais, Ali Mustafa Qamar, Umair Khalid and Rehan Ullah Khan
Diagnostics 2026, 16(7), 1027; https://doi.org/10.3390/diagnostics16071027 - 29 Mar 2026
Abstract
Background: Gastrointestinal (GI) cancers remain a major global health concern, where timely and accurate interpretation of endoscopic findings plays a decisive role in patient outcomes. In recent years, deep learning–based decision support systems have shown considerable potential in assisting GI diagnosis; however, their broader adoption is often limited by patient privacy regulations, uneven data availability, and the fragmented nature of clinical data across institutions. Federated learning (FL) offers a practical solution by enabling collaborative model training while keeping patient data local to each hospital. Methods: Vision Transformers (ViTs) are particularly well suited for endoscopic image analysis due to their ability to capture long-range contextual information. Nevertheless, their high computational and communication costs pose a significant challenge in federated settings, especially when data distributions vary across clients. To address this issue, we propose a privacy-preserving federated framework that combines ViTs with a Clinical-Entropy Guided Quantum Evolutionary Algorithm (CEQEA) for adaptive token pruning. The CEQEA leverages the diagnostic diversity of each client’s local dataset to guide population initialization, evolutionary updates, and mutation strength, allowing the pruning strategy to adapt naturally to different clinical profiles. Results: The proposed framework was evaluated on curated upper- and lower-GI tract subsets of the HyperKVASIR dataset under realistic non-IID federated conditions. On the final test sets, the model achieved a mean micro-averaged accuracy of 92.33% for lower-GI classification and 90.19% for upper-GI classification, while maintaining high specificity across all diagnostic classes. At the same time, the adaptive pruning strategy reduced the number of tokens processed by approximately 40% and decreased the number of required federated communication rounds by 33% compared to ViT-based federated baselines. 
Conclusions: Overall, these results indicate that entropy-aware, quantum-inspired evolutionary optimization can effectively balance diagnostic performance and efficiency, making transformer-based models more practical for privacy-preserving, multi-institutional gastrointestinal endoscopy. Full article
(This article belongs to the Special Issue Medical Image Analysis and Machine Learning)

33 pages, 792 KB  
Article
Sustainable Distance Education for All: A Mixed-Methods Study on User Experience and Universal Design Principles in MOOCs
by Seçil Kaya Gülen
Sustainability 2026, 18(7), 3215; https://doi.org/10.3390/su18073215 - 25 Mar 2026
Abstract
Massive Open Online Courses (MOOCs) serve as catalysts for sustainable education by democratizing access to lifelong learning. While this potentially positions them as a key driver of the United Nations Sustainable Development Goal 4 (SDG 4), their long-term impact depends heavily on the implementation of inclusive design and ethical governance. This study evaluates the social sustainability of the AKADEMA platform—defined through equity of access, institutional trust, and long-term learner retention—using Badrul Khan’s e-learning framework. Employing a multi-layered mixed-methods design, the study triangulates subjective user perceptions—gathered via quantitative surveys (N = 209; a convenience sample of 6140 contacted users) and qualitative insights (n = 122)—with objective structural evidence from a technical accessibility audit. Although the results indicate high satisfaction with pedagogical quality, the findings reveal specific structural nuances regarding platform inclusivity and user diversity. Specifically, data triangulation highlights a notable ‘privacy awareness gap’—where working professionals demonstrate higher sensitivity regarding data governance than learners—alongside structural barriers hindering ‘Universal Design’ for learners with disabilities. Consequently, to strengthen the sustainability of open education models, future strategies should emphasize digital equity and institutional trust, ensuring that technical environments align with the promise of inclusive quality education. Full article
(This article belongs to the Section Sustainable Education and Approaches)

25 pages, 3673 KB  
Systematic Review
Recent Advances in Multi-Camera Computer Vision for Industry 4.0 and Smart Cities: A Systematic Review
by Carlos Julio Fierro-Silva, Carolina Del-Valle-Soto, Samih M. Mostafa and José Varela-Aldás
Algorithms 2026, 19(4), 249; https://doi.org/10.3390/a19040249 - 25 Mar 2026
Abstract
The rapid deployment of surveillance cameras in urban, industrial, and domestic environments has intensified the need for intelligent systems capable of analyzing video streams beyond the limitations of single-camera setups. Unlike traditional single-camera approaches, multi-camera systems expand spatial coverage, reduce blind spots, and enable consistent tracking of people and objects across non-overlapping views, thereby improving robustness against occlusions and viewpoint changes. This article presents a comprehensive review of multi-camera vision systems published between 2020 and 2025, covering application domains including public security and biometrics, intelligent transportation, smart cities and IoT, healthcare monitoring, precision agriculture, industry and robotics, pan–tilt–zoom (PTZ) camera networks, and emerging areas such as retail and forensic analysis. The review synthesizes predominant technical approaches, including deep-learning-based detection, multi-target multi-camera tracking (MTMCT), re-identification (Re-ID), spatiotemporal fusion, and edge computing architectures. Persistent challenges are identified, particularly in inter-camera data association, scalability, computational efficiency, privacy preservation, and dataset availability. Emerging trends such as distributed edge AI, cooperative camera networks, and active perception are discussed to outline future research directions toward scalable, privacy-aware, and intelligent multi-camera infrastructures. Full article

24 pages, 1460 KB  
Perspective
From Sensing to Sense-Making: A Framework for On-Person Intelligence with Wearable Biosensors and Edge LLMs
by Tad T. Brunyé, Mitchell V. Petrimoulx and Julie A. Cantelon
Sensors 2026, 26(7), 2034; https://doi.org/10.3390/s26072034 - 25 Mar 2026
Abstract
Wearable biosensors increasingly stream multi-channel physiological and behavioral data outside the laboratory, yet most deployments still end in dashboards or threshold alarms that leave interpretation open to the user. In high-stakes domains, such as military, emergency response, aviation, industry, and elite sport, the constraint is rarely data availability but the cognitive effort required to convert noisy signals into timely, actionable decisions. We argue for on-person cognitive co-pilots: systems that integrate multimodal sensing, compute probabilistic state estimates on devices, synthesize those states with task and environmental context using locally hosted large language models (LLMs), and deliver recommendations through attention-appropriate cues that preserve autonomy. Enabling conditions include mature wearable sensing, edge artificial intelligence (AI) accelerators, tiny machine learning (TinyML) pipelines, privacy-preserving learning, and open-weight LLMs capable of local deployment with retrieval and guardrails. However, critical research gaps remain across layers: sensor validity under real-world conditions, uncertainty calibration and fusion under distribution shift, verification of LLM-mediated reasoning, interaction design that avoids alarm fatigue and automation bias, and governance models that protect privacy and consent in constrained settings. We propose a layered technical framework and research agenda grounded in cognitive engineering and human–automation interaction. Our core claim is that local, uncertainty-aware reasoning is an architectural prerequisite for trustworthy, low-latency augmentation in isolated, confined, and extreme environments. Full article
(This article belongs to the Special Issue Sensors in 2026)

28 pages, 25057 KB  
Article
A Cross-Institutional Financial Fraud Collaborative Detection Algorithm Based on FedGAT Federated Graph Attention Network
by Qichun Wu, Muhammad Shahbaz, Samariddin Makhmudov, Weijian Huang, Ziyang Liu and Yuan Lei
Symmetry 2026, 18(3), 546; https://doi.org/10.3390/sym18030546 - 23 Mar 2026
Abstract
Cross-institutional collaborative fraud detection is essential for combating increasingly sophisticated financial fraud, yet privacy regulations and data silos severely constrain knowledge sharing among institutions. This study aims to develop a privacy-preserving framework that enables effective collaborative fraud detection while protecting raw data, with particular emphasis on exploiting symmetry properties in federated architectures and graph topology analysis. We propose an Adaptive Federated Graph Attention Network (FedGAT), which employs spatio-temporal graph attention mechanisms to capture topological structures and dynamic fraud patterns within institutional transaction networks. The framework introduces a symmetric similarity matrix derived from graph topological features, where the symmetry property (sij=sji) ensures consistent and unbiased measurement of structural relationships between any pair of institutions. Based on this symmetric similarity metric, an adaptive weighted aggregation mechanism is designed for cross-institutional parameter fusion, enabling balanced knowledge transfer that respects the symmetric collaborative relationship among participating institutions. The symmetric information exchange protocol between local institutions and the central server further guarantees equitable contribution and benefit distribution throughout the federated learning process. The framework is evaluated on the Elliptic Bitcoin transaction dataset and the IEEE-CIS fraud detection dataset, with recall rate and false positive rate as primary performance metrics. Results show that FedGAT achieves a recall of 0.85 and a false-positive rate of 0.038 in single-institution detection, representing approximately 40% and 70% improvements over existing methods, respectively. 
In collaborative detection across five virtual institutions, the symmetry-aware adaptive aggregation mechanism enables all participants to achieve performance gains exceeding 15% while completely eliminating negative transfer effects observed in simple averaging approaches. This work contributes a novel symmetry-based federated learning framework that balances privacy protection with detection performance, advancing the literature on cross-institutional financial risk management. Full article
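The symmetric similarity matrix described in this abstract can be illustrated with cosine similarity over per-institution topology feature vectors, which yields sij = sji by construction. The feature choice and the row-normalized aggregation weights are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def symmetric_similarity(topo_features):
    # Cosine similarity between institutions' graph-topology feature
    # vectors; S is symmetric (s_ij == s_ji) by construction.
    F = np.asarray(topo_features, dtype=float)
    F = F / np.clip(np.linalg.norm(F, axis=1, keepdims=True), 1e-12, None)
    return F @ F.T

def aggregation_weights(S, i):
    # Row-normalized non-negative similarities as the adaptive weights
    # institution i uses when fusing its peers' parameters.
    w = np.clip(S[i], 0.0, None)
    return w / w.sum()

# Three institutions described by (avg degree, clustering coeff, fraud ratio);
# the first two have similar transaction topology, the third does not.
S = symmetric_similarity([[3.2, 0.41, 0.02],
                          [3.0, 0.39, 0.03],
                          [9.8, 0.10, 0.20]])
w0 = aggregation_weights(S, 0)
```

Because S is symmetric, any pair of institutions measures its structural relationship identically in both directions, which is the equitable-contribution property the abstract emphasizes.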

36 pages, 6452 KB  
Review
Explainable and Federated Recommender Systems: A Survey and Conceptual Framework for Trustworthy Personalization
by Alexandra Vultureanu-Albiși and Costin Bădică
Electronics 2026, 15(6), 1292; https://doi.org/10.3390/electronics15061292 - 19 Mar 2026
Abstract
Federated recommender systems (FRS) enable privacy-preserving collaborative training without sharing raw user data, while explainable recommender systems (XRS) aim to improve transparency, trust, and accountability. However, research that integrates federation and explainability remains limited and fragmented. This survey reviews recent work at the intersection of Federated Learning (FL), Explainable Artificial Intelligence (XAI), and recommender systems, referred to as Explainable Federated Recommender Systems (XFRS). We analyze architectures, learning paradigms, personalization strategies, and explainability mechanisms, and discuss their trade-offs in explainability, privacy, and trustworthiness. We propose a unified conceptual framework that links these components in decentralized recommendation settings. Combining bibliometric analysis with a systematic categorization of the literature, we identify key gaps and emerging trends, including the limited adoption of explainability in federated settings. Finally, we summarize open challenges and future directions toward trustworthy, privacy-aware personalized recommender systems. Full article

20 pages, 2673 KB  
Article
TAFL-UWSN: A Trust-Aware Federated Learning Framework for Securing Underwater Sensor Networks
by Raja Waseem Anwar, Mohammad Abrar, Abdu Salam and Faizan Ullah
Network 2026, 6(1), 18; https://doi.org/10.3390/network6010018 - 19 Mar 2026
Abstract
Underwater Acoustic Sensor Networks (UASNs) are pivotal for environmental monitoring, surveillance, and marine data collection. However, their open and largely unattended operational settings, constrained communication capabilities, limited energy resources, and susceptibility to insider attacks make it difficult to achieve safe, secure, and efficient collaborative learning. Federated learning (FL) offers a privacy-preserving method for decentralized model training but is inherently vulnerable to Byzantine threats and malicious participants. This paper proposes TAFL-UWSN, a trust-aware FL framework for underwater sensor networks designed to improve security, reliability, and energy efficiency in UASNs by incorporating trust evaluation directly into the FL process. The goal is to mitigate the impact of adversarial nodes while maintaining model performance in low-resource underwater environments. TAFL-UWSN integrates continuous trust scoring based on packet forwarding reliability, sensing consistency, and model deviation. Trust scores are used to weight or filter model updates both at the node level and at the edge layer, where Autonomous Underwater Vehicles (AUVs) act as mobile aggregators. A trust-aware federated averaging algorithm is implemented, and extensive simulations are conducted in a custom Python-based environment, comparing TAFL-UWSN to standard FedAvg and Byzantine-resilient FL approaches under various attack conditions. TAFL-UWSN achieved a model accuracy exceeding 92% with up to 30% malicious nodes while maintaining a false positive rate below 5.5%. Communication overhead was reduced by 28%, and energy usage per node dropped by 33% compared to baseline methods. The TAFL-UWSN framework demonstrates that integrating trust into FL enables secure, efficient, and resilient underwater intelligence, validating its potential for broader application in distributed, resource-constrained environments. Full article
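A trust-aware federated averaging step of the kind this abstract describes can be sketched as follows. The threshold, the multiplicative trust-times-samples weighting, and all names are illustrative assumptions rather than the paper's algorithm.

```python
import numpy as np

def trust_aware_fedavg(updates, trust, n_samples, tau=0.5):
    # Filter out nodes whose trust score falls below tau, then weight
    # the remaining updates by trust * local sample count.
    kept = [k for k, t in enumerate(trust) if t >= tau]
    if not kept:
        raise ValueError("no node passed the trust threshold this round")
    w = np.array([trust[k] * n_samples[k] for k in kept], dtype=float)
    w = w / w.sum()
    U = np.stack([updates[k] for k in kept])
    return w @ U

# Node 2 has a low trust score (e.g., inconsistent sensing reports)
# and is excluded, so its poisoned update never enters the average.
updates = [np.array([0.10, 0.20]),
           np.array([0.12, 0.18]),
           np.array([5.0, -5.0])]
agg = trust_aware_fedavg(updates, trust=[0.9, 0.8, 0.2],
                         n_samples=[100, 80, 500])
```

In a UASN deployment this step would run on the AUV aggregators, which also keep the trust scores updated between rounds.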

43 pages, 6922 KB  
Article
Multi-Flow Hybrid Task Offloading Scheme for Multimodal High-Load V2I Services
by Weiqi Luo, Yaqi Hu, Maoqiang Wu, Yijie Zhou, Rong Yu and Junbin Qin
Electronics 2026, 15(6), 1229; https://doi.org/10.3390/electronics15061229 - 16 Mar 2026
Abstract
In the Internet of Vehicles (IoV), connected vehicles generate high-load perception tasks with large-scale and multimodal sensitive data, imposing strict requirements on latency, computing, and privacy. Existing solutions still suffer from high task service latency and privacy risks. To address these issues, this paper proposes an integrated framework that jointly considers multi-flow task offloading, adaptive privacy preservation, and latency-aware resource incentive mechanism. Specifically, we propose a Location-Aware and Trust-based (LA-Trust) dual-node task offloading algorithm based on deep reinforcement learning (DRL), which treats pre-partitioned subtasks as multiple parallel flows and enables flow-level collaborative offloading optimization across neighboring nodes, allows subtask data uploading and processing to proceed concurrently, and incorporates node security into decision making. To further enhance privacy protection, a Distribution-Aware Local Differential Privacy (DA-LDP) algorithm is designed to adaptively inject artificial noise according to data heterogeneity, balancing privacy protection and task execution accuracy. In addition, a Delay-Cost Reverse Auction (DC-RA) algorithm is proposed to further reduce latency by introducing wireless channel modeling between idle vehicles and edge nodes into the incentive mechanism. Experimental results show that the proposed framework improves task execution accuracy by 38% and reduces offloading cost, delay, incentive cost, and auction communication latency by 64.41%, 64.64%, 19%, and 44%, respectively, while more than 60% of tasks are offloaded to high-trust nodes. Full article
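The distribution-aware noise injection can be illustrated with a per-dimension Laplace mechanism whose scale tracks the data range. The range-based sensitivity is the standard Laplace-mechanism recipe; using it as a stand-in for the paper's heterogeneity-adaptive DA-LDP rule is an assumption of this sketch.

```python
import numpy as np

def da_ldp_perturb(x, low, high, epsilon, rng):
    # Per-dimension Laplace mechanism: sensitivity is the value range
    # (high - low), so wider (more heterogeneous) dimensions receive
    # proportionally more noise for the same epsilon.
    x = np.clip(np.asarray(x, dtype=float), low, high)
    scale = (np.asarray(high) - np.asarray(low)) / epsilon
    return x + rng.laplace(0.0, scale, size=x.shape)

rng = np.random.default_rng(0)
# e.g., a normalized flag and a raw sensor value with very different ranges
reading = [0.3, 120.0]
noisy = da_ldp_perturb(reading, low=[0.0, 0.0], high=[1.0, 200.0],
                       epsilon=2.0, rng=rng)
```

The second dimension, with a 200x wider range, gets correspondingly larger noise, which is the kind of per-dimension balancing between privacy and task accuracy the abstract targets.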
20 pages, 3141 KB  
Article
Differentially Private Federated Learning for Remaining Useful Life Prediction
by Arturs Nikulins, Kārlis Freivalds, Ivars Namatēvs, Kaspars Sudars, Audris Arzovs, Wilhelm Söderkvist Vermelin, Madhav Mishra and Kaspars Ozols
Appl. Sci. 2026, 16(6), 2784; https://doi.org/10.3390/app16062784 - 13 Mar 2026
Abstract
Accurate remaining useful life (RUL) prediction is essential for the safe and cost-effective operation of safety-critical systems such as electronic components and engines. While data-driven machine learning approaches have demonstrated strong performance for RUL estimation, their effectiveness is limited by the lack of full run-to-failure data and by strict privacy and intellectual property constraints in industrial settings. Federated learning (FL) enables collaborative model training across multiple data owners without direct data sharing, but it does not, by itself, provide formal privacy guarantees and remains vulnerable to information leakage. This paper presents a privacy-preserving setup for RUL prediction that combines federated learning with differential privacy (DP). We describe an end-to-end implementation based on the Opacus DP library, highlight practical challenges arising from the integration of DP into recurrent neural network architectures, and propose solutions to address them. Using two representative RUL datasets (CMAPSS and SiC MOSFET), we analyze the effect of DP noise on prediction performance and on the functional dependence between the predicted RUL and the already-lived-life feature. The results demonstrate that differential privacy can be integrated into federated RUL prediction with limited degradation in predictive performance, providing practical insights for deploying privacy-aware collaborative models in industrial environments. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
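The core mechanism behind such DP-enhanced training, per-sample gradient clipping followed by calibrated Gaussian noise, which libraries like Opacus automate, can be sketched in plain NumPy. The function name and the stand-alone formulation are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def dp_sgd_step(per_sample_grads, clip_norm=1.0, noise_multiplier=1.0):
    """One DP-SGD aggregation step (illustrative sketch).

    Clips each sample's gradient to at most clip_norm in L2 norm,
    sums the clipped gradients, adds Gaussian noise with standard
    deviation noise_multiplier * clip_norm, and returns the noisy
    mean; this is the update a client could send to the FL server.
    """
    g = np.asarray(per_sample_grads, dtype=float)   # (batch, n_params)
    norms = np.linalg.norm(g, axis=1, keepdims=True)
    g_clipped = g * np.minimum(1.0, clip_norm / (norms + 1e-12))
    noise = np.random.normal(0.0, noise_multiplier * clip_norm, g.shape[1])
    return (g_clipped.sum(axis=0) + noise) / g.shape[0]
```

Clipping bounds each sample's influence (the sensitivity), which is what lets the added noise translate into a formal (epsilon, delta) guarantee via the moments accountant.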
31 pages, 2057 KB  
Review
Clinical AI in Radiology: Foundations, Trends, Applications, and Emerging Directions
by Iryna Hartsock, Nikolas Koutsoubis, Sabeen Ahmed, Nathan Parker, Matthew B. Schabath, Cyrillo Araujo, Aliya Qayyum, Cesar Lam, Robert A. Gatenby and Ghulam Rasool
Cancers 2026, 18(6), 942; https://doi.org/10.3390/cancers18060942 - 13 Mar 2026
Abstract
Artificial intelligence (AI) is at the vanguard of transforming radiology in several ways, including augmenting diagnoses, improving workflows, and increasing operational efficiency. Several integration challenges, including concerns over privacy, clinical usability, and workflow compatibility, still remain. This review discusses the foundations and current trends of clinical AI in radiology to provide essential context for ongoing developments. To illustrate translational potential, we describe representative applications, including: (1) local deployment of large language models (LLMs) for restructuring and streamlining radiology reports, improving clarity and consistency without relying on external resources; (2) multimodal AI frameworks combining CT images, clinical data, laboratory biomarkers, and LLM-extracted features from clinical notes for early detection of cachexia in pancreatic cancer; (3) privacy-preserving federated learning (FL) infrastructure enabling collaborative AI model development across institutions without sharing raw patient data; and (4) an uncertainty-aware de-identification pipeline for removing Protected Health Information (PHI) from radiology images and clinical reports to support secure data analysis and sharing. We further discuss emerging opportunities for tumor board decision support, clinical trial matching, radiology report quality assurance, and the development of an imaging complexity index. Collectively, these applications highlight the importance of local deployment, multimodal reasoning, privacy preservation, and human-in-the-loop oversight in translating AI models from research to oncology radiology practice. Full article
(This article belongs to the Special Issue Advances in Medical Imaging for Cancer Detection and Diagnosis)
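The cross-institution federated learning infrastructure mentioned in this review typically rests on size-weighted model averaging (FedAvg). A minimal sketch of that aggregation step, with an illustrative function name and flattened weight vectors assumed for simplicity:

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Size-weighted federated averaging (FedAvg) sketch.

    Each institution trains locally and shares only its model
    weights; the server combines them weighted by local dataset
    size, so raw patient data never leaves a site.
    """
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack([np.asarray(w, dtype=float) for w in client_weights])
    return (stacked * (sizes / sizes.sum())[:, None]).sum(axis=0)
```

In practice each layer's tensor is averaged the same way, and the result becomes the next round's global model broadcast back to all sites.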