Search Results (165)

Search Parameters:
Keywords = context privacy preservation

25 pages, 7527 KB  
Article
Heterogeneous Multi-Domain Dataset Synthesis to Facilitate Privacy and Risk Assessments in Smart City IoT
by Matthew Boeding, Michael Hempel, Hamid Sharif and Juan Lopez
Electronics 2026, 15(3), 692; https://doi.org/10.3390/electronics15030692 - 5 Feb 2026
Abstract
The emergence of the Smart Cities paradigm and the rapid expansion and integration of Internet of Things (IoT) technologies within this context have created unprecedented opportunities for high-resolution behavioral analytics, urban optimization, and context-aware services. However, this same proliferation intensifies privacy risks, particularly those arising from cross-modal data linkage across heterogeneous sensing platforms. To address these challenges, this paper introduces a comprehensive, statistically grounded framework for generating synthetic, multimodal IoT datasets tailored to Smart City research. The framework produces behaviorally plausible synthetic data suitable for preliminary privacy risk assessment and as a benchmark for future re-identification studies, as well as for evaluating algorithms in mobility modeling, urban informatics, and privacy-enhancing technologies. As part of our approach, we formalize probabilistic methods for synthesizing three heterogeneous and operationally relevant data streams—cellular mobility traces, payment terminal transaction logs, and Smart Retail nutrition records—capturing the behaviors of a large number of synthetically generated urban residents over a 12-week period. The framework integrates spatially explicit merchant selection using K-Dimensional (KD)-tree nearest-neighbor algorithms, temporally correlated anchor-based mobility simulation reflective of daily urban rhythms, and dietary-constraint filtering to preserve ecological validity in consumption patterns. In total, the system generates approximately 116 million mobility pings, 5.4 million transactions, and 1.9 million itemized purchases, yielding a reproducible benchmark for evaluating multimodal analytics, privacy-preserving computation, and secure IoT data-sharing protocols. The validity of this dataset was confirmed by checking the residents’ underlying distributions against distributions reported in published research.
We present preliminary uniqueness and cross-modal linkage indicators; comprehensive re-identification benchmarking against specific attack algorithms is planned as future work. This framework can be easily adapted to various scenarios of interest in Smart Cities and other IoT applications. By aligning methodological rigor with the operational needs of Smart City ecosystems, this work fills critical gaps in synthetic data generation for privacy-sensitive domains, including intelligent transportation systems, urban health informatics, and next-generation digital commerce infrastructures.
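The KD-tree merchant-selection step the abstract mentions can be sketched as follows. The merchant names and coordinates are invented for illustration, and a linear scan stands in for the paper's KD-tree; at scale one would build the tree once (e.g. with scipy.spatial.cKDTree) and query it in roughly O(log n):

```python
import math

# Hypothetical merchant coordinates (km on a local grid) -- illustrative only,
# not data from the paper.
MERCHANTS = {
    "grocery_a": (0.0, 0.0),
    "grocery_b": (3.0, 4.0),
    "cafe_c": (1.0, 1.0),
}

def nearest_merchant(resident_xy):
    """Return the merchant closest to a resident's current position.

    The paper describes a KD-tree for this nearest-neighbor query; this
    brute-force scan computes the same answer for small inputs and keeps
    the sketch dependency-free.
    """
    return min(MERCHANTS.items(), key=lambda kv: math.dist(resident_xy, kv[1]))[0]
```

A resident at (0.9, 1.2) would be routed to the closest merchant, here `cafe_c`.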

27 pages, 1633 KB  
Review
Transformer Models, Graph Networks, and Generative AI in Gut Microbiome Research: A Narrative Review
by Yan Zhu, Yiteng Tang, Xin Qi and Xiong Zhu
Bioengineering 2026, 13(2), 144; https://doi.org/10.3390/bioengineering13020144 - 27 Jan 2026
Abstract
Background: The rapid advancement in artificial intelligence (AI) has fundamentally reshaped gut microbiome research by enabling high-resolution analysis of complex, high-dimensional microbial communities and their functional interactions with the human host. Objective: This narrative review aims to synthesize recent methodological advances in AI-driven gut microbiome research and to evaluate their translational relevance for therapeutic optimization, personalized nutrition, and precision medicine. Methods: A narrative literature review was conducted using PubMed, Google Scholar, Web of Science, and IEEE Xplore, focusing on peer-reviewed studies published between approximately 2015 and early 2025. Representative articles were selected based on relevance to AI methodologies applied to gut microbiome analysis, including machine learning, deep learning, transformer-based models, graph neural networks, generative AI, and multi-omics integration frameworks. Additional seminal studies were identified through manual screening of reference lists. Results: The reviewed literature demonstrates that AI enables robust identification of diagnostic microbial signatures, prediction of individual responses to microbiome-targeted therapies, and design of personalized nutritional and pharmacological interventions using in silico simulations and digital twin models. AI-driven multi-omics integration—encompassing metagenomics, metatranscriptomics, metabolomics, proteomics, and clinical data—has improved functional interpretation of host–microbiome interactions and enhanced predictive performance across diverse disease contexts. For example, AI-guided personalized nutrition models have achieved AUC exceeding 0.8 for predicting postprandial glycemic responses, while community-scale metabolic modeling frameworks have accurately forecast individualized short-chain fatty acid production. 
Conclusions: Despite substantial progress, key challenges remain, including data heterogeneity, limited model interpretability, population bias, and barriers to clinical deployment. Future research should prioritize standardized data pipelines, explainable and privacy-preserving AI frameworks, and broader population representation. Collectively, these advances position AI as a cornerstone technology for translating gut microbiome data into actionable insights for diagnostics, therapeutics, and precision nutrition.
(This article belongs to the Special Issue Application of Artificial Intelligence in Complex Diseases)

26 pages, 3900 KB  
Review
A Survey on the Computing Continuum and Meta-Operating Systems: Perspectives, Architectures, Outcomes, and Open Challenges
by Panagiotis K. Gkonis, Anastasios Giannopoulos, Nikolaos Nomikos, Lambros Sarakis, Vasileios Nikolakakis, Gerasimos Patsourakis and Panagiotis Trakadas
Sensors 2026, 26(3), 799; https://doi.org/10.3390/s26030799 - 25 Jan 2026
Abstract
The goal of the study presented in this work is to analyze recent advances in the context of the computing continuum and meta-operating systems (meta-OSs). The term continuum includes a variety of diverse hardware and computing elements, as well as network protocols, ranging from lightweight Internet of Things (IoT) components to more complex edge or cloud servers. At the same time, the rapid penetration of IoT technology in modern-era networks, along with associated applications, poses new challenges for efficient application deployment over heterogeneous network infrastructures. These challenges involve, among others, the interconnection of a vast number of IoT devices and protocols, proper resource management, and threat protection and privacy preservation. Hence, unified access mechanisms, data management policies, and security protocols are required across the continuum to support the vision of seamless connectivity and diverse device integration. This task becomes even more important as discussions on sixth-generation (6G) networks are already taking place; 6G networks are envisaged to coexist with IoT applications. Therefore, in this work the most significant technological approaches to address the aforementioned challenges and requirements are presented and analyzed. An architectural approach is also proposed and discussed, which takes into consideration all key players and components in the continuum. In the same context, indicative use cases and scenarios enabled by a meta-OS in the computing continuum are presented as well. Finally, open issues and related challenges are discussed.
(This article belongs to the Section Internet of Things)

31 pages, 1140 KB  
Review
A Survey of Multi-Layer IoT Security Using SDN, Blockchain, and Machine Learning
by Reorapetse Molose and Bassey Isong
Electronics 2026, 15(3), 494; https://doi.org/10.3390/electronics15030494 - 23 Jan 2026
Abstract
The integration of Software-Defined Networking (SDN), blockchain (BC), and machine learning (ML) has emerged as a promising approach to securing Internet of Things (IoT) and Industrial IoT (IIoT) networks. This paper presents a comprehensive review of recent studies focusing on multi-layered security across device, control, network, and application layers. The analysis reveals that BC technology ensures decentralised trust, immutability, and secure access validation, while SDN enables programmability, load balancing, and real-time monitoring. In addition, ML/deep learning (DL) techniques, including federated and hybrid learning, strengthen anomaly detection, predictive security, and adaptive mitigation. Reported evaluations show similar gains in detection accuracy, latency, throughput, and energy efficiency, with effective defence against threats, though differing experimental contexts limit direct comparison. The review also shows that the solutions’ effectiveness depends on ecosystem factors such as SDN controllers, BC platforms, cryptographic protocols, and ML frameworks. However, most studies rely on simulations or small-scale testbeds, leaving large-scale and heterogeneous deployments unverified. Significant challenges include scalability, computational and energy overhead, dataset dependency, limited adversarial resilience, and the explainability of ML-driven decisions. Based on the findings, future research should focus on lightweight consensus mechanisms for constrained devices, privacy-preserving ML/DL, and cross-layer adversarial-resilient frameworks. Advancing these directions will be important in achieving scalable, interoperable, and trustworthy SDN-IoT/IIoT security solutions.
(This article belongs to the Section Artificial Intelligence)

25 pages, 462 KB  
Article
ARIA: An AI-Supported Adaptive Augmented Reality Framework for Cultural Heritage
by Markos Konstantakis and Eleftheria Iakovaki
Information 2026, 17(1), 90; https://doi.org/10.3390/info17010090 - 15 Jan 2026
Abstract
Artificial Intelligence (AI) is increasingly reshaping how cultural heritage institutions design and deliver digital visitor experiences, particularly through adaptive Augmented Reality (AR) applications. However, most existing AR deployments in museums and galleries remain static, rule-based, and insufficiently responsive to visitors’ contextual, behavioral, and emotional diversity. This paper presents ARIA (Augmented Reality for Interpreting Artefacts), a conceptual and architectural framework for AI-supported, adaptive AR experiences in cultural heritage settings. ARIA is designed to address current limitations in personalization, affect-awareness, and ethical governance by integrating multimodal context sensing, lightweight affect recognition, and AI-driven content personalization within a unified system architecture. The framework combines Retrieval-Augmented Generation (RAG) for controlled, knowledge-grounded narrative adaptation, continuous user modeling, and interoperable Digital Asset Management (DAM), while embedding Human-Centered Design (HCD) and Fairness, Accountability, Transparency, and Ethics (FATE) principles at its core. Emphasis is placed on accountable personalization, privacy-preserving data handling, and curatorial oversight of narrative variation. ARIA is positioned as a design-oriented contribution rather than a fully implemented system. Its architecture, data flows, and adaptive logic are articulated through representative museum use-case scenarios and a structured formative validation process including expert walkthrough evaluation and feasibility analysis, providing a foundation for future prototyping and empirical evaluation. The framework aims to support the development of scalable, ethically grounded, and emotionally responsive AR experiences for next-generation digital museology.
(This article belongs to the Special Issue Artificial Intelligence Technologies for Sustainable Development)

31 pages, 2412 KB  
Article
Privacy-Preserving User Profiling Using MLP-Based Data Generalization
by Dardan Maraj, Renato Šoić, Antonia Žaja and Marin Vuković
Appl. Sci. 2026, 16(2), 848; https://doi.org/10.3390/app16020848 - 14 Jan 2026
Abstract
The rapid growth in Internet-based services has increased the demand for user data to enable personalized and adaptive digital experiences. These services typically require users to disclose various types of personal information, which are organized into user profiles and used to tailor content, recommendations, and accessibility settings. However, achieving an effective balance between personalization accuracy and user data protection remains a persistent and complex challenge. Excessive data disclosure raises the risk of re-identification and privacy breaches, while excessive anonymization can significantly diminish personalization and overall service quality. In this paper, we address this trade-off by proposing a context-aware learning-based data generalization framework that preserves user privacy while maintaining the functional usefulness of personal data. We first conduct a systematic classification of user data commonly collected into five main categories: demographic, location, accessibility, preference, and behavior data. To generalize these data categories dynamically and adaptively, we use a Multi-Layer Perceptron (MLP) model that learns patterns across heterogeneous data types. Unlike traditional rule-based generalization techniques, the MLP-based approach captures nonlinear relationships, adapts to heterogeneous data distributions, and scales efficiently with large datasets. The proposed MLP-based generalization method reduces the granularity of personal data, preserving privacy without significantly compromising information usefulness. Experimental results show that the proposed method reduces the risk of re-identification to approximately 35%, compared to non-anonymized data, where the re-identification risk is about 80–90%. These findings highlight the potential of learning-based data generalization as a strategy for privacy-preserving personalization in modern Internet services. 
They also show how the proposed generalization method can be applied in practice to transform user data while maintaining both utility and confidentiality.
(This article belongs to the Special Issue Advances in Technologies for Data Privacy and Security)
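The core generalization operation (reducing the granularity of personal attributes) can be illustrated with fixed rules; the paper's contribution is to learn such mappings with an MLP instead of hard-coding them, so the band boundaries and cell size below are purely illustrative assumptions:

```python
def generalize_age(age: int) -> str:
    """Coarsen an exact age into a band; boundaries are illustrative,
    whereas the paper's MLP learns the generalization adaptively."""
    if age < 18:
        return "minor"
    if age < 35:
        return "18-34"
    if age < 55:
        return "35-54"
    return "55+"

def generalize_location(lat: float, lon: float, ndigits: int = 1):
    """Round coordinates to ~11 km grid cells, hiding the exact position."""
    return (round(lat, ndigits), round(lon, ndigits))
```

Releasing only the band and the grid cell instead of the raw values is what lowers the re-identification risk the abstract quantifies, at the cost of some personalization precision.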

36 pages, 968 KB  
Review
Applications of Artificial Intelligence in Fisheries: From Data to Decisions
by Syed Ariful Haque and Saud M. Al Jufaili
Big Data Cogn. Comput. 2026, 10(1), 19; https://doi.org/10.3390/bdcc10010019 - 5 Jan 2026
Abstract
AI enhances aquatic resource management by automating species detection, optimizing feed, forecasting water quality, protecting species interactions, and strengthening the detection of illegal, unreported, and unregulated fishing activities. However, these advancements are inconsistently employed, subject to domain shifts, limited by the availability of labeled data, and poorly benchmarked across operational contexts. Recent developments in technology and applications in fisheries genetics and monitoring, precision aquaculture, management, and sensing infrastructure are summarized in this paper. We studied automated species recognition, genomic trait inference, environmental DNA metabarcoding, acoustic analysis, and trait-based population modeling in fisheries genetics and monitoring. We used digital-twin frameworks for supervised learning in feed optimization, reinforcement learning for water quality control, vision-based welfare monitoring, and harvest forecasting in aquaculture. We explored automatic identification system trajectory analysis for illicit fishing detection, global effort mapping, electronic bycatch monitoring, protected species tracking, and multi-sensor vessel surveillance in fisheries management. Acoustic echogram automation, convolutional neural network-based fish detection, edge-computing architectures, and marine-domain foundation models are foundational developments in sensing infrastructure. Implementation challenges include performance degradation across habitat and seasonal transitions, insufficient standardized multi-region datasets for rare and protected taxa, inadequate incorporation of model uncertainty into management decisions, and structural inequalities in data access and technology adoption among smallholder producers. 
We prioritize standardized multi-region benchmarks with rare-taxa coverage, calibrated uncertainty quantification in assessment and control systems, domain-robust and energy-efficient algorithms, and privacy-preserving data partnerships. These integrated priorities enable the transition from experimental prototypes to reliable, collaborative infrastructure for sustainable wild-capture and farmed aquatic systems.

31 pages, 3629 KB  
Article
Guardians of the Grid: A Collaborative AI System for DDoS Detection in Autonomous Vehicles Infrastructure
by Amir Djenna, Saida Tamadartaza, Riham Oucief and Usman Javed Butt
Information 2026, 17(1), 34; https://doi.org/10.3390/info17010034 - 3 Jan 2026
Abstract
Distributed Denial-of-Service (DDoS) attacks represent a pervasive and critical threat to autonomous vehicles, jeopardizing their operational functionality and passenger safety. The ease with which these attacks can be launched contrasts sharply with the difficulty of their detection and mitigation, necessitating advanced defensive solutions. This study proposes a novel deep-learning framework for accurate DDoS detection within automotive networks. We implement and compare multiple artificial neural network architectures, including Convolutional Neural Networks, Recurrent Neural Networks, and Deep Neural Networks, enhanced with an active learning component to maximize data efficiency. The most performant model is subsequently deployed within a federated learning paradigm to facilitate collaborative, privacy-preserving training across distributed clients. The framework is evaluated against three primary DDoS attack vectors: volumetric, state-exhaustion, and amplification. Experimental results on the CIC-DDoS2019 benchmark dataset demonstrate the efficacy of our approach, achieving 99.98% accuracy in attack classification. This indicates a promising solution for real-time DDoS detection in the safety-critical context of autonomous driving.
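The collaborative, privacy-preserving training the abstract describes typically rests on federated averaging (FedAvg): clients train locally and the server only combines parameters. A minimal sketch of the server-side step, with flat parameter lists standing in for real model tensors, might look like this:

```python
def fed_avg(client_params, client_sizes):
    """Server-side FedAvg: average the clients' parameter vectors,
    weighting each client by its number of local training samples.
    No raw network traffic or labels ever reach the server."""
    total = sum(client_sizes)
    n = len(client_params[0])
    return [
        sum(p[i] * s for p, s in zip(client_params, client_sizes)) / total
        for i in range(n)
    ]
```

For two clients with parameters [1.0, 0.0] and [3.0, 2.0] holding 1 and 3 samples respectively, the aggregate is [2.5, 1.5], biased toward the larger client.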

29 pages, 1050 KB  
Article
A Lightweight Authentication and Key Distribution Protocol for XR Glasses Using PUF and Cloud-Assisted ECC
by Wukjae Cha, Hyang Jin Lee, Sangjin Kook, Keunok Kim and Dongho Won
Sensors 2026, 26(1), 217; https://doi.org/10.3390/s26010217 - 29 Dec 2025
Abstract
The rapid convergence of artificial intelligence (AI), cloud computing, and 5G communication has positioned extended reality (XR) as a core technology bridging the physical and virtual worlds. Encompassing virtual reality (VR), augmented reality (AR), and mixed reality (MR), XR has demonstrated transformative potential across sectors such as healthcare, industry, education, and defense. However, the compact architecture and limited computational capabilities of XR devices render conventional cryptographic authentication schemes inefficient, while the real-time transmission of biometric and positional data introduces significant privacy and security vulnerabilities. To overcome these challenges, this study introduces PXRA (PUF-based XR authentication), a lightweight and secure authentication and key distribution protocol optimized for cloud-assisted XR environments. PXRA utilizes a physically unclonable function (PUF) for device-level hardware authentication and offloads elliptic curve cryptography (ECC) operations to the cloud to enhance computational efficiency. Authenticated encryption with associated data (AEAD) ensures message confidentiality and integrity, while formal verification through ProVerif confirms the protocol’s robustness under the Dolev–Yao adversary model. Experimental results demonstrate that PXRA reduces device-side computational overhead by restricting XR terminals to lightweight PUF and hash functions, achieving an average authentication latency below 15 ms, sufficient for real-time XR performance. Formal analysis verifies PXRA’s resistance to replay, impersonation, and key compromise attacks, while preserving user anonymity and session unlinkability. These findings establish the feasibility of integrating hardware-based PUF authentication with cloud-assisted cryptographic computation to enable secure, scalable, and real-time XR systems.
The proposed framework lays a foundation for future XR applications in telemedicine, remote collaboration, and immersive education, where both performance and privacy preservation are paramount. Our contribution lies in a hybrid PUF–cloud ECC architecture, context-bound AEAD for session-splicing resistance, and a noise-resilient BCH-based fuzzy extractor supporting up to a 15% bit error rate (BER).
(This article belongs to the Special Issue Feature Papers in the Internet of Things Section 2025)

23 pages, 392 KB  
Review
From Pilots to Practices: A Scoping Review of GenAI-Enabled Personalization in Computer Science Education
by Iman Reihanian, Yunfei Hou and Qingquan Sun
AI 2026, 7(1), 6; https://doi.org/10.3390/ai7010006 - 23 Dec 2025
Abstract
Generative AI enables personalized computer science education at scale, yet questions remain about whether such personalization supports or undermines learning. This scoping review synthesizes 32 studies (2023–2025) purposively sampled from 259 records to map personalization mechanisms and effectiveness signals in higher-education CS contexts. We identify five application domains—intelligent tutoring, personalized materials, formative feedback, AI-augmented assessment, and code review—and analyze how design choices shape learning outcomes. Designs incorporating explanation-first guidance, solution withholding, graduated hint ladders, and artifact grounding (student code, tests, and rubrics) consistently show more positive learning processes than unconstrained chat interfaces. Successful implementations share four patterns: context-aware tutoring anchored in student artifacts, multi-level hint structures requiring reflection, composition with traditional CS infrastructure (autograders and rubrics), and human-in-the-loop quality assurance. We propose an exploration-first adoption framework emphasizing piloting, instrumentation, learning-preserving defaults, and evidence-based scaling. Four recurrent risks—academic integrity, privacy, bias and equity, and over-reliance—are paired with operational mitigations. Critical evidence gaps include longitudinal effects on skill retention, comparative evaluations of guardrail designs, equity impacts at scale, and standardized replication metrics. The evidence supports generative AI as a mechanism for precision scaffolding when embedded in exploration-first, audit-ready workflows that preserve productive struggle while scaling personalized support.
(This article belongs to the Topic Generative Artificial Intelligence in Higher Education)

43 pages, 1272 KB  
Article
A Responsible Generative Artificial Intelligence Based Multi-Agent Framework for Preserving Data Utility and Privacy
by Abhinav Tiwari and Hany E. Z. Farag
AI 2026, 7(1), 1; https://doi.org/10.3390/ai7010001 - 21 Dec 2025
Abstract
The exponential growth in the usage of textual data across industries and data sharing across institutions underscores the critical need for frameworks that effectively balance data utility and privacy. This paper proposes an innovative agentic AI-based framework specifically tailored for textual data, integrating user-driven qualitative inputs, differential privacy, and generative AI methodologies. The framework comprises four interlinked components: (1) A novel quantitative approach that translates qualitative user inputs, such as textual completeness, relevance, or coherence, into precise, context-aware utility thresholds through semantic embedding and adaptive metric mapping. (2) A differential privacy-driven mechanism optimizing text embedding perturbations, dynamically balancing semantic fidelity against rigorous privacy constraints. (3) An advanced generative AI approach to synthesize and augment textual datasets, preserving semantic coherence while minimizing sensitive information leakage. (4) An adaptable dataset-dependent optimization system that autonomously profiles textual datasets, selects dataset-specific privacy strategies (e.g., anonymization, paraphrasing), and adapts in real-time to evolving privacy and utility requirements. Each component is operationalized via specialized agentic modules with explicit mathematical formulations and inter-agent coordination, establishing a robust and adaptive solution for modern textual data challenges.
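Perturbing text embeddings under differential privacy, as the second mechanism describes, is commonly realized with the Gaussian mechanism. The sketch below follows that standard calibration; the paper's exact mechanism, sensitivity bound, and parameter values may differ, and all numbers here are illustrative:

```python
import math
import random

def perturb_embedding(vec, epsilon, delta=1e-5, sensitivity=1.0, seed=0):
    """Add Gaussian noise calibrated for (epsilon, delta)-DP to an embedding.

    Uses the classic analytic calibration
        sigma = sqrt(2 * ln(1.25 / delta)) * sensitivity / epsilon,
    which assumes the embedding's L2 sensitivity is bounded by `sensitivity`
    (e.g. after norm clipping). Seeding is only for reproducibility of the
    sketch; production code would use fresh randomness.
    """
    sigma = math.sqrt(2 * math.log(1.25 / delta)) * sensitivity / epsilon
    rng = random.Random(seed)
    return [v + rng.gauss(0.0, sigma) for v in vec]
```

Smaller epsilon yields larger sigma, trading semantic fidelity of the perturbed embedding for stronger privacy, which is exactly the balance the framework tunes dynamically.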

29 pages, 7487 KB  
Article
Efficient Privacy-Preserving Face Recognition Based on Feature Encoding and Symmetric Homomorphic Encryption
by Limengnan Zhou, Qinshi Li, Hui Zhu, Yanxia Zhou and Hanzhou Wu
Entropy 2026, 28(1), 5; https://doi.org/10.3390/e28010005 - 19 Dec 2025
Abstract
In the context of privacy-preserving face recognition systems, entropy plays a crucial role in determining the efficiency and security of computational processes. However, existing schemes often encounter challenges such as inefficiency and high entropy in their computational models. To address these issues, we propose a privacy-preserving face recognition method based on the Face Feature Coding Method (FFCM) and symmetric homomorphic encryption, which reduces computational entropy while enhancing system efficiency and ensuring facial privacy protection. Specifically, to accelerate the matching speed during the authentication phase, we construct an N-ary feature tree using a neural network-based FFCM, significantly improving ciphertext search efficiency. Additionally, during authentication, the server computes the cosine similarity of the matched facial features in ciphertext form using lightweight symmetric homomorphic encryption, minimizing entropy in the computation process and reducing overall system complexity. Security analysis indicates that critical template information remains secure and resilient against both passive and active attacks. Experimental results demonstrate that the facial authentication efficiency with FFCM classification is 4% to 6% higher than recent state-of-the-art solutions. This method provides an efficient, secure, and entropy-aware approach for privacy-preserving face recognition, offering substantial improvements in large-scale applications.
(This article belongs to the Special Issue Information-Theoretic Methods for Trustworthy Machine Learning)
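The matching score computed during authentication is a cosine similarity. The paper evaluates it homomorphically over encrypted features; the underlying plaintext arithmetic is simply:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors:
    dot(a, b) / (|a| * |b|), in [-1, 1] for real-valued features.
    The paper computes this same quantity in ciphertext form via
    symmetric homomorphic encryption."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```

Identical feature vectors score 1.0 and orthogonal ones score 0.0; authentication then reduces to thresholding this score against an enrollment template.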

32 pages, 1365 KB  
Article
Risk-Aware Privacy-Preserving Federated Learning for Remote Patient Monitoring: A Multi-Layer Adaptive Security Framework
by Fatiha Benabderrahmane, Elhillali Kerkouche and Nardjes Bouchemal
Appl. Sci. 2026, 16(1), 29; https://doi.org/10.3390/app16010029 - 19 Dec 2025
Cited by 1
Abstract
The integration of artificial intelligence into remote patient monitoring (RPM) offers significant benefits for proactive and continuous healthcare, but also raises critical concerns regarding privacy, integrity, and robustness. Federated Learning (FL) provides a decentralized approach to model training that preserves data locality, yet most existing solutions address only isolated security aspects and lack contextual adaptability for clinical use. This paper presents MedGuard-FL, a context-aware FL framework tailored to e-healthcare environments. Spanning device, edge, and cloud layers, it integrates encryption, adaptive differential privacy, anomaly detection, and Byzantine-resilient aggregation. At its core, a policy engine dynamically adjusts privacy and robustness parameters based on the patient’s status and the system’s risk. Evaluations on real-world clinical datasets show MedGuard-FL maintains high model accuracy while neutralizing various adversarial attacks (e.g., label-flip, poisoning, backdoor, membership inference), all with manageable latency. Compared to static defenses, it offers improved trade-offs between privacy, utility, and responsiveness. Additional edge-level privacy analyses confirm its resilience, with attack effectiveness near random. By embedding clinical risk awareness into adaptive defense mechanisms, MedGuard-FL lays a foundation for secure, real-time federated intelligence in RPM.
(This article belongs to the Special Issue Applications in Neural and Symbolic Artificial Intelligence)
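The adaptive differential privacy component this abstract mentions can be illustrated with a minimal Gaussian-mechanism sketch in which the noise scale grows with a clinical risk score. The risk-to-noise mapping, parameter names, and defaults below are hypothetical stand-ins, not MedGuard-FL's actual policy engine:

```python
import random

def clip_gradient(grad, clip_norm):
    """L2-clip a gradient vector to bound per-client sensitivity."""
    norm = sum(g * g for g in grad) ** 0.5
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    return [g * scale for g in grad]

def adaptive_dp_update(grad, risk_level, clip_norm=1.0, base_sigma=1.0):
    """Gaussian-mechanism update with a risk-dependent noise scale.

    risk_level in [0, 1]: higher patient/system risk maps to a larger
    noise multiplier (tighter privacy). This linear policy is an
    illustrative assumption."""
    sigma = base_sigma * (1.0 + risk_level)
    clipped = clip_gradient(grad, clip_norm)
    return [g + random.gauss(0.0, sigma * clip_norm) for g in clipped]

# A client under elevated risk submits a noisier, clipped update.
update = adaptive_dp_update([3.0, 4.0], risk_level=0.8)
```

Clipping bounds each client's influence on the aggregate, which is what makes the added Gaussian noise translate into a formal differential-privacy guarantee.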

33 pages, 353 KB  
Article
Integration of Artificial Intelligence into Criminal Procedure Law and Practice in Kazakhstan
by Gulzhan Nusupzhanovna Mukhamadieva, Akynkozha Kalenovich Zhanibekov, Nurdaulet Mukhamediyaruly Apsimet and Yerbol Temirkhanovich Alimkulov
Laws 2025, 14(6), 98; https://doi.org/10.3390/laws14060098 - 12 Dec 2025
Cited by 1
Abstract
Legal regulation and practical implementation of artificial intelligence (AI) in Kazakhstan’s criminal procedure are considered within the context of judicial digital transformation. Risks arise for fundamental procedural principles, including the presumption of innocence, adversarial process, and protection of individual rights and freedoms. Legislative mechanisms ensuring lawful and rights-based application of AI in criminal proceedings are required to maintain procedural balance. Comparative legal analysis, formal legal research, and a systemic approach reveal gaps in existing legislation: absence of clear definitions, insufficient regulation, and lack of accountability for AI use. Legal recognition of AI and the establishment of procedural safeguards are essential. The novelty of the study lies in the development of concrete approaches to the introduction of artificial intelligence technologies into criminal procedure, taking into account Kazakhstan’s practical experience with the digitalization of criminal case management. Unlike existing research, which examines AI in the legal profession primarily from a theoretical perspective, this work proposes detailed mechanisms for integrating models and algorithms into the processing of criminal cases. The implementation of AI in criminal justice enhances the efficiency, transparency, and accuracy of case handling by automating document preparation, data analysis, and monitoring compliance with procedural deadlines. At the same time, several constraints persist, including dependence on the quality of training datasets, the impossibility of fully replacing human legal judgment, and the need to uphold the principles of the presumption of innocence, the right to privacy, and algorithmic transparency. The findings of the study underscore the potential of AI, provided that procedural safeguards are strictly observed and competent authorities exercise appropriate oversight.
Two potential approaches are outlined: selective amendments to the Criminal Procedure Code concerning rights protection, privacy, and judicial powers; or adoption of a separate provision on digital technologies and AI. Implementation of these measures would create a balanced legal framework that enables effective use of AI while preserving core procedural guarantees.
(This article belongs to the Special Issue Criminal Justice: Rights and Practice)
24 pages, 1444 KB  
Review
Federated Learning for Environmental Monitoring: A Review of Applications, Challenges, and Future Directions
by Tymoteusz Miller, Irmina Durlik, Ewelina Kostecka and Arkadiusz Puszkarek
Appl. Sci. 2025, 15(23), 12685; https://doi.org/10.3390/app152312685 - 29 Nov 2025
Abstract
Federated learning (FL) is emerging as a pivotal paradigm for environmental monitoring, enabling decentralized model training across edge devices without exposing raw data. This review provides the first structured synthesis of 361 peer-reviewed studies, offering a comprehensive overview of how FL has been implemented across environmental domains such as air and water quality, climate modeling, smart agriculture, and biodiversity assessment. We further provide comparative insights into model architectures, energy-aware strategies, and edge-device trade-offs, elucidating how system design choices influence model stability, scalability, and sustainability. The analysis traces the technological evolution of FL from communication-efficient prototypes to robust, context-aware deployments that integrate domain knowledge, physical modeling, and ethical considerations. Persistent challenges remain, including data heterogeneity, limited benchmarking, and inequitable access to computational infrastructure. Addressing these requires advances in hybrid physics–AI frameworks, privacy-preserving sensing, and participatory governance. Overall, this review positions FL not merely as a technical mechanism but as a socio-technical shift—one that aligns distributed intelligence with the complexity, uncertainty, and urgency of contemporary environmental science.
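The decentralized training pattern this review surveys rests on federated averaging: each sensor node trains locally, and only model parameters, weighted by local dataset size, are aggregated centrally. A minimal sketch follows; the flat-list parameter representation and names are illustrative, with no claim to any surveyed system's implementation:

```python
def federated_average(client_updates, client_sizes):
    """Weighted average of client model parameters (FedAvg-style).

    client_updates: one flat parameter list per client.
    client_sizes: number of local training samples per client,
    used as aggregation weights."""
    total = sum(client_sizes)
    n_params = len(client_updates[0])
    avg = [0.0] * n_params
    for update, size in zip(client_updates, client_sizes):
        w = size / total
        for i, p in enumerate(update):
            avg[i] += w * p
    return avg

# Two simulated sensor nodes with unequal local datasets: the node
# holding more data pulls the global model toward its parameters.
print(federated_average([[1.0, 2.0], [3.0, 4.0]], [1, 3]))  # [2.5, 3.5]
```

The raw sensor readings never leave the nodes; only the parameter lists are exchanged, which is the privacy property the review's title refers to.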
