Search Results (231)

Search Parameters:
Keywords = privacy shift

13 pages, 1299 KB  
Review
The Evolution of Cardiac Rehabilitation from Supervised Models to New Frontiers in Digital Health
by Alfredo Mauriello, Adriana Correra, Anna Chiara Maratea, Vincenzo Russo, Biagio Liccardo, Felice Gragnano, Vincenzo Acerbo, Arturo Cesaro, Mario Pacileo, Carmine Riccio, Paolo Calabrò and Antonello D’Andrea
J. Clin. Med. 2026, 15(7), 2515; https://doi.org/10.3390/jcm15072515 - 25 Mar 2026
Abstract
Background/Objectives: Cardiac rehabilitation (CR) is a cornerstone of secondary prevention, traditionally delivered through supervised center-based models. However, significant logistical barriers and high healthcare costs necessitate a paradigm shift. This review aims to assess the impact of emerging digital frontiers, specifically telerehabilitation (CTR) and artificial intelligence (AI), on overcoming these challenges and improving clinical outcomes. Methods: This study is a narrative, clinically oriented review informed by a structured search of PubMed/MEDLINE and EMBASE for literature published between January 2015 and January 2026. Results: Evidence indicates that CTR is non-inferior to center-based programs in terms of exercise capacity and quality of life (QoL). Digital tools, such as wearable devices and mobile health (mHealth) applications, have significantly increased program participation and improved adherence to lifestyle modifications. Furthermore, the integration of AI facilitates early detection of cardiac events and personalized exercise prescription, while prehabilitation models have been shown to reduce postoperative hospital stays. Conclusions: Digitalization of CR may represent a cost-effective alternative that bridges the gap in global access. While technology serves as an essential diagnostic partner, a robust regulatory and privacy framework is required to protect data sovereignty. Ultimately, multidisciplinary synergy between human expertise and digital innovation is important for providing an equitable and personalized pathway to recovery. Full article
(This article belongs to the Section Clinical Rehabilitation)

24 pages, 1460 KB  
Perspective
From Sensing to Sense-Making: A Framework for On-Person Intelligence with Wearable Biosensors and Edge LLMs
by Tad T. Brunyé, Mitchell V. Petrimoulx and Julie A. Cantelon
Sensors 2026, 26(7), 2034; https://doi.org/10.3390/s26072034 - 25 Mar 2026
Abstract
Wearable biosensors increasingly stream multi-channel physiological and behavioral data outside the laboratory, yet most deployments still end in dashboards or threshold alarms that leave interpretation open to the user. In high-stakes domains, such as military, emergency response, aviation, industry, and elite sport, the constraint is rarely data availability but the cognitive effort required to convert noisy signals into timely, actionable decisions. We argue for on-person cognitive co-pilots: systems that integrate multimodal sensing, compute probabilistic state estimates on devices, synthesize those states with task and environmental context using locally hosted large language models (LLMs), and deliver recommendations through attention-appropriate cues that preserve autonomy. Enabling conditions include mature wearable sensing, edge artificial intelligence (AI) accelerators, tiny machine learning (TinyML) pipelines, privacy-preserving learning, and open-weight LLMs capable of local deployment with retrieval and guardrails. However, critical research gaps remain across layers: sensor validity under real-world conditions, uncertainty calibration and fusion under distribution shift, verification of LLM-mediated reasoning, interaction design that avoids alarm fatigue and automation bias, and governance models that protect privacy and consent in constrained settings. We propose a layered technical framework and research agenda grounded in cognitive engineering and human–automation interaction. Our core claim is that local, uncertainty-aware reasoning is an architectural prerequisite for trustworthy, low-latency augmentation in isolated, confined, and extreme environments. Full article
(This article belongs to the Special Issue Sensors in 2026)
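The "probabilistic state estimates on devices" layer that this abstract calls for can be hinted at with a classic building block: precision-weighted fusion of independent Gaussian sensor estimates. This is a minimal illustrative sketch under that assumption, not the authors' framework; the function name `fuse_gaussian` is mine.

```python
def fuse_gaussian(estimates):
    """Precision-weighted fusion of independent Gaussian (mean, variance) estimates.

    Each element of `estimates` is a (mean, variance) pair from one sensor.
    The fused variance is never larger than the smallest input variance,
    so adding a sensor channel can only tighten the state estimate.
    """
    precision = sum(1.0 / var for _, var in estimates)
    mean = sum(mu / var for mu, var in estimates) / precision
    return mean, 1.0 / precision
```

For example, fusing two equally reliable but disagreeing channels yields their midpoint with halved variance; a calibrated uncertainty of this kind is what would gate whether a downstream on-device LLM surfaces a recommendation or stays silent.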

24 pages, 1810 KB  
Article
Homomorphic ReLU with Full-Domain Bootstrapping
by Yuqun Lin, Yi Huang, Xiaomeng Tang, Jingjing Fan, Qifei Xu, Zoe-Lin Jiang, Xiaosong Zhang and Junbin Fang
Cryptography 2026, 10(2), 21; https://doi.org/10.3390/cryptography10020021 - 24 Mar 2026
Abstract
Fully homomorphic encryption (FHE) offers a promising solution for privacy-preserving machine learning by enabling arbitrary computations on encrypted data. However, the efficient evaluation of non-linear functions—such as the ReLU activation function over large integers—remains a major obstacle in practical deployments, primarily due to high bootstrapping overhead and limited precision support in existing schemes. In this paper, we propose LargeIntReLU, a novel framework that enables efficient homomorphic ReLU evaluation over large integers (7–11 bits) via full-domain bootstrapping. Central to our approach is a signed digit decomposition algorithm, SignedDecomp, that partitions a large integer ciphertext into signed 6-bit segments using three new low-level primitives: LeftShift, HomMod, and CipherClean. This decomposition preserves arithmetic consistency, avoids cross-segment carry propagation, and allows parallelized bootstrapping. By segmenting the large integer and processing each chunk independently with optimized small-integer bootstrapping, we achieve homomorphic ReLU with full-domain bootstrapping, which significantly reduces the total number of sequential bootstrapping operations required. The security of our scheme is guaranteed by TFHE. Experimental results demonstrate that the proposed method reduces the bootstrapping cost by an average of 28.58% compared to state-of-the-art approaches while maintaining 95.2% accuracy. With execution times ranging from 1.16 s to 1.62 s across 7–11 bit integers, our work bridges a critical gap toward a scalable and efficient homomorphic ReLU function, which is useful in privacy-preserving machine learning. Furthermore, an end-to-end encrypted inference test on a CNN model with the MNIST dataset confirms its practicality, achieving 88.85% accuracy and demonstrating a complete pipeline for privacy-preserving neural network evaluation. Full article
(This article belongs to the Special Issue Information Security and Privacy—ACISP 2025)
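The signed digit decomposition at the core of LargeIntReLU can be illustrated in plaintext. The sketch below is a toy analogue (the names `signed_decomp`, `recompose`, and `relu_via_digits` are mine, not the paper's primitives): an integer is split into balanced 6-bit digits so that no cross-segment carry is needed and the sign of the whole value is carried by the most significant nonzero digit. In the actual scheme each digit would sit in a TFHE ciphertext and be bootstrapped independently.

```python
def signed_decomp(x, base_bits=6, n_digits=2):
    """Balanced-digit decomposition: x = sum(d_i * B**i) with d_i in [-B/2, B/2)."""
    B = 1 << base_bits
    half = B >> 1
    digits = []
    for _ in range(n_digits):
        d = ((x + half) % B) - half   # digit congruent to x mod B, centered at 0
        digits.append(d)
        x = (x - d) // B              # exact division: x - d is a multiple of B
    return digits

def recompose(digits, base_bits=6):
    B = 1 << base_bits
    return sum(d * B**i for i, d in enumerate(digits))

def relu_via_digits(digits):
    """ReLU decided segment-wise: the most significant nonzero digit fixes the sign."""
    for d in reversed(digits):
        if d != 0:
            return recompose(digits) if d > 0 else 0
    return 0
```

Because each digit is bounded and the sign test touches digits from the top down, the segments can be processed in parallel bootstraps, which is the property the paper exploits to cut sequential bootstrapping cost.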
20 pages, 1326 KB  
Systematic Review
Reimagining Traditional Workspaces Through Digitalisation and Hybrid Perspective: A Systematic Review
by Ayogeboh Epizitone and Smangele Pretty Moyane
Informatics 2026, 13(4), 46; https://doi.org/10.3390/informatics13040046 - 24 Mar 2026
Abstract
Workspace digitalisation presents a transformative shift from traditional, physically bounded offices to virtual, technology-enabled environments. Digital technologies like cloud computing, artificial intelligence, and the Internet of Things enable remote collaboration, data accessibility, and operational efficiency, thereby accelerating this transformation. Digital workspaces transcend geographical limitations, enabling a more flexible, inclusive, and adaptive work culture. They offer better work–life balance, with flexible options, reduced commuting time, and increased personal autonomy and control over commitments, compared to traditional workspaces. Despite these benefits, digitalisation creates cybersecurity, data privacy, and digital divide issues, where unequal access to digital tools and skills can exacerbate social and economic inequalities. The lack of physical interaction affects team cohesion and company culture. Hence, this paper explores these phenomena to uncover their implications and consider possible strategies to optimise workspace digitalisation, providing a comprehensive systematic review of extant literature within the study context, offering pragmatic insights and recommendations for workspaces. This study has found workspace digitalisation to be a complex, multifaceted phenomenon that provides flexibility, efficiency, and innovation, but also poses challenges that must be carefully managed. It postulates that as technology and work progress, a hybrid model that blends digital and traditional workspaces would be suited to each organisation’s needs and goals. Full article
(This article belongs to the Section Social Informatics and Digital Humanities)

68 pages, 5341 KB  
Systematic Review
Utilizing Building Automation Systems for Indoor Environmental Quality Optimization: A Review of the Current Literature, Challenges, and Opportunities
by Qinghao Zeng, Marwan Shagar, Kamyar Fatemifar, Pardis Pishdad and Eunhwa Yang
Buildings 2026, 16(6), 1267; https://doi.org/10.3390/buildings16061267 - 23 Mar 2026
Abstract
Indoor Environmental Quality (IEQ) plays a vital role in occupant health and productivity. However, current Building Management Systems (BMS) often struggle to sustain optimal IEQ levels due to limitations in data management and a lack of occupant-centric feedback loops. To address these gaps, this research synthesizes the state-of-the-art methods for IEQ monitoring, assessment, and control within Building Automation Systems (BAS), identifying both technological and methodological advancements, as well as highlighting the challenges and potential opportunities for future innovations. Employing the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology, this multi-stage literature review analyzes 176 publications from 1997 to 2024, with a focus on the decade of rapid technological evolution from 2014 to 2024. The review focuses on high-impact journals indexed in Scopus to ensure quality while acknowledging the potential bias inherent in a single-database search. The synthesis reveals a methodological shift in monitoring from sparse, zone-level sensing towards dense, multi-modal systems that incorporate physiological data via wearables and behavioral recognition through computer vision. Assessment techniques are evolving from static models such as the Predicted Mean Vote (PMV) towards adaptive, personalized frameworks supported by Digital Twins and integrated simulations. Furthermore, control logic is transitioning toward Reinforcement Learning and Model Predictive Control to proactively manage occupancy surges and environmental variables. This evolution of monitoring approaches, assessment techniques, and control strategies is represented within the study’s Three-Tiered Developmental Trajectory, providing a novel Body of Knowledge (BOK) for mapping the transition of building systems from reactive tools to autonomous, occupant-centric agents. This study also introduces a Cross-Modal Interaction Matrix to systematically analyze the systemic trade-offs between IEQ domains. Furthermore, by establishing the “Implementation Frontier,” this work identifies the specific technical and ethical bottlenecks, such as “false vacancy” sensing errors, fragmented data silos, and the ethical complexities of high-resolution data collection that prevent academic innovations from becoming industry standards. To bridge these gaps, we conclude that the next generation of “cognitive buildings” must prioritize three pillars: resolving binary sensing limitations, harmonizing data via vendor-neutral APIs, and adopting privacy-preserving architectures to ensure scalable, interoperable, and occupant-centric optimization. Full article

46 pages, 2822 KB  
Review
Generative AI and the Foundation Model Era: A Comprehensive Review
by Abdussalam Elhanashi, Siham Essahraui, Pierpaolo Dini, Davide Paolini, Qinghe Zheng and Sergio Saponara
Big Data Cogn. Comput. 2026, 10(3), 94; https://doi.org/10.3390/bdcc10030094 - 20 Mar 2026
Abstract
Generative artificial intelligence and foundation models have changed machine learning by allowing systems to produce readable text, realistic images, and other multimodal content with little direct input from a user. Foundation models are large neural networks trained on very large and varied datasets, and they form the core of many current generative AI (GenAI) systems. Their rapid development has led to major advances in areas like natural language processing, computer vision, multimodal learning, and robotics. Examples include GPT, LLaMA, and diffusion-based architectures, such as models often used for image generation. Systems such as Stable Diffusion show this shift by illustrating how AI can interpret information, draw basic inferences, and produce new outputs using more than one type of data. This review surveys common foundation model architectures and examines what they can do in generative tasks. It reviews Transformer, diffusion, and multimodal architectures, focusing on methods that support scaling and transfer across domains. The paper also reviews key approaches to pretraining and fine-tuning, including self-supervised learning, instruction tuning, and parameter-efficient adaptation, which support these systems’ ability to generalize across tasks. In addition to the technical details, this review discusses how GenAI is being used for text generation, image synthesis, robotics, and biomedical research. The study also notes continuing challenges, such as the high computing and energy demands of large models, ethical concerns about data bias and misinformation, and worries about privacy, reliability, and responsible use of AI in real settings. This review brings together ideas about model design, training methods, and social implications to point future research toward GenAI systems that are efficient, easy to interpret, and reliable, while supporting scientific progress and ethical responsibility. Full article
(This article belongs to the Special Issue Multimodal Deep Learning and Its Applications)

35 pages, 743 KB  
Systematic Review
Affective Intelligent Systems in Healthcare: A Systematic Review
by Analúcia Schiaffino Morales, Thiago de Luca Reis, Alison R. Panisson, Fabrício Ourique and Iwens G. Sene
Technologies 2026, 14(3), 188; https://doi.org/10.3390/technologies14030188 - 20 Mar 2026
Abstract
Objectives: To investigate the current state of affective computing in healthcare, focusing on its application contexts, algorithmic trends, and the technical–ethical duality involving data privacy and security. Methods and Results: A systematic review was conducted in two phases (2013–2025) following PRISMA guidelines. A total of 170 peer-reviewed articles were selected from PubMed, IEEE Xplore, Scopus, and Web of Science based on predefined inclusion and exclusion criteria, with the sample restricted to full-text studies in English addressing affective computing in healthcare. No formal risk-of-bias tool was applied due to the computational nature of the studies, and the findings were synthesized descriptively. Discussion: The findings reveal a clear shift from classical machine learning (e.g., SVM, k-NN) toward deep learning and hybrid architectures such as CNN–LSTM and attention-based models for processing complex physiological signals. Recent years have shown a growing interest in multimodal data fusion and privacy-preserving mechanisms such as homomorphic encryption. Evidence remains limited by methodological heterogeneity and inconsistent reporting across studies. A significant gap persists in regulatory compliance, as 57% of recent publications do not adequately address data security or ethical risks associated with sensitive biometric footprints. Conclusions: Although affective computing has reached a certain level of technical maturity, future research must prioritize lightweight, secure, and privacy-by-design architectures to enable ethically aligned and trustworthy deployment in real-world healthcare scenarios. Full article

47 pages, 4135 KB  
Article
Adaptive Compressed Sensing Differential Privacy Federated Learning Based on Orbital Spatiotemporal Characteristics in Space–Air–Ground Networks
by Weibang Li, Ling Li and Lidong Zhu
Sensors 2026, 26(6), 1874; https://doi.org/10.3390/s26061874 - 16 Mar 2026
Abstract
With the development of 6G communication technology, Space–Air–Ground Integrated Networks (SAGINs) have become critical infrastructure for global intelligent collaborative computing. However, federated learning deployment in SAGINs faces three severe challenges: the high dynamics of satellite orbital motion, node resource heterogeneity, and privacy vulnerabilities in data transmission. This paper proposes an adaptive compressed sensing differential privacy federated learning framework based on orbital spatiotemporal characteristics. First, we design orbital periodicity-driven time-varying sparse sensing matrices that dynamically adjust compression strategies according to satellite orbital positions, achieving intelligent communication efficiency optimization. Second, we propose an orbital predictability-based privacy budget temporal allocation mechanism and perform differential privacy noise injection in the compressed domain, establishing a compression–privacy joint optimization algorithm. Furthermore, we construct an energy–communication–privacy ternary collaborative mechanism that achieves multi-objective dynamic balance through model predictive control. Finally, we design reinforcement learning-based dynamic routing scheduling and hierarchical aggregation strategies to effectively handle the time-varying characteristics of network topology. Simulation experiments demonstrate that compared to existing methods, the proposed approach achieves 3–12% improvement in model accuracy and 30–50% enhancement in communication efficiency while maintaining differential privacy protection with dynamic privacy budget ε ∈ [0.1, 10.0] and compression ratio ρ ∈ [0.2, 0.8]. Unlike static compressed sensing approaches that ignore orbital periodicity, the proposed orbital-driven time-varying sensing matrices reduce reconstruction error by up to 19.4% compared to fixed-matrix baselines, validating the synergistic effectiveness of integrating orbital spatiotemporal characteristics with federated learning in 6G SAGIN deployments. The framework assumes reliable orbital propagation via SGP4/SDP4 models and does not account for Doppler frequency shifts or inter-satellite link handover delays; future extensions include scalability to mega-constellations and integration of quantum-resistant privacy mechanisms. Full article
(This article belongs to the Section Communications)
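The "differential privacy noise injection in the compressed domain" step admits a generic plaintext sketch. Everything below is an illustrative assumption, not the paper's algorithm: a fixed Gaussian sensing matrix stands in for the orbital-driven time-varying matrices, the standard Gaussian-mechanism calibration stands in for the temporal budget allocation, and the compressed-domain sensitivity is approximated by the clipping norm because the matrix rows are scaled to roughly preserve vector norms.

```python
import numpy as np

def compress_then_privatize(grad, ratio=0.5, eps=1.0, delta=1e-5, clip=1.0, seed=0):
    """Compress a client gradient, then add DP noise in the compressed domain."""
    rng = np.random.default_rng(seed)
    # Clip to bound the L2 sensitivity of one client's contribution.
    g = grad * min(1.0, clip / max(np.linalg.norm(grad), 1e-12))
    m = max(1, int(ratio * g.size))
    # Random Gaussian sensing matrix; 1/sqrt(m) scaling keeps ||A g|| ~ ||g||.
    A = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, g.size))
    y = A @ g
    # Gaussian mechanism calibrated to the (approximate) compressed-domain sensitivity.
    sigma = clip * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return y + rng.normal(0.0, sigma, size=m), A
```

The design point the paper optimizes is exactly what this sketch leaves fixed: when the sensing matrix and the per-round ε track orbital position and predictability, the same privacy budget buys less reconstruction error.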

26 pages, 11061 KB  
Article
CTSTSpace: A Framework for Behavior Pattern Recognition and Perturbation Analysis Based on Campus Traffic Semantic Trajectories
by Lin Lin, Mengjie Jin, Zhiju Chen, Wenhao Men, Yefei Shi and Guoqing Wang
ISPRS Int. J. Geo-Inf. 2026, 15(3), 127; https://doi.org/10.3390/ijgi15030127 - 14 Mar 2026
Abstract
In smart campus construction, behavior pattern recognition and perturbation analysis serve as the cornerstones for achieving a transition from passive response to dynamic regulation, with intelligent perception and anomaly diagnosis methods based on campus traffic flow underpinning transportation system resilience. Traditional research methods suffer from issues such as privacy risks, coarse modeling, and limitations from single data formats, labeling difficulties, and coverage gaps. This study proposes a refined semantic trajectory construction method that integrates multi-source data (e.g., mobile signaling data, maps and weather conditions), known as the Campus Transportation Semantic Trajectories Space (CTSTSpace) framework. It enables the precise identification of semantic origin–destination points from dynamic personnel trajectories, quantifies service performance through real-time road network mapping, and models multidimensional perturbations, achieving full campus coverage without complex labeling while ensuring robust privacy protection. Under clear weather conditions, the analysis demonstrates accurate recognition of travel behavior patterns (dwelling, aggregation, mobility, and congestion) that synchronize with class schedules, where vehicle speeds drop by over 50% during peak hours. Under rainy weather perturbations, it captured demand shifts (e.g., peak hour offsets of 30–60 min and a 6.8–9.2% reduction in long-distance dining trips) and speed reductions (52.15–73.74%). This approach provides critical insights for resilient smart campus traffic management. Full article

15 pages, 252 KB  
Article
Blockchain and the Ethics of Transformation—A Critical Theory of Technology Perspective on the Loss of Legacy Institutions
by Mosa Motea and Pius Oba
Philosophies 2026, 11(2), 34; https://doi.org/10.3390/philosophies11020034 - 11 Mar 2026
Abstract
Blockchain is frequently presented as a decentralised infrastructure capable of enhancing efficiency and trust by replacing or bypassing legacy institutions. Such accounts, however, often treat blockchain as a neutral technical system and overlook the ethical and political consequences of institutional transformation through code. This perspective article applies Andrew Feenberg’s Critical Theory of Technology to examine blockchain as a normative socio-technical system shaping institutional transformation, governance practices, and moral expectations. Using a conceptual, critical-theoretical methodology supported by illustrative cases from decentralised finance, blockchain-based land registries, and decentralised autonomous organisations, the paper illustrates how blockchain design and governance embed values that may reinforce exclusion, obscure accountability, and constrain democratic contestation. In response, the article proposes a set of normative principles intended to guide ethical reflection on blockchain-based institutional change: participatory co-design; reflexivity and reversibility; moral pluralism through modular governance; and embedded ethical impact assessment. These principles are advanced as evaluative criteria for ethically responsible blockchain-based institutional transformation. By extending Feenberg’s framework into the domain of blockchain ethics, the paper shifts ethical debate beyond privacy and compliance toward questions of institutional legitimacy, democratic rationalisation, and context-sensitive innovation. Full article
24 pages, 3704 KB  
Article
Source-Free Active Domain Adaptation for Brain Tumor Segmentation via Mamba and Region-Level Uncertainty
by Haowen Zheng, Che Wang, Yudan Zhou, Congbo Cai and Zhong Chen
Brain Sci. 2026, 16(3), 300; https://doi.org/10.3390/brainsci16030300 - 8 Mar 2026
Abstract
Background/Objectives: Accurate brain tumor segmentation from MRI is crucial for diagnosis but faces challenges like domain shifts across medical centers, data privacy constraints, and high annotation costs. While source-free active domain adaptation (SFADA) emerges as a promising solution to these issues, existing approaches often overlook the inherent structural complexity in tumor regions. Methods: We propose a novel SFADA framework composed of two major contributions. First, we introduce a Region-level Uncertainty-Guided Sample Selection (RUGS) strategy, enabling the identification of the most informative target-domain samples in a single inference pass. Second, we present the Source-Free Active Domain Adaptation Network (SFADA-Net), a Mamba-driven segmentation model equipped with a dual-path multi-kernel convolution module for enhanced local feature interaction and a structure-aware prompted Mamba module for capturing global spatial relationships. Results: Extensive evaluations across one source domain dataset (BraTS-2021) and three target domain datasets (BraTS-SSA, BraTS-PED, and BraTS-MEN 2023) demonstrate the superior adaptability of the proposed method, achieving consistently high segmentation accuracy across domains. With only 5% annotation budget, our framework consistently outperforms state-of-the-art segmentation and domain adaptation methods, achieving robust segmentation accuracy across diverse domains and approaching the performance of fully supervised learning. Conclusions: The proposed method achieves superior accuracy in brain tumor region segmentation and precise boundary delineation under a limited annotation budget. It effectively mitigates domain shift while fully complying with data privacy regulations. Consequently, our framework relieves manual annotation bottlenecks and accelerates the cross-center deployment of accurate diagnostic tools, facilitating the clinical application of domain adaptation. Full article
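A region-level uncertainty score of the kind RUGS ranks samples by can be sketched generically. The concrete formula below, mean predictive entropy over a candidate region, is my assumption for illustration only (the paper's RUGS criterion is not specified in the abstract), and `region_uncertainty` is a hypothetical name.

```python
import numpy as np

def region_uncertainty(probs, region_mask, eps=1e-12):
    """Mean predictive entropy over a candidate region.

    probs: class-probability volume of shape (C, *spatial), rows summing to 1.
    region_mask: boolean array of shape (*spatial) marking the region.
    Higher values suggest the model is less certain there, i.e. the sample
    is more informative to annotate under a limited budget.
    """
    p = np.clip(probs, eps, 1.0)
    entropy = -np.sum(p * np.log(p), axis=0)  # per-voxel entropy over classes
    return float(entropy[region_mask].mean())
```

Ranking target-domain scans by such a score and sending only the top few percent to annotators is the general shape of an active-selection loop under a 5% budget.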

45 pages, 2170 KB  
Systematic Review
From Precision Agriculture to Intelligent Agricultural Ecosystems: A Systematic Review of Machine Learning and Big Data Applications
by Ania Cravero, Samuel Sepúlveda, Fernanda Gutiérrez and Lilia Muñoz
Agronomy 2026, 16(5), 516; https://doi.org/10.3390/agronomy16050516 - 27 Feb 2026
Abstract
This systematic review analyzes the evolution of Machine Learning and Big Data applications in agriculture from 2021 to 2025, with particular emphasis on how recent technological advances facilitate the transition from precision agriculture to Intelligent Agricultural Ecosystems. A comprehensive literature search was conducted across Scopus, Web of Science, IEEE Xplore, the ACM Digital Library, SpringerLink, and MDPI, following the PRISMA 2020 guidelines. After duplicate removal and a two-stage screening process (title/abstract screening followed by full-text assessment), eligible peer-reviewed studies were systematically extracted using a structured coding matrix encompassing six analytical domains: crops, soil, weather and water, land use, animal systems, and farmer decision-making. The findings reveal a substantial increase in ML-driven agricultural analytics. Although Random Forest and Convolutional Neural Networks remain widely adopted, recent studies demonstrate a marked shift toward advanced Deep Learning architectures, integrated cloud–edge–device infrastructures, Federated Learning frameworks for privacy-preserving collaboration, Explainable AI techniques to enhance transparency, and governance-oriented mechanisms to ensure interoperability. Notwithstanding these advances, several persistent challenges remain, including limited generalizability across diverse agroclimatic contexts, the high costs associated with high-quality data annotation, the integration of heterogeneous and multimodal datasets, and infrastructural constraints related to connectivity. These developments are synthesized within the IAE conceptual framework, underscoring governance- and lifecycle-aware orchestration MLOps as a critical differentiator that transcends purely technology-centric approaches. Full article

22 pages, 909 KB  
Review
Artificial Intelligence in the Diagnosis and Prognostic Stratification of Hepatocellular Carcinoma: Current Evidence, Clinical Applications, and Future Perspectives
by Emily L. Pfahl, Nooruddin S. Pracha, Mohamed H. Emlemdi, Phuoc-Hanh D. Le and Mina S. Makary
Biomedicines 2026, 14(3), 505; https://doi.org/10.3390/biomedicines14030505 - 25 Feb 2026
Abstract
The integration of artificial intelligence (AI) into medicine, oncology, and radiology represents a marked shift in the diagnosis, prognostication, and management of hepatocellular carcinoma (HCC), a malignancy with high global incidence and poor prognosis. This review examines the application of AI, including machine learning (ML) and deep learning (DL), across the spectrum of HCC care. As AI advances, new convolutional neural networks (CNNs) and other models are enhancing diagnostic accuracy, reducing interpretation times, and improving the characterization of liver lesions across major imaging modalities, including ultrasound, computed tomography (CT), and magnetic resonance imaging (MRI). Beyond diagnosis, AI is also transforming prognostication: models can now noninvasively predict critical factors such as microvascular invasion, genetic mutation status, tumor recurrence, and treatment response. Furthermore, AI has shown promise in facilitating patient-specific treatment planning by stratifying patients for interventions such as transarterial chemoembolization (TACE) and stereotactic body radiation therapy (SBRT). The review also addresses the emerging fields of pathomics and the use of AI in positron emission tomography (PET), while critically evaluating the cost-effectiveness of these technologies. Despite its promise, the widespread clinical adoption of AI faces challenges, including limited generalizability, patient-privacy protection, ethical considerations, and the need for robust prospective validation. Ultimately, this review illustrates that the future of HCC management lies in a collaborative, hybrid-intelligence model, where AI-driven insights augment clinical expertise to optimize diagnostic pathways, personalize therapy, and improve patient outcomes. Full article
(This article belongs to the Special Issue Advances in Hepatology)

21 pages, 17407 KB  
Article
Toward Self-Sovereign Management of Subscriber Identities in 5G/6G Core Networks
by Paul Scalise, Michael Hempel and Hamid Sharif
Telecom 2026, 7(1), 23; https://doi.org/10.3390/telecom7010023 - 16 Feb 2026
Abstract
5G systems have delivered on their promise of seamless connectivity and efficiency improvements since their global rollout began in 2020. However, maintaining subscriber identity privacy on the network remains a critical challenge. The 3GPP specifications define numerous identifiers associated with the subscriber and their activity, all of which are critical to the operations of cellular networks. While the introduction of the Subscription Concealed Identifier (SUCI) protects users across the air interface, the 5G Core Network (CN) continues to operate largely on the basis of the Subscription Permanent Identifier (SUPI)—the 5G equivalent of the IMSI from prior generations—for functions such as authentication, billing, session management, emergency services, and lawful interception. Furthermore, the SUPI relies solely on transport-layer encryption for protection against malicious observation and cross-activity tracking. The crucial role of the largely unprotected SUPI and other closely related identifiers creates a high-value target for insider threats, malware campaigns, and data exfiltration, effectively rendering the Mobile Network Operator (MNO) a single point of failure for identity privacy. In this paper, we analyze the architectural vulnerabilities of identity persistence within the CN, challenging the legacy “honest-but-curious” trust model. To quantify the extent to which subscriber identities are used and exchanged within API calls in the CN, we conducted a study of the occurrence of SUPI as a parameter throughout the collection of 5G SBI (Service-Based Interface) Core VNF (Virtual Network Function) API (Application Programming Interface) schemas. Our extensive analysis of the 3GPP specifications for 3GPP Release 18 revealed a total of 4284 distinct parameter names being used across all API calls, with a total of 171,466 occurrences across the API schema.
More importantly, it revealed a highly skewed distribution in which subscriber identity plays a pivotal role. Specifically, the “supi” parameter ranks 57th with 397 occurrences. We found that SUPI occurs both as a direct parameter (“supi”) and within 72 other parameter names that contain subscriber identifiers as defined in 3GPP TS 23.003. For these 73 parameter names, we identified a total of 8757 occurrences. At over 5.11% of all parameter occurrences, this constitutes a disproportionately large share of total references. We also detail scenarios where subscriber privacy can be compromised by internal actors and review future privacy-preserving frameworks that aim to decouple subscriber identity from network operations. By suggesting a shift towards a zero-trust model for CN architecture and providing subscribers with greater control over their identity management, this work also offers a potential roadmap for mitigating insider threats in current deployments and influencing specific standardization and regulatory requirements for future 6G and Beyond-6G networks. Full article
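The parameter-occurrence counting described above can be sketched as a simple scan over the published OpenAPI schema files. The YAML excerpt below is a hypothetical stand-in for the real 3GPP Release 18 schemas, so the resulting counts are illustrative only:

```python
import re
from collections import Counter

# Hypothetical excerpt of a 5G SBI OpenAPI schema; a real analysis would
# iterate over every schema file published with 3GPP Release 18.
schema_text = """
paths:
  /nudm-sdm/v2/{supi}:
    get:
      parameters:
        - name: supi
        - name: plmn-id
  /namf-comm/v1/subscriptions:
    post:
      parameters:
        - name: supi
        - name: notify-uri
"""

# Count every declared parameter name as a stand-in for a full schema scan.
params = Counter(re.findall(r"name:\s*([\w-]+)", schema_text))

# Share of occurrences belonging to identity-bearing parameter names,
# mirroring the 8757-of-171,466 (about 5.11%) figure reported above.
identity_hits = sum(count for name, count in params.items() if "supi" in name)
share = identity_hits / sum(params.values())
print(params.most_common(2), f"identity share: {share:.0%}")
```

On the real schema corpus the same tally would also have to match the 72 additional identifier-bearing parameter names defined in 3GPP TS 23.003, not just the substring "supi".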

33 pages, 5023 KB  
Article
Recommender Systems: Emerging Trends from Four Decades of Research Using Bibliometric Analysis and Transformer-Based Models
by Simona-Vasilica Oprea, Adela Bâra and Tudor Ghinea
Electronics 2026, 15(4), 763; https://doi.org/10.3390/electronics15040763 - 11 Feb 2026
Abstract
Recommender systems represent an essential infrastructure for digital platforms. To understand their evolution, we analyze 15,944 Web of Science publications (1980–2025) using bibliometric techniques alongside generative and transformer-based models for sentiment analysis and latent topic modeling. Our analysis yields three major findings. First, e-commerce recommendation research exhibits rapid growth in advanced representation techniques, with compound annual growth rates (CAGRs) for contrastive learning (187%), graph neural networks (89%), and federated learning (72%). Second, algorithmic fairness and privacy preservation have emerged as critical research directions. Third, collaborative networks indicate a geographical shift, with Asia–Pacific regions becoming influential research hubs. The methodology integrates CAGR analysis with Latent Dirichlet Allocation (LDA, coherence score = 0.687) and BERTopic for thematic mapping and network analysis. Additionally, we employ sentiment analysis (VADER, TextBlob, and a sentiment analysis pipeline from Hugging Face Transformers) and temporal heatmaps to capture research narratives. Topic modeling with LDA identifies five core themes: (1) Collaborative Filtering; (2) Machine Learning and Educational Systems; (3) Web Services and Business Applications; (4) Content and Multimedia Recommendations; (5) Graph Neural Networks and Advanced Models. BERTopic provides eight more nuanced, semantically derived themes. Citation patterns follow the Pareto principle, with the top 1% of articles accounting for 29.1% of all citations, confirming a highly skewed impact distribution. Notably, established keywords show declining trajectories, indicating a methodological evolution toward newer paradigms based on deep learning and generative AI. Full article
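The compound annual growth rates quoted above follow the standard CAGR definition. A minimal sketch, with publication counts invented for illustration rather than taken from the study:

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate: the constant yearly rate that
    grows `start` into `end` over `years` years."""
    return (end / start) ** (1 / years) - 1

# Hypothetical keyword counts: 5 papers in 2020 rising to 120 in 2024.
rate = cagr(5, 120, 2024 - 2020)
print(f"CAGR: {rate:.0%}")  # about 121% per year
```

Because the rate compounds, a keyword with a 187% CAGR nearly triples its publication count every year, which is why such keywords dominate the trend analysis despite small absolute starting counts.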
(This article belongs to the Special Issue Data Mining and Recommender Systems)
