Computers, Volume 14, Issue 12 (December 2025) – 14 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
21 pages, 3073 KB  
Review
Relevance and Evolution of Benchmarking in Computer Systems: A Comprehensive Historical and Conceptual Review
by Isaac Zablah, Lilian Sosa-Díaz and Antonio Garcia-Loureiro
Computers 2025, 14(12), 516; https://doi.org/10.3390/computers14120516 - 26 Nov 2025
Abstract
Benchmarking has been central to performance evaluation for more than four decades. Reinhold P. Weicker’s 1990 survey in IEEE Computer offered an early, rigorous critique of standard benchmarks, warning about pitfalls that continue to surface in contemporary practice. This review synthesizes the evolution from classical synthetic benchmarks (Whetstone, Dhrystone) and application kernels (LINPACK) to modern suites (SPEC CPU2017), domain-specific metrics (TPC), data-intensive and graph workloads (Graph500), and Artificial Intelligence/Machine Learning (AI/ML) benchmarks (MLPerf, TPCx-AI). We emphasize energy and sustainability (Green500, SPECpower, MLPerf Power), reproducibility (artifacts, environments, rules), and domain-specific representativeness, especially in biomedical and bioinformatics contexts. Building upon Weicker’s methodological cautions, we formulate a concise checklist for fair, multidimensional, reproducible benchmarking and identify open challenges and future directions.
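To make the measurement cautions behind such a checklist concrete (warmup, repeated trials, dispersion reporting, environment capture), here is a minimal Python harness sketch; the function names and defaults are illustrative assumptions, not taken from the paper.

```python
# Minimal benchmarking harness sketch: warmup, repeated trials, and
# environment capture, in the spirit of a reproducible-benchmarking
# checklist. All names here are illustrative, not from the paper.
import platform
import statistics
import time

def benchmark(fn, *args, warmup=3, trials=30):
    for _ in range(warmup):              # discard cold-start effects (caches, JIT)
        fn(*args)
    times = []
    for _ in range(trials):
        t0 = time.perf_counter()
        fn(*args)
        times.append(time.perf_counter() - t0)
    return {
        "median_s": statistics.median(times),
        "stdev_s": statistics.stdev(times),
        "trials": trials,
        # Record the environment so results stay comparable across runs.
        "platform": platform.platform(),
        "python": platform.python_version(),
    }

if __name__ == "__main__":
    print(benchmark(sorted, list(range(100_000))[::-1]))
```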

36 pages, 2334 KB  
Article
Fair and Explainable Multitask Deep Learning on Synthetic Endocrine Trajectories for Real-Time Prediction of Stress, Performance, and Neuroendocrine States
by Abdullah, Zulaikha Fatima, Carlos Guzman Sánchez Mejorada, Muhammad Ateeb Ather, José Luis Oropeza Rodríguez and Grigori Sidorov
Computers 2025, 14(12), 515; https://doi.org/10.3390/computers14120515 - 25 Nov 2025
Abstract
Cortisol and testosterone are key digital biomarkers reflecting neuroendocrine activity across the hypothalamic–pituitary–adrenal (HPA) and hypothalamic–pituitary–gonadal (HPG) axes, encoding stress adaptation and behavioral regulation. Continuous real-world monitoring remains challenging due to the sparsity of sensing and the complexity of multimodal data. This study introduces a synthetic sensor-driven computational framework that models hormone variability through data-driven simulation and predictive learning, eliminating the need for continuous biosensor input. A hybrid deep ensemble integrates biological, behavioral, and contextual data using bidirectional multitask learning with one-dimensional convolutional neural network (1D-CNN) and long short-term memory (LSTM) branches, meta-gated expert fusion, Bayesian variational layers with Monte Carlo Dropout, and adversarial debiasing. Synthetically derived longitudinal hormone profiles, validated with Kolmogorov–Smirnov (KS), Wasserstein, maximum mean discrepancy (MMD), and dynamic time warping (DTW) metrics, account for class imbalance and temporal sparsity. Our framework achieved up to a 99.99% macro F1-score on augmented samples and more than 97% on unseen data, with an ECE below 0.001. Selective prediction further improved reliability on low-confidence cases, achieving 99.9992–99.9998% accuracy on 99.5% of samples; the resulting models are smaller than 5 MB, so they can run in real time on wearable devices. Explainability investigations revealed the most important features at both the physiological and behavioral levels, demonstrating the framework’s suitability for adaptive clinical or organizational stress monitoring.
(This article belongs to the Special Issue Wearable Computing and Activity Recognition)
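As a sketch of the kind of distributional validation the abstract names, the snippet below compares a synthetic sample against a reference with two of the listed metrics (KS and Wasserstein) using SciPy; the lognormal placeholder data is an assumption for illustration.

```python
# Sketch of validating synthetic hormone trajectories against reference
# data with two of the distributional metrics named in the abstract.
# The arrays here are placeholder data, not the study's profiles.
import numpy as np
from scipy.stats import ks_2samp, wasserstein_distance

rng = np.random.default_rng(0)
real = rng.lognormal(mean=2.0, sigma=0.4, size=1000)         # reference cortisol-like values
synthetic = rng.lognormal(mean=2.05, sigma=0.42, size=1000)  # generated profile

ks_stat, p_value = ks_2samp(real, synthetic)
w_dist = wasserstein_distance(real, synthetic)
print(f"KS statistic={ks_stat:.3f} (p={p_value:.3f}), Wasserstein={w_dist:.3f}")
```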

39 pages, 3307 KB  
Article
DEVS Closure Under Coupling, Universality, and Uniqueness: Enabling Simulation and Software Interoperability from a System-Theoretic Foundation
by Bernard P. Zeigler, Robert Kewley and Gabriel Wainer
Computers 2025, 14(12), 514; https://doi.org/10.3390/computers14120514 - 24 Nov 2025
Abstract
This article explores the foundational mechanisms of the Discrete Event System Specification (DEVS) theory—closure under coupling, universality, and uniqueness—and their critical role in enabling interoperability through modular, hierarchical simulation frameworks. Closure under coupling empowers modelers to compose interconnected models, both atomic and coupled, into unified systems without departing from the DEVS formalism. We show how this modular approach supports the scalable and flexible construction of complex simulation architectures on a firm system-theoretic foundation. We also show that facilitating the transformation from non-modular to modular and hierarchical structures confers a major benefit: existing non-modular models can be accommodated simply by wrapping them in a DEVS-compliant format. DEVS theory therefore simplifies model maintenance, integration, and extension, promoting interoperability and reuse. Additionally, we demonstrate how DEVS universality and uniqueness guarantee that any system with discrete event interfaces can be structurally represented in the DEVS formalism, ensuring consistency across heterogeneous platforms. We propose that these mechanisms can collectively streamline simulator design and implementation, advancing simulation interoperability.
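For readers unfamiliar with the formalism, here is a minimal sketch of the classical atomic-DEVS interface (ta, delta_int, delta_ext, lambda) in Python; it mirrors the textbook structure rather than any specific tool from the article, and the Buffer example is invented.

```python
# Minimal sketch of an atomic DEVS model interface, mirroring the
# standard (S, delta_int, delta_ext, lambda, ta) structure.
class AtomicDEVS:
    def __init__(self, state, sigma=float("inf")):
        self.state = state
        self.sigma = sigma                 # time advance until next internal event

    def time_advance(self):                # ta(s)
        return self.sigma

    def output(self):                      # lambda(s), emitted just before delta_int
        raise NotImplementedError

    def delta_int(self):                   # internal transition
        raise NotImplementedError

    def delta_ext(self, elapsed, inputs):  # external transition
        raise NotImplementedError

class Buffer(AtomicDEVS):
    """Invented example: holds one job and emits it after a fixed delay."""
    def __init__(self, delay=1.0):
        super().__init__(state=None)
        self.delay = delay

    def output(self):
        return self.state

    def delta_int(self):
        self.state, self.sigma = None, float("inf")

    def delta_ext(self, elapsed, inputs):
        if self.state is None:
            self.state, self.sigma = inputs, self.delay
        else:
            self.sigma -= elapsed          # stay committed to the scheduled event

buf = Buffer(delay=2.0)
buf.delta_ext(elapsed=0.0, inputs="job-1")
print(buf.time_advance(), buf.output())    # 2.0 job-1
```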

17 pages, 892 KB  
Article
Effectiveness Evaluation Method for Hybrid Defense of Moving Target Defense and Cyber Deception
by Fangbo Hou, Fangrun Hou, Xiaodong Zang, Ziyang Hua, Zhang Liu and Zhe Wu
Computers 2025, 14(12), 513; https://doi.org/10.3390/computers14120513 - 24 Nov 2025
Abstract
Moving Target Defense (MTD) has been proposed as a dynamic defense strategy to address the static and isomorphic vulnerabilities of networks. Recent research in MTD has focused on enhancing its effectiveness by combining it with cyber deception techniques. However, there is limited research on evaluating and quantifying this hybrid defense framework. Existing studies on MTD evaluation often overlook the deployment of deception, which can expand the potential attack surface and introduce additional costs. Moreover, a unified model that simultaneously measures security, reliability, and defense cost is lacking. We propose a novel hybrid defense effectiveness evaluation method that integrates queuing and evolutionary game theories to tackle these challenges. The proposed method quantifies security, reliability, and defense cost. Additionally, we construct an evolutionary game model of MTD and deception, jointly optimizing triggering and deployment strategies to minimize the attack success rate. Furthermore, we introduce a hybrid strategy selection algorithm to evaluate the impact of various strategy combinations on security, resource consumption, and availability. Simulation and experimental results demonstrate that the proposed approach can accurately evaluate and guide the configuration of hybrid defenses, showing that hybrid defense can effectively reduce the attack success rate and unnecessary overhead while maintaining Quality of Service (QoS).
(This article belongs to the Section ICT Infrastructures for Cybersecurity)
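As a toy illustration of the evolutionary-game ingredient, the sketch below runs replicator dynamics over two defender strategies against a fixed attacker mix; all payoff values are invented and do not come from the paper.

```python
# Toy replicator-dynamics sketch for a defender strategy mix in a
# defender/attacker evolutionary game. Payoffs are invented.
import numpy as np

payoff_defender = np.array([[2.0, -1.0],   # rows: MTD shuffle, deception deploy
                            [1.0,  0.5]])  # cols: attacker probes, attacker exploits

def replicator_step(x, payoff, dt=0.01):
    """One Euler step of replicator dynamics for strategy mix x."""
    fitness = payoff @ np.array([0.5, 0.5])  # fixed attacker mix for simplicity
    avg = x @ fitness
    return x + dt * x * (fitness - avg)

x = np.array([0.5, 0.5])                     # initial defender mix
for _ in range(5000):
    x = replicator_step(x, payoff_defender)
print("equilibrium defender mix:", x.round(3))
```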

21 pages, 317 KB  
Review
The Latest Diagnostic Imaging Technologies and AI: Applications for Melanoma Surveillance Toward Precision Oncology
by Alessandro Valenti, Fabio Valenti, Stefano Giuliani, Simona di Martino, Luca Neroni, Cristina Sorino, Pietro Sollena, Flora Desiderio, Fulvia Elia, Maria Teresa Maccallini, Michelangelo Russillo, Italia Falcone and Antonino Guerrisi
Computers 2025, 14(12), 512; https://doi.org/10.3390/computers14120512 - 24 Nov 2025
Abstract
In recent years, the medical field has witnessed the rapid expansion and refinement of omics and imaging technologies, which have profoundly transformed patient surveillance and monitoring strategies, with stage-adapted protocols and cross-sectional imaging playing an important role in high-risk follow-up. In the melanoma context, diagnostic imaging plays a pivotal role in disease staging, follow-up, and evaluation of therapeutic response. Moreover, the emergence of Artificial Intelligence (AI) has further driven the transition toward precision medicine, emphasizing the complexity and individuality of each patient: AI/Radiomics pipelines are increasingly supporting lesion characterization and response prediction within clinical workflows. Consequently, it is essential to emphasize the significant potential of quantitative imaging techniques and radiomic applications, as well as the role of AI in improving diagnostic accuracy and enabling personalized oncologic treatment. Early evidence demonstrates increased sensitivity and specificity, along with a reduction in unnecessary biopsies and imaging procedures, within selected care approaches. In this review, we outline the current clinical guidelines for the management of melanoma patients and use them as a framework to explore and evaluate advanced imaging approaches and their potential contributions. Specifically, we compare the recommendations of major societies such as NCCN, which advocates more intensive imaging for stages IIB–IV; ESMO and AIOM, which recommend symptom-driven surveillance; and SIGN, which discourages routine imaging in the absence of clinical suspicion. Furthermore, we describe the latest imaging technologies and the integration of AI-based tools for developing predictive models to actively support therapeutic decision-making and patient care. The conclusions focus on the prospective role of novel imaging modalities in advancing precision oncology, improving patient outcomes, and optimizing the allocation of clinical resources. Overall, the current evidence supports a stage-adapted surveillance strategy (ultrasound ± elastography for lymph node regions, targeted brain MRI in high-risk patients, selective use of DECT or total-body MRI) combined with rigorously validated AI-based decision support systems to personalize follow-up, streamline workflows, and optimize resource utilization.

25 pages, 9162 KB  
Article
Image-Based Threat Detection and Explainability Investigation Using Incremental Learning and Grad-CAM with YOLOv8
by Zeynel Kutlu and Bülent Gürsel Emiroğlu
Computers 2025, 14(12), 511; https://doi.org/10.3390/computers14120511 - 24 Nov 2025
Abstract
Real-world threat detection systems face critical challenges in adapting to evolving operational conditions while providing transparent decision making. Traditional deep learning models suffer from catastrophic forgetting during continual learning and lack interpretability in security-critical deployments. This study proposes a distributed edge–cloud framework integrating YOLOv8 object detection with incremental learning and Gradient-weighted Class Activation Mapping (Grad-CAM) for adaptive, interpretable threat detection. The framework employs distributed edge agents for inference on unlabeled surveillance data, with a central server validating detections through class verification and localization quality assessment (IoU ≥ 0.5). A lightweight YOLOv8-nano model (3.2 M parameters) was incrementally trained over five rounds using sequential fine-tuning with weight inheritance, progressively incorporating verified samples from an unlabeled pool. Experiments on a 5064-image weapon detection dataset (pistol and knife classes) demonstrated substantial improvements: the F1-score increased from 0.45 to 0.83, mAP@0.5 improved from 0.518 to 0.886, and the minority-class F1-score rose by 196% without explicit resampling. Incremental learning achieved a 74% reduction in training time compared to one-shot training while maintaining competitive accuracy. Grad-CAM analysis revealed progressive attention refinement, quantified through the proposed Heatmap Focus Score, which reached 92.5% and exceeded one-shot-trained models. The framework provides a scalable, memory-efficient solution for continual threat detection with superior interpretability in dynamic security environments. The integration of Grad-CAM visualizations with detection outputs enables operator accountability by establishing auditable decision records in deployed systems.
(This article belongs to the Special Issue Deep Learning and Explainable Artificial Intelligence (2nd Edition))
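The server-side verification step (class match plus IoU ≥ 0.5) can be sketched directly; the (x1, y1, x2, y2) box format and example values below are assumptions.

```python
# Sketch of the server-side verification step: accept a detection for
# incremental training only if its class matches a reference and the
# IoU is at least 0.5. Box format assumed to be (x1, y1, x2, y2).
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def verify_detection(pred_cls, pred_box, ref_cls, ref_box, thresh=0.5):
    return pred_cls == ref_cls and iou(pred_box, ref_box) >= thresh

print(verify_detection("pistol", (10, 10, 60, 60), "pistol", (15, 12, 62, 58)))
```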

14 pages, 2101 KB  
Article
IoT-Enabled Indoor Real-Time Tracking Using UWB for Smart Warehouse Management
by Bahareh Masoudi, Nazila Razi and Javad Rezazadeh
Computers 2025, 14(12), 510; https://doi.org/10.3390/computers14120510 - 24 Nov 2025
Abstract
The Internet of Things (IoT) is transforming industrial operations, particularly under Industry 4.0, by enabling real-time connectivity and automation. Accurate indoor localization is critical for warehouse management, where GPS-based solutions are ineffective due to signal obstruction. This paper presents a smart indoor positioning system (IPS) integrating Ultra-Wideband (UWB) sensors with Long Short-Term Memory (LSTM) neural networks and Kalman filtering, employing a tailored data fusion sequence and parameter optimization for real-time object tracking. The system was deployed in a 54 m² warehouse section on forklifts equipped with UWB modules and QR scanners. Experimental evaluation considered accuracy, reliability, and noise resilience in cluttered industrial conditions. Results, validated with RMSE, MAE, and standard deviation, demonstrate that the hybrid Kalman–LSTM model improves localization accuracy by up to 4% over baseline methods, outperforming conventional sensor fusion approaches. Comparative analysis with standard benchmarks highlights the system’s robustness under interference and its scalability for larger warehouse operations. These findings confirm that combining temporal pattern learning with advanced sensor fusion significantly enhances tracking precision. This research contributes a reproducible and adaptable framework for intelligent warehouse management, offering practical benefits aligned with Industry 4.0 objectives.
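As a sketch of the Kalman-filtering stage, here is a minimal 1-D filter smoothing noisy range readings; the noise parameters are illustrative, and the paper's LSTM stage and fusion sequence are omitted.

```python
# Minimal 1-D Kalman filter sketch for smoothing noisy UWB range
# readings. Noise parameters are invented for illustration.
def kalman_1d(measurements, process_var=1e-3, meas_var=0.05):
    x, p = measurements[0], 1.0          # initial state estimate and variance
    estimates = []
    for z in measurements:
        p += process_var                 # predict: state held, variance grows
        k = p / (p + meas_var)           # Kalman gain
        x += k * (z - x)                 # update with measurement residual
        p *= (1 - k)
        estimates.append(x)
    return estimates

noisy = [2.00, 2.10, 1.95, 2.40, 2.05, 2.02]
print([round(e, 3) for e in kalman_1d(noisy)])
```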

17 pages, 1542 KB  
Article
Classification of Drowsiness and Alertness States Using EEG Signals to Enhance Road Safety: A Comparative Analysis of Machine Learning Algorithms and Ensemble Techniques
by Masoud Sistaninezhad, Saman Rajebi, Siamak Pedrammehr, Arian Shajari, Hussain Mohammed Dipu Kabir, Thuong Hoang, Stefan Greuter and Houshyar Asadi
Computers 2025, 14(12), 509; https://doi.org/10.3390/computers14120509 - 24 Nov 2025
Abstract
Drowsy driving is a major contributor to road accidents, as reduced vigilance degrades situational awareness and reaction control. Reliable assessment of alertness versus drowsiness can therefore support accident prevention. Key gaps remain in physiology-based detection, including robust identification of microsleep and transient vigilance shifts, sensitivity to fatigue-related changes, and resilience to motion-related signal artifacts; practical sensing solutions are also needed. Using Electroencephalogram (EEG) recordings from the MIT-BIH Polysomnography Database (18 records; >80 h of clinically annotated data), we framed wakefulness–drowsiness discrimination as a binary classification task. From each 30 s segment, we extracted 61 handcrafted features spanning linear, nonlinear, and frequency descriptors designed to be largely robust to signal-quality variations. Three classifiers were evaluated—k-Nearest Neighbors (KNN), Support Vector Machine (SVM), and Decision Tree (DT)—alongside a DT-based bagging ensemble. KNN achieved 99% training and 80.4% test accuracy; SVM reached 80.0% and 78.8%; and DT obtained 79.8% and 78.3%. Data standardization did not improve performance. The ensemble attained 100% training and 84.7% test accuracy. While these results indicate strong discriminative capability, the training–test gap suggests overfitting and underscores the need for validation on larger, more diverse cohorts to ensure generalizability. Overall, the findings demonstrate the potential of machine learning to identify vigilance states from EEG. We present an interpretable EEG-based classifier built on clinically scored polysomnography and discuss translation considerations; external validation in driving contexts is reserved for future work.
(This article belongs to the Special Issue AI for Humans and Humans for AI (AI4HnH4AI))
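One family of the handcrafted descriptors, frequency-band power, can be sketched as follows; the sampling rate, band edges, and placeholder signal are assumptions, not values from the study.

```python
# Sketch of extracting simple frequency-band features from a 30 s EEG
# segment via a Welch power spectral density estimate.
import numpy as np
from scipy.signal import welch

fs = 250                                                      # Hz, assumed rate
segment = np.random.default_rng(1).standard_normal(30 * fs)   # placeholder EEG

freqs, psd = welch(segment, fs=fs, nperseg=fs * 2)
df = freqs[1] - freqs[0]

def band_power(lo, hi):
    mask = (freqs >= lo) & (freqs < hi)
    return float(np.sum(psd[mask]) * df)    # rectangle-rule integral of the PSD

features = {
    "delta": band_power(0.5, 4),
    "theta": band_power(4, 8),
    "alpha": band_power(8, 13),
    "beta":  band_power(13, 30),
}
print({k: round(v, 4) for k, v in features.items()})
```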

21 pages, 1847 KB  
Article
NewsSumm: The World’s Largest Human-Annotated Multi-Document News Summarization Dataset for Indian English
by Manish Motghare, Megha Agarwal and Avinash Agrawal
Computers 2025, 14(12), 508; https://doi.org/10.3390/computers14120508 - 23 Nov 2025
Abstract
The rapid growth of digital journalism has heightened the need for reliable multi-document summarization (MDS) systems, particularly in underrepresented, low-resource, and culturally distinct contexts. However, current progress is hindered by a lack of large-scale, high-quality non-Western datasets. Existing benchmarks—such as CNN/DailyMail, XSum, and MultiNews—are limited by language, regional focus, or reliance on noisy, auto-generated summaries. We introduce NewsSumm, the largest human-annotated MDS dataset for Indian English, curated by over 14,000 expert annotators through the Suvidha Foundation. Spanning 36 Indian English newspapers from 2000 to 2025 and covering more than 20 topical categories, NewsSumm includes 317,498 articles paired with factually accurate, professionally written abstractive summaries. We detail its robust collection, annotation, and quality control pipelines, and present extensive statistical, linguistic, and temporal analyses that underscore its scale and diversity. To establish benchmarks, we evaluate PEGASUS, BART, and T5 models on NewsSumm, reporting aggregate and category-specific ROUGE scores, as well as factual consistency metrics. All NewsSumm dataset materials are openly released via Zenodo. NewsSumm offers a foundational resource for advancing research in summarization, factuality, timeline synthesis, and domain adaptation for Indian English and other low-resource language settings.
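A minimal sketch of the ROUGE benchmarking step, using the rouge-score package (an assumption; the paper does not name its tooling) on placeholder texts:

```python
# Sketch of scoring a generated summary against a reference with ROUGE.
# Requires: pip install rouge-score. Texts below are placeholders.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
reference = "The ministry announced new rail projects across three states."
generated = "New rail projects were announced across three states."

scores = scorer.score(reference, generated)   # score(target, prediction)
for name, s in scores.items():
    print(f"{name}: precision={s.precision:.3f} recall={s.recall:.3f} f1={s.fmeasure:.3f}")
```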

26 pages, 2481 KB  
Article
Formal Analysis of Bakery-Based Mutual Exclusion Algorithms
by Libero Nigro
Computers 2025, 14(12), 507; https://doi.org/10.3390/computers14120507 - 23 Nov 2025
Abstract
Lamport’s Bakery algorithm (LBA) represents a general and elegant solution to the mutual exclusion (ME) problem posed by Dijkstra in 1965. Its correctness is usually based on intuitive reasoning. LBA rests on an unbounded number of tickets, which prevents correctness assessment by model checking. Several variants have been proposed in the literature to bound the number of tickets used. This paper is based on a formal method centered on Uppaal for reasoning about general shared-memory ME algorithms. A model can (hopefully) be verified by the exhaustive model checker (MC), and/or by the statistical model checker (SMC) through stochastic simulations. To overcome the scalability problems of SMC, a model can be reduced to actors and simulated in Java. The paper formalizes LBA and demonstrates, through simulations, that it is correct with both atomic and non-atomic memory registers. Then, some representative variants with bounded tickets are studied, which either prove correct only with atomic registers or are confirmed correct under both atomic and non-atomic registers.
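For reference, here is Lamport's Bakery algorithm sketched with Python threads; plain lists stand in for shared registers, and the GIL hides exactly the atomic-versus-non-atomic register subtleties the paper analyzes formally.

```python
# Lamport's Bakery algorithm sketched with Python threads. Plain lists
# stand in for shared registers; this illustrates the protocol, not the
# paper's Uppaal/actor models.
import threading

N = 4
choosing = [False] * N
number = [0] * N
counter = 0

def lock(i):
    choosing[i] = True
    number[i] = 1 + max(number)          # take a ticket above the current max
    choosing[i] = False
    for j in range(N):
        if j == i:
            continue
        while choosing[j]:               # wait while j is picking a ticket
            pass
        while number[j] != 0 and (number[j], j) < (number[i], i):
            pass                         # defer to smaller (ticket, id) pairs

def unlock(i):
    number[i] = 0

def worker(i):
    global counter
    for _ in range(500):
        lock(i)
        counter += 1                     # critical section
        unlock(i)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print("counter =", counter)              # expected 2000 (= 4 * 500)
```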

22 pages, 765 KB  
Article
Evaluating Deployment of Deep Learning Model for Early Cyberthreat Detection in On-Premise Scenario Using Machine Learning Operations Framework
by Andrej Ralbovský, Ivan Kotuliak and Dennis Sobolev
Computers 2025, 14(12), 506; https://doi.org/10.3390/computers14120506 - 23 Nov 2025
Abstract
Modern on-premises threat detection increasingly relies on deep learning over network and system logs, yet organizations must balance infrastructure and resource constraints with maintainability and performance. We investigate how adopting MLOps influences the deployment and runtime behavior of a recurrent-neural-network-based detector for malicious event sequences. Our investigation includes surveying modern open-source platforms to select a suitable candidate, implementing it over a two-node setup with a CPU-centric control server and a GPU worker, and evaluating the performance of the containerized, MLOps-integrated setup against bare metal. For evaluation, we use four scenarios that cross the deployment model (bare metal vs. containerized) with two different versions of the software stack, using a sizable training corpus and a held-out inference subset representative of operational traffic. For training and inference, we measured execution time, CPU and RAM utilization, and peak GPU memory to find notable patterns or correlations, providing insights for organizations adopting an on-premises-first approach. Our findings show that MLOps can be adopted even in resource-constrained environments without inherent performance penalties; thus, platform choice should be guided by operational concerns (reproducibility, scheduling, tracking), while performance tuning should prioritize pinning and validating the software stack, which has a surprisingly large impact on resource utilization and the execution process. Our study offers a reproducible blueprint for on-premises cyber-analytics and clarifies where optimization yields the greatest return.
(This article belongs to the Special Issue Using New Technologies in Cyber Security Solutions (3rd Edition))
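The runtime measurements described (execution time, RAM) can be sketched as a simple wrapper; psutil and the dummy workload below are assumptions for illustration, not the paper's instrumentation.

```python
# Sketch of measuring wall-clock time and resident-memory delta around a
# training/inference call. Requires: pip install psutil.
import time
import psutil

def measure(workload):
    proc = psutil.Process()
    rss_before = proc.memory_info().rss
    t0 = time.perf_counter()
    workload()
    elapsed = time.perf_counter() - t0
    rss_after = proc.memory_info().rss
    return elapsed, (rss_after - rss_before) / 2**20   # seconds, MiB delta

def dummy_workload():
    data = [i * i for i in range(2_000_000)]           # stand-in for inference
    return sum(data)

secs, mib = measure(dummy_workload)
print(f"elapsed={secs:.2f}s, rss_delta={mib:.1f} MiB")
```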

20 pages, 792 KB  
Review
Lightweight Encryption Algorithms for IoT
by Cláudio Silva, Nelson Tenório and Jorge Bernardino
Computers 2025, 14(12), 505; https://doi.org/10.3390/computers14120505 - 21 Nov 2025
Abstract
The exponential growth of the Internet of Things (IoT) has increased the demand for robust security solutions that are tailored to devices with limited resources. This paper presents a systematic review of recent literature on lightweight encryption algorithms designed to meet this challenge. Through an analysis of 22 distinct ciphers, the study identifies the main algorithms proposed and catalogues the key metrics used for their evaluation. The most common performance criteria are execution speed, memory usage, and energy consumption, while security is predominantly assessed using techniques such as differential and linear cryptanalysis, alongside statistical tests such as the avalanche effect. However, the most critical finding is the profound lack of standardized frameworks for both performance benchmarking and security validation. This methodological fragmentation severely hinders objective, cross-study comparisons, making evidence-based algorithm selection a significant challenge and impeding the development of verifiably secure IoT systems.
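The avalanche-effect test mentioned above can be sketched in a few lines; SHA-256 stands in here for a cipher under evaluation.

```python
# Sketch of the avalanche-effect test: flip one input bit and count how
# many output bits change (ideal is roughly 50%). SHA-256 is a stand-in
# for whichever lightweight cipher is under test.
import hashlib

def bits_changed(a: bytes, b: bytes) -> int:
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

msg = bytearray(b"lightweight cipher test block!!!")
baseline = hashlib.sha256(bytes(msg)).digest()

msg[0] ^= 0x01                               # flip a single input bit
flipped = hashlib.sha256(bytes(msg)).digest()

changed = bits_changed(baseline, flipped)
print(f"{changed}/256 output bits changed ({changed / 256:.1%})")
```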

20 pages, 2958 KB  
Article
pFedKA: Personalized Federated Learning via Knowledge Distillation with Dual Attention Mechanism
by Yuanhao Jin, Kaiqi Zhang, Chao Ma, Xinxin Cheng, Luogang Zhang and Hongguo Zhang
Computers 2025, 14(12), 504; https://doi.org/10.3390/computers14120504 - 21 Nov 2025
Abstract
Federated learning in heterogeneous data scenarios faces two key challenges. First, the conflict between global models and local personalization complicates knowledge transfer and leads to feature misalignment, hindering effective personalization for clients. Second, the lack of dynamic adaptation in standard federated learning makes it difficult to handle highly heterogeneous and changing client data, reducing the global model’s generalization ability. To address these issues, this paper proposes pFedKA, a personalized federated learning framework integrating knowledge distillation and a dual-attention mechanism. On the client side, a cross-attention module dynamically aligns global and local feature spaces using adaptive temperature coefficients to mitigate feature misalignment. On the server side, a Gated Recurrent Unit-based attention network adaptively adjusts aggregation weights using cross-round historical states, providing more robust aggregation than static averaging in heterogeneous settings. Experimental results on the CIFAR-10, CIFAR-100, and Shakespeare datasets demonstrate that pFedKA converges faster and with greater stability in heterogeneous scenarios. Furthermore, it significantly improves personalization accuracy compared to state-of-the-art personalized federated learning methods. Additionally, we demonstrate privacy guarantees by integrating pFedKA with DP-SGD, showing privacy protection comparable to FedAvg while maintaining high personalization accuracy.
(This article belongs to the Special Issue Mobile Fog and Edge Computing)
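To contrast with the learned aggregation pFedKA proposes, here is a sketch of static FedAvg-style averaging plus a placeholder attention weighting; the update vectors, sizes, and scores are invented.

```python
# Sketch of server-side aggregation: static FedAvg weights versus a
# placeholder learned weighting of the kind pFedKA's GRU attention
# network produces. All numbers are invented.
import numpy as np

client_updates = [np.array([1.0, 2.0]), np.array([3.0, 0.0]), np.array([2.0, 2.0])]
client_sizes = np.array([100, 300, 600])

# Static FedAvg: weights proportional to client dataset sizes.
weights = client_sizes / client_sizes.sum()
print("FedAvg aggregate:", sum(w * u for w, u in zip(weights, client_updates)))

# pFedKA instead derives weights from cross-round client states; a
# hypothetical softmax over attention scores stands in for that here.
scores = np.array([0.2, 1.1, 0.7])
attn = np.exp(scores) / np.exp(scores).sum()
print("attention aggregate:", sum(w * u for w, u in zip(attn, client_updates)))
```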

22 pages, 687 KB  
Article
MacHa: Multi-Aspect Controllable Text Generation Based on a Hamiltonian System
by Delong Xu, Min Lin and Yurong Wang
Computers 2025, 14(12), 503; https://doi.org/10.3390/computers14120503 - 21 Nov 2025
Abstract
Multi-aspect controllable text generation can be viewed as an extension and combination of controllable text generation tasks. It requires generating fluent text while controlling multiple different attributes (e.g., negative sentiment, or environmental protection as a topic). Current research either estimates compact latent spaces for multiple attributes, reducing interference between attributes but making it difficult to control the balance between them, or controls the balance between multiple attributes but requires complex searches during decoding. To address these issues, we propose a new method called MacHa, which trains an attribute latent space using multiple loss functions and establishes a mapping between the attribute latent space and attributes in sentences using a VAE network. An energy model based on the Hamiltonian function is defined in the latent space to control the balance between multiple attributes. Subsequently, to reduce the complexity of the decoding process, we extract samples using an RL sampling method and feed them to the VAE decoder to generate the final text. Experimental results show that, after balancing multiple attributes, MacHa generates text with higher accuracy than the baseline models and has a fast decoding speed.
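As a loose illustration of a Hamiltonian energy model balancing two attribute potentials over a latent point, here is a toy sketch with invented energies; it is not the paper's formulation.

```python
# Toy sketch: a Hamiltonian H = V(q) + |p|^2/2 over a 2-D latent point,
# with one quadratic attribute "potential" per controlled aspect.
# Targets, step size, and energies are invented for illustration.
import numpy as np

A, B = np.array([1.0, 0.0]), np.array([0.0, 1.0])  # hypothetical attribute targets

def potential(q):
    return 0.5 * np.sum((q - A) ** 2) + 0.5 * np.sum((q - B) ** 2)

def grad_potential(q):
    return (q - A) + (q - B)

def hamiltonian(q, p):
    return potential(q) + 0.5 * np.dot(p, p)

# Leapfrog steps keep H nearly constant while moving through latent space.
q, p, dt = np.array([2.0, 2.0]), np.array([0.0, 0.0]), 0.05
for _ in range(100):
    p -= dt * grad_potential(q) / 2
    q += dt * p
    p -= dt * grad_potential(q) / 2
print("latent:", q.round(3), "energy:", round(hamiltonian(q, p), 3))
```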