Computers

Computers is an international, scientific, peer-reviewed, open access journal of computer science, including computer and network architecture and computer–human interaction as its main foci, published monthly online by MDPI.

Quartile Ranking: JCR Q2 (Computer Science, Interdisciplinary Applications)

All Articles (2,007)

Artificial Intelligence-Based Models for Predicting Disease Course Risk Using Patient Data

  • Rafiqul Chowdhury,
  • Wasimul Bari and
  • Minhajur Rahman
  • + 2 authors

Longitudinal data are now common; they are typically high-dimensional, large, complex, and collected using various methods, with repeated outcomes. For example, the growing elderly population experiences health deterioration, including limitations in Instrumental Activities of Daily Living (IADLs), thereby increasing demand for long-term care. Understanding the risk of repeated IADL limitations and estimating the trajectory of that risk by identifying significant predictors will support effective care planning. Such data analysis requires a complex modeling framework. We illustrated a regressive modeling framework employing statistical and machine learning (ML) models on the Health and Retirement Study data to predict the trajectory of IADL risk as a function of predictors. Based on the accuracy measure, the regressive logistic regression (RLR) and Decision Tree (DT) models showed the highest prediction accuracy: 0.90 to 0.93 for follow-ups 1–6, and 0.89 and 0.90, respectively, for follow-up 7. The Area Under the Curve of the Receiver Operating Characteristic curve also showed similar findings. Depression score, mobility score, large muscle score, and Difficulties of Activities of Daily Living (ADLs) score showed a significant positive association with IADLs (p < 0.05). The proposed modeling framework simplifies the analysis and risk prediction of repeated outcomes from complex datasets and could be automated by leveraging Artificial Intelligence (AI).

6 February 2026

Sample trajectory of conditional probabilities for a portion of selected elderly individuals.
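The model comparison described in the abstract can be pictured with a minimal sketch (not the authors' code). Assuming scikit-learn and entirely synthetic data, it fits a logistic regression and a decision tree to a binary outcome at one follow-up, including the previous follow-up's outcome as a predictor to mimic the regressive structure, and reports accuracy and ROC AUC; all feature names, coefficients, and data are illustrative placeholders.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

# Synthetic stand-in for one follow-up of a longitudinal health dataset.
rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.normal(size=n),          # depression score (placeholder)
    rng.normal(size=n),          # mobility score (placeholder)
    rng.normal(size=n),          # large muscle score (placeholder)
    rng.normal(size=n),          # ADL difficulty score (placeholder)
    rng.integers(0, 2, size=n),  # IADL limitation at the previous follow-up
])
logits = 0.8 * X[:, 0] + 0.6 * X[:, 3] + 1.2 * X[:, 4] - 1.0
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logits)))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = [
    ("regressive-style logistic regression", LogisticRegression(max_iter=1000)),
    ("decision tree", DecisionTreeClassifier(max_depth=4, random_state=0)),
]
for name, model in models:
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)[:, 1]
    pred = (proba > 0.5).astype(int)
    print(f"{name}: accuracy={accuracy_score(y_te, pred):.3f}, "
          f"AUC={roc_auc_score(y_te, proba):.3f}")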

Student performance is an important factor in the success of any educational process; as a result, early detection of students at risk is critical for enabling timely and effective educational interventions. However, most educational datasets are complex, and the set of relevant features is not fixed. In this paper, we therefore propose a new algorithm called MOHHO-NSGA-III, a multi-objective feature-selection framework that jointly optimizes classification performance, feature-subset compactness, and prediction stability across cross-validation folds. The algorithm combines Harris Hawks Optimization (HHO), which provides a good balance between exploration and exploitation, with NSGA-III, which preserves solution diversity along the Pareto front. Moreover, a diversity-management strategy generates new solutions when needed, thereby reducing premature convergence. We validated the algorithm on the Portuguese and Mathematics datasets from the UCI Student Performance repository. Selected features were evaluated with five classifiers (k-NN, Decision Tree, Naive Bayes, SVM, LDA) through 10-fold cross-validation repeated over 21 independent runs. MOHHO-NSGA-III consistently selected 12 out of 30 features (a 60% reduction) while achieving 4.5% higher average accuracy than the full feature set (Wilcoxon test; p < 0.01 across all classifiers). The most frequently selected features were past failures, absences, and family support, which aligns with educational research on student success factors. This suggests the proposed algorithm produces not only accurate but also interpretable models suitable for deployment in institutional early warning systems.

6 February 2026

Proposed MOHHO-NSGA-III approach for multi-objective feature selection.
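As an illustration of the three objectives named in the abstract, the following minimal sketch (not the MOHHO-NSGA-III implementation) scores a candidate feature subset on classification error, subset size, and fold-to-fold accuracy variation using scikit-learn. The dataset, classifier, and candidate mask are placeholder assumptions; in the proposed framework, such objective vectors would be fed to the hybrid HHO/NSGA-III search.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Placeholder dataset with 30 features, echoing the 30 candidate features
# in the abstract; the real study uses the UCI Student Performance data.
X, y = make_classification(n_samples=400, n_features=30, n_informative=8,
                           random_state=0)

def objectives(mask, X, y):
    """Return (classification error, subset size, fold-to-fold accuracy std)
    for a boolean feature mask, using 10-fold cross-validation."""
    if not mask.any():
        return 1.0, 0, 1.0  # penalize the empty subset
    scores = cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=10)
    return 1.0 - scores.mean(), int(mask.sum()), scores.std()

# Example: evaluate one random candidate subset of 12 features.
rng = np.random.default_rng(0)
mask = np.zeros(X.shape[1], dtype=bool)
mask[rng.choice(X.shape[1], size=12, replace=False)] = True
print(objectives(mask, X, y))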

Transparency Mechanisms for Generative AI Use in Higher Education Assessment: A Systematic Scoping Review (2022–2026)

  • Itahisa Pérez-Pérez,
  • Miriam Catalina González-Afonso and
  • David Pérez-Jorge
  • + 1 author

The integration of generative AI in higher education has reignited debates around authorship and academic integrity, prompting approaches that emphasize transparency. This study identifies and synthesizes the transparency mechanisms described for assessment involving generative AI, recognizes implementation patterns, and analyzes the available evidence regarding compliance monitoring, rigor, workload, and acceptability. A scoping review (PRISMA 2020) was conducted using searches in Scopus, Web of Science, ERIC, and IEEE Xplore (2022–2026). Out of 92 records, 11 studies were included, and four dimensions were coded: compliance assessment approach, specified requirements, implementation patterns, and reported evidence. The results indicate limited operationalization: the absence of explicit assessment (27.3%) and unverified self-disclosure (18.2%) are predominant, along with implicit instructor judgment (18.2%). Requirements are often poorly specified (45.5%), and evidence concerning workload and acceptability is rarely reported (63.6%). Overall, the literature suggests that transparency is more feasible when it is proportionate, grounded in clear expectations, and aligned with the assessment design, while avoiding punitive or overly surveillant dynamics. The review protocol was prospectively registered in PROSPERO (CRD420261287226).

6 February 2026

Search flow diagram of selected studies.

Simulation-based training systems are increasingly deployed to prepare learners for complex, safety-critical, and dynamic work environments. While advances in computing have enabled immersive and data-rich simulations, many systems remain optimized for procedural accuracy and surface-level task performance rather than the macrocognitive processes that underpin adaptive expertise. Macrocognition encompasses higher-order cognitive processes that are essential for performance transfer beyond controlled training conditions. When these processes are insufficiently supported, training systems risk fostering brittle strategies and negative training effects. This paper introduces a macrocognitive design taxonomy for simulation-based training systems derived from a large-scale meta-analysis examining the transfer of macrocognitive skills from immersive simulations to real-world training environments. Drawing on evidence synthesized from 111 studies spanning healthcare, industrial safety, skilled trades, and defense contexts, the taxonomy links macrocognitive theory to human–computer interaction (HCI) design affordances, computational data traces, and feedback and adaptation mechanisms shown to support transfer. Grounded in joint cognitive systems theory and learning engineering practice, the taxonomy treats macrocognition as a designable and computable system concern informed by empirical transfer effects rather than as an abstract explanatory construct.

6 February 2026

Macrocognitive design taxonomy for simulation-based training.

Reprints of Collections

Advanced Image Processing and Computer Vision (Reprint)

Editors: Selene Tomassini, M. Ali Akber Dewan

Computers - ISSN 2073-431X