Search Results (848)

Search Parameters:
Keywords = attention bias

22 pages, 2937 KB  
Article
Attention-Driven Deep Learning for News-Based Prediction of Disease Outbreaks
by Avneet Singh Gautam, Zahid Raza, Maria Lapina and Mikhail Babenko
Big Data Cogn. Comput. 2025, 9(11), 291; https://doi.org/10.3390/bdcc9110291 - 14 Nov 2025
Abstract
Natural Language Processing is being used for disease outbreak prediction from news data. However, the available research focuses on predicting outbreaks only for specific diseases, relying on disease-specific data for COVID-19, Zika, SARS, MERS, Ebola, and similar pathogens. To address the challenge of disease outbreak prediction without relying on prior knowledge or introducing bias, this research proposes a model that leverages a news dataset devoid of specific disease names. This approach ensures generalizability and domain independence in identifying potential outbreaks. To facilitate supervised learning, spaCy was employed to annotate the dataset, enabling the classification of articles as either related or unrelated to disease outbreaks. LSTM, Bi-LSTM, Bi-LSTM with a Multi-Head Attention mechanism, and a Transformer were used and compared for this classification task. Experimental results show good prediction accuracy for the Bi-LSTM with Multi-Head Attention and the Transformer on the test dataset. The work thus serves as a proactive, unbiased approach to predicting an outbreak of any disease rather than of one disease in particular. Full article
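To make the comparison above concrete, a minimal PyTorch sketch of the best-performing recurrent configuration (a Bi-LSTM encoder followed by multi-head attention and a binary outbreak/non-outbreak head) could look like the following; the vocabulary size, embedding width, and head count are illustrative assumptions rather than values reported in the paper.

```python
import torch
import torch.nn as nn

class BiLSTMAttentionClassifier(nn.Module):
    """Bi-LSTM encoder + multi-head self-attention + binary classification head.
    Hyperparameters are illustrative, not taken from the paper."""
    def __init__(self, vocab_size=20000, embed_dim=128, hidden_dim=128, num_heads=4):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.attention = nn.MultiheadAttention(2 * hidden_dim, num_heads, batch_first=True)
        self.classifier = nn.Linear(2 * hidden_dim, 2)  # outbreak-related vs. unrelated

    def forward(self, token_ids):                 # token_ids: (batch, seq_len)
        x = self.embedding(token_ids)             # (batch, seq_len, embed_dim)
        h, _ = self.bilstm(x)                     # (batch, seq_len, 2*hidden_dim)
        attended, _ = self.attention(h, h, h)     # self-attention over LSTM states
        pooled = attended.mean(dim=1)             # average pooling over time
        return self.classifier(pooled)            # logits for the two classes

# Example: score a batch of 8 news articles, each truncated/padded to 256 tokens.
model = BiLSTMAttentionClassifier()
logits = model(torch.randint(1, 20000, (8, 256)))
print(logits.shape)  # torch.Size([8, 2])
```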
23 pages, 16307 KB  
Article
Improving EFDD with Neural Networks in Damping Identification for Structural Health Monitoring
by Yuanqi Zheng, Chin-Long Lee, Jia Guo, Renjie Shen, Feifei Sun, Jiaqi Yang and Alejandro Saenz Calad
Sensors 2025, 25(22), 6929; https://doi.org/10.3390/s25226929 - 13 Nov 2025
Abstract
Damping has attracted increasing attention as an indicator for structural health monitoring (SHM), owing to its sensitivity to subtle damage that may not be reflected in natural frequencies. However, the practical application of damping-based SHM remains limited by the accuracy and robustness of damping identification methods. Enhanced Frequency Domain Decomposition (EFDD), a widely used operational modal analysis technique, offers efficiency and user-friendliness, but suffers from intrinsic deficiencies in damping identification due to bias introduced at several signal-processing stages. This study proposes to improve EFDD by integrating neural networks, replacing heuristic parameter choices with data-driven modules. Two strategies are explored: a step-wise embedding of neural modules into the EFDD workflow, and an end-to-end grid-weight framework that aggregates candidate damping estimates using a lightweight multilayer perceptron. Both approaches were validated through numerical simulations on synthetic response datasets, and their applicability was further confirmed through shaking-table experiments on an eight-storey steel frame and a five-storey steel–concrete hybrid structure. The proposed grid-weight EFDD demonstrated superior robustness and sensitivity in capturing early-stage damping variations, confirming its potential for practical SHM applications. The findings also revealed that the effectiveness of damping-based indicators is strongly influenced by the structural material system. This study highlights the feasibility of integrating neural network training into EFDD to replace human heuristics, thereby improving the reliability and interpretability of damping-based damage detection. Full article
(This article belongs to the Special Issue Intelligent Sensors and Artificial Intelligence in Building)
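The grid-weight idea in the preceding abstract can be sketched as a small multilayer perceptron that scores a grid of candidate damping estimates and returns their softmax-weighted average; the per-candidate feature layout and layer sizes below are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GridWeightAggregator(nn.Module):
    """Aggregate candidate damping estimates with learned weights (illustrative sketch).

    Each candidate is assumed to carry a small feature vector (e.g., frequency band,
    quality of the SDOF fit, decay-segment length); an MLP scores every candidate and
    a softmax turns the scores into aggregation weights.
    """
    def __init__(self, feat_dim=4, hidden=32):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, candidate_zeta, candidate_feats):
        # candidate_zeta:  (batch, n_candidates)          raw damping-ratio estimates
        # candidate_feats: (batch, n_candidates, feat_dim) per-candidate descriptors
        scores = self.scorer(candidate_feats).squeeze(-1)   # (batch, n_candidates)
        weights = torch.softmax(scores, dim=-1)
        return (weights * candidate_zeta).sum(dim=-1)       # weighted damping estimate

# Example: 16 records, each with a 10-point grid of candidate estimates.
agg = GridWeightAggregator()
zeta_hat = agg(torch.rand(16, 10) * 0.05, torch.randn(16, 10, 4))
print(zeta_hat.shape)  # torch.Size([16])
```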
15 pages, 1138 KB  
Systematic Review
Diagnostic Support in Dentistry Through Artificial Intelligence: A Systematic Review
by Alessio Danilo Inchingolo, Grazia Marinelli, Arianna Fiore, Liviana Balestriere, Claudio Carone, Francesco Inchingolo, Massimo Corsalini, Daniela Di Venere, Andrea Palermo, Angelo Michele Inchingolo and Gianna Dipalma
Bioengineering 2025, 12(11), 1244; https://doi.org/10.3390/bioengineering12111244 - 13 Nov 2025
Abstract
Background/Objectives: The integration of artificial intelligence (AI) into dental diagnostics is rapidly evolving, offering opportunities to improve diagnostic precision, reproducibility, and accessibility of care. This systematic review examined the clinical performance of AI-based diagnostic tools in dentistry compared with traditional methods, with particular attention to radiographic assessment, orthodontic classification, periodontal disease detection, and other relevant specialties. Methods: Comprehensive searches of PubMed, Scopus, and Web of Science were carried out for articles published from January 2015 to June 2025, in accordance with PRISMA guidelines. Only English-language clinical studies investigating AI applications in dental diagnostics were included. Fifteen studies fulfilled the inclusion criteria and underwent quality appraisal and risk-of-bias assessment. Results: Across diverse dental fields, AI systems showed encouraging diagnostic capabilities. Radiographic algorithms enhanced lesion detection and anatomical landmark identification, while machine learning models successfully classified malocclusions and periodontal status. Photographic image analysis demonstrated potential in geriatric and preventive care. However, methodological variability, limited sample sizes, and the absence of external validation constrained generalizability. Study quality ranged from high to moderate, with some reports affected by bias or incomplete data reporting. Conclusions: AI holds considerable promise as an adjunct in dental diagnostics, particularly for imaging-based evaluation and clinical decision support. Broader clinical adoption will require methodological harmonization, rigorous multicenter trials, and validation of AI systems across diverse patient populations. Full article
(This article belongs to the Special Issue Artificial Intelligence in Biotechnology)
17 pages, 654 KB  
Article
IntentGraphRec: Dual-Level Fusion of Co-Intent Graphs and Shift-Aware Sequence Encoding Under Full-Catalog Evaluation
by Doo-Yong Park and Sang-Min Choi
Mathematics 2025, 13(22), 3632; https://doi.org/10.3390/math13223632 - 12 Nov 2025
Abstract
Sequential recommendations seek to predict the next item a user will interact with by modeling historical behavior, yet most approaches emphasize either temporal dynamics or item relationships and thus miss how structural co-intents interact with dynamic preference shifts under realistic evaluation. IntentGraphRec introduces a dual-level framework that builds an intent graph from session co-occurrences to learn intent-aware item representations with a lightweight GNN, paired with a shift-aware Transformer that adapts attention to evolving preferences via a learnable fusion gate. To avoid optimistic bias, evaluation is performed with a leakage-free, full-catalog ranking protocol that forms prefixes strictly before the last target occurrence and scores against the entire item universe while masking PAD and prefix items. On MovieLens-1M and Gowalla, IntentGraphRec is competitive but does not surpass strong Transformer baselines (SASRec/BERT4Rec); controlled analyses indicate that late fusion is often dominated by sequence representations and that local co-intent graphs provide limited gains unless structural signals are injected earlier or regularized. These findings provide a reproducible view of when structural signals help, and when they do not, in sequential recommendations and offer guidance for future graph–sequence hybrids. Full article
(This article belongs to the Section E: Applied Mathematics)
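The learnable fusion gate described in the abstract above can be illustrated in a few lines of PyTorch: a sigmoid gate computed from the concatenated graph-side and sequence-side user states decides how much each branch contributes. The dimensions and the scalar gating form are illustrative assumptions, not the paper's exact code.

```python
import torch
import torch.nn as nn

class LateFusionGate(nn.Module):
    """Blend an intent-graph user representation with a sequence-encoder
    representation through a learned gate (illustrative sketch)."""
    def __init__(self, dim=64):
        super().__init__()
        self.gate = nn.Linear(2 * dim, 1)

    def forward(self, graph_repr, seq_repr):
        # graph_repr, seq_repr: (batch, dim) user states from the two branches
        g = torch.sigmoid(self.gate(torch.cat([graph_repr, seq_repr], dim=-1)))
        return g * graph_repr + (1.0 - g) * seq_repr

fuse = LateFusionGate(dim=64)
user_state = fuse(torch.randn(32, 64), torch.randn(32, 64))
print(user_state.shape)  # torch.Size([32, 64])
```

Inspecting the learned gate values under such a scheme is one simple way to observe the reported tendency of late fusion to be dominated by the sequence representation.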
26 pages, 2003 KB  
Review
Artificial Intelligence in Floating Offshore Wind Turbines: A Critical Review of Applications in Design, Monitoring, Control, and Digital Twins
by Ewelina Kostecka, Tymoteusz Miller, Irmina Durlik and Arkadiusz Nerć
Energies 2025, 18(22), 5937; https://doi.org/10.3390/en18225937 - 11 Nov 2025
Viewed by 149
Abstract
Floating offshore wind turbines (FOWTs) face complex aero-hydro-servo-elastic interactions that challenge conventional modeling, monitoring, and control. This review critically examines how artificial intelligence (AI) is being applied across four domains—design and surrogate modeling, structural health monitoring, control and operations, and digital twins—with explicit attention to uncertainty and reliability. Using PRISMA 2020 (Preferred Reporting Items for Systematic Reviews and Meta-Analyses), a Scopus search identified 412 records; after filtering for articles, conference papers, and open access, 115 studies were analyzed. We organize the literature into a taxonomy covering classical supervised learning, deep neural surrogates, physics-informed and hybrid models, reinforcement learning, digital twins with online learning, and uncertainty-aware approaches. Neural surrogates accelerate coupled simulations; probabilistic encoders improve structural health monitoring; model predictive control and trust-region reinforcement learning enhance adaptive control; and digital twins integrate reduced-order physics with data-driven calibration for lifecycle management. The corpus reveals progress but also recurring limitations: simulation-heavy validation, inconsistent metrics, and insufficient field-scale evidence. We conclude with a bias-aware synthesis and propose priorities for future work, including shared benchmarks, safe RL with stability guarantees, twin-in-the-loop testing, and uncertainty-to-decision standards that connect model outputs to certification and operational risk. Full article
(This article belongs to the Special Issue Computation Modelling for Offshore Wind Turbines and Wind Farms)
17 pages, 999 KB  
Review
Convergent Evolution and the Epigenome
by Sebastian Gaston Alvarado, Annaliese Chang and Maral Tajerian
Epigenomes 2025, 9(4), 45; https://doi.org/10.3390/epigenomes9040045 - 11 Nov 2025
Viewed by 165
Abstract
Background: Trait convergence or parallelism is widely seen across the animal and plant kingdoms. For example, the evolution of eyes in cephalopods and vertebrate lineages, wings in bats and insects, or shark and dolphin body shapes are examples of convergent evolution. Such traits develop as a function of environmental pressures or opportunities that lead to similar outcomes despite the independent origins of underlying tissues, cells, and gene transcriptional patterns. Our current understanding of the molecular processes underlying these phenomena is gene-centric and focuses on how convergence involves the recruitment of novel genes, the recombination of gene products, and the duplication and divergence of genetic substrates. Scope: Despite the independent origins of a given trait, these model organisms still possess some form of epigenetic processes conserved in eukaryotes that mediate gene-by-environment interactions. These traits evolve under similar environmental pressures, so attention should be given to plastic molecular processes that shape gene function along these evolutionary paths. Key Mechanisms: Here, we propose that epigenetic processes such as histone-modifying machinery are essential in mediating the dialog between environment and gene function, leading to trait convergence across disparate lineages. We propose that epigenetic modifications not only mediate gene-by-environment interactions but also bias the distribution of de novo mutations and recombination, thereby channeling evolutionary trajectories toward convergence. An inclusive view of the epigenetic landscape may provide a parsimonious understanding of trait evolution. Full article
(This article belongs to the Collection Feature Papers in Epigenomes)
43 pages, 4264 KB  
Article
Generative AI Integration: Key Drivers and Factors Enhancing Productivity of Engineering Faculty and Students for Sustainable Education
by Humaid Al Naqbi, Zied Bahroun and Vian Ahmed
Sustainability 2025, 17(21), 9914; https://doi.org/10.3390/su17219914 - 6 Nov 2025
Viewed by 433
Abstract
Generative Artificial Intelligence (GAI) technologies are revolutionizing productivity and creativity across educational and engineering contexts. This study addresses a critical gap by examining the key factors influencing the successful integration of GAI tools to enhance faculty and student productivity, with a focus on higher education and its role in advancing sustainable development. Specifically, it investigates challenges, opportunities, and essential conditions for effective GAI adoption that support not only academic excellence but also the preparation of engineers capable of addressing global sustainability challenges in line with the United Nations Sustainable Development Goals (SDGs), particularly SDG 4 (Quality Education), SDG 9 (Industry, Innovation, and Infrastructure), and SDG 12 (Responsible Consumption and Production). A preliminary literature review identified significant factors requiring attention, further refined through interviews with 14 students and 13 faculty members, and expanded upon via a survey involving 54 students and 42 faculty members. Participants rated the significance of various factors on a five-point Likert scale, allowing for the calculation of the Relative Importance Index (RII). The findings reveal that while compliance with ethical standards and bias mitigation emerged as the most significant concerns, mid-level considerations such as institutional support, training, and explainability are critical for fostering GAI adoption in sustainable learning environments. Foundational elements, including robust technical infrastructure, data security, and scalability, are vital for long-term success and alignment with responsible and sustainable innovation. Notably, this study highlights a divergence in perspectives between faculty and students regarding GAI’s impact on productivity, with faculty emphasizing ethical considerations and students focusing on efficiency gains. This study offers a comprehensive set of considerations and insights for guiding GAI integration in educational and engineering settings. It emphasizes the need for multidisciplinary collaboration, continuous training, and strong governance to balance innovation, responsibility, and sustainability. The findings advance theoretical understanding and provide practical insights for academia, policymakers, and technology developers aiming to harness GAI’s full potential in fostering sustainable engineering education and development. Full article
(This article belongs to the Special Issue Advances in Engineering Education and Sustainable Development)
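The Relative Importance Index used in the study above has the standard closed form RII = ΣW / (A × N), where W are the Likert weights assigned by respondents, A is the highest possible weight (5 on a five-point scale), and N is the number of respondents. A minimal computation is sketched below; the ratings are made up for illustration and are not data from the survey.

```python
import numpy as np

def relative_importance_index(ratings, max_scale=5):
    """RII = sum(W) / (A * N) for one factor's Likert ratings.
    ratings: iterable of integer responses on a 1..max_scale scale."""
    ratings = np.asarray(ratings)
    return ratings.sum() / (max_scale * len(ratings))

# Illustrative ratings only (not responses from the study's participants):
factors = {
    "ethical compliance and bias mitigation": [5, 5, 4, 5, 4, 5],
    "institutional support and training":     [4, 4, 3, 5, 4, 4],
    "technical infrastructure":               [3, 4, 4, 3, 4, 3],
}
for name, ratings in sorted(factors.items(),
                            key=lambda kv: -relative_importance_index(kv[1])):
    print(f"{name}: RII = {relative_importance_index(ratings):.3f}")
```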
20 pages, 1349 KB  
Article
DATTAMM: Domain-Aware Test-Time Adaptation for Multimodal Misinformation Detection
by Kaicheng Xu, Shasha Wang and Zipeng Diao
Appl. Sci. 2025, 15(21), 11832; https://doi.org/10.3390/app152111832 - 6 Nov 2025
Viewed by 484
Abstract
The rapid proliferation of multimodal misinformation across diverse news categories poses unprecedented challenges to digital ecosystems, where existing detection systems exhibit critical limitations in domain adaptation and fairness. Current methods suffer from two fundamental flaws: (1) severe performance variance (>35% accuracy drop in education/science categories) due to category-specific semantic shifts; (2) systemic real/fake detection bias causing up to 68.3% false positives in legitimate content—risking suppression of factual reporting especially in high-stakes domains like public health discourse. To address these dual challenges, this paper proposes the DATTAMM (Domain-Adaptive Tensorized Multimodal Model), a novel framework integrating category-aware attention mechanisms and adversarial debiasing modules. Our approach dynamically aligns textual–visual features while suppressing domain-irrelevant noise through the following: (a) semantic disentanglement layers extracting category-invariant patterns; (b) cross-modal verification units resolving inter-modal conflicts; (c) real/fake gradient alignment regularizers. Extensive experiments on nine news categories demonstrate that the DATTAMM achieves an average F1-score of 0.854, outperforming state-of-the-art baselines by 32.7%. The model maintains consistent performance with less than 5.4% variance across categories, significantly reducing accuracy drops in education and science content where baselines degrade by over 35%. Crucially, the DATTAMM narrows the real/fake F1 gap to merely 0.017, compared to 0.243–0.547 in baseline models, while cutting false positives in high-stakes domains like health news to 5.8% versus the 38.2% baseline average. These advances lower societal costs of misclassification by 79.7%, establishing a new paradigm for robust and equitable misinformation detection in evolving information ecosystems. Full article
26 pages, 1484 KB  
Article
Enhancing Ransomware Threat Detection: Risk-Aware Classification via Windows API Call Analysis and Hybrid ML/DL Models
by Sarah Alhuwayshil, Sundaresan Ramachandran and Kyounggon Kim
J. Cybersecur. Priv. 2025, 5(4), 96; https://doi.org/10.3390/jcp5040096 - 5 Nov 2025
Viewed by 317
Abstract
Ransomware attacks pose a serious threat to computer networks, causing widespread disruption to individual, corporate, governmental, and critical national infrastructures. To mitigate their impact, extensive research has been conducted to analyze ransomware operations. However, most prior studies have focused on decryption, post-infection response, or general family-level classification for performance evaluation, with limited attention to linking classification accuracy to each family’s threat level and behavioral patterns. In this study, we propose a classification framework for the most dangerous ransomware families targeting Windows systems, correlating model performance with defined threat levels (high, medium, and low) based on API call patterns. Two independent datasets were used, extracted from VirusTotal and Cuckoo Sandbox, and a cross-source evaluation strategy was applied, alternating training and testing roles between datasets to assess generalization ability and minimize source bias. The results show that the proposed approach, particularly when using XGBoost and LightGBM, achieved accuracy rates ranging from 84 to 100% across datasets. These findings confirm the effectiveness of our method in accurately classifying ransomware families while accounting for their severity and behavioral characteristics. Full article
(This article belongs to the Collection Machine Learning and Data Analytics for Cyber Security)
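The cross-source evaluation strategy described above (train on one sandbox's API-call features, test on the other's, then swap roles) can be outlined as follows; the synthetic feature matrices and the scikit-learn-style XGBoost wrapper are assumptions for illustration only.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_synthetic_source(n=500, n_api_features=64, n_families=5):
    """Stand-in for an API-call frequency matrix from one sandbox source."""
    X = rng.poisson(lam=3.0, size=(n, n_api_features)).astype(float)
    y = rng.integers(0, n_families, size=n)
    return X, y

X_vt, y_vt = make_synthetic_source()   # placeholder for the VirusTotal-derived dataset
X_ck, y_ck = make_synthetic_source()   # placeholder for the Cuckoo-derived dataset

def cross_source_eval(X_train, y_train, X_test, y_test):
    """Train on one source, evaluate on the other to expose source bias."""
    clf = XGBClassifier(n_estimators=200, max_depth=6, eval_metric="mlogloss")
    clf.fit(X_train, y_train)
    return accuracy_score(y_test, clf.predict(X_test))

print("train VT     -> test Cuckoo:", cross_source_eval(X_vt, y_vt, X_ck, y_ck))
print("train Cuckoo -> test VT    :", cross_source_eval(X_ck, y_ck, X_vt, y_vt))
```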
22 pages, 3487 KB  
Article
Research and Optimization of Ultra-Short-Term Photovoltaic Power Prediction Model Based on Symmetric Parallel TCN-TST-BiGRU Architecture
by Tengjie Wang, Zian Gong, Zhiyuan Wang, Yuxi Liu, Yahong Ma, Feng Wang and Jing Li
Symmetry 2025, 17(11), 1855; https://doi.org/10.3390/sym17111855 - 3 Nov 2025
Viewed by 258
Abstract
(1) Background: Ultra-short-term photovoltaic (PV) power prediction is crucial for optimizing grid scheduling and enhancing energy utilization efficiency. Existing prediction methods face challenges of missing data, noise interference, and insufficient accuracy. (2) Methods: This study proposes a single-step hybrid neural network model integrating Temporal Convolutional Network (TCN), Temporal Shift Transformer (TST), and Bidirectional Gated Recurrent Unit (BiGRU) to achieve high-precision 15-minute-ahead PV power prediction, with a design aligned with symmetry principles. Data preprocessing uses Variational Mode Decomposition (VMD) and random forest interpolation to suppress noise and repair missing values. A symmetric parallel dual-branch feature extraction module is built: TCN-TST extracts local dynamics and long-term dependencies, while BiGRU captures global features. This symmetric structure matches the intra-day periodic symmetry of PV power (e.g., symmetric irradiance patterns around noon) and avoids bias from single-branch models. Tensor concatenation and an adaptive attention mechanism realize feature fusion and dynamic weighted output. (3) Results: Experiments on real data from a Xinjiang PV power station, with hyperparameter optimization (BiGRU units, activation function, TCN kernels, TST parameters), show that the model outperforms comparative models in MAE and R2—e.g., its MAE is 26.53% and 18.41% lower than that of TCN and Transformer, respectively. (4) Conclusions: The proposed method achieves a balance between accuracy and computational efficiency. It provides references for PV station operation, system scheduling, and grid stability. Full article
(This article belongs to the Section Engineering and Materials)
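A schematic of the symmetric parallel dual-branch design described above (a dilated convolutional branch standing in for TCN-TST, a Bi-GRU branch for global features, and an attention-weighted fusion feeding a single-step forecasting head) is sketched below; all layer sizes and the simplified branch structure are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class DualBranchPVForecaster(nn.Module):
    """Symmetric parallel branches (dilated temporal convolution // Bi-GRU) fused by
    an attention-style weighted sum, then a single-step PV power forecast (sketch)."""
    def __init__(self, n_features=6, hidden=32):
        super().__init__()
        self.conv_branch = nn.Sequential(            # simplified stand-in for TCN-TST
            nn.Conv1d(n_features, hidden, kernel_size=3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=4, dilation=4), nn.ReLU(),
        )
        self.bigru = nn.GRU(n_features, hidden // 2, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(hidden, 1)             # adaptive weights over the two branches
        self.head = nn.Linear(hidden, 1)             # one-step-ahead power output

    def forward(self, x):                            # x: (batch, seq_len, n_features)
        local = self.conv_branch(x.transpose(1, 2)).mean(dim=2)   # (batch, hidden)
        global_, _ = self.bigru(x)
        global_ = global_[:, -1, :]                               # (batch, hidden)
        stacked = torch.stack([local, global_], dim=1)            # (batch, 2, hidden)
        w = torch.softmax(self.attn(stacked), dim=1)              # branch weights
        fused = (w * stacked).sum(dim=1)
        return self.head(fused).squeeze(-1)

model = DualBranchPVForecaster()
print(model(torch.randn(8, 96, 6)).shape)  # torch.Size([8]) one-step-ahead forecasts
```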
27 pages, 1889 KB  
Systematic Review
Clinical Effectiveness of Treatments for Mild Cognitive Impairment in Adults: A Systematic Review
by Daniel Cepeda-Pineda, Gabriela Sequeda, Sandra-Milena Carrillo-Sierra, Kevin Silvera-Cruz, Johanna Redondo-Chamorro, Astrid Rozo-Sánchez, Valmore Bermúdez, Julio César Contreras-Velásquez, Yulineth Gómez-Charris and Diego Rivera-Porras
Eur. J. Investig. Health Psychol. Educ. 2025, 15(11), 226; https://doi.org/10.3390/ejihpe15110226 - 3 Nov 2025
Viewed by 395
Abstract
Background/Objectives: Mild cognitive impairment (MCI) represents an intermediate stage between normal ageing and dementia, with a high annual progression rate. Despite its clinical relevance, no pharmacological treatment has been definitively approved for this condition; however, multiple pharmacological and non-pharmacological strategies have been investigated for their potential benefits. This systematic review assessed the effectiveness of both types of interventions in adults with MCI, aiming to identify effective strategies to preserve cognitive function. Methods: A systematic search (2017–2025) was conducted in PubMed, Scopus, ScienceDirect, SpringerLink, and WOS, following PRISMA guidelines. Randomised controlled trials and quasi-experimental studies involving adults aged ≥ 50 years with a diagnosis of MCI were included. Outcomes were evaluated in terms of cognitive, functional, behavioural, and quality-of-life improvements. Risk of bias was assessed using the RoB 2 and ROBINS-I tools. Results: Of 108,700 records screened, 40 studies were included. Non-pharmacological interventions, such as cognitive training (conventional, computerised, or virtual reality-based), consistently improved memory, attention, and executive functions (e.g., MoCA: +3.84 points; p < 0.001). Transcranial magnetic stimulation combined with physical exercise also demonstrated significant benefits (p = 0.025). Among pharmacological treatments, only vortioxetine and choline alfoscerate showed modest improvements; cholinesterase inhibitors had limited effects and frequent adverse events. Complementary therapies (yoga, probiotics, and acupuncture) yielded promising outcomes but require further validation. Conclusions: Non-pharmacological strategies, particularly cognitive training and physical exercise, emerge as the most effective and safe approaches for managing MCI. The inclusion of pharmacological interventions with preliminary evidence of benefit should be considered within a personalised, multimodal approach, while recognising the current absence of approved drug treatments for MCI. Further research is needed in underrepresented populations, such as those in Latin America. Full article
34 pages, 9628 KB  
Article
Modeling Interaction Patterns in Visualizations with Eye-Tracking: A Characterization of Reading and Information Styles
by Angela Locoro and Luigi Lavazza
Future Internet 2025, 17(11), 504; https://doi.org/10.3390/fi17110504 - 3 Nov 2025
Viewed by 340
Abstract
In data visualization, users’ scanning patterns are as crucial as their reading patterns in text-based media. Yet, no systematic attempt exists to characterize this activity with basic features, such as reading speed and scanpaths, nor to relate them to data complexity and information disposition. To fill this gap, this paper proposes a model-based method to analyze and interpret those features from eye-tracking data. To this end, the bias-noise model is applied to a publicly available data visualization eye-tracking dataset, enriched with area-of-interest labels. The positive results of this method are as follows: (i) the identification of users’ reading styles, such as meticulous, systematic, and serendipitous; (ii) the characterization of information disposition as gathered or scattered, and of information complexity as more or less dense; (iii) the discovery of a behavioural pattern of efficiency, given that the more visualizations were read by a participant, the greater their reading speed, consistency, and predictability of reading; (iv) the identification of encoding and title areas of interest as the primary loci of attention in visualizations, with a peculiar back-and-forth reading pattern; (v) the identification of the encoding area of interest as the fastest to read in less dense visualization types, such as bar, circle, and line charts. Future experiments involving participants from diverse cultural backgrounds could not only validate the observed behavioural patterns, but also enrich the experimental framework with additional perspectives. Full article
(This article belongs to the Special Issue Human-Centered Artificial Intelligence)
23 pages, 2693 KB  
Article
Deep Learning for Student Behavior Detection in Smart Classroom Environments
by Jue Wang, Yuchen Sun and Shasha Tian
Information 2025, 16(11), 949; https://doi.org/10.3390/info16110949 - 3 Nov 2025
Viewed by 425
Abstract
The ongoing integration of information technology in education has rendered the monitoring of student behavior in smart classrooms essential for improving teaching quality and student engagement. Classroom environments frequently present many problems, such as heterogeneous student behaviors, significant occlusions, loss of fine detail, and difficulty in recognizing small targets; these limitations leave current approaches inadequate in accuracy and stability. This paper enhances YOLOv11 with the following improvements: a CSP-PMSA module is developed to enhance contextual modeling in complex backgrounds; a scale-aware head (SAH) is developed to improve the perception and localization of small targets via channel unification and scale adaptation; and a Multi-Head Self-Attention (MHSA) mechanism is introduced to model global dependencies and positional bias across various subspaces, thereby enhancing the discrimination of visually similar behaviors. The experimental findings indicate that in complex classroom settings, the model attains mAP@50 and mAP@50–95 scores of 91.6% and 75.7%, respectively. This represents improvements of 2.7% and 2.6% over YOLOv11, and 4.6% and 3.6% over DETR, demonstrating strong detection precision and reliability. Additionally, the model was implemented on the Jetson Orin Nano platform, confirming its viability for real-time detection on edge devices and offering substantial support for practical deployment in smart classrooms. Full article
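A generic version of the MHSA refinement mentioned above, applying multi-head self-attention over the flattened spatial positions of a backbone feature map with a residual connection, is sketched below; the channel count and head count are assumptions, and this is not the exact YOLOv11 module from the paper.

```python
import torch
import torch.nn as nn

class SpatialMHSA(nn.Module):
    """Multi-head self-attention over the spatial positions of a CNN feature map,
    added as a residual refinement (generic sketch, not the paper's exact block)."""
    def __init__(self, channels=256, num_heads=8):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        self.mhsa = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, feat):                       # feat: (batch, C, H, W)
        b, c, h, w = feat.shape
        tokens = feat.flatten(2).transpose(1, 2)   # (batch, H*W, C) spatial tokens
        tokens = self.norm(tokens)
        attended, _ = self.mhsa(tokens, tokens, tokens)
        out = attended.transpose(1, 2).reshape(b, c, h, w)
        return feat + out                          # residual connection

block = SpatialMHSA()
print(block(torch.randn(2, 256, 20, 20)).shape)  # torch.Size([2, 256, 20, 20])
```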
17 pages, 680 KB  
Article
Perceiving Digital Threats and Artificial Intelligence: A Psychometric Approach to Cyber Risk
by Diana Carbone, Francesco Marcatto, Francesca Mistichelli and Donatella Ferrante
J. Cybersecur. Priv. 2025, 5(4), 93; https://doi.org/10.3390/jcp5040093 - 3 Nov 2025
Viewed by 337
Abstract
The rapid digitalization of work and daily life has introduced a wide range of online threats, from common hazards such as malware and phishing to emerging challenges posed by artificial intelligence (AI). While technical aspects of cybersecurity have received extensive attention, less is known about how individuals perceive digital risks and how these perceptions shape protective behaviors. Building on the psychometric paradigm, this study investigated the perception of seven digital threats among a sample of 300 Italian workers employed in IT and non-IT sectors. Participants rated each hazard on dread and unknown risk dimensions and reported their cybersecurity expertise. Optimism bias and proactive awareness were also assessed. Cluster analyses revealed four profiles based on different levels of dread and unknown risk ratings. The four profiles also differed in reported levels of expertise, optimism bias, and proactive awareness. Notably, AI was perceived as the least familiar and most uncertain hazard across groups, underscoring its salience in shaping digital risk perceptions. These findings highlight the heterogeneity of digital risk perception and suggest that tailored communication and training strategies, rather than one-size-fits-all approaches, are essential to fostering safer online practices. Full article
(This article belongs to the Section Security Engineering & Applications)
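The cluster analysis described above (grouping respondents by their dread and unknown-risk ratings) can be reproduced in outline with scikit-learn; the synthetic ratings and the specific choice of k-means with four clusters are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Stand-in for 300 respondents' mean ratings of the hazards on the two
# psychometric dimensions (dread, unknown risk), here on an assumed 1-7 scale.
ratings = np.column_stack([
    rng.uniform(1, 7, size=300),   # dread
    rng.uniform(1, 7, size=300),   # unknown risk
])

X = StandardScaler().fit_transform(ratings)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

for k in range(4):
    dread, unknown = ratings[labels == k].mean(axis=0)
    print(f"profile {k}: mean dread = {dread:.2f}, "
          f"mean unknown risk = {unknown:.2f}, n = {(labels == k).sum()}")
```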
35 pages, 720 KB  
Review
Neural Correlates of Restless Legs Syndrome (RLS) Based on Electroencephalogram (EEG)—A Mechanistic Review
by James Chmiel and Donata Kurpas
Int. J. Mol. Sci. 2025, 26(21), 10675; https://doi.org/10.3390/ijms262110675 - 2 Nov 2025
Viewed by 487
Abstract
Restless legs syndrome (RLS) is a sensorimotor disorder with evening-predominant symptoms; convergent models implicate brain iron dysregulation and altered dopaminergic/glutamatergic signaling. Because EEG provides millisecond-scale access to cortical dynamics, we synthesized waking EEG/ERP findings in RLS (sleep EEG excluded). A structured search across major databases (1980–July 2025) identified clinical EEG studies meeting prespecified criteria. Across small, mostly mid- to late-adult cohorts, four reproducible signatures emerged: (i) cortical hyperarousal at rest (fronto-central beta elevation with a dissociated vigilance profile); (ii) attentional/working memory ERPs with attenuated and delayed P300 (and reduced frontal P2), pointing to fronto-parietal dysfunction; (iii) network inefficiency (reduced theta/gamma synchrony and lower clustering/longer path length) that scales with symptom burden; and (iv) motor system abnormalities with exaggerated post-movement beta rebound and peri-movement cortical–autonomic co-activation, together with evening-vulnerable early visual processing during cognitive control. Dopamine agonist therapy partially normalizes behavior and ERP amplitudes. These converging EEG features provide candidate biomarkers for disease burden and treatment response and are consistent with models linking brain iron deficiency to thalamo-cortical timing failures. This mechanistic review did not adhere to PRISMA or PICO frameworks and did not include a formal risk-of-bias or quantitative meta-analysis; samples were small, heterogeneous, and English-only. Full article
(This article belongs to the Special Issue Biological Research of Rhythms in the Nervous System)