Search Results (4,387)

Search Parameters:
Keywords = task dependencies

16 pages, 4761 KB  
Article
ACR: Adaptive Confidence Re-Scoring for Reliable Answer Selection Among Multiple Candidates
by Eunhye Jeong and Yong Suk Choi
Appl. Sci. 2025, 15(17), 9587; https://doi.org/10.3390/app15179587 (registering DOI) - 30 Aug 2025
Abstract
With the improved reasoning capabilities of large language models (LLMs), their applications have rapidly expanded across a wide range of tasks. In recent question answering tasks, performance gains have been achieved through Self-Consistency, where LLMs generate multiple reasoning paths and determine the final answer via majority voting. However, this approach can fail when the correct answer is generated but does not appear frequently enough to be selected, highlighting its vulnerability to inconsistent generations. To address this, we propose Adaptive Confidence Re-scoring (ACR)—a method that adaptively evaluates and re-scores candidate answers to select the most trustworthy one when LLMs fail to generate consistent reasoning. Experiments on arithmetic and logical reasoning benchmarks show that ACR maintains or improves answer accuracy while significantly reducing inference cost. Compared to existing verification methods such as FOBAR, ACR reduces the number of inference calls by up to 95%, while improving inference efficiency—measured as accuracy gain per inference call—by a factor of 2× to 17×, depending on the dataset and model. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
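The failure mode described in the abstract above, where the correct answer is generated but loses the majority vote, can be made concrete with a small sketch. The `rescore_select` helper and its 0.5 agreement threshold are illustrative assumptions, not the paper's actual ACR scoring rule:

```python
from collections import Counter

def self_consistency(answers):
    """Self-Consistency baseline: majority vote over sampled answers."""
    top, freq = Counter(answers).most_common(1)[0]
    return top, freq / len(answers)

def rescore_select(answers, confidence):
    """Illustrative re-scoring: trust the vote only when generations agree;
    otherwise fall back to a per-candidate confidence score."""
    top, agreement = self_consistency(answers)
    if agreement >= 0.5:  # consistent generations: keep the majority answer
        return top
    return max(set(answers), key=confidence)

# The correct answer "42" is sampled only once, so majority voting misses it,
# but a high confidence score recovers it.
samples = ["41", "41", "42", "40", "39"]
scores = {"41": 0.3, "42": 0.9, "40": 0.2, "39": 0.1}
answer = rescore_select(samples, scores.get)  # "42" under these scores
```

Re-scoring only the inconsistent cases is also what keeps the inference-call count low relative to verification methods that score every candidate.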
19 pages, 928 KB  
Article
Putamen Stiffness Declines with Age and Is Associated with Implicit Sequence Learning Outcomes
by Hyeon Jung Heselton, Aaron T. Anderson, Curtis L. Johnson, Neal J. Cohen, Bradley P. Sutton and Hillary Schwarb
Brain Sci. 2025, 15(9), 947; https://doi.org/10.3390/brainsci15090947 - 29 Aug 2025
Abstract
Background/Objectives: Sequence learning, the ability to pick up on regularities in our environment to facilitate behavior, is critically dependent on striatal structures in the brain, with the putamen emerging as a critical hub for implicit sequence learning. As the putamen is known to shrink with age, and age-related declines in sequence learning abilities are common, it has been hypothesized that the structural integrity of the putamen is likely related to sequence learning outcomes. However, the structural literature is sparse. One reason may be that traditional structural imaging measures, like volume, are not sufficiently sensitive to measure changes that are related to performance outcomes. We propose that magnetic resonance elastography (MRE), an emerging neuroimaging tool that provides quantitative measures of microstructural integrity, may fill this gap. Methods: In this study, both sequence learning abilities and the structural integrity of the putamen were assessed in 61 cognitively healthy middle-aged and older adults (range: 45–78 years old). Sequence learning was measured via performance on the Serial Reaction Time Task. Putamen integrity was assessed in two ways: first, via standard structural volume assessments, and second, via MRE measures of tissue integrity. Results: Age significantly correlated with both putamen volume and stiffness but not sequence learning scores. While sequence learning scores did not correlate with volume, MRE-derived measures of putamen stiffness were significantly correlated with learning outcomes such that individuals with stiffer putamen showed higher learning scores. A series of control analyses were performed to highlight the specificity and sensitivity of this putamen stiffness–sequence learning relationship. Conclusions: Together these data indicate that microstructural changes that occur in the putamen as we age may contribute to changes in sequence learning outcomes. Full article
49 pages, 6649 KB  
Article
A Sequence-Aware Surrogate-Assisted Optimization Framework for Precision Gyroscope Assembly Based on AB-BiLSTM and SEG-HHO
by Donghuang Lin, Yongbo Jian and Haigen Yang
Electronics 2025, 14(17), 3470; https://doi.org/10.3390/electronics14173470 - 29 Aug 2025
Abstract
High-precision assembly plays a central role in aerospace, defense, and precision instrumentation, where errors in bolt preload or tightening sequences can directly degrade product reliability and lead to costly rework. Traditional finite element analysis (FEA) offers accuracy but is too computationally expensive for iterative or real-time optimization. Surrogate models are a promising alternative, yet conventional machine learning methods often neglect the sequential and constraint-aware nature of multi-bolt assembly. To overcome these limitations, this paper introduces an integrated framework that combines an Attention-based Bidirectional Long Short-Term Memory (AB-BiLSTM) surrogate with a stratified version of the Harris Hawks Optimizer (SEG-HHO). The AB-BiLSTM captures temporal dependencies in preload evolution while providing interpretability through attention–weight visualization, linking model focus to physical assembly dynamics. SEG-HHO employs an encoding–decoding mechanism to embed engineering constraints, enabling efficient search in complex and constrained design spaces. Validation on a gyroscope assembly task demonstrates that the framework achieves high predictive accuracy (Mean Absolute Error of 3.59 × 10−5), reduces optimization cost by orders of magnitude compared with FEA, and reveals physically meaningful patterns in bolt interactions. These results indicate a scalable and interpretable solution for precision assembly optimization. Full article
23 pages, 7214 KB  
Article
Remaining Useful Life Prediction of Rolling Bearings Based on Empirical Mode Decomposition and Transformer Bi-LSTM Network
by Chun Jin, Bo Li, Yanli Yang, Xiaodong Yuan, Rang Tu, Linbin Qiu and Xu Chen
Appl. Sci. 2025, 15(17), 9529; https://doi.org/10.3390/app15179529 (registering DOI) - 29 Aug 2025
Abstract
Remaining useful life (RUL) prediction is critical for ensuring the reliability and safety of industrial equipment. In recent years, Transformer-based models have been widely employed in RUL prediction tasks for rolling bearings, owing to their superior capability in capturing global features. However, Transformers exhibit limitations in extracting local temporal features, making it challenging to fully model the degradation process. To address this issue, this paper proposes a parallel hybrid prediction approach based on Transformer and Long Short-Term Memory (LSTM) networks. The proposed method begins by applying Empirical Mode Decomposition (EMD) to the raw vibration signals of rolling bearings, decomposing them into a series of Intrinsic Mode Functions (IMFs), from which statistical features are extracted. These features are then normalized and used to construct the input dataset for the model. In the model architecture, the LSTM network is employed to capture local temporal dependencies, while the Transformer module is utilized to model long-range relationships for RUL prediction. The performance of the proposed method is evaluated using mean absolute error (MAE) and root mean square error (RMSE). Experimental validation is conducted on the PHM2012 dataset, along with generalization experiments on the XJTU-SY dataset. The results demonstrate that the proposed Transformer–LSTM approach achieves high prediction accuracy and strong generalization performance, outperforming conventional methods such as LSTM and GRU. Full article
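The pipeline above decomposes vibration signals into IMFs and extracts statistical features from each. The statistics step can be sketched as follows; the particular features (mean, RMS, standard deviation, crest factor) are common choices in bearing prognostics and are assumptions here, since the abstract does not list the exact feature set:

```python
import math

def stat_features(signal):
    """Statistical features computed per IMF segment (feature names illustrative)."""
    n = len(signal)
    mean = sum(signal) / n
    rms = math.sqrt(sum(x * x for x in signal) / n)       # root mean square
    std = math.sqrt(sum((x - mean) ** 2 for x in signal) / n)
    peak = max(abs(x) for x in signal)
    crest = peak / rms if rms else 0.0                    # peakiness indicator
    return {"mean": mean, "rms": rms, "std": std, "crest": crest}

features = stat_features([1.0, -1.0, 1.0, -1.0])
```

Each IMF would yield one such feature dictionary per window; the normalized vectors then form the model's input sequence.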
24 pages, 4429 KB  
Article
Average Voltage Prediction of Battery Electrodes Using Transformer Models with SHAP-Based Interpretability
by Mary Vinolisha Antony Dhason, Indranil Bhattacharya, Ernest Ozoemela Ezugwu and Adeloye Ifeoluwa Ayomide
Energies 2025, 18(17), 4587; https://doi.org/10.3390/en18174587 - 29 Aug 2025
Abstract
Batteries are ubiquitous, with their presence ranging from electric vehicles to portable electronics. Research focused on increasing average voltage, improving stability, and extending cycle longevity of batteries is pivotal for the advancement of battery technology. These advancements can be accelerated through research into battery chemistries. The traditional approach, which examines each material combination individually, poses significant challenges in terms of resources and financial investment. Physics-based simulations, while detailed, are both time-consuming and resource-intensive. Researchers aim to mitigate these concerns by employing Machine Learning (ML) techniques. In this study, we propose a Transformer-based deep learning model for predicting the average voltage of battery electrodes. Transformers, known for their ability to capture complex dependencies and relationships, are adapted here for tabular data and regression tasks. The model was trained on data from the Materials Project database. The results demonstrated strong predictive performance, with lower mean absolute error (MAE) and mean squared error (MSE), and higher R2 values, indicating high accuracy in voltage prediction. Additionally, we conducted detailed per-ion performance analysis across ten working ions and apply sample-wise loss weighting to address data imbalance, significantly improving accuracy on rare-ion systems (e.g., Rb and Y) while preserving overall performance. Furthermore, we performed SHAP-based feature attribution to interpret model predictions, revealing that gravimetric energy and capacity dominate prediction influence, with architecture-specific differences in learned feature importance. This work highlights the potential of Transformer architectures in accelerating the discovery of advanced materials for sustainable energy storage. Full article
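SHAP-style attribution of the kind described above can be illustrated with a brute-force exact Shapley computation, feasible only for a handful of features. The feature names and the additive toy surrogate below are hypothetical, not the paper's model:

```python
from itertools import permutations

def shapley(features, value):
    """Exact Shapley attribution: average each feature's marginal contribution
    over all feature orderings (tractable only for a few features)."""
    names = list(features)
    phi = {f: 0.0 for f in names}
    perms = list(permutations(names))
    for order in perms:
        present = {}
        prev = value(present)
        for f in order:
            present[f] = features[f]
            cur = value(present)
            phi[f] += cur - prev
            prev = cur
    return {f: p / len(perms) for f, p in phi.items()}

# Hypothetical additive surrogate: prediction = 2*energy + 1*capacity
def predict(s):
    return 2.0 * s.get("energy", 0.0) + 1.0 * s.get("capacity", 0.0)

phi = shapley({"energy": 2.0, "capacity": 3.0}, predict)
```

For an additive model the Shapley value reduces to weight times feature value, which is a useful sanity check; real SHAP libraries approximate this average by sampling.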
18 pages, 2884 KB
Article
Research on Multi-Path Feature Fusion Manchu Recognition Based on Swin Transformer
by Yu Zhou, Mingyan Li, Hang Yu, Jinchi Yu, Mingchen Sun and Dadong Wang
Symmetry 2025, 17(9), 1408; https://doi.org/10.3390/sym17091408 - 29 Aug 2025
Abstract
Recognizing Manchu words can be challenging due to their complex character variations, subtle differences between similar characters, and homographic polysemy. Most studies rely on character segmentation techniques for character recognition or use convolutional neural networks (CNNs) to encode word images for word recognition. However, these methods can lead to segmentation errors or a loss of semantic information, which reduces the accuracy of word recognition. To address the limitations in the long-range dependency modeling of CNNs and enhance semantic coherence, we propose a hybrid architecture to fuse the spatial features of original images and spectral features. Specifically, we first leverage the Short-Time Fourier Transform (STFT) to preprocess the raw input images and thereby obtain their multi-view spectral features. Then, we leverage a primary CNN block and a pair of symmetric CNN blocks to construct a symmetric spectral enhancement module, which is used to encode the raw input features and the multi-view spectral features. Subsequently, we design a feature fusion module via Swin Transformer to fuse multi-view spectral embedding and thereby concat it with the raw input embedding. Finally, we leverage a Transformer decoder to obtain the target output. We conducted extensive experiments on Manchu words benchmark datasets to evaluate the effectiveness of our proposed framework. The experimental results demonstrated that our framework performs robustly in word recognition tasks and exhibits excellent generalization capabilities. Additionally, our model outperformed other baseline methods in multiple writing-style font-recognition tasks. Full article
(This article belongs to the Section Computer)
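The STFT preprocessing step described above can be sketched with a dependency-free direct DFT per window. The window and hop sizes are illustrative; a practical implementation would add a window function and use an FFT:

```python
import cmath

def stft(signal, win=4, hop=2):
    """Magnitude STFT via a direct DFT over sliding windows (no external deps)."""
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        seg = signal[start:start + win]
        spectrum = []
        for k in range(win):  # k-th frequency bin of this window
            s = sum(x * cmath.exp(-2j * cmath.pi * k * n / win)
                    for n, x in enumerate(seg))
            spectrum.append(abs(s))
        frames.append(spectrum)
    return frames

# A constant signal has all its energy in the DC bin of every frame
frames = stft([1.0] * 6)
```

The resulting time-frequency frames are the "multi-view spectral features" that the symmetric CNN blocks would then encode alongside the raw image.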
15 pages, 640 KB  
Article
Distance-Based Compression Method for Large Language Models
by Hongxin Shen and Baokun Hu
Appl. Sci. 2025, 15(17), 9482; https://doi.org/10.3390/app15179482 - 29 Aug 2025
Abstract
The computational cost of the Transformer architecture is highly dependent on the length of the input sequence, with a computational complexity of O(n²) due to the self-attention mechanism. As a result, Transformer-based models, such as Large Language Models, incur significant computational and storage overhead when processing tasks involving long input sequences. To mitigate these challenges, we propose a compression method that allows users to manually adjust the trade-off between compression efficiency and model performance. The method employs a trainable model to minimize information loss, ensuring that the impact on accuracy remains minimal. The method demonstrated an accuracy degradation within acceptable limits on LongBench v2.
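The quadratic dependence on sequence length noted above is easy to make concrete: halving the token count cuts self-attention cost to a quarter. The cost model below is a deliberate simplification (attention products only, ignoring projections and the MLP):

```python
def attention_cost(n, d_model):
    """Rough self-attention cost: the QK^T and AV products each touch
    n * n * d_model multiply-adds; projections and MLP are ignored."""
    return 2 * n * n * d_model

n, d = 8192, 512
ratios = [attention_cost(int(n * r), d) / attention_cost(n, d)
          for r in (1.0, 0.5, 0.25)]  # -> [1.0, 0.25, 0.0625]
```

This is why a user-tunable compression ratio gives a direct, predictable handle on inference cost: the saving scales with the square of the ratio.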
28 pages, 698 KB  
Article
From Innovation to Use: Configurational Pathways to High Fintech Use Across User Groups
by Hyun-Sun Ryu
Sustainability 2025, 17(17), 7762; https://doi.org/10.3390/su17177762 - 28 Aug 2025
Abstract
Despite high expectations for Fintech growth, its real-world expansion has fallen short due to its inherent complexity. Although Fintech is innovative, its multidimensional nature has made it difficult for companies to develop effective, tailored solutions for its diverse user groups. To foster the development of effective and practical Fintech solutions that can expand the user base, a novel and integrative approach is required. Therefore, this study aims to explore specific solutions to enhance Fintech use by holistically combining and intertwining various attributes. Based on the diffusion of innovation theory and the information systems success model, we propose a conceptual Fintech model consisting of three dimensions: innovation, financial service, and information technology. To investigate this model, we adopt fuzzy-set qualitative comparative analysis (fsQCA), a set-theoretic method suited to identifying combinations of Fintech attributes that lead to specific outcomes. The results reveal that the configurations of Fintech attributes leading to high Fintech use differ across four user groups: Infrequent users, Lurkers, Task-driven users, and Power users. The findings also show that information technology plays multifaceted roles depending on its combination with other Fintech attributes. This study explains the interdependencies among Fintech attributes and their combined effects on Fintech use, offering deeper insights into Fintech research through a configurational lens. Full article
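The fsQCA method adopted above rests on set-theoretic consistency and coverage scores computed over fuzzy membership values. A minimal sketch, with made-up membership data for three cases:

```python
def consistency(x, y):
    """Ragin's consistency: degree to which membership in configuration X
    implies membership in outcome Y (test of sufficiency)."""
    return sum(min(a, b) for a, b in zip(x, y)) / sum(x)

def coverage(x, y):
    """Share of membership in outcome Y accounted for by configuration X."""
    return sum(min(a, b) for a, b in zip(x, y)) / sum(y)

# Hypothetical fuzzy memberships for three cases
x = [0.8, 0.6, 0.2]   # membership in a candidate attribute configuration
y = [0.9, 0.7, 0.5]   # membership in the outcome (high Fintech use)
cons, cov = consistency(x, y), coverage(x, y)
```

Configurations with high consistency are retained as sufficient paths to the outcome; coverage then indicates how empirically important each path is, which is how distinct "recipes" per user group emerge.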
15 pages, 6764 KB  
Article
V-PRUNE: Semantic-Aware Patch Pruning Before Tokenization in Vision–Language Model Inference
by Hyein Seo and Yong Suk Choi
Appl. Sci. 2025, 15(17), 9463; https://doi.org/10.3390/app15179463 - 28 Aug 2025
Abstract
Recent vision–language models (VLMs) achieve strong performance across multimodal benchmarks but suffer from high inference costs due to the large number of visual tokens. Prior studies have shown that many image tokens receive consistently low attention scores during inference, indicating that a substantial portion of visual content contributes little to final predictions. These observations raise questions about the efficiency of conventional token pruning strategies, which are typically applied after all attention operations and depend on late-emerging attention scores. To address this, we propose V-PRUNE, a semantic-aware patch-level pruning framework for vision–language models that removes redundant content before tokenization. By evaluating local similarity via color and histogram statistics, our method enables lightweight and interpretable pruning without architectural changes. Applied to CLIP-based models, our approach reduces FLOPs and inference time across vision–language understanding tasks, while maintaining or improving accuracy. Qualitative results further confirm that essential regions are preserved and the pruning behavior is human-aligned, making our method a practical solution for efficient VLM inference. Full article
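Pruning patches by local color/histogram similarity before tokenization, as described above, can be sketched as follows. Patches are treated as flat lists of grayscale values; the bin count and the 0.9 similarity threshold are illustrative choices, not the paper's settings:

```python
def histogram(patch, bins=8):
    """Normalized grayscale histogram of a patch (values in [0, 256))."""
    h = [0] * bins
    for v in patch:
        h[v * bins // 256] += 1
    total = len(patch)
    return [c / total for c in h]

def intersection(h1, h2):
    """Histogram intersection similarity in [0, 1]."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def prune(patches, threshold=0.9):
    """Keep a patch only if it is not a near-duplicate of an already-kept one."""
    kept = []
    for p in patches:
        h = histogram(p)
        if all(intersection(h, histogram(k)) < threshold for k in kept):
            kept.append(p)
    return kept

# Two identical black patches and one white patch: the duplicate is dropped
kept = prune([[0] * 16, [0] * 16, [255] * 16])
```

Because the statistics are computed before any attention operation, the decision is cheap and interpretable, which is the point of pruning pre-tokenization rather than from late attention scores.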
20 pages, 320 KB  
Article
Spatial Analysis of CO2 Shadow Prices and Influencing Factors in China’s Industrial Sector
by Fangfei Zhang and Xiaobo Shen
Sustainability 2025, 17(17), 7749; https://doi.org/10.3390/su17177749 - 28 Aug 2025
Abstract
Reducing emissions through the invisible hand of the market has become an important way to promote sustainable environmental development. The shadow price of carbon dioxide (CO2) is the core element of the carbon market, and its accuracy depends on the micro level of the measurement data. In view of this, this paper innovatively uses enterprise level input-output data and combines the stochastic frontier method to obtain CO2 shadow prices in China’s industrial sector. On this basis, the impacts of research and development (R&D) intensity, opening up level, traffic development level, population density, industrial structure, urbanization level, human resources level, degree of education, and environmental governance intensity on shadow price are discussed. In further analysis, this study introduces a Spatial Durbin Model (SDM) to evaluate the spatial spillover effects of CO2 shadow price itself and its influencing factors. The research results indicate that market-oriented emission abatement measures across industries and regions can reduce total costs, and it is necessary to consider incorporating carbon tax into low-carbon policies to compensate for the shortcomings of the carbon Emission Trading Scheme (ETS). In addition, neighboring regions should coordinate emission abatement tasks in a unified manner to realize a sustainable reduction in CO2 emissions. Full article
21 pages, 1696 KB  
Article
Residual Stress Estimation in Structures Composed of One-Dimensional Elements via Total Potential Energy Minimization Using Evolutionary Algorithms
by Fatih Uzun and Alexander M. Korsunsky
J. Manuf. Mater. Process. 2025, 9(9), 292; https://doi.org/10.3390/jmmp9090292 - 28 Aug 2025
Abstract
This study introduces a novel energy-based inverse method for estimating residual stresses in structures composed of one-dimensional elements undergoing elastic–plastic deformation. The problem is reformulated as a global optimization task governed by the principle of minimum total potential energy. Rather than solving equilibrium equations directly, the internal stress distribution is inferred by minimizing the structure’s total potential energy using a real-coded genetic algorithm. This approach avoids gradient-based solvers, matrix assembly, and incremental loading, making it suitable for nonlinear and history-dependent systems. Plastic deformation is encoded through element-wise stress-free lengths, and a dynamic fitness exponent strategy adaptively controls selection pressure during the evolutionary process. The method is validated on single- and multi-bar truss structures under axial tensile loading, using a bilinear elastoplastic material model. The results are benchmarked against nonlinear finite element simulations and analytical calculations, demonstrating excellent predictive capability with stress errors typically below 1%. In multi-material systems, the technique accurately reconstructs tensile and compressive residual stresses arising from elastic–plastic mismatch using only post-load geometry. These results demonstrate the method’s robustness and accuracy, offering a fully non-incremental, variational alternative to traditional inverse approaches. Its flexibility and computational efficiency make it a promising tool for residual stress estimation in complex structural applications involving plasticity and material heterogeneity. Full article
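The core idea above, searching for the state that minimizes total potential energy rather than solving equilibrium equations, can be sketched for a single elastic bar, where the analytic minimizer is x = f/k. The GA below is a generic real-coded variant with blend crossover and Gaussian mutation, a stand-in for (not a reproduction of) the paper's dynamic-fitness-exponent algorithm:

```python
import random

def potential_energy(x, k=100.0, f=10.0):
    """Total potential energy of one elastic bar: strain energy minus external work."""
    return 0.5 * k * x * x - f * x

def minimize(energy, lo=-1.0, hi=1.0, pop_size=40, gens=60, seed=0):
    """Minimal real-coded GA: keep the best quarter as elites, breed the rest."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=energy)
        elite = pop[: pop_size // 4]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            w = rng.random()
            child = w * a + (1 - w) * b + rng.gauss(0, 0.01)  # blend + mutate
            children.append(min(hi, max(lo, child)))
        pop = elite + children
    return min(pop, key=energy)

x_opt = minimize(potential_energy)  # analytic optimum is f/k = 0.1
```

No gradients, stiffness matrix, or load stepping appear anywhere, which is the property the paper exploits to handle nonlinear, history-dependent systems.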
22 pages, 1926 KB  
Review
Biological Sequence Representation Methods and Recent Advances: A Review
by Hongwei Zhang, Yan Shi, Yapeng Wang, Xu Yang, Kefeng Li, Sio-Kei Im and Yu Han
Biology 2025, 14(9), 1137; https://doi.org/10.3390/biology14091137 - 27 Aug 2025
Abstract
Biological-sequence representation methods are pivotal for advancing machine learning in computational biology, transforming nucleotide and protein sequences into formats that enhance predictive modeling and downstream task performance. This review categorizes these methods into three developmental stages: computational-based, word embedding-based, and large language model (LLM)-based, detailing their principles, applications, and limitations. Computational-based methods, such as k-mer counting and position-specific scoring matrices (PSSM), extract statistical and evolutionary patterns to support tasks like motif discovery and protein–protein interaction prediction. Word embedding-based approaches, including Word2Vec and GloVe, capture contextual relationships, enabling robust sequence classification and regulatory element identification. Advanced LLM-based methods, leveraging Transformer architectures like ESM3 and RNAErnie, model long-range dependencies for RNA structure prediction and cross-modal analysis, achieving superior accuracy. However, challenges persist, including computational complexity, sensitivity to data quality, and limited interpretability of high-dimensional embeddings. Future directions prioritize integrating multimodal data (e.g., sequences, structures, and functional annotations), employing sparse attention mechanisms to enhance efficiency, and leveraging explainable AI to bridge embeddings with biological insights. These advancements promise transformative applications in drug discovery, disease prediction, and genomics, empowering computational biology with robust, interpretable tools. Full article
(This article belongs to the Special Issue Machine Learning Applications in Biology—2nd Edition)
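The k-mer counting representation mentioned above maps a variable-length sequence to a fixed-length frequency vector indexed by all possible k-mers, for example:

```python
from collections import Counter
from itertools import product

def kmer_vector(seq, k=2, alphabet="ACGT"):
    """Normalized k-mer frequency vector, the classic computational
    sequence representation."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return [counts["".join(km)] / total
            for km in product(alphabet, repeat=k)]

v = kmer_vector("ACGTACGT", k=2)  # 16-dimensional for k=2 over ACGT
```

The vector length grows as |alphabet|^k, which is one reason later embedding-based and LLM-based representations displaced raw counting for large k.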
22 pages, 1057 KB  
Article
Relation-Guided Embedding Transductive Propagation Network with Residual Correction for Few-Shot SAR ATR
by Xuelian Yu, Hailong Yu, Yan Peng, Lei Miao and Haohao Ren
Remote Sens. 2025, 17(17), 2980; https://doi.org/10.3390/rs17172980 - 27 Aug 2025
Abstract
Deep learning-based methods have shown great promise for synthetic aperture radar (SAR) automatic target recognition (ATR) in recent years. These methods demonstrate superior performance compared to traditional approaches across various recognition tasks. However, these methods often face significant challenges due to the limited availability of labeled samples, which is a common issue in SAR image analysis owing to the high cost and difficulty of data annotation. To address this issue, a variety of few-shot learning approaches have been proposed and have demonstrated promising results under data-scarce conditions. Nonetheless, a notable limitation of many existing few-shot methods is that their performance tends to plateau when more labeled samples become available. Most few-shot methods are optimized for scenarios with extremely limited data. As a result, they often fail to leverage the advantages of larger datasets. This leads to suboptimal recognition performance compared to conventional deep learning techniques when sufficient training data is available. Therefore, there is a pressing need for approaches that not only excel in few-shot scenarios but also maintain robust performance as the number of labeled samples increases. To this end, we propose a novel method, termed relation-guided embedding transductive propagation network with residual correction (RGE-TPNRC), specifically designed for few-shot SAR ATR tasks. By leveraging mechanisms such as relation node modeling, relation-guided embedding propagation, and residual correction, RGE-TPNRC can fully utilize limited labeled samples by deeply exploring inter-sample relations, enabling better scalability as the support set size increases. Consequently, it effectively addresses the plateauing performance problem of existing few-shot learning methods when more labeled samples become available. 
Firstly, input samples are transformed into support-query relation nodes, explicitly capturing the dependencies between support and query samples. Secondly, the known relations among support samples are utilized to guide the propagation of embeddings within the network, enabling manifold smoothing and allowing the model to generalize effectively to unseen target classes. Finally, a residual correction propagation classifier refines predictions by correcting potential errors and smoothing decision boundaries, ensuring robust and accurate classification. Experimental results on the moving and stationary target acquisition and recognition (MSTAR) and OpenSARShip datasets demonstrate that our method can achieve state-of-the-art performance in few-shot SAR ATR scenarios. Full article
28 pages, 4981 KB  
Article
Neurodetector: EEG-Based Cognitive Assessment Using Event-Related Potentials as a Virtual Switch
by Ryohei P. Hasegawa and Shinya Watanabe
Brain Sci. 2025, 15(9), 931; https://doi.org/10.3390/brainsci15090931 - 27 Aug 2025
Abstract
Background/Objectives: Motor decline in older adults can hinder cognitive assessments. To address this, we developed a brain–computer interface (BCI) using electroencephalography (EEG) and event-related potentials (ERPs) as a motor-independent EEG Switch. ERPs reflect attention-related neural activity and may serve as biomarkers for cognitive function. This study evaluated the feasibility of using ERP-based task success rates as indicators of cognitive abilities. The main goal of this article is the development and baseline evaluation of the Neurodetector system (incorporating the EEG Switch) as a motor-independent tool for cognitive assessment in healthy adults. Methods: We created a system called Neurodetector, which measures cognitive function through the ability to perform tasks using a virtual one-button EEG Switch. EEG data were collected from 40 healthy adults, mainly under 60 years of age, during three cognitive tasks of increasing difficulty. Results: The participants controlled the EEG Switch above chance level across all tasks. Success rates correlated with task difficulty and showed individual differences, suggesting that cognitive ability influences performance. In addition, we compared the pattern-matching method for ERP decoding with the conventional peak-based approaches. The pattern-matching method yielded a consistently higher accuracy and was more sensitive to task complexity and individual variability. Conclusions: These results support the potential of the EEG Switch as a reliable, non-motor-dependent cognitive assessment tool. The system is especially useful for populations with limited motor control, such as the elderly or individuals with physical disabilities. While Mild Cognitive Impairment (MCI) is an important future target for application, the present study involved only healthy adult participants. 
Future research should examine the sources of individual differences and validate EEG switches in clinical contexts, including clinical trials involving MCI and dementia patients. Our findings lay the groundwork for a novel and accessible approach for cognitive evaluation using neurophysiological data. Full article
(This article belongs to the Section Cognitive, Social and Affective Neuroscience)
27 pages, 2279 KB  
Article
HQRNN-FD: A Hybrid Quantum Recurrent Neural Network for Fraud Detection
by Yao-Chong Li, Yi-Fan Zhang, Rui-Qing Xu, Ri-Gui Zhou and Yi-Lin Dong
Entropy 2025, 27(9), 906; https://doi.org/10.3390/e27090906 - 27 Aug 2025
Abstract
Detecting financial fraud is a critical aspect of modern intelligent financial systems. Despite the advances brought by deep learning in predictive accuracy, challenges persist—particularly in capturing complex, high-dimensional nonlinear features. This study introduces a novel hybrid quantum recurrent neural network for fraud detection (HQRNN-FD). The model utilizes variational quantum circuits (VQCs) incorporating angle encoding, data reuploading, and hierarchical entanglement to project transaction features into quantum state spaces, thereby facilitating quantum-enhanced feature extraction. For sequential analysis, the model integrates a recurrent neural network (RNN) with a self-attention mechanism to effectively capture temporal dependencies and uncover latent fraudulent patterns. To mitigate class imbalance, the synthetic minority over-sampling technique (SMOTE) is employed during preprocessing, enhancing both class representation and model generalizability. Experimental evaluations reveal that HQRNN-FD attains an accuracy of 0.972 on publicly available fraud detection datasets, outperforming conventional models by 2.4%. In addition, the framework exhibits robustness against quantum noise and improved predictive performance with increasing qubit numbers, validating its efficacy and scalability for imbalanced financial classification tasks. Full article
(This article belongs to the Special Issue Quantum Computing in the NISQ Era)
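The SMOTE preprocessing step mentioned above synthesizes minority-class points by interpolating between a sample and one of its nearest minority neighbors. A minimal stdlib sketch with brute-force neighbor search and illustrative parameters:

```python
import random

def smote(minority, n_new, k=2, seed=0):
    """Minimal SMOTE sketch: each synthetic point lies on the segment between
    a minority sample and one of its k nearest minority neighbors."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_new):
        a = rng.choice(minority)
        neighbors = sorted((p for p in minority if p is not a),
                           key=lambda p: sum((u - v) ** 2
                                             for u, v in zip(a, p)))[:k]
        b = rng.choice(neighbors)
        t = rng.random()  # interpolation position along the segment
        out.append(tuple(u + t * (v - u) for u, v in zip(a, b)))
    return out

new_pts = smote([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)], 5)
```

Because synthetic points stay inside the convex neighborhood of real minority samples, the fraud class is enlarged without simply duplicating records.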