Big Data Cogn. Comput., Volume 9, Issue 7 (July 2025) – 28 articles

Cover Story: This study explores the process of transfer of learning (TL) from a neuroscience perspective. EEG data were used to model the learning process on the NeuCube spiking neural network architecture. Analyses of neuron proportion values in the emerging mental effort patterns showed that prior knowledge of programming reduced the cognitive load of learners (indicated by the increase in brain activity in the alpha waveband and a corresponding decrease in the theta waveband). As a reduction in cognitive load improves the efficiency of memory use, learners with prior programming knowledge are likely to achieve better learning outcomes compared to learners without such knowledge. This study demonstrates the potential of applying a cognitive computing approach to model the neural basis of TL and support adaptive learning system development.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; PDF is the official version. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
32 pages, 2181 KiB  
Article
Detection of Biased Phrases in the Wiki Neutrality Corpus for Fairer Digital Content Management Using Artificial Intelligence
by Abdullah, Muhammad Ateeb Ather, Olga Kolesnikova and Grigori Sidorov
Big Data Cogn. Comput. 2025, 9(7), 190; https://doi.org/10.3390/bdcc9070190 - 21 Jul 2025
Abstract
Detecting biased language in large-scale corpora, such as the Wiki Neutrality Corpus, is essential for promoting neutrality in digital content. This study systematically evaluates a range of machine learning (ML) and deep learning (DL) models for the detection of biased and pre-conditioned phrases. Conventional classifiers, including Extreme Gradient Boosting (XGBoost), Light Gradient-Boosting Machine (LightGBM), and Categorical Boosting (CatBoost), are compared with advanced neural architectures such as Bidirectional Encoder Representations from Transformers (BERT), Long Short-Term Memory (LSTM) networks, and Generative Adversarial Networks (GANs). A novel hybrid architecture is proposed, integrating DistilBERT, LSTM, and GANs within a unified framework. Extensive experimentation with intermediate variants DistilBERT + LSTM (without GAN) and DistilBERT + GAN (without LSTM) demonstrates that the fully integrated model consistently outperforms all alternatives. The proposed hybrid model achieves a cross-validation accuracy of 99.00%, significantly surpassing traditional baselines such as XGBoost (96.73%) and LightGBM (96.83%). It also exhibits superior stability, statistical significance (paired t-tests), and favorable trade-offs between performance and computational efficiency. The results underscore the potential of hybrid deep learning models for capturing subtle linguistic bias and advancing more objective and reliable automated content moderation systems.
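The boosted-tree baselines reported above are straightforward to reproduce. Below is a minimal sketch of an XGBoost classifier over TF-IDF features; the file name and column names (phrase, biased) are hypothetical placeholders, not the released corpus format.

```python
# Minimal sketch of a boosted-tree bias-phrase baseline on TF-IDF features.
# File path and column names are hypothetical, not the authors' data format.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

df = pd.read_csv("wnc_phrases.csv")            # columns: phrase, biased (0/1)
X = TfidfVectorizer(max_features=20000, ngram_range=(1, 2)).fit_transform(df["phrase"])
clf = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1,
                    eval_metric="logloss")
print(cross_val_score(clf, X, df["biased"], cv=5, scoring="accuracy").mean())
```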
46 pages, 573 KiB  
Systematic Review
State of the Art and Future Directions of Small Language Models: A Systematic Review
by Flavio Corradini, Matteo Leonesi and Marco Piangerelli
Big Data Cogn. Comput. 2025, 9(7), 189; https://doi.org/10.3390/bdcc9070189 - 21 Jul 2025
Abstract
Small Language Models (SLMs) have emerged as a critical area of study within natural language processing, attracting growing attention from both academia and industry. This systematic literature review provides a comprehensive and reproducible analysis of recent developments and advancements in SLMs post-2023. Drawing on 70 English-language studies published between January 2023 and January 2025, identified through Scopus, IEEE Xplore, Web of Science, and ACM Digital Library, and focusing primarily on SLMs (including those with up to 7 billion parameters), this review offers a structured overview of the current state of the art and potential future directions. Designed as a resource for researchers seeking an in-depth global synthesis, the review examines key dimensions such as publication trends, visual data representations, contributing institutions, and the availability of public datasets. It highlights prevailing research challenges and outlines proposed solutions, with a particular focus on widely adopted model architectures, as well as common compression and optimization techniques. This study also evaluates the criteria used to assess the effectiveness of SLMs and discusses emerging de facto standards for industry. The curated data and insights aim to support and inform ongoing and future research in this rapidly evolving field.
20 pages, 1798 KiB  
Article
An Approach to Enable Human–3D Object Interaction Through Voice Commands in an Immersive Virtual Environment
by Alessio Catalfamo, Antonio Celesti, Maria Fazio, A. F. M. Saifuddin Saif, Yu-Sheng Lin, Edelberto Franco Silva and Massimo Villari
Big Data Cogn. Comput. 2025, 9(7), 188; https://doi.org/10.3390/bdcc9070188 - 17 Jul 2025
Abstract
Nowadays, the Metaverse is facing many challenges. In this context, Virtual Reality (VR) applications that allow voice-based human–3D object interaction are limited: adopting Automated Speech Recognition (ASR) systems to interact with 3D objects in VR applications through users’ voice commands presents significant challenges due to the hardware and software limitations of headset devices. This paper aims to bridge this gap by proposing a methodology to address these issues. In particular, starting from a Mel-Frequency Cepstral Coefficient (MFCC) extraction algorithm that captures the unique characteristics of the user’s voice, we pass the extracted features as input to a Convolutional Neural Network (CNN) model. After that, in order to integrate the CNN model with a VR application running on a standalone headset, such as the Oculus Quest, we converted it into the Open Neural Network Exchange (ONNX) format, an open interoperability standard for Machine Learning (ML) models. The proposed system demonstrates good performance and represents a foundation for the development of user-centric, effective computing systems, enhancing accessibility to VR environments through voice-based commands. Experiments demonstrate that a native CNN model developed in TensorFlow performs comparably to the corresponding CNN model converted into the ONNX format, paving the way for VR applications running on headsets controlled through the user’s voice.
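As a rough illustration of the pipeline described above, the sketch below extracts MFCCs with librosa, feeds them to a small Keras CNN, and exports the model to ONNX with tf2onnx. The architecture, file names, and command count are illustrative assumptions, not the authors' exact setup.

```python
# Sketch: MFCC features -> small Keras CNN -> ONNX export (illustrative
# architecture and shapes, not the paper's exact model).
import librosa
import tensorflow as tf
import tf2onnx

y, sr = librosa.load("command.wav", sr=16000)
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)           # (13, frames)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(13, mfcc.shape[1], 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),          # 10 voice commands
])

# Convert to ONNX so the model can run inside a standalone headset app.
spec = (tf.TensorSpec((None, 13, mfcc.shape[1], 1), tf.float32, name="mfcc"),)
onnx_model, _ = tf2onnx.convert.from_keras(model, input_signature=spec, opset=13)
with open("voice_cnn.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())
```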
27 pages, 7127 KiB  
Article
LeONet: A Hybrid Deep Learning Approach for High-Precision Code Clone Detection Using Abstract Syntax Tree Features
by Thanoshan Vijayanandan, Kuhaneswaran Banujan, Ashan Induranga, Banage T. G. S. Kumara and Kaveenga Koswattage
Big Data Cogn. Comput. 2025, 9(7), 187; https://doi.org/10.3390/bdcc9070187 - 15 Jul 2025
Abstract
Code duplication, commonly referred to as code cloning, is not inherent in software systems but arises due to various factors, such as time constraints in meeting project deadlines. These duplications, or “code clones”, complicate the program structure and increase maintenance costs. Code clones are categorized into four types: Type-1, Type-2, Type-3, and Type-4. This study aims to address the adverse effects of code clones by introducing LeONet, a hybrid Deep Learning approach that enhances the detection of code clones in software systems. LeONet combines LeNet-5 with Oreo’s Siamese architecture. We extracted clone method pairs from the BigCloneBench Java repository. Feature extraction was performed using Abstract Syntax Trees, which are scalable and accurately represent the syntactic structure of the source code. The performance of LeONet was compared against other classifiers, including ANN, LeNet-5, Oreo’s Siamese, LightGBM, XGBoost, and Decision Tree. LeONet demonstrated superior performance among the classifiers tested, achieving the highest F1 score of 98.12%, and compared favorably against state-of-the-art approaches, indicating its effectiveness in code clone detection. These results underscore the potential of hybrid deep learning models and feature extraction techniques in improving the accuracy of code clone detection, providing a promising direction for future research in this area.
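The AST-feature idea can be demonstrated with Python's standard ast module. The paper extracts features from Java methods; Python stands in here purely for illustration.

```python
# Illustrative AST feature extraction: represent a function as the bag of
# its AST node types. (The paper works on Java methods; Python's ast module
# is used here only to demonstrate the idea.)
import ast
from collections import Counter

def ast_features(source: str) -> Counter:
    tree = ast.parse(source)
    return Counter(type(node).__name__ for node in ast.walk(tree))

a = ast_features("def add(a, b):\n    return a + b")
b = ast_features("def plus(x, y):\n    return x + y")
# Identical node-type profiles suggest a Type-2 clone (renamed identifiers).
print(a == b)   # True
```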
18 pages, 1663 KiB  
Article
CNN-Based Framework for Classifying COVID-19, Pneumonia, and Normal Chest X-Rays
by Cristian Randieri, Andrea Perrotta, Adriano Puglisi, Maria Grazia Bocci and Christian Napoli
Big Data Cogn. Comput. 2025, 9(7), 186; https://doi.org/10.3390/bdcc9070186 - 11 Jul 2025
Abstract
This paper describes the development of a CNN model for the analysis of chest X-rays and the automated diagnosis of pneumonia, bacterial or viral, and of lung pathologies resulting from COVID-19. It offers new insights for further research through an AI-based diagnostic tool that can be implemented automatically and made available for rapid differentiation between COVID-19 and other pneumonias from X-ray images. The model developed in this work performs three-class classification, achieving 97.48% accuracy in distinguishing chest X-rays affected by COVID-19 from other pneumonias (bacterial or viral) and from cases defined as normal, i.e., without any obvious pathology. The novelty of our study lies not only in the quality of the results obtained in terms of accuracy but, above all, in the reduced complexity of the model in terms of parameters and a shorter inference time compared to other models currently found in the literature. The excellent trade-off between the accuracy and computational complexity of our model allows for easy implementation on numerous embedded hardware platforms, such as FPGAs, for the creation of new diagnostic tools to support medical practice.
18 pages, 1199 KiB  
Article
Adaptive, Privacy-Enhanced Real-Time Fraud Detection in Banking Networks Through Federated Learning and VAE-QLSTM Fusion
by Hanae Abbassi, Saida El Mendili and Youssef Gahi
Big Data Cogn. Comput. 2025, 9(7), 185; https://doi.org/10.3390/bdcc9070185 - 9 Jul 2025
Abstract
Increased digital banking operations have brought about a surge in suspicious activities, necessitating heightened real-time fraud detection systems. However, traditional static approaches struggle to maintain privacy while adapting to new fraudulent trends. In this paper, we provide a unique approach to tackling those challenges by integrating VAE-QLSTM with Federated Learning (FL) in a semi-decentralized architecture, maintaining privacy while adapting to emerging malicious behaviors. The suggested architecture builds on the adeptness of VAE-QLSTM to capture meaningful representations of transactions, which serve for anomaly detection. QLSTM, in turn, combines quantum computational capability with temporal sequence modeling, seeking to give a rapid and scalable method for real-time detection of malicious activity. The designed approach was set up through TensorFlow Federated on two real-world datasets, notably IEEE-CIS and European cardholders, outperforming current strategies in terms of accuracy and sensitivity, achieving 94.5% and 91.3%, respectively. This proves the potential of merging VAE-QLSTM with FL to address fraud detection difficulties, ensuring privacy and scalability in advanced banking networks.
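The anomaly-scoring principle behind the VAE component can be illustrated with a plain Keras autoencoder trained on legitimate transactions, where high reconstruction error flags a transaction as suspicious. This is a simplified stand-in without the quantum, recurrent, or federated parts; the data and dimensions are placeholders.

```python
# Simplified stand-in for the VAE-QLSTM anomaly scorer: a dense autoencoder
# trained on legitimate transactions; high reconstruction error => suspicious.
import numpy as np
import tensorflow as tf

n_features = 30                        # e.g., anonymized transaction features
x_train = np.random.rand(1000, n_features).astype("float32")  # placeholder data

inp = tf.keras.Input(shape=(n_features,))
z = tf.keras.layers.Dense(8, activation="relu")(inp)   # compressed representation
out = tf.keras.layers.Dense(n_features)(z)
ae = tf.keras.Model(inp, out)
ae.compile(optimizer="adam", loss="mse")
ae.fit(x_train, x_train, epochs=5, batch_size=64, verbose=0)

def fraud_score(x):
    recon = ae.predict(x, verbose=0)
    return np.mean((x - recon) ** 2, axis=1)   # per-transaction anomaly score
```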
53 pages, 2125 KiB  
Review
LLMs in Cyber Security: Bridging Practice and Education
by Hany F. Atlam
Big Data Cogn. Comput. 2025, 9(7), 184; https://doi.org/10.3390/bdcc9070184 - 8 Jul 2025
Abstract
Large Language Models (LLMs) have emerged as powerful tools in cyber security, enabling automation, threat detection, and adaptive learning. Their ability to process unstructured data and generate context-aware outputs supports both operational tasks and educational initiatives. Despite their growing adoption, current research often focuses on isolated applications, lacking a systematic understanding of how LLMs align with domain-specific requirements and pedagogical effectiveness. This highlights a pressing need for comprehensive evaluations that address the challenges of integration, generalization, and ethical deployment in both operational and educational cyber security environments. Therefore, this paper provides a comprehensive, state-of-the-art review of the significant role of LLMs in cyber security, addressing both operational and educational dimensions. It introduces a holistic framework that categorizes LLM applications into six key cyber security domains, examining each in depth to demonstrate their impact on automation, context-aware reasoning, and adaptability to emerging threats. The paper highlights the potential of LLMs to enhance operational performance and educational effectiveness while also exploring emerging technical, ethical, and security challenges. The paper also uniquely addresses the underexamined area of LLMs in cyber security education by reviewing recent studies and illustrating how these models support personalized learning, hands-on training, and awareness initiatives. The key findings reveal that while LLMs offer significant potential in automating tasks and enabling personalized learning, challenges remain in model generalization, ethical deployment, and production readiness. Finally, the paper discusses open issues and future research directions for the application of LLMs in both operational and educational contexts. This paper serves as a valuable reference for researchers, educators, and practitioners aiming to develop intelligent, adaptive, scalable, and ethically responsible LLM-based cyber security solutions.
18 pages, 380 KiB  
Article
Gait-Based Parkinson’s Disease Detection Using Recurrent Neural Networks for Wearable Systems
by Carlos Rangel-Cascajosa, Francisco Luna-Perejón, Saturnino Vicente-Diaz and Manuel Domínguez-Morales
Big Data Cogn. Comput. 2025, 9(7), 183; https://doi.org/10.3390/bdcc9070183 - 7 Jul 2025
Abstract
Parkinson’s disease is one of the neurodegenerative conditions that has seen a significant increase in prevalence in recent decades. The lack of specific screening tests and notable disease biomarkers, combined with the strain on healthcare systems, leads to delayed detection of the disease, which worsens its progression. Diagnostic support tools can enable early detection and facilitate timely intervention. The ability of Deep Learning algorithms to identify complex features in clinical data has proven to be a promising approach in various medical domains. In this study, we investigate different architectures based on gated recurrent neural networks to assess their effectiveness in identifying subjects with Parkinson’s disease from gait records. Models with Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) layers were evaluated. Their performance is competitive with the current state of the art (accuracy up to 93.75%; average ± SD: 86 ± 5%) at reduced computational complexity, an advance toward screening and diagnostic support tools that can run on wearable devices with few computational resources.
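A minimal sketch of a GRU classifier of the kind evaluated is shown below; the window length, channel count, and layer sizes are illustrative assumptions, not the paper's configuration.

```python
# Minimal GRU classifier for gait windows (illustrative shapes, not the
# paper's exact architecture or data).
import tensorflow as tf

timesteps, channels = 100, 18          # e.g., force-sensor channels per foot
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(timesteps, channels)),
    tf.keras.layers.GRU(32),                         # lightweight for wearables
    tf.keras.layers.Dense(1, activation="sigmoid"),  # PD vs. control
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```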
26 pages, 1804 KiB  
Article
Dependency-Aware Entity–Attribute Relationship Learning for Text-Based Person Search
by Wei Xia, Wenguang Gan and Xinpan Yuan
Big Data Cogn. Comput. 2025, 9(7), 182; https://doi.org/10.3390/bdcc9070182 - 7 Jul 2025
Abstract
Text-based person search (TPS), a critical technology for security and surveillance, aims to retrieve target individuals from image galleries using textual descriptions. The existing methods face two challenges: (1) ambiguous attribute–noun association (AANA), where syntactic ambiguities lead to incorrect associations between attributes and the intended nouns; and (2) textual noise and relevance imbalance (TNRI), where irrelevant or non-discriminative tokens (e.g., ‘wearing’) reduce the saliency of critical visual attributes in the textual description. To address these challenges, we propose the dependency-aware entity–attribute alignment network (DEAAN), a novel framework that explicitly tackles AANA through dependency-guided attention and TNRI via adaptive token filtering. The DEAAN introduces two modules: (1) dependency-assisted implicit reasoning (DAIR) to resolve AANA through syntactic parsing, and (2) relevance-adaptive token selection (RATS) to suppress TNRI by learning token saliency. Experiments on CUHK-PEDES, ICFG-PEDES, and RSTPReid demonstrate state-of-the-art performance, with the DEAAN achieving a Rank-1 accuracy of 76.71% and an mAP of 69.07% on CUHK-PEDES, surpassing RDE by 0.77% in Rank-1 and 1.51% in mAP. Ablation studies reveal that DAIR and RATS individually improve Rank-1 by 2.54% and 3.42%, while their combination elevates the performance by 6.35%, validating their synergy. This work bridges structured linguistic analysis with adaptive feature selection, demonstrating practical robustness in surveillance-oriented TPS scenarios.
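The dependency-guided association that DAIR builds on can be illustrated with an off-the-shelf parser. The sketch below uses spaCy's amod (adjectival modifier) relation and is not the DEAAN implementation.

```python
# Illustration of dependency-based attribute-noun association (not the
# DEAAN implementation): link each adjective to the noun it modifies.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("a tall man wearing a red jacket and black shoes")
pairs = [(tok.text, tok.head.text) for tok in doc if tok.dep_ == "amod"]
print(pairs)   # e.g., [('tall', 'man'), ('red', 'jacket'), ('black', 'shoes')]
```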
24 pages, 2467 KiB  
Article
Laor Initialization: A New Weight Initialization Method for the Backpropagation of Deep Learning
by Laor Boongasame, Jirapond Muangprathub and Karanrat Thammarak
Big Data Cogn. Comput. 2025, 9(7), 181; https://doi.org/10.3390/bdcc9070181 - 7 Jul 2025
Abstract
This paper presents Laor Initialization, an innovative weight initialization technique for deep neural networks that utilizes forward-pass error feedback in conjunction with k-means clustering to optimize the initial weights. In contrast to traditional methods, Laor adopts a data-driven approach that enhances convergence stability and efficiency. The method was assessed using various datasets, including a gold price time series, MNIST, and CIFAR-10, across CNN and LSTM architectures. The results indicate that Laor Initialization achieved the lowest K-fold cross-validation RMSE (0.00686), surpassing Xavier, He, and random initialization. Laor demonstrated high convergence success (final RMSE = 0.00822) and the narrowest interquartile range (IQR), indicating superior stability. Gradient analysis confirmed Laor’s robustness, achieving the lowest coefficients of variation (CV = 0.2230 for MNIST, 0.3448 for CIFAR-10, and 0.5997 for the gold price series) with zero vanishing layers in the CNNs. Laor achieved a 24% reduction in CPU training time on the gold price data and the fastest runtime on MNIST (340.69 s), while maintaining efficiency on CIFAR-10 (317.30 s). It performed optimally with a batch size of 32 and a learning rate between 0.001 and 0.01. These findings establish Laor as a robust alternative to conventional methods, suitable for moderately deep architectures. Future research should focus on dynamic variance scaling and adaptive clustering.
22 pages, 3702 KiB  
Article
Modeling and Simulation of Public Opinion Evolution Based on the SIS-FJ Model with a Bidirectional Coupling Mechanism
by Wenxuan Fu, Renqi Zhu, Bo Li, Xin Lu and Xiang Lin
Big Data Cogn. Comput. 2025, 9(7), 180; https://doi.org/10.3390/bdcc9070180 - 4 Jul 2025
Abstract
The evolution of public opinion on social media affects societal security and stability. To effectively control the societal impact of public opinion evolution, it is essential to study its underlying mechanisms. Public opinion evolution on social media primarily involves two processes: information dissemination and opinion interaction. However, existing studies overlook the bidirectional coupling relationship between these two processes, with limitations such as weak coupling and insufficient consideration of individual heterogeneity. To address this, we propose the SIS-FJ model with a bidirectional coupling mechanism, which combines the strengths of the SIS (Susceptible–Infected–Susceptible) model in information dissemination and the FJ (Friedkin–Johnsen) model in opinion interaction. Specifically, the SIS model is used to describe information dissemination, while the FJ model is used to describe opinion interaction. In the computation of infection and recovery rates of the SIS model, we introduce the opinion differences between individuals and their observable neighbors from the FJ model. In the computation of opinion values in the FJ model, we introduce the node states from the SIS model, thus achieving bidirectional coupling between the two models. Moreover, the model considers individual heterogeneity from multiple aspects, including infection rate, recovery rate, and individual susceptibility. Through simulation experiments, we investigate the effects of initial opinion distribution, individual susceptibility, and network structure on public opinion evolution. Interestingly, neither initial opinion distribution, individual susceptibility, nor network structure exerts a significant influence on the proportion of disseminating and non-disseminating individuals at termination. Furthermore, we optimize the model by adjusting the functions for infection and recovery rates.
(This article belongs to the Topic Social Computing and Social Network Analysis)
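The bidirectional coupling can be sketched in NumPy: infection probabilities grow with the opinion gap between neighbors, and opinion updates average only over currently disseminating neighbors. The update rules below are a loose reading of the model with made-up parameter values, not the paper's exact equations.

```python
# Loose NumPy sketch of SIS-FJ coupling (illustrative update rules and
# parameters, not the paper's exact formulation).
import numpy as np

rng = np.random.default_rng(0)
N = 200
A = (rng.random((N, N)) < 0.05).astype(float)    # random contact network
np.fill_diagonal(A, 0)
x0 = rng.uniform(-1, 1, N)                        # initial opinions
x = x0.copy()
s = rng.uniform(0.2, 0.8, N)                      # individual susceptibility
infected = rng.random(N) < 0.1                    # initial disseminators

for t in range(100):
    # FJ step: average the opinions of infected (actively disseminating) neighbors.
    M = A * infected                               # only infected neighbors talk
    deg = M.sum(axis=1)
    neigh = np.divide(M @ x, deg, out=x.copy(), where=deg > 0)
    x = s * neigh + (1 - s) * x0
    # SIS step: infection probability grows with neighbor opinion disagreement.
    diff = np.abs(x[:, None] - x[None, :])
    p_inf = 1 - np.prod(1 - 0.05 * A * infected * diff, axis=1)
    infected = np.where(infected, rng.random(N) > 0.1,    # recovery rate 0.1
                        rng.random(N) < p_inf)

print("disseminating at termination:", infected.mean())
```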
20 pages, 12090 KiB  
Article
Research on a Crime Spatiotemporal Prediction Method Integrating Informer and ST-GCN: A Case Study of Four Crime Types in Chicago
by Yuxiao Fan, Xiaofeng Hu and Jinming Hu
Big Data Cogn. Comput. 2025, 9(7), 179; https://doi.org/10.3390/bdcc9070179 - 3 Jul 2025
Abstract
As global urbanization accelerates, communities have emerged as key areas where social conflicts and public safety risks clash. Traditional crime prevention models experience difficulties handling dynamic crime hotspots due to data lags and poor spatiotemporal resolution. Therefore, this study proposes a hybrid model combining Informer and a Spatiotemporal Graph Convolutional Network (ST-GCN) to achieve precise crime prediction at the community level. By employing a community topology and incorporating historical crime, weather, and holiday data, the ST-GCN captures spatiotemporal crime trends, while Informer identifies temporal dependencies. Moreover, the model leverages a fully connected layer to map features to the predicted values. The experimental results from 320,000 crime records from 22 police districts in Chicago, IL, USA, from 2015 to 2020 show that our model outperforms traditional and deep learning models in predicting assaults, robberies, property damage, and thefts. Specifically, the mean absolute error (MAE) is 0.73 for assault, 1.36 for theft, 1.03 for robbery, and 1.05 for criminal damage. In addition, anomalous event fluctuations are effectively captured. The results indicate that our model furthers data-driven public safety governance through spatiotemporal dependency integration and long-sequence modeling, facilitating dynamic crime hotspot prediction and resource allocation optimization. Future research should integrate multisource socioeconomic data to further enhance model adaptability and cross-regional generalization capabilities.
47 pages, 6244 KiB  
Review
Toward the Mass Adoption of Blockchain: Cross-Industry Insights from DeFi, Gaming, and Data Analytics
by Shezon Saleem Mohammed Abdul, Anup Shrestha and Jianming Yong
Big Data Cogn. Comput. 2025, 9(7), 178; https://doi.org/10.3390/bdcc9070178 - 3 Jul 2025
Abstract
Blockchain’s promise of decentralised, tamper-resistant services is gaining real traction in three arenas: decentralized finance (DeFi), blockchain gaming, and data-driven analytics. These sectors span finance, entertainment, and information services, offering a representative setting in which to study real-world adoption. This survey analyzes how each domain implements blockchain, identifies the incentives that accelerate uptake, and maps the technical and organizational barriers that still limit scale. By examining peer-reviewed literature and recent industry developments, this review distils common design features such as token incentives, verifiable digital ownership, and immutable data governance. It also pinpoints the following domain-specific challenges: capital efficiency in DeFi, asset portability and community engagement in gaming, and high-volume, low-latency querying in analytics. Moreover, cross-sector links are already forming, with DeFi liquidity tools supporting in-game economies and analytics dashboards improving decision-making across platforms. Building on these findings, this paper offers guidance on stronger interoperability and user-centered design and sets research priorities in consensus optimization, privacy-preserving analytics, and inclusive governance. Together, the insights equip developers, policymakers, and researchers to build scalable, interoperable platforms and reuse proven designs while avoiding common pitfalls.
(This article belongs to the Special Issue Application of Cloud Computing in Industrial Internet of Things)
24 pages, 775 KiB  
Article
Online Asynchronous Learning over Streaming Nominal Data
by Hongrui Li, Shengda Zhuo, Lin Li, Jiale Chen, Tianbo Wang, Jun Tang, Shaorui Liu and Shuqiang Huang
Big Data Cogn. Comput. 2025, 9(7), 177; https://doi.org/10.3390/bdcc9070177 - 2 Jul 2025
Abstract
Online learning has become increasingly prevalent in real-world applications, where data streams often comprise heterogeneous feature types—both nominal and numerical—and labels may not arrive synchronously with features. However, most existing online learning methods assume homogeneous data types and synchronous arrival of features and labels. In practice, data streams are typically heterogeneous and exhibit asynchronous label feedback, making these methods insufficient. To address these challenges, we propose a novel algorithm, termed Online Asynchronous Learning over Streaming Nominal Data (OALN), which maps heterogeneous data into a continuous latent space and leverages a model pool alongside a hint mechanism to effectively manage asynchronous labels. Specifically, OALN is grounded in three core principles: (1) It utilizes a Gaussian mixture copula in the latent space to preserve class structure and numerical relationships, thereby addressing the encoding and relational learning challenges posed by mixed feature types. (2) It performs adaptive imputation through conditional covariance matrices to seamlessly handle random missing values and feature drift, while incrementally updating copula parameters to accommodate dynamic changes in the feature space. (3) It incorporates a model pool and hint mechanism to efficiently process asynchronous label feedback. We evaluate OALN on twelve real-world datasets; the average cumulative error rates are 23.31% and 28.28% under missing rates of 10% and 50%, respectively, and the average AUC scores are 0.7895 and 0.7433, the best results among the compared algorithms. Both theoretical analyses and extensive empirical studies confirm the effectiveness of the proposed method.
16 pages, 27206 KiB  
Article
RecurrentOcc: An Efficient Real-Time Occupancy Prediction Model with Memory Mechanism
by Zimo Chen, Yuxiang Xie and Yingmei Wei
Big Data Cogn. Comput. 2025, 9(7), 176; https://doi.org/10.3390/bdcc9070176 - 2 Jul 2025
Abstract
Three-dimensional Occupancy Prediction provides a detailed representation of the surrounding environment, essential for autonomous driving. Fusing long temporal image sequences is a common technique for improving occupancy prediction performance. However, existing temporal fusion methods are inefficient due to three issues: repetitive feature extraction from temporal images, redundant fusion of temporal features, and suboptimal fusion of long-term historical features. To address these challenges, we propose the Recurrent Occupancy Prediction Network (RecurrentOcc). We introduce the Scene Memory Gate, a new temporal fusion module that condenses temporal scene features into a single historical feature map. This eliminates the need for repeated extraction and aggregation of multiple temporal images, reducing computational overhead. The Scene Memory Gate selectively retains valuable information from historical features and recurrently updates the historical feature map, enhancing temporal fusion performance. Additionally, we design a simple yet efficient encoder, significantly reducing the number of model parameters. Compared with other real-time methods, RecurrentOcc achieves state-of-the-art performance of 39.9 mIoU on the Occ3D-NuScenes dataset, with the fewest parameters (59.1 M) and an inference speed of 23.4 FPS.
(This article belongs to the Special Issue Perception and Detection of Intelligent Vision)
13 pages, 523 KiB  
Article
Using Vector Databases for the Selection of Related Occupations: An Empirical Evaluation Using O*NET
by Lino Gonzalez-Garcia, Miguel-Angel Sicilia and Elena García-Barriocanal
Big Data Cogn. Comput. 2025, 9(7), 175; https://doi.org/10.3390/bdcc9070175 - 2 Jul 2025
Abstract
Career planning agencies and other organizations can help workers if they are able to effectively identify related occupations that are relevant to the task at hand. Occupational knowledge bases such as O*NET and ESCO represent mature attempts to categorize occupations and describe them in detail so that they can be used to search for related occupations. Vector databases offer an opportunity to find related occupations based on large pre-trained word and sentence embeddings and their associated retrieval algorithms for similarity search. This paper reports a systematic empirical evaluation of the possibilities of using vector databases for related-occupation retrieval, using different document structures, embeddings, and retrieval configurations for two popular open source vector databases and the curated O*NET database. The objective was to understand the extent to which curated relations capture all the meaningful relations in a retrieval context. The results show that, independent of the database used, distance metrics, sentence embeddings, and the selection of text fragments all significantly affect overall retrieval performance when compared against the curated relations, and that retrieval also surfaces other relevant occupations based on text similarity. Further, precision is high at small cutoffs in the results list, which is especially important for settings in which vector database retrieval is part of a Retrieval-Augmented Generation (RAG) pattern. Inspection of highly ranked retrieved occupations not explicit in the curated database reveals that text similarity sometimes captures the taxonomical grouping of occupations, but in most cases cross-cuts aspects that are distinct from the hierarchical organization of the database. This suggests that text retrieval should be combined with querying explicit relations in practical applications.
(This article belongs to the Special Issue Application of Semantic Technologies in Intelligent Environment)
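The embedding-based retrieval step can be reproduced without a full vector database. The sketch below uses the sentence-transformers library with a brute-force cosine search; the model choice and occupation texts are illustrative, not the O*NET documents used in the study.

```python
# Minimal related-occupation retrieval via sentence embeddings (illustrative
# model and texts; a vector database would replace the brute-force search).
from sentence_transformers import SentenceTransformer, util

occupations = {
    "Data Scientist": "Analyze large datasets to extract actionable insights.",
    "Statistician": "Apply statistical methods to collect and analyze data.",
    "Chef": "Prepare meals and manage kitchen operations.",
}
model = SentenceTransformer("all-MiniLM-L6-v2")
names = list(occupations)
emb = model.encode([occupations[n] for n in names], normalize_embeddings=True)

query = model.encode("Works with data and statistical models",
                     normalize_embeddings=True)
scores = util.cos_sim(query, emb)[0]
for n, sc in sorted(zip(names, scores.tolist()), key=lambda t: -t[1]):
    print(f"{sc:.3f}  {n}")     # highest-scoring occupations come first
```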
17 pages, 711 KiB  
Article
Boost-Classifier-Driven Fault Prediction Across Heterogeneous Open-Source Repositories
by Philip König, Sebastian Raubitzek, Alexander Schatten, Dennis Toth, Fabian Obermann, Caroline König and Kevin Mallinger
Big Data Cogn. Comput. 2025, 9(7), 174; https://doi.org/10.3390/bdcc9070174 - 2 Jul 2025
Abstract
Ensuring reliability, availability, and security in modern software systems hinges on early fault detection, yet predicting which parts of a codebase are most at risk remains a significant challenge. In this paper, we analyze 2.4 million commits drawn from 33 heterogeneous open-source projects, spanning healthcare, security tools, data processing, and more. By examining each repository per file and per commit, we derive process metrics (e.g., churn, file age, revision frequency) alongside size metrics and entropy-based indicators of how scattered changes are over time. We train and tune a gradient boosting model to classify bug-prone commits under realistic class-imbalance conditions, achieving robust predictive performance across diverse repositories. Moreover, a comprehensive feature-importance analysis shows that files with long lifespans (high age), frequent edits (revision count), and widely scattered changes (entropy metrics) are especially vulnerable to defects. These insights can help practitioners and researchers prioritize testing and tailor maintenance strategies, ultimately strengthening software dependability.
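A compact sketch of a commit-level fault classifier in this spirit appears below, assuming a hypothetical feature table with churn, file_age, revisions, entropy, and buggy columns and handling class imbalance with balanced sample weights. The paper tunes a gradient boosting model; the library and settings here are illustrative.

```python
# Sketch of a bug-prone-commit classifier on process metrics (hypothetical
# column names; imbalance handled with balanced sample weights).
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.utils.class_weight import compute_sample_weight

df = pd.read_csv("commit_metrics.csv")   # churn, file_age, revisions, entropy, buggy
X, y = df[["churn", "file_age", "revisions", "entropy"]], df["buggy"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

clf = GradientBoostingClassifier(n_estimators=300, max_depth=3)
clf.fit(X_tr, y_tr, sample_weight=compute_sample_weight("balanced", y_tr))
print(classification_report(y_te, clf.predict(X_te)))
print(dict(zip(X.columns, clf.feature_importances_)))   # feature-importance view
```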
27 pages, 2053 KiB  
Article
Modeling the Effect of Prior Knowledge on Memory Efficiency for the Study of Transfer of Learning: A Spiking Neural Network Approach
by Mojgan Hafezi Fard, Krassie Petrova, Nikola Kirilov Kasabov and Grace Y. Wang
Big Data Cogn. Comput. 2025, 9(7), 173; https://doi.org/10.3390/bdcc9070173 - 30 Jun 2025
Abstract
The transfer of learning (TL) is the process of applying knowledge and skills learned in one context to a new and different context. Efficient use of memory is essential in achieving successful TL and good learning outcomes. This study uses a cognitive computing approach to identify and explore brain activity patterns related to memory efficiency in the context of learning a new programming language. This study hypothesizes that prior programming knowledge reduces cognitive load, leading to improved memory efficiency. Spatio-temporal brain data (STBD) were collected from a sample of participants (n = 26) using an electroencephalogram (EEG) device and analyzed by applying a spiking neural network (SNN) approach and the SNN-based NeuCube architecture. The findings revealed the neural patterns demonstrating the effect of prior knowledge on memory efficiency. They showed that programming learning outcomes were aligned with specific theta and alpha waveband spike activities concerning prior knowledge and cognitive load, indicating that cognitive load was a feasible metric for measuring memory efficiency. Building on these findings, this study proposes that the methodology developed for examining the relationship between prior knowledge and TL in the context of learning a programming language can be extended to other educational domains.
23 pages, 1523 KiB  
Article
Deep One-Directional Neural Semantic Siamese Network for High-Accuracy Fact Verification
by Muchammad Naseer, Jauzak Hussaini Windiatmaja, Muhamad Asvial and Riri Fitri Sari
Big Data Cogn. Comput. 2025, 9(7), 172; https://doi.org/10.3390/bdcc9070172 - 30 Jun 2025
Abstract
Fake news has eroded trust in credible news sources, driving the need for tools to verify the accuracy of circulating information. Fact verification addresses this issue by classifying claims as Supports (S), Refutes (R), or Not Enough Info (NEI) based on evidence. The Neural Semantic Matching Network (NSMN) is an algorithm designed for this purpose, but its reliance on BiLSTM has shown limitations, particularly overfitting. This study aims to enhance NSMN for fact verification through a structured framework comprising encoding, alignment, matching, and output layers. The proposed approach employed Siamese MaLSTM in the matching layer and introduced the Manhattan Fact Relatedness Score (MFRS) in the output layer, culminating in a novel algorithm called the Deep One-Directional Neural Semantic Siamese Network (DOD–NSSN). Performance evaluation compared DOD–NSSN with NSMN and transformer-based algorithms (BERT, RoBERTa, XLM, XL-Net). Results demonstrated that DOD–NSSN achieved 91.86% accuracy and consistently outperformed the other models, achieving over 95% accuracy across diverse topics, including sports, government, politics, health, and industry. The findings highlight the DOD–NSSN model’s capability to generalize effectively across various domains, providing a robust tool for automated fact verification.
(This article belongs to the Special Issue Machine Learning and AI Technology for Sustainable Development)
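The Manhattan-similarity idea at the heart of MaLSTM is compact: both inputs pass through one shared LSTM encoder, and similarity is the exponential of the negative L1 distance between the two final states. A minimal Keras sketch with illustrative dimensions, not the DOD–NSSN configuration:

```python
# Minimal Siamese MaLSTM sketch: shared LSTM encoder + exp(-L1) similarity
# (illustrative dimensions, not the DOD-NSSN configuration).
import tensorflow as tf

seq_len, dim = 40, 128                      # token count, embedding size
encoder = tf.keras.layers.LSTM(64)          # one encoder shared by both inputs

a = tf.keras.Input(shape=(seq_len, dim))
b = tf.keras.Input(shape=(seq_len, dim))
ha, hb = encoder(a), encoder(b)

# Manhattan similarity in (0, 1]: identical encodings give exactly 1.
sim = tf.keras.layers.Lambda(
    lambda t: tf.exp(-tf.reduce_sum(tf.abs(t[0] - t[1]), axis=1, keepdims=True))
)([ha, hb])

model = tf.keras.Model([a, b], sim)
model.compile(optimizer="adam", loss="mse")
model.summary()
```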
20 pages, 3062 KiB  
Article
Cognitive Networks and Text Analysis Identify Anxiety as a Key Dimension of Distress in Genuine Suicide Notes
by Massimo Stella, Trevor James Swanson, Andreia Sofia Teixeira, Brianne N. Richson, Ying Li, Thomas T. Hills, Kelsie T. Forbush and David Watson
Big Data Cogn. Comput. 2025, 9(7), 171; https://doi.org/10.3390/bdcc9070171 - 27 Jun 2025
Abstract
Understanding the mindset of people who die by suicide remains a key research challenge. We map conceptual and emotional word–word co-occurrences in 139 genuine suicide notes and in reference word lists (an Emotional Recall Task) from 200 individuals grouped by high/low depression, anxiety, and stress levels on the DASS-21. Positive words cover most of the suicide notes’ vocabulary; however, co-occurrences in suicide notes overlap mostly with those produced by individuals with low anxiety (Jaccard index of 0.42 for valence and 0.38 for arousal). We introduce a “words not said” method: it removes every word that corpus A shares with a comparison corpus B and then checks the emotions of the “residual” words in A \ B. If no emotions are left over, A and B express the same emotions. Simulations indicate this method can classify high/low levels of depression, anxiety, and stress with 80% accuracy in a balanced task. After subtracting suicide note words, only the high-anxiety corpus displays no significant residual emotions. Our findings thus pin anxiety as a key latent feature of suicidal psychology and offer an interpretable language-based marker for suicide risk detection.
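Both measurements reduce to simple set operations. A toy sketch of the Jaccard overlap and the "words not said" residual A \ B, with word lists invented for illustration:

```python
# Toy sketch of the two measurements: Jaccard overlap between word sets,
# and the "words not said" residual A \ B (word lists invented here).
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b)

notes = {"love", "sorry", "tired", "peace", "family"}        # corpus A (toy)
low_anxiety = {"love", "calm", "sorry", "peace", "family"}   # corpus B (toy)

print(jaccard(notes, low_anxiety))   # overlap, cf. the 0.42 / 0.38 indices
residual = notes - low_anxiety       # "words not said" by corpus B
print(residual)                      # emotions left over in A \ B
```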
27 pages, 5969 KiB  
Article
An Analysis of the Severity of Alcohol Use Disorder Based on Electroencephalography Using Unsupervised Machine Learning
by Kaloso M. Tlotleng and Rodrigo S. Jamisola, Jr.
Big Data Cogn. Comput. 2025, 9(7), 170; https://doi.org/10.3390/bdcc9070170 - 26 Jun 2025
Abstract
This paper presents an analysis of the severity of alcohol use disorder (AUD) based on electroencephalogram (EEG) signals and alcohol drinking experiments, utilizing power spectral density (PSD) and the transitions that occur as individuals drink alcohol in increasing amounts. We use data from brain-computer interface (BCI) experiments using alcohol as a stimulus, recorded from a group of seventeen alcohol-drinking male participants, together with the assessment scores of the alcohol use disorders identification test (AUDIT). This method investigates the mild, moderate, and severe symptoms of AUD using the three key domains of the AUDIT, which are hazardous alcohol use, dependence symptoms, and severe alcohol use. We utilize the EEG spectral power of the theta, alpha, and beta frequency bands by observing the transitions from the initial to the final phase of alcohol consumption. Our results are compared for people with low-risk alcohol consumption, harmful or hazardous alcohol consumption, and lastly a likelihood of AUD, based on the individual assessment scores of the AUDIT. We use Balanced Iterative Reducing and Clustering using Hierarchies (BIRCH) to cluster the results of the transitions in EEG signals and the overall brain activity of all the participants for the entire duration of the alcohol-drinking experiments. This study can be useful in creating an automatic AUD severity level detection tool to aid in early intervention and supplement evaluations by mental health professionals.
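scikit-learn ships an implementation of BIRCH; the sketch below clusters per-participant band-power transition features, with the feature layout and cluster count as illustrative assumptions.

```python
# Sketch: BIRCH clustering of EEG band-power transitions (illustrative
# feature layout: change in theta/alpha/beta power from first to last phase).
import numpy as np
from sklearn.cluster import Birch

rng = np.random.default_rng(1)
X = rng.normal(size=(17, 3))     # 17 participants x (d_theta, d_alpha, d_beta)

birch = Birch(n_clusters=3)      # e.g., low-risk / hazardous / likely AUD
labels = birch.fit_predict(X)
print(labels)
```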
22 pages, 3106 KiB  
Article
Confidential Intelligent Traffic Light Control System: Prevention of Unauthorized Traceability
by Ahmad Audat, Maram Bani Younes, Marah Yahia and Said Ghoul
Big Data Cogn. Comput. 2025, 9(7), 169; https://doi.org/10.3390/bdcc9070169 - 26 Jun 2025
Abstract
Many research studies have designed intelligent traffic light scheduling algorithms. Some researchers rely on specialized sensors and hardware to gather real-time traffic data at signalized road intersections. Others benefit from artificial intelligence techniques and/or cloud computing technologies. The technology of vehicular networks has been widely used to gather the traffic characteristics of competing traffic flows at signalized road intersections. Intelligent traffic light control systems aim to serve competing traffic flows fairly at signalized road intersections and to eliminate traffic crises. These algorithms were initially developed without considering the consequences of security threats or attacks. However, the accuracy of the traffic data gathered at each road intersection affects system performance, and fake or corrupted packets severely degrade that accuracy. Thus, in this work, we investigate the security and confidentiality aspects of intelligent traffic light systems. The possible attacks on the confidentiality of intelligent traffic light systems are examined. Then, a confidential traffic light control system that protects the privacy of traveling vehicles and drivers is presented. The proposed algorithm mainly prevents unauthorized traceability and linkability attacks that threaten people’s lives and violate their privacy. Finally, the proposed algorithm is evaluated through extensive experiments to verify its correctness and benefits compared to traditional insecure intelligent traffic light systems.
(This article belongs to the Special Issue Advances in Intelligent Defense Systems for the Internet of Things)
14 pages, 1789 KiB  
Article
Addressing Credit Card Fraud Detection Challenges with Adversarial Autoencoders
by Shiyu Ma and Carol Anne Hargreaves
Big Data Cogn. Comput. 2025, 9(7), 168; https://doi.org/10.3390/bdcc9070168 - 26 Jun 2025
Abstract
The surge in credit fraud incidents poses a critical threat to financial systems, driving the need for robust and adaptive fraud detection solutions. While various predictive models have been developed, existing approaches often struggle with two persistent challenges: extreme class imbalance and delays in detecting fraudulent activity. In this study, we propose an unsupervised Adversarial Autoencoder (AAE) framework designed to tackle these challenges simultaneously. The results highlight the potential of our approach as a scalable, interpretable, and adaptive solution for real-world credit fraud detection systems.
25 pages, 5064 KiB  
Article
Enhancing Drone Detection via Transformer Neural Network and Positive–Negative Momentum Optimizers
by Pavel Lyakhov, Denis Butusov, Vadim Pismennyy, Ruslan Abdulkadirov, Nikolay Nagornov, Valerii Ostrovskii and Diana Kalita
Big Data Cogn. Comput. 2025, 9(7), 167; https://doi.org/10.3390/bdcc9070167 - 26 Jun 2025
Abstract
The rapid development of unmanned aerial vehicles (UAVs) has had a significant impact on the growth of the economic, industrial, and social welfare of society. The ability to reach places that are difficult and dangerous for humans to access, with minimal use of third-party resources, increases the efficiency and quality of the maintenance of construction structures, agriculture, and exploration carried out by drones following predetermined trajectories. The widespread use of UAVs has, however, created the problem of verifying that drones correctly follow a given route; deviations lead to emergencies and accidents. Therefore, UAV monitoring with video cameras is of great importance. In this paper, we propose a YOLOv12 architecture with positive–negative momentum-based optimization algorithms to solve the problem of drone detection in video data. Self-attention mechanisms in transformer neural networks (NNs) improved the quality of drone detection in video. The developed training algorithms improved the accuracy of drone detection by reaching the extremum of the loss function in fewer epochs. The proposed approach improved object detection accuracy by 2.8 percentage points compared to known state-of-the-art analogs.
37 pages, 10762 KiB  
Article
Evaluating Adversarial Robustness of No-Reference Image and Video Quality Assessment Models with Frequency-Masked Gradient Orthogonalization Adversarial Attack
by Khaled Abud, Sergey Lavrushkin and Dmitry Vatolin
Big Data Cogn. Comput. 2025, 9(7), 166; https://doi.org/10.3390/bdcc9070166 - 25 Jun 2025
Abstract
Neural-network-based models have made considerable progress in many computer vision areas over recent years. However, many works have exposed their vulnerability to malicious input data manipulation, that is, to adversarial attacks. Although many recent works have thoroughly examined the adversarial robustness of classifiers, the robustness of Image Quality Assessment (IQA) methods remains understudied. This paper addresses this gap by proposing FM-GOAT (Frequency-Masked Gradient Orthogonalization Attack), a novel white-box adversarial method tailored for no-reference IQA models. Using a novel gradient orthogonalization technique, FM-GOAT uniquely optimizes adversarial perturbations against multiple perceptual constraints to minimize visibility, moving beyond traditional ℓp-norm bounds. We evaluate FM-GOAT on seven state-of-the-art NR-IQA models across three image and video datasets, revealing significant vulnerability to the proposed attack. Furthermore, we examine the applicability of adversarial purification methods to the IQA task, as well as their efficiency in mitigating white-box adversarial attacks. By studying the activations from models’ intermediate layers, we explore their behavioral patterns in adversarial scenarios and discover valuable insights that may lead to better adversarial detection.
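The orthogonalization step can be sketched generically: project the attack gradient so that its component along each perceptual-constraint gradient is removed (a Gram-Schmidt-style projection). This shows the general technique, not the exact FM-GOAT procedure.

```python
# Generic gradient orthogonalization via Gram-Schmidt-style projection: remove
# from the attack gradient g its components along constraint gradients c_i.
# (General technique for illustration, not the exact FM-GOAT procedure; with
# several constraints, the constraint set would be orthogonalized first.)
import numpy as np

def orthogonalize(g: np.ndarray, constraints: list) -> np.ndarray:
    for c in constraints:
        denom = c @ c
        if denom > 0:
            g = g - (g @ c) / denom * c   # strip the component along c
    return g

g = np.array([1.0, 2.0, 3.0])             # gradient of the IQA score
c1 = np.array([0.0, 1.0, 0.0])            # gradient of a visibility constraint
g_perp = orthogonalize(g, [c1])
print(g_perp, g_perp @ c1)                # component along c1 is now 0
```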
24 pages, 1261 KiB  
Article
Exploring Factors Impacting User Satisfaction with Electronic Payment Services in Taiwan: A Text-Mining Analysis of User Reviews
by Shu-Fen Tu and Ching-Sheng Hsu
Big Data Cogn. Comput. 2025, 9(7), 165; https://doi.org/10.3390/bdcc9070165 - 25 Jun 2025
Abstract
Electronic payments are becoming increasingly popular in Taiwan; however, there is a lack of studies examining the factors affecting user satisfaction with electronic payments in Taiwan. This study focuses on Android phone users to identify key factors influencing their experiences based on user reviews of electronic payment mobile applications. It analyzes which factors contribute to positive satisfaction and which lead to negative experiences. The study employed BERTopic for topic modeling, which flexibly accommodates multiple languages, enabling effective examination of reviews written in Chinese. Additionally, we utilized the semantic understanding capabilities of large-scale language models to preliminarily name the generated topics with the help of ChatGPT Plus. These preliminary names were then manually refined to determine the final topic titles. The findings reveal that for Android phone users, electronic payment services that enhance user convenience and offer discounts tend to foster positive satisfaction. Conversely, the instability of electronic payment applications results in many user complaints. These research results can provide valuable insights for specialized electronic payment institutions in Taiwan to enhance their services.
(This article belongs to the Special Issue Business Intelligence and Big Data in E-commerce)
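A minimal BERTopic sketch in its multilingual mode is shown below; the review strings are placeholders, and topic naming with an LLM would follow as described above.

```python
# Minimal BERTopic sketch for app-store reviews (placeholder documents;
# the multilingual mode handles Chinese-language reviews).
from bertopic import BERTopic

reviews = [
    "付款很方便，常常有優惠活動",    # convenient payments, frequent discounts
    "App 一直閃退，根本無法結帳",    # app keeps crashing at checkout
    "綁定銀行帳戶失敗好幾次",        # bank-account linking fails repeatedly
] * 20                               # BERTopic needs a reasonable corpus size

topic_model = BERTopic(language="multilingual")
topics, probs = topic_model.fit_transform(reviews)
print(topic_model.get_topic_info())  # topics can then be named with an LLM
```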
27 pages, 2079 KiB  
Article
Deep Learning-Based Draw-a-Person Intelligence Quotient Screening
by Shafaat Hussain, Toqeer Ehsan, Hassan Alhuzali and Ali Al-Laith
Big Data Cogn. Comput. 2025, 9(7), 164; https://doi.org/10.3390/bdcc9070164 - 24 Jun 2025
Abstract
The Draw-A-Person Intellectual Ability test for children, adolescents, and adults is a widely used tool in psychology for assessing intellectual ability. This test relies on human drawings for initial raw scoring, with the subsequent conversion of data into IQ ranges through manual procedures. However, this manual scoring and IQ assessment process can be time-consuming, particularly for busy psychologists dealing with a high caseload of children and adolescents. Presently, DAP-IQ screening continues to be a manual endeavor conducted by psychologists. The primary objective of our research is to streamline the IQ screening process for psychologists by leveraging deep learning algorithms. In this study, we utilized the DAP-IQ manual to derive IQ measurements and categorized the entire dataset into seven distinct classes: Very Superior, Superior, High Average, Average, Below Average, Significantly Impaired, and Mildly Impaired. The dataset was sourced from primary to high school students aged 8 to 17, comprising over 1100 sketches, which were manually classified according to the DAP-IQ manual and then converted into digital images. To develop the artificial intelligence-based models, various deep learning algorithms were employed, including a Convolutional Neural Network (CNN) and state-of-the-art CNN transfer learning models such as MobileNet, Xception, InceptionResNetV2, and InceptionV3. The MobileNet model demonstrated remarkable performance, achieving a classification accuracy of 98.68% and surpassing existing methodologies. This research represents a significant step towards expediting and enhancing IQ screening for psychologists working with diverse age groups.
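The transfer-learning setup can be sketched in Keras as a frozen MobileNet backbone with a new seven-class head; the head layers and hyperparameters are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of MobileNet transfer learning for the seven DAP-IQ classes
# (illustrative head and hyperparameters, not the paper's exact setup).
import tensorflow as tf

base = tf.keras.applications.MobileNet(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                     # freeze the pretrained backbone

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(7, activation="softmax"),   # 7 IQ categories
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```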
31 pages, 4896 KiB  
Article
A Consistency-Aware Hybrid Static–Dynamic Multivariate Network for Forecasting Industrial Key Performance Indicators
by Jiahui Long, Xiang Jia, Bingyi Li, Lin Zhu and Miao Wang
Big Data Cogn. Comput. 2025, 9(7), 163; https://doi.org/10.3390/bdcc9070163 - 20 Jun 2025
Abstract
The accurate forecasting of key performance indicators (KPIs) is essential for enhancing the reliability and operational efficiency of engineering systems under increasingly complex security challenges. However, existing approaches often neglect the heterogeneous nature of multivariate time series data, particularly the consistency of measurements and the influence of external factors, which limits their effectiveness in real-world scenarios. In this work, a Consistency-aware Hybrid Static-Dynamic Multivariate forecasting Network (CHSDM-Net) is proposed, which first applies a consistency-aware, optimization-driven segmentation to ensure high internal consistency within each segment across multiple variables. Secondly, a hybrid forecasting model integrating a Static Representation Module and a Dynamic Temporal Disentanglement and Attention Module for static and dynamic data fusion is proposed. For the dynamic data, the trend and periodic components are disentangled and fed into Trend-wise Attention and Periodic-aware Attention blocks, respectively. Extensive experiments on both synthetic and real-world radar detection datasets demonstrated that CHSDM-Net achieved significant improvements compared with existing methods. Comprehensive ablation and sensitivity analyses further validated the effectiveness and robustness of each component. The proposed method offers a practical and generalizable solution for intelligent KPI forecasting and decision support in industrial engineering applications.