Journal Description
Big Data and Cognitive Computing is an international, peer-reviewed, open access journal on big data and cognitive computing published monthly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), dblp, Inspec, Ei Compendex, and other databases.
- Journal Rank: JCR - Q1 (Computer Science, Theory and Methods) / CiteScore - Q1 (Management Information Systems)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 25.3 days after submission; acceptance to publication takes 5.6 days (median values for papers published in this journal in the second half of 2024).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 3.7 (2023)
Latest Articles
LLM Fine-Tuning: Concepts, Opportunities, and Challenges
Big Data Cogn. Comput. 2025, 9(4), 87; https://doi.org/10.3390/bdcc9040087 (registering DOI) - 2 Apr 2025
Abstract
As a foundation of large language models, fine-tuning drives rapid progress, broad applicability, and profound impacts on human–AI collaboration, surpassing earlier technological advancements. This paper provides a comprehensive overview of large language model (LLM) fine-tuning by integrating hermeneutic theories of human comprehension, with a focus on the essential cognitive conditions that underpin this process. Drawing on Gadamer’s concepts of Vorverständnis, Distanciation, and the Hermeneutic Circle, the paper explores how LLM fine-tuning evolves from initial learning to deeper comprehension, ultimately advancing toward self-awareness. It examines the core principles, development, and applications of fine-tuning techniques, emphasizing their growing significance across diverse fields and industries. The paper introduces a new term, “Tutorial Fine-Tuning (TFT)”, which denotes a process of intensive tuition given by a “tutor” to a small number of “students”, to define the latest round of LLM fine-tuning advancements. By addressing key challenges associated with fine-tuning, including ensuring adaptability, precision, credibility, and reliability, this paper explores potential future directions for the co-evolution of humans and AI. By bridging theoretical perspectives with practical implications, this work provides valuable insights into the ongoing development of LLMs, emphasizing their potential to achieve higher levels of cognitive and operational intelligence.
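As a rough, hedged illustration of the supervised fine-tuning loop the paper surveys (not code from the paper, and using a toy next-token predictor rather than a real LLM), a minimal PyTorch sketch:

import torch
import torch.nn as nn

# Toy stand-in for a pretrained causal language model (assumption: any model
# producing logits of shape [batch, seq, vocab] would slot in here).
class TinyLM(nn.Module):
    def __init__(self, vocab=100, dim=32):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.head = nn.Linear(dim, vocab)
    def forward(self, ids):
        return self.head(self.emb(ids))

model = TinyLM()
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Hypothetical fine-tuning batch: token ids from a small instruction dataset.
batch = torch.randint(0, 100, (4, 16))
inputs, targets = batch[:, :-1], batch[:, 1:]

for step in range(3):                      # a few gradient steps
    logits = model(inputs)                 # [batch, seq-1, vocab]
    loss = loss_fn(logits.reshape(-1, 100), targets.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()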
Full article
Open Access Article
Development of a Predictive Model for the Biological Activity of Food and Microbial Metabolites Toward Estrogen Receptor Alpha (ERα) Using Machine Learning
by
Maksim Kuznetsov, Olga Chernyavskaya, Mikhail Kutuzov, Daria Vilkova, Olga Novichenko, Alla Stolyarova, Dmitry Mashin and Igor Nikitin
Big Data Cogn. Comput. 2025, 9(4), 86; https://doi.org/10.3390/bdcc9040086 (registering DOI) - 1 Apr 2025
Abstract
The interaction of estrogen receptor alpha (ERα) with various metabolites—both endogenous and exogenous, such as those present in food products, as well as gut microbiota-derived metabolites—plays a critical role in modulating the hormonal balance in the human body. In this study, we evaluated a suite of 27 machine learning models and, following systematic optimization and rigorous performance comparison, identified linear discriminant analysis (LDA) as the most effective predictive approach. A meticulously curated dataset comprising 75 molecular descriptors derived from compounds with known ERα activity was assembled, enabling the model to achieve an accuracy of 89.4% and an F1 score of 0.93, thereby demonstrating high predictive efficacy. Feature importance analysis revealed that both topological and physicochemical descriptors—most notably FractionCSP3 and AromaticProportion—play pivotal roles in the potential binding to ERα. Subsequently, the model was applied to chemicals commonly encountered in food products, such as indole and various phenolic compounds, indicating that approximately 70% of these substances exhibit activity toward ERα. Moreover, our findings suggest that food processing conditions, including fermentation, thermal treatment, and storage parameters, can significantly influence the formation of these active metabolites. These results underscore the promising potential of integrating predictive modeling into food technology and highlight the need for further experimental validation and model refinement to support innovative strategies for developing healthier and more sustainable food products.
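As a loose illustration of the modeling step described above (not the authors' code; the data here are synthetic placeholders for the 75 molecular descriptors such as FractionCSP3 and AromaticProportion), a scikit-learn sketch of training and scoring an LDA classifier:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

# Placeholder descriptor matrix; real features would come from a cheminformatics toolkit.
X, y = make_classification(n_samples=500, n_features=75, n_informative=20,
                           random_state=0)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = LinearDiscriminantAnalysis().fit(X_tr, y_tr)

pred = model.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred))
print("F1:", f1_score(y_te, pred))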
Full article
(This article belongs to the Special Issue Beyond Diagnosis: Machine Learning in Prognosis, Prevention, Healthcare, Neurosciences, and Precision Medicine)
Open Access Article
A Verifiable, Privacy-Preserving, and Poisoning Attack-Resilient Federated Learning Framework
by
Washington Enyinna Mbonu, Carsten Maple, Gregory Epiphaniou and Christo Panchev
Big Data Cogn. Comput. 2025, 9(4), 85; https://doi.org/10.3390/bdcc9040085 - 31 Mar 2025
Abstract
Federated learning is the on-device, collaborative training of a global model that can be utilized to support the privacy preservation of participants’ local data. In federated learning, there are challenges to model training regarding privacy preservation, security, resilience, and integrity. For example, a malicious server can indirectly obtain sensitive information through shared gradients. On the other hand, the correctness of the global model can be corrupted through poisoning attacks from malicious clients using carefully manipulated updates. Many related works on secure aggregation and poisoning attack detection have been proposed and applied in various scenarios to address these two issues. Nevertheless, existing works rely on the assumption that the server will return correctly aggregated results to the participants. However, a malicious server may return false aggregated results to participants. It is still an open problem to simultaneously preserve users’ privacy and defend against poisoning attacks while enabling participants to verify the correctness of aggregated results from the server. In this paper, we propose a privacy-preserving and poisoning attack-resilient federated learning framework that supports the verification of aggregated results from the server. Specifically, we design a zero-trust dual-server architectural framework instead of a traditional single-server scheme based on trust. We exploit additive secret sharing to eliminate the single point of exposure of the training data and implement a weight selection and filtering strategy to enhance robustness to poisoning attacks while supporting the verification of aggregated results from the servers. Theoretical analysis and extensive experiments conducted on real-world data demonstrate the practicability of our proposed framework.
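A toy numpy sketch (not the paper's implementation) of the additive secret-sharing idea behind the dual-server design: each client splits its model update into two random shares, one per server, so neither server alone sees the update, yet the sum of the two aggregated shares equals the true aggregate:

import numpy as np

rng = np.random.default_rng(0)
client_updates = [rng.normal(size=5) for _ in range(3)]   # hypothetical local updates

shares_a, shares_b = [], []
for w in client_updates:
    r = rng.normal(size=w.shape)     # random mask
    shares_a.append(w - r)           # share sent to server A
    shares_b.append(r)               # share sent to server B

agg_a = np.sum(shares_a, axis=0)     # each server aggregates only its own shares
agg_b = np.sum(shares_b, axis=0)

recovered = agg_a + agg_b            # participants can check this against the reported aggregate
assert np.allclose(recovered, np.sum(client_updates, axis=0))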
Full article

Open Access Article
Uncertainty-Aware δ-GLMB Filtering for Multi-Target Tracking
by
M. Hadi Sepanj, Saed Moradi, Zohreh Azimifar and Paul Fieguth
Big Data Cogn. Comput. 2025, 9(4), 84; https://doi.org/10.3390/bdcc9040084 - 31 Mar 2025
Abstract
The δ-GLMB filter is an analytic solution to the multi-target Bayes recursion used in multi-target tracking. It extends the Generalised Labelled Multi-Bernoulli (GLMB) framework by providing an efficient and scalable implementation while preserving track identities, making it a widely used approach in the field. Theoretically, the δ-GLMB filter handles uncertainties in measurements in its filtering procedure. However, in practice, degeneration of the measurement quality affects the performance of this filter. In this paper, we discuss the effects of increasing measurement uncertainty on the δ-GLMB filter and also propose two heuristic methods to improve the performance of the filter in such conditions. The base idea of the proposed methods is to utilise the information stored in the history of the filtering procedure, which can be used to decrease the measurement uncertainty effects on the filter. Since GLMB filters have shown good results in the field of multi-target tracking, an uncertainty-immune δ-GLMB can serve as a strong tool in this area. In this study, the results indicate that the proposed heuristic ideas can improve the performance of filtering in the presence of uncertain observations. Experimental evaluations demonstrate that the proposed methods enhance track continuity and robustness, particularly in scenarios with low detection rates and high clutter, while maintaining computational feasibility.
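For background (standard multi-target filtering notation, not a result of this paper), the multi-target Bayes recursion that the δ-GLMB filter solves analytically can be written with set integrals as

\pi_{k|k-1}(X_k) = \int f_{k|k-1}(X_k \mid X)\, \pi_{k-1}(X)\, \delta X,
\qquad
\pi_{k}(X_k \mid Z_k) = \frac{g_k(Z_k \mid X_k)\, \pi_{k|k-1}(X_k)}{\int g_k(Z_k \mid X)\, \pi_{k|k-1}(X)\, \delta X},

where X_k is the labelled multi-target state set, Z_k the measurement set, f_{k|k-1} the multi-target transition density, and g_k the multi-target measurement likelihood.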
Full article

Open Access Article
COVID-19 Severity Classification Using Hybrid Feature Extraction: Integrating Persistent Homology, Convolutional Neural Networks and Vision Transformers
by
Redet Assefa, Adane Mamuye and Marco Piangerelli
Big Data Cogn. Comput. 2025, 9(4), 83; https://doi.org/10.3390/bdcc9040083 - 31 Mar 2025
Abstract
This paper introduces a model that automates the diagnosis of a patient’s condition, reducing reliance on highly trained professionals, particularly in resource-constrained settings. To ensure data consistency, the dataset was preprocessed for uniformity in size, format, and color channels. Image quality was further enhanced using histogram equalization to improve the dynamic range. Lung regions were isolated using segmentation techniques, which also eliminated extraneous areas from the images. A modified segmentation-based cropping technique was employed to define an optimal cropping rectangle. Feature extraction was performed using persistent homology, deep learning, and hybrid methodologies. Persistent homology captured topological features across multiple scales, while the deep learning model leveraged convolutional translation equivariance, input-adaptive weighting, and the global receptive field provided by Vision Transformers. By integrating features from both methods, the classification model effectively predicted severity levels (mild, moderate, severe). The segmentation-based cropping method showed a modest improvement, achieving 80% accuracy, while stand-alone persistent homology features reached 66% accuracy. Notably, the hybrid model outperformed existing approaches, including SVM, ResNet50, and VGG16, achieving an accuracy of 82%.
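As a schematic sketch of the hybrid step described above (placeholder arrays, not the study's pipeline), topological and deep features are concatenated per image before classification:

import numpy as np
from sklearn.linear_model import LogisticRegression

n_images = 200
topo_feats = np.random.rand(n_images, 30)    # placeholder persistent-homology summaries
deep_feats = np.random.rand(n_images, 128)   # placeholder CNN/ViT embeddings
labels = np.random.randint(0, 3, n_images)   # mild / moderate / severe

hybrid = np.concatenate([topo_feats, deep_feats], axis=1)
clf = LogisticRegression(max_iter=1000).fit(hybrid, labels)
print(clf.score(hybrid, labels))             # training accuracy only, for illustration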
Full article

Open Access Article
Reinforced Residual Encoder–Decoder Network for Image Denoising via Deeper Encoding and Balanced Skip Connections
by
Ismail Boucherit and Hamza Kheddar
Big Data Cogn. Comput. 2025, 9(4), 82; https://doi.org/10.3390/bdcc9040082 - 31 Mar 2025
Abstract
Traditional image denoising algorithms often struggle with real-world complexities such as spatially correlated noise, varying illumination conditions, sensor-specific noise patterns, motion blur, and structural distortions. This paper presents an enhanced residual denoising network, R-REDNet, which stands for Reinforced Residual Encoder–Decoder Network. The proposed architecture incorporates deeper convolutional layers in the encoder and replaces additive skip connections with averaging operations to improve feature extraction and noise suppression. Additionally, the method leverages an iterative refinement approach, further enhancing its denoising performance. Experiments conducted on two real-world noisy image datasets demonstrate that R-REDNet outperforms current state-of-the-art approaches. Specifically, it attained a peak signal-to-noise ratio of 44.01 dB and a structural similarity index of 0.9931 on Dataset 1, and it obtained a peak signal-to-noise ratio of 46.15 dB with a structural similarity index of 0.9955 on Dataset 2. These findings confirm the efficiency of our method in delivering high-quality image restoration while preserving fine details.
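A minimal PyTorch sketch (an assumption-laden simplification, not the published architecture) of the averaging skip connection that replaces the usual additive skip in an encoder–decoder denoiser:

import torch
import torch.nn as nn

class TinyEncoderDecoder(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.dec1 = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.out = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        d1 = self.dec1(e2)
        fused = (d1 + e1) / 2.0        # averaged skip connection instead of d1 + e1
        return self.out(fused)

noisy = torch.rand(1, 3, 64, 64)
denoised = TinyEncoderDecoder()(noisy)
print(denoised.shape)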
Full article

Open Access Article
Enhancing Green Practice Detection in Social Media with Paraphrasing-Based Data Augmentation
by
Anna Glazkova and Olga Zakharova
Big Data Cogn. Comput. 2025, 9(4), 81; https://doi.org/10.3390/bdcc9040081 - 31 Mar 2025
Abstract
Detecting mentions of green waste practices on social networks is a crucial tool for environmental monitoring and sustainability analytics. Social media serve as a valuable source of ecological information, enabling researchers to track trends, assess public engagement, and predict the spread of sustainable behaviors. Automatic extraction of mentions of green waste practices facilitates large-scale analysis, but the uneven distribution of such mentions presents a challenge for effective detection. To address this, data augmentation plays a key role in balancing class distribution in green practice detection tasks. In this study, we compared existing data augmentation techniques based on the paraphrasing of original texts. We evaluated the effectiveness of additional explanations in prompts, the Chain-of-Thought prompting, synonym substitution, and text expansion. Experiments were conducted on the GreenRu dataset, which focuses on detecting mentions of green waste practices in Russian social media. Our results, obtained using two instruction-based large language models, demonstrated the effectiveness of the Chain-of-Thought prompting for text augmentation. These findings contribute to advancing sustainability analytics by improving automated detection and analysis of environmental discussions. Furthermore, the results of this study can be applied to other tasks that require augmentation of text data in the context of ecological research and beyond.
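A hedged sketch of how a Chain-of-Thought paraphrasing prompt for augmentation might be assembled (the wording and the generate() callable are placeholders, not the study's prompts or models):

def build_cot_paraphrase_prompt(post: str) -> str:
    # Hypothetical prompt template; the study's actual instructions may differ.
    return (
        "You will paraphrase a social media post that mentions a green waste practice.\n"
        "First, briefly reason step by step about which practice is mentioned and its key facts.\n"
        "Then write one paraphrase that keeps the meaning but changes the wording.\n\n"
        f"Post: {post}\nReasoning and paraphrase:"
    )

def augment(posts, generate):
    # 'generate' stands in for any instruction-tuned LLM call (assumption).
    return [generate(build_cot_paraphrase_prompt(p)) for p in posts]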
Full article

Open Access Article
Optimized Resource Allocation Algorithm for a Deadline-Aware IoT Healthcare Model
by
Amal EL-Natat, Nirmeen A. El-Bahnasawy, Ayman El-Sayed and Sahar Elkazzaz
Big Data Cogn. Comput. 2025, 9(4), 80; https://doi.org/10.3390/bdcc9040080 - 30 Mar 2025
Abstract
In recent years, the healthcare market has grown rapidly and is dealing with a huge increase in data. Healthcare applications are time-sensitive and need quick responses with fewer delays. Fog Computing (FC) was introduced to achieve this aim. It can be applied in various application areas like healthcare, smart and intelligent environments, etc. In healthcare applications, some tasks are considered critical and need to be processed first; other tasks are time-sensitive and need to be processed before their deadline. In this paper, we have proposed a Task Classification algorithm based on Deadline and Criticality (TCDC) for serving healthcare applications in a fog environment. It classifies tasks by criticality level so that critical tasks are processed first, and it also considers the deadline of each task, which is an essential parameter in real-time applications. The performance of TCDC was compared with existing algorithms from the literature. The simulation results showed that the proposed algorithm can improve the overall performance in terms of QoS parameters such as makespan (with an improvement of 60% to 70%), resource utilization, etc.
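A simplified sketch of the ordering idea (our reading of the classification step, not the published algorithm): critical fog tasks run first, and within each group tasks are served by earliest deadline:

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    critical: bool
    deadline: float   # seconds until the task must finish (hypothetical unit)

def tcdc_order(tasks):
    # Critical tasks first; within each group, earliest deadline first.
    return sorted(tasks, key=lambda t: (not t.critical, t.deadline))

tasks = [Task("ecg_alert", True, 2.0), Task("daily_report", False, 60.0),
         Task("fall_detect", True, 1.0), Task("sync_logs", False, 30.0)]
for t in tcdc_order(tasks):
    print(t.name)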
Full article
(This article belongs to the Special Issue Application of Cloud Computing in Industrial Internet of Things)
Open Access Article
Pyramidal Predictive Network V2: An Improved Predictive Architecture and Training Strategies for Future Perception Prediction
by
Chaofan Ling, Junpei Zhong, Weihua Li, Ran Dong and Mingjun Dai
Big Data Cogn. Comput. 2025, 9(4), 79; https://doi.org/10.3390/bdcc9040079 - 28 Mar 2025
Abstract
In this paper, we propose an improved version of the Pyramidal Predictive Network (PPNV2), a theoretical framework inspired by predictive coding, which addresses the limitations of its predecessor (PPNV1) in the task of future perception prediction. While PPNV1 employed a temporal pyramid architecture and demonstrated promising results, its innate signal processing led to aliasing in the prediction, restricting its application in robotic navigation. We analyze the signal dissemination and characteristic artifacts of PPNV1 and introduce architectural enhancements and training strategies to mitigate these issues. The improved architecture focuses on optimizing information dissemination and reducing aliasing in neural networks. We redesign the downsampling and upsampling components to enable the network to construct images more effectively from low-frequency-input Fourier features, replacing the simple concatenation of different inputs in the previous version. Furthermore, we refine the training strategies to alleviate input inconsistency during training and testing phases. The enhanced model exhibits increased interpretability, stronger prediction accuracy, and improved quality of predictions. The proposed PPNV2 offers a more robust and efficient approach to future video-frame prediction, overcoming the limitations of its predecessor and expanding its potential applications in various robotic domains, including pedestrian prediction, vehicle prediction, and navigation.
Full article

Open Access Article
GenAI Learning for Game Design: Both Prior Self-Transcendent Pursuit and Material Desire Contribute to a Positive Experience
by
Dongpeng Huang and James E. Katz
Big Data Cogn. Comput. 2025, 9(4), 78; https://doi.org/10.3390/bdcc9040078 - 27 Mar 2025
Abstract
This study explores factors influencing positive experiences with generative AI (GenAI) in a learning game design context. Using a sample of 26 master’s-level students in a course on AI’s societal aspects, this study examines the impact of (1) prior knowledge and attitudes toward technology and learning, and (2) personal value orientations. Results indicated that both students’ self-transcendent goals and desire for material benefits have positive correlations with collaborative, cognitive, and affective outcomes. However, self-transcendent goals are a stronger predictor, as determined by stepwise regression analysis. Attitudes toward technology were positively associated with cognitive and affective outcomes during the first week, though this association did not persist into the second week. Most other attitudinal variables were not associated with collaborative or cognitive outcomes but were linked to negative affect. These findings suggest that students’ personal values correlate more strongly with the collaborative, cognitive, and affective aspects of using GenAI for educational game design than their attitudinal attributes. This result may indicate that the design experience neutralizes the effect of earlier attitudes towards technology, with major influences deriving from personal value orientations. If these findings are borne out, this study has implications for the utility of current educational efforts to change students’ attitudes towards technology, especially those that encourage more women to study STEM topics. Thus, it may be that, rather than pro-technology instruction, a focus on value orientations would be a more effective way to encourage diverse students to participate in STEM programs.
Full article

Open Access Review
A Comprehensive Survey of MapReduce Models for Processing Big Data
by
Hemn Barzan Abdalla, Yulia Kumar, Yue Zhao and Davide Tosi
Big Data Cogn. Comput. 2025, 9(4), 77; https://doi.org/10.3390/bdcc9040077 - 27 Mar 2025
Abstract
With the rapid increase in the amount of big data, traditional software tools struggle to handle it, which is a major concern for research and industry. In addition, the management and processing of big data have become more difficult, increasing security threats. Many fields have had difficulty making full use of these large-scale data to support decision-making. Data mining methods have improved tremendously at identifying patterns in larger datasets. MapReduce models provide great advantages for in-depth data evaluation and are compatible with various applications. This survey analyses the various MapReduce models utilized for big data processing, the techniques harnessed in the reviewed literature, and the remaining challenges. Furthermore, this survey reviews the major advancements of diverse types of MapReduce models, namely Hadoop, Hive, Pig, MongoDB, Spark, and Cassandra. Besides reliable MapReduce approaches, this survey also examines various metrics utilized for assessing the performance of big data processing across applications. More specifically, this review summarizes the background of MapReduce and its terminology, types, techniques, and applications to advance the MapReduce framework for big data processing. This study provides useful insights for conducting further experiments in the field of processing and managing big data.
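As standard background for the MapReduce model surveyed above (a generic word-count sketch, not taken from the survey): map emits key/value pairs, the shuffle groups values by key, and reduce aggregates each group:

from collections import defaultdict

def map_phase(document):
    for word in document.split():
        yield word.lower(), 1                 # emit (key, value)

def reduce_phase(key, values):
    return key, sum(values)                   # aggregate per key

documents = ["big data needs MapReduce", "big models need big data"]

# Shuffle: group intermediate values by key across all mappers.
groups = defaultdict(list)
for doc in documents:
    for key, value in map_phase(doc):
        groups[key].append(value)

counts = dict(reduce_phase(k, v) for k, v in groups.items())
print(counts)   # e.g. {'big': 3, 'data': 2, ...}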
Full article

Open Access Article
A Quantum Key Distribution Routing Scheme for a Zero-Trust QKD Network System: A Moving Target Defense Approach
by
Esraa M. Ghourab, Mohamed Azab and Denis Gračanin
Big Data Cogn. Comput. 2025, 9(4), 76; https://doi.org/10.3390/bdcc9040076 - 26 Mar 2025
Abstract
Quantum key distribution (QKD), a key application of quantum information technology and “one-time pad” (OTP) encryption, enables secure key exchange with information-theoretic security, meaning its security is grounded in the laws of physics rather than computational assumptions. However, in QKD networks, achieving long-distance communication often requires trusted relays to mitigate channel losses. This reliance introduces significant challenges, including vulnerabilities to compromised relays and the high costs of infrastructure, which hinder widespread deployment. To address these limitations, we propose a zero-trust spatiotemporal diversification framework for multipath–multi-key distribution. The proposed approach enhances the security of end-to-end key distribution by dynamically shuffling key exchange routes, enabling secure multipath key distribution. Furthermore, it incorporates a dynamic adaptive path recovery mechanism that leverages a recursive penalty model to identify and exclude suspicious or compromised relay nodes. To validate this framework, we conducted extensive simulations and compared its performance against established multipath QKD methods. The results demonstrate that the proposed approach achieves a 97.22% lower attack success rate with 20% attacker pervasiveness and a 91.42% reduction in the attack success rate for single key transmission. The total security percentage improves by 35% under 20% attacker pervasiveness, and security enhancement reaches 79.6% when increasing QKD pairs. Additionally, the proposed scheme exhibits an 86.04% improvement in defense against interception and nearly doubles the key distribution success rate compared to traditional methods. The results demonstrate that the proposed approach significantly improves both security robustness and efficiency, underscoring its potential to advance the practical deployment of QKD networks.
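A toy sketch of the route-shuffling idea with a recursive penalty on suspicious relays (our illustrative reading, not the paper's model): relay scores decay multiplicatively when flagged, and key shares are routed over the currently most trusted paths:

import random

trust = {"r1": 1.0, "r2": 1.0, "r3": 1.0, "r4": 1.0}    # hypothetical relay trust scores
paths = [["r1", "r2"], ["r3"], ["r4", "r2"]]             # candidate relay paths

def penalize(relay, factor=0.5):
    trust[relay] *= factor                               # compounding penalty on each flag

def path_score(path):
    s = 1.0
    for r in path:
        s *= trust[r]
    return s

def pick_paths(k=2):
    # Shuffle to avoid deterministic routes, then keep the k most trusted paths.
    candidates = paths[:]
    random.shuffle(candidates)
    return sorted(candidates, key=path_score, reverse=True)[:k]

penalize("r2")            # relay r2 behaved suspiciously
print(pick_paths())       # paths avoiding r2 are now preferred for key shares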
Full article

Open Access Article
Explicit and Implicit Knowledge in Large-Scale Linguistic Data and Digital Footprints from Social Networks
by
Maria Pilgun
Big Data Cogn. Comput. 2025, 9(4), 75; https://doi.org/10.3390/bdcc9040075 - 25 Mar 2025
Abstract
This study explores explicit and implicit knowledge in large-scale linguistic data and digital footprints from social networks. This research aims to develop and test algorithms for analyzing both explicit and implicit information in user-generated content and digital interactions. A dataset of social media discussions on avian influenza in Moscow (RF) was collected and analyzed (tokens: 1,316,387; engagement: 108,430; audience: 39,454,014), with data collection conducted from 1 March 2023, 00:00 to 31 May 2023, 23:59. This study employs Brand Analytics, TextAnalyst 2.32, ChatGPT o1, o1-mini, AutoMap, and Tableau as analytical tools. The findings highlight the advantages and limitations of explicit and implicit information analysis for social media data interpretation. Explicit knowledge analysis is more predictable and suitable for tasks requiring quantitative assessments or classification of explicit data, while implicit knowledge analysis complements it by enabling a deeper understanding of subtle emotional and contextual nuances, particularly relevant for public opinion research, social well-being assessment, and predictive analytics. While explicit knowledge analysis provides structured insights, it may overlook hidden biases, whereas implicit knowledge analysis reveals underlying issues but requires complex interpretation. The research results emphasize the importance of integrating various scientific paradigms and artificial intelligence technologies, particularly large language models (LLMs), in the analysis of social networks.
Full article
(This article belongs to the Special Issue Research Progress in Artificial Intelligence and Social Network Analysis)
Open Access Article
A Data Mining Approach to Identify NBA Player Quarter-by-Quarter Performance Patterns
by
Dimitrios Iatropoulos, Vangelis Sarlis and Christos Tjortjis
Big Data Cogn. Comput. 2025, 9(4), 74; https://doi.org/10.3390/bdcc9040074 - 25 Mar 2025
Abstract
Sports analytics is a fast-evolving domain using advanced data science methods to find useful insights. This study explores the way NBA player performance metrics evolve from quarter to quarter and affect game outcomes. Using Association Rule Mining, we identify key offensive, defensive, and overall impact metrics that influence success in both regular-season and playoff contexts. Defensive metrics become more critical in late-game situations, while offensive efficiency is paramount in the playoffs. Ball handling peaks in the second quarter, affecting early momentum, while overall impact metrics, such as Net Rating and Player Impact Estimate, consistently correlate with winning. On the collected dataset, we performed preprocessing, applying advanced anomaly detection and discretization techniques. By segmenting performance into five categories—Offense, Defense, Ball Handling, Overall Impact, and Tempo—we uncovered strategic insights for teams, coaches, and analysts. Results emphasize the importance of managing player fatigue, optimizing lineups, and adjusting strategies based on quarter-specific trends. The analysis provides actionable recommendations for coaching decisions, roster management, and player evaluation. Future work can extend this approach to other leagues and incorporate additional contextual factors to refine evaluation and predictive models.
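A minimal example of the support/confidence computations behind Association Rule Mining on discretized quarter-level metrics (synthetic one-hot data and an illustrative rule, not the study's dataset or parameters):

import pandas as pd

# Synthetic discretized records (one row per player-game).
df = pd.DataFrame({
    "Q4_high_defense": [1, 1, 0, 1, 0, 1, 1, 0],
    "high_net_rating": [1, 1, 0, 1, 0, 0, 1, 0],
    "team_won":        [1, 1, 0, 1, 0, 1, 1, 0],
})

def rule_stats(df, antecedent, consequent):
    both = ((df[antecedent] == 1) & (df[consequent] == 1)).mean()
    ante = (df[antecedent] == 1).mean()
    return {"support": both, "confidence": both / ante}

# e.g. the candidate rule "strong fourth-quarter defense -> win"
print(rule_stats(df, "Q4_high_defense", "team_won"))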
Full article

Open Access Article
Home Electricity Sourcing: An Automated System to Optimize Prices for Dynamic Electricity Tariffs
by
Juan Felipe Garcia Sierra, Jesús Fernández Fernández, Diego Fernández-Lázaro, Ángel Manuel Guerrero-Higueras, Virginia Riego del Castillo and Lidia Sánchez-González
Big Data Cogn. Comput. 2025, 9(4), 73; https://doi.org/10.3390/bdcc9040073 - 21 Mar 2025
Abstract
Governments are focusing on citizen participation in the energy transition, e.g., with dynamic electricity tariffs, which pass part of the wholesale price volatility to end users. While often the cheapest alternative, these tariffs require micromanagement for optimization. In this research, an automated system capable of supplying electricity for home use at minimal cost called Smart Relays and Controller (SRC) is presented. SRC scrapes prices online, charges a battery system during the cheapest time slots and supplies electricity to the home energy system from the cheapest source, either the battery or the grid, while optimizing battery life. To validate the system, a comparison is made between SRC, a programmable scheduler and PVPC (Spain’s dynamic tariff) using twenty-eight months of hourly historical data. SRC is shown to be superior to both the scheduler and PVPC, with the scheduler performing worse than SRC but better than PVPC (T.T., p < 0.001). SRC achieves a 36.16% discount over PVPC, 13.89% when factoring in battery life. The savings are 44.24% higher with SRC than with a scheduler. Neither inflation nor incentives to reduce costs are considered. While we studied Spain’s tariff, SRC would work in any country offering dynamic electricity tariffs, with benefit margins dependent on their particularities.
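A stripped-down sketch of the price-driven decision described above (hypothetical prices and battery capacity, not the SRC implementation): charge during the cheapest hours of the day and serve the home from the battery during the dearest ones:

hourly_prices = [0.21, 0.18, 0.15, 0.12, 0.11, 0.13, 0.19, 0.25,
                 0.30, 0.28, 0.26, 0.24, 0.22, 0.20, 0.19, 0.21,
                 0.27, 0.33, 0.38, 0.36, 0.31, 0.27, 0.24, 0.22]  # EUR/kWh, hypothetical

battery_hours = 6   # assume the battery can cover 6 hours of household load

# Charge during the cheapest slots, discharge during the most expensive ones.
charge_hours = sorted(range(24), key=lambda h: hourly_prices[h])[:battery_hours]
discharge_hours = sorted(range(24), key=lambda h: hourly_prices[h],
                         reverse=True)[:battery_hours]

for h in range(24):
    if h in charge_hours:
        action = "charge battery (grid)"
    elif h in discharge_hours:
        action = "supply home from battery"
    else:
        action = "supply home from grid"
    print(f"{h:02d}:00  {hourly_prices[h]:.2f}  {action}")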
Full article

Open Access Article
TACO: Adversarial Camouflage Optimization on Trucks to Fool Object Detectors
by
Adonisz Dimitriu, Tamás Vilmos Michaletzky and Viktor Remeli
Big Data Cogn. Comput. 2025, 9(3), 72; https://doi.org/10.3390/bdcc9030072 - 19 Mar 2025
Abstract
Adversarial attacks threaten the reliability of machine learning models in critical applications like autonomous vehicles and defense systems. As object detectors become more robust with models like YOLOv8, developing effective adversarial methodologies is increasingly challenging. We present Truck Adversarial Camouflage Optimization (TACO), a novel framework that generates adversarial camouflage patterns on 3D vehicle models to deceive state-of-the-art object detectors. Adopting Unreal Engine 5, TACO integrates differentiable rendering with a Photorealistic Rendering Network to optimize adversarial textures targeted at YOLOv8. To ensure the generated textures are both effective in deceiving detectors and visually plausible, we introduce the Convolutional Smooth Loss function, a generalized smooth loss function. Experimental evaluations demonstrate that TACO significantly degrades YOLOv8’s detection performance, achieving an AP@0.5 of 0.0099 on unseen test data. Furthermore, these adversarial patterns exhibit strong transferability to other object detection models such as Faster R-CNN and earlier YOLO versions.
Full article

Open Access Article
Data-Driven Forecasting of CO2 Emissions in Thailand’s Transportation Sector Using Nonlinear Autoregressive Neural Networks
by
Thananya Janhuaton, Supanida Nanthawong, Panuwat Wisutwattanasak, Chinnakrit Banyong, Chamroeun Se, Thanapong Champahom, Vatanavongs Ratanavaraha and Sajjakaj Jomnonkwao
Big Data Cogn. Comput. 2025, 9(3), 71; https://doi.org/10.3390/bdcc9030071 - 17 Mar 2025
Abstract
Accurately forecasting CO2 emissions in the transportation sector is essential for developing effective mitigation strategies. This study uses an annual dataset spanning 1993 to 2022 to evaluate the predictive performance of three methods: NAR, NARX, and GA-T2FIS. Among these, NARX-VK, which incorporates vehicle kilometers (VK) and economic variables, demonstrated the highest predictive accuracy, achieving a MAPE of 2.2%, MAE of 1621.449 × 10³ tons, and RMSE of 1853.799 × 10³ tons. This performance surpasses that of NARX-RG, which relies on registered vehicle data and achieved a MAPE of 3.7%. While GA-T2FIS exhibited slightly lower accuracy than NARX-VK, it demonstrated robust performance in handling uncertainties and nonlinear relationships, achieving a MAPE of 2.6%. Sensitivity analysis indicated that changes in VK significantly influence CO2 emissions. The Green Transition Scenario, assuming a 10% reduction in VK, led to a 4.4% decrease in peak CO2 emissions and a 4.1% reduction in total emissions. Conversely, the High Growth Scenario, modeling a 10% increase in VK, resulted in a 7.2% rise in peak emissions and a 4.1% increase in total emissions.
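For reference, the error metrics quoted above follow the standard definitions, as in this small sketch (generic formulas with placeholder numbers, not the study's data):

import numpy as np

actual = np.array([70_000, 72_500, 74_100], dtype=float)      # hypothetical 10^3-ton values
forecast = np.array([68_900, 73_800, 73_200], dtype=float)

mape = np.mean(np.abs((actual - forecast) / actual)) * 100
mae = np.mean(np.abs(actual - forecast))
rmse = np.sqrt(np.mean((actual - forecast) ** 2))
print(f"MAPE={mape:.1f}%  MAE={mae:.1f}  RMSE={rmse:.1f}")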
Full article

Open Access Article
Generation Z’s Travel Behavior and Climate Change: A Comparative Study for Greece and the UK
by
Athanasios Demiris, Grigorios Fountas, Achille Fonzone and Socrates Basbas
Big Data Cogn. Comput. 2025, 9(3), 70; https://doi.org/10.3390/bdcc9030070 - 17 Mar 2025
Abstract
Climate change is one of the most pressing global threats, endangering the sustainability of the planet and quality of life, whilst urban mobility significantly contributes to exacerbating its effects. Recently, policies aimed at mitigating these effects have been implemented, emphasizing the promotion of sustainable travel culture. Prior research has indicated that both environmental awareness and regulatory efforts could encourage the shift towards greener mobility; however, factors that affect young people’s travel behavior remain understudied. This study examined whether and how climate change impacts travel behavior, particularly among Generation Z in Greece. A comprehensive online survey was conducted, from 31 March to 8 April 2024, within a Greek academic community, yielding 904 responses from Generation Z individuals. The design of the survey was informed by an adaptation of Triandis’ Theory of Interpersonal Behavior. The study also incorporated a comparative analysis using data from the UK’s National Travel Attitudes Survey (NTAS), offering insights from a different cultural and socio-economic context. Blending an Exploratory Factor Analysis and latent variable ordered probit and logit models, the key determinants of the willingness to reduce car use and self-reported reduction in car use in response to climate change were identified. The results indicate that emotional factors, social roles, and norms, along with socio-demographic characteristics, current behaviors, and local environmental concerns, significantly influence car-related travel choices among Generation Z. For instance, concerns about local air quality are consistently correlated with a higher likelihood of having already reduced car use due to climate change and a higher willingness to reduce car travel in the future. The NTAS data reveal that flexibility in travel habits and social norms are critical determinants of the willingness to reduce car usage. The findings of the study highlight the key role of policy interventions, such as the implementation of Low-Emission Zones, leveraging social media for environmental campaigns, and enhancing infrastructure for active travel and public transport to foster broader cultural shifts towards sustainable travel behavior among Generation Z.
Full article

Open Access Article
Defining, Detecting, and Characterizing Power Users in Threads
by
Gianluca Bonifazi, Christopher Buratti, Enrico Corradini, Michele Marchetti, Federica Parlapiano, Domenico Ursino and Luca Virgili
Big Data Cogn. Comput. 2025, 9(3), 69; https://doi.org/10.3390/bdcc9030069 - 16 Mar 2025
Abstract
Threads is a new social network that was launched by Meta in July 2023 and conceived as a direct alternative to X. It is a unique case study in the social network landscape, as it is content-based like X, but has an Instagram-based growth model, which makes it significantly different from X. As it was launched recently, studies on Threads are still scarce. One of the most common investigations in social networks regards power users (also called influencers, lead users, influential users, etc.), i.e., those users who can significantly influence information dissemination, user behavior, and ultimately the current dynamics and future development of a social network. In this paper, we want to contribute to the knowledge of Threads by showing that there are indeed power users in this social network and then attempting to understand the main features that characterize them. The definition of power users that we adopt here is novel and leverages the four classical centrality measures of Social Network Analysis. This ensures that our study of power users can benefit from the enormous knowledge on centrality measures that has accumulated in the literature over the years. In order to conduct our analysis, we had to build a Threads dataset, as none existed in the literature that contained the information necessary for our studies. Once we built such a dataset, we decided to make it open and thus available to all researchers who want to perform analyses on Threads. This dataset, the new definition of power users, and the characterization of Threads power users are the main contributions of this paper.
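As a generic illustration of the four classical centrality measures on which the power-user definition builds (a toy interaction graph, not the Threads dataset), using networkx:

import networkx as nx

# Toy interaction graph: an edge means one user replied to or reposted another.
G = nx.Graph([("a", "b"), ("a", "c"), ("a", "d"), ("b", "c"), ("d", "e")])

centralities = {
    "degree": nx.degree_centrality(G),
    "closeness": nx.closeness_centrality(G),
    "betweenness": nx.betweenness_centrality(G),
    "eigenvector": nx.eigenvector_centrality(G),
}

# A crude power-user heuristic (illustrative only): users scoring high on all four measures.
for user in G.nodes:
    score = sum(c[user] for c in centralities.values())
    print(user, round(score, 3))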
Full article
(This article belongs to the Special Issue Research Progress in Artificial Intelligence and Social Network Analysis)
Open Access Article
Margin-Based Training of HDC Classifiers
by
Laura Smets, Dmitri Rachkovskij, Evgeny Osipov, Werner Van Leekwijck, Olexander Volkov and Steven Latré
Big Data Cogn. Comput. 2025, 9(3), 68; https://doi.org/10.3390/bdcc9030068 - 14 Mar 2025
Abstract
The explicit kernel transformation of input data vectors to their distributed high-dimensional representations has recently been receiving increasing attention in the field of hyperdimensional computing (HDC). The main argument is that such representations enable simpler last-leg classification models, often referred to as HDC classifiers. HDC models have obvious advantages over resource-intensive deep learning models for use cases requiring fast, energy-efficient computations both for model training and deployment. Recent approaches to training HDC classifiers have primarily focused on various methods for selecting individual learning rates for incorrectly classified samples. In contrast to these methods, we propose an alternative strategy where the decision to learn is based on a margin applied to the classifier scores. This approach ensures that even correctly classified samples within the specified margin are utilized in training the model. This leads to improved test performances while maintaining a basic learning rule with a fixed (unit) learning rate. We propose and empirically evaluate two such strategies, incorporating either an additive or multiplicative margin, on the standard subset of the UCI collection, consisting of 121 datasets. Our approach demonstrates superior mean accuracy compared to other HDC classifiers with iterative error-correcting training.
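A schematic sketch of the margin-based update rule described above (our simplified reading with an additive margin, unit learning rate, and random placeholder hypervectors; not the authors' code):

import numpy as np

rng = np.random.default_rng(0)
dim, n_classes = 1000, 3
X = rng.choice([-1.0, 1.0], size=(200, dim))     # placeholder hypervectors
y = rng.integers(0, n_classes, size=200)

prototypes = np.zeros((n_classes, dim))
margin = 5.0                                      # additive margin (hypothetical value)

for epoch in range(5):
    for x, label in zip(X, y):
        scores = prototypes @ x
        pred = int(np.argmax(scores))
        # Update not only on errors but also on correct predictions inside the margin.
        if pred != label or scores[label] - np.max(np.delete(scores, label)) < margin:
            prototypes[label] += x                # unit learning rate
            prototypes[pred] -= x * (pred != label)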
Full article
(This article belongs to the Special Issue Brain-Inspired Hyperdimensional Computing: Theoretical Perspectives and Real-World Applications)

News
27 March 2025
Meet Us at the 2025 International Conference on Intelligent Systems Design and Engineering Applications (ISDEA 2025), 19–21 April 2025, Seoul, South Korea
11 March 2025
Meet Us at the 2025 IEEE International Symposium on Information Theory (ISIT 2025), 22–27 June 2025, Ann Arbor, USA
Topics
Topic in Applied Sciences, BDCC, Future Internet, Information, Sci
Social Computing and Social Network Analysis
Topic Editors: Carson K. Leung, Fei Hao, Giancarlo Fortino, Xiaokang Zhou
Deadline: 30 June 2025
Topic in AI, BDCC, Fire, GeoHazards, Remote Sensing
AI for Natural Disasters Detection, Prediction and Modeling
Topic Editors: Moulay A. Akhloufi, Mozhdeh Shahbazi
Deadline: 25 July 2025
Topic in Algorithms, BDCC, BioMedInformatics, Information, Mathematics
Machine Learning Empowered Drug Screen
Topic Editors: Teng Zhou, Jiaqi Wang, Youyi Song
Deadline: 31 August 2025
Topic in IJERPH, JPM, Healthcare, BDCC, Applied Sciences, Sensors
eHealth and mHealth: Challenges and Prospects, 2nd Edition
Topic Editors: Antonis Billis, Manuel Dominguez-Morales, Anton Civit
Deadline: 31 October 2025

Conferences
Special Issues
Special Issue in BDCC
Security, Privacy, and Trust in Artificial Intelligence Applications
Guest Editor: Giuseppe Maria Luigi Sarnè
Deadline: 23 April 2025
Special Issue in BDCC
Advances in Natural Language Processing and Text Mining
Guest Editors: Zuchao Li, Min Peng
Deadline: 30 April 2025
Special Issue in BDCC
Industrial Data Mining and Machine Learning Applications
Guest Editors: Yung Po Tsang, C. H. Wu, Kit-Fai Pun
Deadline: 30 April 2025
Special Issue in BDCC
Perception and Detection of Intelligent Vision
Guest Editors: Hongshan Yu, Zhengeng Yang, Mingtao Feng, Qieshi Zhang
Deadline: 30 April 2025