Editorial

Computational Intelligence and Machine Learning: Advances in Models and Applications

Grzegorz Dudek
1 Faculty of Electrical Engineering, Częstochowa University of Technology, 42-201 Częstochowa, Poland
2 Faculty of Mathematics and Computer Science, University of Łódź, 90-136 Łódź, Poland
Electronics 2025, 14(8), 1530; https://doi.org/10.3390/electronics14081530
Submission received: 25 March 2025 / Accepted: 9 April 2025 / Published: 10 April 2025

1. Introduction

Computational intelligence (CI) and machine learning (ML) have evolved into foundational pillars of modern data-driven research, with growing impacts across domains such as engineering, medicine, finance, and environmental science [1]. Their capacity to learn patterns from data and adapt to dynamic environments makes them indispensable tools for both academic and industrial innovation. The past decade has seen a surge in interest and the practical deployment of CI and ML models, ranging from classical techniques like decision trees and support vector machines to recent breakthroughs in deep learning and large language models [2].
Despite their progress, the development and application of CI and ML algorithms remain complex and challenging. A persistent issue lies in the appropriate selection of model architectures and training strategies to ensure both learning efficacy and generalization [3]. This challenge becomes even more pronounced in practical contexts where data may be noisy, sparse, high-dimensional, or subject to dynamic shifts. Moreover, the increasing societal reliance on AI systems has heightened the demand for models that are not only accurate but also interpretable, fair, and robust [4].
In response to these demands, recent research has explored both foundational improvements to learning mechanisms and application-specific enhancements. For instance, one of the highlighted contributions in this Special Issue addresses the limitations of traditional recommendation systems by incorporating generative AI with psychological modeling to personalize travel recommendations. Another study introduces self-supervised learning into graph-based collaborative filtering, achieving better representation learning and reducing the reliance on labeled data. Further advancements are seen in the use of preference-aware graph neural networks to filter social signals and in the adaptation of large transformer-based models to improve automatic speech recognition in low-resource languages like Turkish.
This Special Issue brings together ten papers selected from 40 submissions that exemplify the diversity and maturity of current research in CI and ML. These works not only introduce novel algorithms and architectures but also demonstrate how to rigorously evaluate them in real-world settings, ranging from environmental prediction using open-source ML toolkits to bias mitigation in healthcare through synthetic data generation. A recurring theme across these papers is the emphasis on data-centric methodologies, from feature engineering and data augmentation to the design of metrics for fairness, utility, and interpretability.
Together, these studies illustrate the field’s transition toward more specialized, context-aware, and socially responsible AI systems. They reflect the community’s ongoing effort to balance model complexity with usability, accuracy with equity, and innovation with reproducibility. In doing so, they offer valuable insights not only into technical progress but also into the broader implications of deploying intelligent systems across various domains of human activity.

2. Summary of the Contributions

The paper by Aribas and Daglarli addresses the challenge of improving personalized travel recommendations by integrating generative AI with personality models. Classical travel recommendation systems rely on collaborative filtering, content-based filtering, and machine learning models. However, these approaches often fail to capture the complexity of individual user preferences, leading to suboptimal and generic suggestions. Data sparsity, limited adaptability, and an inability to dynamically adjust to evolving user behaviors further limit their effectiveness.
This problem is particularly relevant because the increasing availability of travel-related data presents travelers with an overwhelming number of choices. Without an efficient way to filter through these options, users struggle to find experiences that align with their interests. Traditional recommendation systems do not account for personality-driven preferences, which can significantly impact the suitability of travel suggestions. Advances in artificial intelligence and personalization now make it possible to refine recommendations, improving user satisfaction and engagement. The integration of psychological models into AI-powered systems offers a way to address these limitations, making travel planning more efficient and enjoyable.
The study proposes a novel approach that combines Retrieval-Augmented Generation (RAG) with personality psychology to enhance personalization. The system consists of three key components: a travel data retrieval mechanism, which gathers relevant information from online sources using web crawlers and a vector-based search; an AI-driven recommendation model, built on a large language model (LLM) such as ChatGPT; and personality integration, which incorporates the Myers–Briggs Type Indicator (MBTI) and Big Five (BF) traits to refine recommendations based on psychological factors. The system continuously adapts to user feedback, improving over time.
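As a rough illustration of the retrieve-then-generate pattern the paper describes, the following Python sketch wires together a toy retrieval index, a personality-conditioned prompt, and a placeholder for the LLM call. All names here (ToyIndex, Traveler, build_prompt) are hypothetical and are not taken from the authors' implementation.

    from dataclasses import dataclass

    class ToyIndex:
        """Stand-in for a vector store over crawled travel documents;
        a real system would rank by embedding similarity (e.g., FAISS)
        rather than keyword overlap."""
        def __init__(self, docs):
            self.docs = docs
        def search(self, query, k=3):
            score = lambda d: sum(w in d.lower() for w in query.lower().split())
            return sorted(self.docs, key=score, reverse=True)[:k]

    @dataclass
    class Traveler:
        mbti: str        # e.g., "ENFP"
        big_five: dict   # e.g., {"openness": 0.8, "extraversion": 0.6}
        query: str

    def build_prompt(user: Traveler, passages: list) -> str:
        """Condition the generative model on retrieved context plus
        personality traits, so suggestions reflect both."""
        context = "\n".join(passages)
        return (f"Traveler personality: MBTI={user.mbti}, Big Five={user.big_five}.\n"
                f"Relevant travel information:\n{context}\n"
                f"Suggest experiences tailored to this traveler: {user.query}")

    index = ToyIndex(["The Grand Bazaar offers crowded, social market stalls.",
                      "Quiet hillside monasteries suit reflective visitors."])
    user = Traveler("ENFP", {"openness": 0.8, "extraversion": 0.6},
                    "what should I see in Istanbul?")
    prompt = build_prompt(user, index.search(user.query))
    print(prompt)  # this prompt would then be sent to an LLM such as ChatGPT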
The evaluation results show a user satisfaction rate of 78%, outperforming traditional methods. Accuracy in aligning recommendations with user preferences reached 82%, with performance varying by personality type (85% for extroverts, 75% for introverts). Precision and recall scores of 0.84 and 0.78 further validate its effectiveness. A case study at Istanbul’s Grand Bazaar demonstrates the system’s impact, increasing engagement by 25% and satisfaction by 30%. Statistical tests confirm the significance of these results.
The study introduces several innovations, including the first integration of personality models into generative AI for travel recommendations. Compared to traditional recommendation models, it improves user satisfaction by 18% and accuracy by 14%. Future research directions include expanding personality model integration, refining contextual personalization, and enhancing explainable AI methods. The study establishes a foundation for AI-driven personalization with applications beyond travel, including e-commerce and healthcare.
The paper by Zhu et al. builds upon previous research in recommendation systems, particularly in the application of graph neural networks (GNNs) for collaborative filtering. While existing models leverage GNNs to improve recommendations, they suffer from limitations such as sparse supervised signals, noise in user–item interactions, and an inability to effectively model long-tail items. The study introduces Self-Supervised Graph Attention Collaborative Filtering (SGACF) as a novel approach to addressing these challenges.
The core problem being solved is the inefficiency of existing GNN-based recommendation systems in handling sparse interactions and noisy data. Traditional models struggle to accurately represent user preferences, especially for long-tail items that lack sufficient interactions. This issue is crucial because it affects the quality and diversity of recommendations, limiting the personalization potential of recommender systems. Additionally, most models operate in a fully supervised paradigm, which heavily relies on explicit user feedback that is often scarce and biased.
To address these issues, the proposed method incorporates self-supervised learning (SSL) into a graph attention network (GAT)-based collaborative filtering framework. The model consists of two primary components: a supervised learning task using a multi-head GAT and an auxiliary self-supervised task that enhances representation learning. The GAT component refines node representations by assigning different importance weights to neighboring nodes, mitigating the impact of noisy data. The self-supervised task employs contrastive learning, generating multiple views of each node through graph data augmentation techniques such as node masking, edge masking, and layer masking. The model maximizes the agreement between different views of the same node while minimizing the agreement between views of different nodes.
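The contrastive objective can be made concrete with a short sketch. The InfoNCE-style loss below treats two augmented views of the same node as a positive pair and all other pairings as negatives; it illustrates the general mechanism rather than SGACF's exact objective, whose weighting and augmentation details are specific to the paper.

    import torch
    import torch.nn.functional as F

    def infonce_loss(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.2) -> torch.Tensor:
        """Contrastive loss over two views of the same node set.
        z1, z2: [num_nodes, dim] embeddings from two masked graph views;
        row i of z1 and row i of z2 form the positive pair."""
        z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
        logits = z1 @ z2.t() / tau           # pairwise cosine similarities
        labels = torch.arange(z1.size(0))    # positives lie on the diagonal
        return F.cross_entropy(logits, labels)

    # Toy usage: embeddings for 4 nodes under two augmentations.
    z1, z2 = torch.randn(4, 16), torch.randn(4, 16)
    print(infonce_loss(z1, z2))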
This study conducts extensive experiments on three benchmark datasets—Yelp2018, Gowalla, and Amazon—to evaluate the effectiveness of SGACF. The results demonstrate significant improvements in accuracy and robustness compared to existing methods. The model outperforms state-of-the-art recommendation models, including Neural Matrix Factorization (NeuMF), Spectral Collaborative Filtering, and Neural Graph Collaborative Filtering (NGCF). Notably, SGACF achieves better recall and normalized discounted cumulative gain (NDCG) scores, particularly in mitigating the long-tail problem by enhancing the representation of low-degree nodes.
The key innovations of this work include the integration of self-supervised contrastive learning into graph-based recommendation models, the use of multi-head graph attention to improve representation learning, and the introduction of novel data augmentation strategies for graph-based learning. Unlike previous approaches, this method effectively reduces the reliance on explicit user feedback, improves model generalization, and enhances recommendation diversity.
The contributions of this research are substantial. It establishes a new paradigm for self-supervised learning in recommendation systems, demonstrating that auxiliary self-supervised tasks can significantly enhance supervised learning. The introduction of graph attention networks in combination with self-supervised contrastive learning provides a novel approach to tackling the challenges of data sparsity, interaction noise, and long-tail recommendations. Future research directions include further exploration of data augmentation techniques for graph-based learning, improving contrastive learning frameworks, and extending self-supervised learning to broader recommendation scenarios. This work marks a significant advancement in AI-driven recommendation models, offering a more efficient, scalable, and accurate approach to personalized recommendations.
The paper by Xu et al. explores the challenge of enhancing social recommendation systems by introducing a preference-aware graph neural network approach. Traditional recommendation systems, especially those based on collaborative filtering, often suffer from data sparsity, which limits their ability to provide personalized recommendations. Many existing social recommendation models incorporate user relationships to mitigate this issue, but they frequently fail to properly filter out irrelevant or negative information from high-order neighbors. This results in a decline in recommendation accuracy and effectiveness.
This problem is crucial because social recommendation systems are increasingly used in e-commerce, social media, and content recommendation platforms. The challenge of information overload makes it difficult for users to find relevant content. Introducing social connections can enhance recommendation accuracy, but only if those connections are meaningfully filtered to ensure that only relevant social signals contribute to recommendations.
To address these challenges, the authors propose the Preference-Aware Light Graph Convolutional Network (PLGCN). This model consists of several key components. First, it includes an unsupervised subgraph construction module, which clusters users into subgraphs based on their preferences. By grouping users with similar preferences, the PLGCN effectively filters out negative or irrelevant messages from users with different interests. Second, a feature aggregation module is designed to combine user embeddings with social and interaction information more effectively. Finally, the model employs a lightweight GNN framework, removing nonlinear activation and feature transformation operations to prevent overfitting and improve computational efficiency.
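A minimal sketch of the lightweight propagation rule this family of models uses (LightGCN-style, which PLGCN builds on): embeddings are repeatedly averaged over neighbors with no activation or weight matrices, and the layer outputs are combined. PLGCN's subgraph construction and feature aggregation modules are omitted here.

    import torch

    def light_propagate(adj_norm: torch.Tensor, emb: torch.Tensor,
                        num_layers: int = 3) -> torch.Tensor:
        """Propagation without nonlinear activation or feature transformation:
        each layer is a neighborhood average under the symmetrically
        normalized adjacency D^{-1/2} A D^{-1/2}; layer outputs are
        combined by a simple mean."""
        layers = [emb]
        for _ in range(num_layers):
            emb = adj_norm @ emb          # pure linear message passing
            layers.append(emb)
        return torch.stack(layers).mean(dim=0)

    # Toy usage: 4 nodes with 8-dimensional embeddings.
    adj = torch.eye(4)                    # placeholder normalized adjacency
    final = light_propagate(adj, torch.randn(4, 8))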
The authors conducted comprehensive experiments on two real-world datasets, LastFM and Ciao, to evaluate the performance of the PLGCN. The results indicate that the PLGCN outperforms state-of-the-art methods, particularly in addressing the cold-start problem, where new users or items have limited interaction data. Compared to baseline models such as NGCF, LightGCN, and SocialLGN, the PLGCN achieved superior precision, recall, and NDCG scores, demonstrating its effectiveness in providing more accurate and relevant recommendations.
The study introduces several key innovations. The preference-aware subgraph construction module represents a novel approach to filtering negative information in social recommendation systems, significantly improving recommendation performance. The lightweight GNN framework reduces model complexity while maintaining high accuracy, making it more suitable for large-scale applications. The feature aggregation module enhances user representations by integrating interaction and social information in a more structured way.
In terms of contributions, this work advances the field of social recommendation by introducing an efficient and scalable model that outperforms existing GNN-based recommendation approaches. The proposed methodology demonstrates improved performance in cold-start scenarios, which remains a major challenge in recommendation systems. The study also highlights the potential for further enhancements, including incorporating additional social features such as trust levels and exploring dynamic social networks where user preferences evolve over time. This research provides a strong foundation for future developments in AI-driven personalized recommendations, with practical applications extending beyond social recommendations to e-commerce, online streaming platforms, and digital marketing strategies.
The study by Polat et al. investigates the development and optimization of an automatic speech recognition (ASR) system for Turkish using the Whisper architecture and evaluates the performance gains achieved through fine-tuning with Low-Rank Adaptation (LoRA). The main problem being tackled is the limited performance of ASR systems in low-resource languages such as Turkish. Despite the capabilities of modern transformer-based models like Whisper, their accuracy in Turkish remains suboptimal due to the language’s morphological complexity, dialectal variation, and the scarcity of high-quality labeled datasets. These limitations make it difficult to achieve reliable, scalable ASR performance in real-world Turkish applications.
To overcome this, the authors implement an end-to-end ASR system using Whisper and fine-tune it with the LoRA technique. Whisper is based on a transformer architecture known for its ability to handle multilingual, noisy, and long-context inputs effectively. However, Whisper’s training is biased toward high-resource languages like English. LoRA addresses the challenge of fine-tuning large-scale models by introducing low-rank trainable matrices, drastically reducing the number of parameters to be updated during training. This makes the fine-tuning process more computationally efficient and accessible, particularly for low-resource languages.
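The mechanism behind LoRA is compact enough to sketch from scratch. The PyTorch module below is a generic illustration with placeholder layer sizes, not the authors' Whisper-specific configuration; in practice, adapters would typically be attached to the attention projections of a pretrained model via a library such as Hugging Face PEFT.

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """Low-Rank Adaptation: y = W x + (alpha/r) * B A x, with the
        pretrained W frozen and only the rank-r factors A, B trained."""
        def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad = False                        # freeze pretrained weights
            self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
            self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
            self.scale = alpha / r

        def forward(self, x):
            return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())

    layer = LoRALinear(nn.Linear(768, 768), r=8)
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    total = sum(p.numel() for p in layer.parameters())
    print(f"trainable: {trainable}/{total}")  # ~12k of ~600k parameters

The parameter count printed at the end is the point of the technique: only the low-rank factors are updated, which is what makes fine-tuning a large model such as Whisper tractable in low-resource settings.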
The study uses five Turkish speech datasets—METU MS, TNST, Mozilla Common Voice, FLEURS, and TASRT—to evaluate the system’s performance before and after fine-tuning. The results show significant improvements in word error rate (WER) and character error rate (CER), especially for the Whisper medium and large models after applying LoRA. For example, WER was reduced by up to 52%, with corresponding decreases in CER, demonstrating the effectiveness of LoRA-enhanced fine-tuning. The paper also includes a comparative analysis with the Google USM ASR model, showing that the Whisper-large-v3 model outperforms Google’s system on most datasets in both WER and CER.
A key advancement of this study is the application of LoRA to the Whisper model for Turkish ASR, combined with a thorough evaluation across multiple speech datasets and targeted improvements to dataset quality. Through the use of a transformer-based architecture optimized via a parameter-efficient fine-tuning approach, the research enhances the adaptability of large-scale ASR systems for languages with limited resources.
The study makes two primary contributions: it demonstrates that Whisper can be effectively adapted to Turkish using LoRA, and it provides a framework for improving ASR performance in other low-resource languages with similar challenges. The study underscores the value of transformer-based models combined with efficient fine-tuning techniques and sets a precedent for further research in multilingual, resource-constrained ASR development.
The paper by Huang and Li proposes a novel framework called GGTr to address the problem of human motion prediction, which involves forecasting future body movements based on past pose sequences. This task is particularly challenging due to the high complexity, variability, and uncertainty of human motion, which involves intricate spatial–temporal dependencies among body joints. Existing models often fail to simultaneously capture both local and global temporal dynamics or accurately represent spatial interactions between joints, limiting their performance in real-world applications such as robotics, surveillance, and human–computer interaction.
The authors address these limitations by proposing a new architecture that integrates Graph Convolutional Networks (GCNs), Gated Recurrent Units (GRUs), and transformer layers. The GCN module incorporates a learned positional representation, allowing the model to capture complex spatial relationships between joints beyond fixed adjacency matrices. GRUs are used to model local temporal dependencies in joint motion, while the transformer layers extract long-range temporal patterns, enabling the network to effectively handle both short-term transitions and long-term dynamics within human motion sequences.
The GGTr model is trained end-to-end using the mean per joint position error (MPJPE) as a loss function and is optimized with the AdamW optimizer. Evaluations are conducted on two benchmark datasets, Human3.6M and CMU-MoCap, where the proposed framework consistently outperforms state-of-the-art methods across both short-term and long-term motion prediction tasks. The results show especially strong performance improvements for complex, irregular, and non-periodic movements, where traditional models often struggle.
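The MPJPE loss mentioned above is straightforward to state in code. The sketch below follows the standard formulation (per-joint Euclidean error averaged over joints and frames); any batching details specific to GGTr's training setup are omitted.

    import torch

    def mpjpe(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        """Mean per joint position error: the Euclidean distance between
        predicted and ground-truth joint coordinates, averaged over the
        batch, time steps, and joints.
        pred, target: [batch, frames, joints, 3] tensors of 3D positions."""
        return torch.norm(pred - target, dim=-1).mean()

    # Toy usage: batch of 2 sequences, 10 frames, 22 joints.
    pred, target = torch.randn(2, 10, 22, 3), torch.randn(2, 10, 22, 3)
    loss = mpjpe(pred, target)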
Among the novel contributions of this work is the integration of position-aware GCNs with temporal modeling using GRUs and transformer layers, creating a unified framework capable of learning intricate spatial–temporal dependencies. The use of joint-specific attention mechanisms allows the model to dynamically assess the relevance of neighboring joints, enhancing spatial representation. This design enables the network to effectively process both short-term and long-term motion patterns, resulting in improved prediction accuracy. The paper also includes detailed ablation studies that validate the role of each architectural component and identifies a transition point at 320 ms, where the complexity of prediction noticeably shifts, revealing further insight into the temporal dynamics of human motion.
The paper by Alruily focuses on the development of an optimized deep learning-based chatbot framework for Arabic, addressing the challenge of limited research and resources available for natural language understanding (NLU) in Arabic. While chatbot technology has seen significant advances for widely used languages like English and Chinese, Arabic remains underrepresented despite being one of the most used languages online. The complexity of Arabic morphology, dialectal variation, and orthographic inconsistency makes it particularly difficult to develop effective NLU systems, especially in closed-domain applications relevant to industry.
To address this, the authors propose ArRASA, a closed-domain Arabic chatbot framework built on the RASA open-source conversational AI platform. The system is structured around a four-phase pipeline: tokenization, feature extraction, intent classification, and entity extraction. Unlike previous rule-based or retrieval-based Arabic chatbots, ArRASA incorporates a transformer-based architecture, notably the Dual Intent and Entity Transformer (DIET), which enables the joint learning of intent recognition and entity tagging. The approach also involves masked language modeling (MLM) and next sentence prediction (NSP) tasks during pre-training to better capture linguistic context. Tokenization and featurization are adapted specifically for Arabic, and the system employs various tokenizers and featurizers (e.g., Arcab, Count Vectorizer, Tf-idf) to optimize input representation. SMOTE (Synthetic Minority Over-sampling Technique) is used to address class imbalance in the training samples.
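The count and TF-IDF featurization steps can be illustrated with scikit-learn equivalents of the RASA components the paper names. The utterances below are invented stand-ins for the paper's custom industrial dataset, and the Arcab tokenization step is omitted.

    from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

    # Hypothetical Arabic utterances standing in for the paper's dataset.
    utterances = ["ما هي ساعات العمل", "أريد حجز موعد جديد", "كيف يمكنني إلغاء الطلب"]

    for vec in (CountVectorizer(), TfidfVectorizer()):
        X = vec.fit_transform(utterances)   # sparse [n_utterances, vocab_size] features
        print(type(vec).__name__, X.shape)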
The system’s architecture is enhanced through several improvements over the baseline DIET model. These include increasing the number of transformer layers, expanding embedding dimensions and hidden layer sizes, and integrating dropout strategies to prevent overfitting. Experimental results show that ArRASA achieves high accuracy: 97% for intent classification and 95% for entity extraction. Performance comparisons with baseline models, including traditional DIET, keyword-based, and fallback classifiers, demonstrate measurable improvements in both tasks. The framework was also evaluated on a custom-built dataset that reflects diverse industrial intents and entities.
This research presents a scalable and domain-adaptable framework for building Arabic language chatbots. By leveraging transformers and optimization techniques tailored for Arabic, the proposed system sets a new standard in the field of Arabic NLU, providing a strong foundation for further development in both industry and research.
The paper by Hu et al. addresses the challenge of detecting anomalies in system log data, a critical task for maintaining the stability and reliability of modern software systems. As these systems generate vast volumes of log data, traditional rule-based and statistical anomaly detection methods become inadequate due to limited scalability, sensitivity to data structure changes, and inability to capture deeper semantic patterns in log sequences. The increasing complexity of systems demands more robust, adaptive, and accurate detection methods capable of identifying subtle and previously unseen anomalies.
To tackle this problem, the authors propose LogADSBERT, a log anomaly detection framework that integrates Sentence-BERT for extracting semantic features from log events with a Bi-LSTM neural network to capture sequential dependencies. The method consists of two primary stages: model training and anomaly detection. In the training phase, a log parser converts raw logs into structured events and triples, which are then used to train the Sentence-BERT-based semantic vector model (T-SBERT). These vectors are arranged into sequences and passed through a Bi-LSTM model trained to learn normal log behavior. During detection, new logs are parsed, transformed into semantic vectors, and analyzed using the trained Bi-LSTM model to identify anomalies based on a similarity threshold.
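A compressed sketch of this two-stage idea follows. The log templates and the pretrained encoder name are stand-ins: the paper trains its own T-SBERT model and adds an attention mechanism to the Bi-LSTM, both omitted here.

    import torch
    import torch.nn as nn
    from sentence_transformers import SentenceTransformer

    # Structured log-event templates as a parser might produce them (invented examples).
    events = ["Receiving block <*> src <*> dest <*>",
              "PacketResponder <*> for block <*> terminating"]

    encoder = SentenceTransformer("all-MiniLM-L6-v2")   # stand-in for the trained T-SBERT
    vectors = torch.tensor(encoder.encode(events))      # [seq_len, dim] semantic vectors

    # Bi-LSTM over the event sequence to model normal log behavior.
    lstm = nn.LSTM(input_size=vectors.size(1), hidden_size=64,
                   bidirectional=True, batch_first=True)
    out, _ = lstm(vectors.unsqueeze(0))                 # [1, seq_len, 128]
    # A detection head would compare predicted next events to observed ones
    # against a similarity threshold to flag anomalies.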
The approach advances current practices by combining semantic feature extraction from Sentence-BERT with sequence modeling through a Bi-LSTM equipped with an attention mechanism. This design enables the model to better understand contextual relationships between log events, increasing both detection accuracy and robustness. The framework also includes a semantic matching algorithm that supports generalization to new log events, addressing a key limitation of prior models, which perform poorly when log formats change or new events appear.
Evaluation on two real-world datasets, HDFS and OpenStack, shows that LogADSBERT outperforms existing deep learning-based methods such as DeepLog and LogAnomaly in terms of precision, recall, and F1-score. The model demonstrates particular strength in handling newly injected log events, maintaining high detection performance even when encountering previously unseen patterns. Experimental results also confirm the method’s resilience across different hyperparameter settings, indicating its adaptability to diverse application environments.
Overall, the study presents a semantically enriched and sequence-aware approach to log anomaly detection that significantly improves accuracy, robustness, and generalization compared to traditional and existing deep learning methods. This work highlights the importance of integrating natural language processing techniques with temporal modeling in system monitoring applications.
The paper by Memiş deals with the formalization and application of picture fuzzy soft matrices (pfs-matrices) in supervised learning, particularly by introducing a new classification algorithm called Picture Fuzzy Soft k-Nearest Neighbor (PFS-kNN). The problem being addressed arises from inconsistencies in earlier definitions of picture fuzzy sets and picture fuzzy soft sets. These inconsistencies limit the reliability and applicability of pfs-sets and their matrix forms in computational tasks, especially those involving uncertain or imprecise information, such as real-world decision-making or classification problems.
The issue is significant because many complex problems, especially in areas like medical diagnosis or preference-based decision-making, involve uncertainty that cannot be effectively captured by classical mathematical tools. Traditional fuzzy set models and even their extensions, like intuitionistic or Pythagorean fuzzy sets, struggle to fully express cases that include partial agreement, disagreement, and abstention (such as in voting scenarios). Picture fuzzy sets address this by introducing three degrees: membership, non-membership, and neutrality. However, without a consistent matrix-based representation and valid mathematical operations, their use in machine learning remains limited.
To resolve this, the author redefines the structure of pfs-matrices to eliminate logical and algebraic contradictions present in earlier models. These matrices allow for the representation of data points with complex uncertainty structures in a way that is suitable for computation. The paper then defines a set of new distance measures—such as Minkowski, Euclidean, and Hamming distances—for comparing pfs-matrices. These distance measures are used in the construction of the PFS-kNN classifier, which adapts the classical k-Nearest Neighbor algorithm to the picture fuzzy soft set context by evaluating similarity between pfs-matrices rather than standard numerical vectors.
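To illustrate, here is a sketch of a Minkowski-type distance over pfs-matrices and the resulting nearest-neighbor vote. The 1/(3mn) normalization is one common convention in the picture fuzzy literature and may differ in detail from the paper's definitions.

    import numpy as np

    def pfs_minkowski(A: np.ndarray, B: np.ndarray, p: float = 2.0) -> float:
        """Minkowski-type distance between two pfs-matrices.
        A, B: arrays of shape [m, n, 3]; the last axis holds the
        (membership, neutrality, non-membership) degrees of each entry.
        p=2 recovers the Euclidean case, p=1 the Hamming case."""
        m, n, _ = A.shape
        return float(((np.abs(A - B) ** p).sum() / (3 * m * n)) ** (1.0 / p))

    def pfs_knn(train: list, labels: list, query: np.ndarray,
                k: int = 3, p: float = 2.0):
        """Classify query by majority vote among its k nearest pfs-matrices."""
        dists = [pfs_minkowski(X, query, p) for X in train]
        nearest = np.argsort(dists)[:k]
        vals, counts = np.unique(np.asarray(labels)[nearest], return_counts=True)
        return vals[np.argmax(counts)]

    # Toy usage: 1x2 pfs-matrices with (mu, eta, nu) entries.
    a = np.array([[[0.6, 0.2, 0.1], [0.3, 0.3, 0.2]]])
    b = np.array([[[0.5, 0.1, 0.3], [0.4, 0.2, 0.2]]])
    print(pfs_knn([a, b], ["sick", "healthy"], a, k=1))  # -> "sick"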
The classifier is tested using four medical datasets from the UCI Machine Learning Repository. The proposed method demonstrates superior performance compared to existing kNN-based classifiers across multiple metrics, including accuracy, precision, recall, and F1-score. In 72 out of 128 evaluation cases, PFS-kNN outperforms the baselines.
What distinguishes this approach is the fusion of a restructured mathematical framework with a practical supervised learning application. The paper not only resolves theoretical flaws in the structure and operations of pfs-matrices but also shows that these improvements lead to better modeling of uncertainty in real-world datasets. As a result, this work establishes a more robust foundation for integrating the picture fuzzy soft set theory into machine learning, with potential applications in any domain requiring nuanced handling of vague, partial, or conflicting information.
The study by Segovia addresses the problem of accurately forecasting meteorological variables—specifically temperature, relative humidity, solar radiation, and wind speed—using machine learning techniques implemented in open-source software. The significance of this issue lies in the increasing demand for reliable weather prediction to support applications in renewable energy management, agriculture, environmental monitoring, and public health. Climate variability and the nonlinear behavior of atmospheric variables make traditional statistical approaches insufficient, especially in regions with complex weather dynamics like the study area in Ecuador.
To meet this challenge, the authors propose a forecasting system based on Python and compare the performance of six supervised learning models: multiple linear regression, polynomial regression, decision tree, random forest, XGBoost, and the multilayer perceptron neural network. The models were trained and tested using a one-year dataset collected every five minutes from a meteorological station in the Tungurahua province of Ecuador. Each model’s performance was assessed using four evaluation metrics. The findings show that the random forest model consistently delivers the most accurate predictions across most variables. However, wind speed posed the greatest forecasting challenge due to its high variability, with the best results for this variable obtained using XGBoost.
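A minimal version of such a model comparison in scikit-learn and XGBoost is sketched below. The data are synthetic placeholders for the study's five-minute station records, and the study reports four evaluation metrics; MAE and R² here are illustrative choices.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_absolute_error, r2_score
    from xgboost import XGBRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 4))   # e.g., lagged temperature, humidity, radiation, wind
    y = X @ np.array([0.5, -0.2, 0.3, 0.1]) + rng.normal(scale=0.1, size=1000)

    # shuffle=False preserves temporal order, as a time-series split should.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, shuffle=False)
    for name, model in [("random forest", RandomForestRegressor(n_estimators=200, random_state=0)),
                        ("XGBoost", XGBRegressor(n_estimators=200, learning_rate=0.1))]:
        model.fit(X_tr, y_tr)
        pred = model.predict(X_te)
        print(f"{name}: MAE={mean_absolute_error(y_te, pred):.3f}, "
              f"R2={r2_score(y_te, pred):.3f}")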
What distinguishes this work is the development of a low-cost, replicable forecasting system based entirely on open-source tools. The methodology is adaptable to other meteorological contexts and supports broader implementations in intelligent agriculture and microgrid control. The approach demonstrates the capacity of ensemble methods and neural networks to model complex atmospheric behaviors and suggests that machine learning can offer reliable and scalable solutions for real-time environmental prediction.
The paper by Shahul Hameed examines the use of synthetic data generation as a strategy for mitigating bias in artificial intelligence systems, with a particular focus on medical datasets. The problem it addresses stems from the growing concern that AI models, especially in healthcare, can replicate and even amplify societal biases present in training data. These biases can result in unfair treatment recommendations, misdiagnoses, or unequal access to healthcare services for certain demographic groups, particularly those underrepresented in existing datasets.
This issue is especially critical in clinical settings, where algorithmic decisions can have direct implications on patient outcomes. For example, biased models may lead to disparities in diagnoses or therapeutic suggestions across racial, gender, or socioeconomic groups. Traditional approaches to mitigate bias—such as algorithmic adjustments, pre- or post-processing of data, or attempts to diversify existing datasets—are often limited in effectiveness or introduce trade-offs, such as a loss of data fidelity. Synthetic data generation offers an alternative that maintains the statistical structure of the original dataset while improving representativeness and privacy.
The authors conduct a comprehensive review of seventeen peer-reviewed studies published between 2020 and 2024, selected through a structured search process involving major scientific databases including Google Scholar, PubMed, IEEE Xplore, ScienceDirect, and the ACM Digital Library. The selected studies apply a range of synthetic data generation techniques to address bias, including Generative Adversarial Networks (GANs), Bayesian networks, Structural Causal Models (SCMs), SMOTE, Gaussian copulas, and deep reinforcement learning. These methods are used to augment or replace biased data in applications such as diagnosis prediction, treatment recommendation, and health record de-identification. Several approaches emphasize the dual benefit of fairness improvement and data privacy preservation.
The paper details how GANs are widely used to generate synthetic medical images, signals, and tabular data, while Bayesian and causal models offer structured frameworks for encoding probabilistic or causal relationships among variables. SMOTE and CTGAN are frequently employed for balancing class distributions in imbalanced datasets. Across studies, the effectiveness of these methods is assessed using various fairness and performance metrics such as demographic parity, equal opportunity, ROC-AUC, F1-score, and domain-specific utility scores.
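Among these techniques, SMOTE is the simplest to demonstrate. The sketch below rebalances a synthetic binary dataset with the imbalanced-learn library, standing in for the under-represented patient groups discussed in the reviewed studies.

    from collections import Counter
    from imblearn.over_sampling import SMOTE
    from sklearn.datasets import make_classification

    # Imbalanced toy data: the minority class plays the role of an
    # under-represented demographic group.
    X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
    print("before:", Counter(y))

    X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
    print("after: ", Counter(y_res))   # minority class interpolated up to parity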
In the discussion, the authors highlight that synthetic data generation has proven effective in enhancing model fairness and performance when applied appropriately. However, the success of these methods depends heavily on the quality of the initial dataset, the suitability of the chosen technique, and the availability of domain knowledge for model tuning. Limitations include computational cost, the complexity of implementation (especially for causal models), challenges in preserving high-dimensional dependencies, and the risks of introducing new types of bias during data generation.
While the reviewed methods show promise, the authors stress that the current synthetic data techniques still face barriers to broader adoption in real-world healthcare systems. These include methodological complexity, lack of standardization, and limited validation across diverse populations. Nonetheless, the review offers a solid foundation for further research into artificial data generation as a practical and ethical solution to bias in AI, especially in domains where data privacy and fairness are both paramount.

3. Conclusions

This Special Issue presents a comprehensive snapshot of current advancements in computational intelligence and machine learning, highlighting their increasing sophistication, diversity of application, and relevance to real-world challenges. The ten featured papers collectively demonstrate how novel learning paradigms, model architectures, and data representations can address critical problems such as personalization, recommendation diversity, fairness, interpretability, and performance in low-resource or noisy data environments.
A prominent trend throughout the contributions is the growing integration of deep learning models, particularly transformers, graph neural networks, and hybrid architectures, with domain-specific knowledge and auxiliary learning objectives. From enhancing travel recommendation systems with personality profiling to deploying self-supervised learning in collaborative filtering, these studies showcase the importance of designing models that are not only accurate but also adaptive and explainable. Additionally, multiple contributions emphasize the practical viability of proposed solutions, as evidenced by experimental validation on diverse benchmark datasets and real-world scenarios.
Another key theme is the increasing emphasis on ethical and inclusive AI, particularly in works focused on bias mitigation, fairness in healthcare applications, and accessibility for underrepresented languages. The use of synthetic data generation, lightweight model adaptation, and open-source deployment reflects a broader movement toward responsible, transparent, and reproducible research practices.
Overall, the research collected in this Special Issue contributes to a deeper understanding of both the capabilities and limitations of contemporary machine learning systems. It provides a valuable resource for researchers and practitioners seeking to harness the power of computational intelligence in increasingly complex, uncertain, and socially sensitive environments. The breadth of methodological approaches and problem domains also points to promising directions for future work, including multimodal learning, continual adaptation, and trustworthy AI frameworks.

Funding

This research received no external funding.

Conflicts of Interest

The author declares no conflict of interest.

References

1. Sarker, I.H. Machine Learning: Algorithms, Real-World Applications and Research Directions. SN Comput. Sci. 2021, 2, 160.
2. LeCun, Y.; Bengio, Y.; Hinton, G. Deep Learning for AI. Commun. ACM 2021, 64, 58–65.
3. Elsken, T.; Metzen, J.H.; Hutter, F. Neural Architecture Search: A Survey. J. Mach. Learn. Res. 2019, 20, 1–21.
4. Zhang, C.; Bengio, S.; Hardt, M.; Recht, B.; Vinyals, O. Understanding Deep Learning Requires Rethinking Generalization. arXiv 2017, arXiv:1611.03530.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
