Emerging Trends in Machine Learning and Artificial Intelligence

A special issue of Computers (ISSN 2073-431X).

Deadline for manuscript submissions: 31 August 2025

Special Issue Editor


Dr. Thuseethan Selvarajah
Guest Editor
Faculty of Science and Technology, Charles Darwin University, Casuarina, NT 0810, Australia
Interests: machine learning; deep learning; computer vision; emotion recognition; medical imaging; precision agriculture

Special Issue Information

Dear Colleagues,

Machine learning (ML) and artificial intelligence (AI) are rapidly evolving fields that are fundamentally transforming industries such as healthcare, finance, agriculture, manufacturing, and education. Traditional approaches to AI and ML have been highly effective, but ongoing advances have given rise to novel trends that push the boundaries of what is possible. These emerging trends are not only enhancing the performance and capabilities of AI systems but are also addressing some of the field's long-standing challenges. They are improving image analysis, content creation, language processing, and real-time decision-making across diverse application areas such as precision agriculture, emotion recognition, and autonomous systems. In particular, generative AI is revolutionizing image, audio, video, and text applications. Despite these advances, challenges remain. This Special Issue therefore aims to address them by disseminating recent advances in emerging AI and ML trends, focusing on the flexibility and adaptability of these new approaches and on their potential to surpass traditional methods in performance. Original contributions are welcome, as are benchmarking studies with balanced literature reviews and engineering applications on emerging topics in AI and ML. Topics of interest for this Special Issue include, but are not limited to:

  • Generative AI and its applications;
  • AI in healthcare;
  • Multi-modal AI;
  • AI and quantum computing;
  • Federated learning and privacy-preserving AI;
  • Large language models (LLMs) and their applications;
  • Edge AI and TinyML;
  • AI-enhanced cybersecurity;
  • New trends in learning algorithms: self-supervised, active, contrastive, and continual learning;
  • Graph neural networks and their applications;
  • Ethics and responsible AI;
  • Engineering applications of emerging AI and ML methods in biomedical, precision agriculture, affective computing, etc.

Dr. Thuseethan Selvarajah
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Computers is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • machine learning
  • deep learning
  • neural networks
  • generative artificial intelligence
  • natural language processing
  • computer vision
  • large language models
  • reinforcement learning
  • learning algorithms

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (11 papers)


Research


18 pages, 922 KiB  
Article
Accounting Support Using Artificial Intelligence for Bank Statement Classification
by Marco Lecci and Thomas Hanne
Computers 2025, 14(5), 193; https://doi.org/10.3390/computers14050193 - 15 May 2025
Abstract
Artificial Intelligence is a disruptive technology that is revolutionizing the accounting sector, e.g., by reducing costs, detecting fraud, and generating reports. However, the manual maintenance of booking ledgers remains a significant challenge, particularly for small and medium-sized enterprises. The usage of AI technologies in this area is rarely considered in the literature despite significant interest in using AI for other accounting-related activities. Our study, which was conducted during 2023–2024, utilizes natural language processing and machine learning to construct a predictive model that accurately matches bank transaction statements with accounting records. The study employs Feedforward Neural Networks and Support Vector Machines with various settings and compares their performance with that of previous models embedded in similar predictive tasks. Additionally, as a baseline model, Contofox, a rule-based system capable of classifying accounting records through manually created rules that match bank statements with accounting records, is used. Furthermore, this study evaluates the business value of the model through an interview with an accounting expert, highlighting the potential benefits of such artifacts in enhancing accounting processes.
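
As a rough sketch of the kind of text-classification pipeline described here (the authors' exact features, network settings, and the Contofox baseline are not reproduced), bank statement lines can be mapped to ledger accounts with TF-IDF features and a linear SVM; all data and account names below are hypothetical:

```python
# Minimal sketch: matching bank statement text to ledger accounts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

statements = [
    "SEPA TRANSFER ACME GMBH INVOICE 4711",
    "CARD PAYMENT SHELL STATION 22",
    "SALARY PAYMENT JUNE",
]
accounts = ["accounts_payable", "fuel_expense", "payroll"]  # toy labels

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(statements, accounts)
print(model.predict(["CARD PAYMENT SHELL STATION 7"]))  # expected: fuel_expense
```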

25 pages, 7588 KiB  
Article
Driver Distraction Detection in Extreme Conditions Using Kolmogorov–Arnold Networks
by János Hollósi, Gábor Kovács, Mykola Sysyn, Dmytro Kurhan, Szabolcs Fischer and Viktor Nagy
Computers 2025, 14(5), 184; https://doi.org/10.3390/computers14050184 - 9 May 2025
Abstract
Driver distraction can have severe safety consequences, particularly in public transportation. This paper presents a novel approach for detecting bus driver actions, such as mobile phone usage and interactions with passengers, using Kolmogorov–Arnold networks (KANs). The adversarial FGSM attack method was applied to assess the robustness of KANs in extreme driving conditions, such as adverse weather, heavy traffic, and poor visibility. A custom dataset, created in collaboration with a partner company in the field of public transportation, allows the efficiency of KAN solutions to be verified on real data. The results suggest that KANs can enhance driver distraction detection under challenging conditions, with improved resilience against adversarial attacks, particularly in low-complexity networks.
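
For reference, the fast gradient sign method (FGSM) perturbation used here to probe robustness can be sketched in a few lines of generic PyTorch; the model, inputs, and epsilon value are placeholders rather than the paper's KAN setup:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.03):
    """Return adversarial copies of `images` for a classification model."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step in the direction that locally maximises the loss.
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```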

18 pages, 699 KiB  
Article
Role of Roadside Units in Cluster Head Election and Coverage Maximization for Vehicle Emergency Services
by Ravneet Kaur, Robin Doss, Lei Pan, Chaitanya Singla and Selvarajah Thuseethan
Computers 2025, 14(4), 152; https://doi.org/10.3390/computers14040152 - 18 Apr 2025
Abstract
Efficient clustering algorithms are critical for enabling the timely dissemination of emergency messages across maximum coverage areas in vehicular networks. While existing clustering approaches demonstrate stability and scalability, little work has focused on leveraging roadside units (RSUs) for cluster head selection. This research proposes a novel framework that utilizes RSUs to facilitate cluster head election, streamlining the selection process and mitigating clustering overhead and the broadcast storm problem. The proposed scheme selects an optimal number of cluster heads to maximize information coverage and prevent traffic congestion, thereby enhancing quality of service through improved cluster head duration, reduced cluster formation time, expanded coverage area, and decreased overhead. The framework comprises three key components: (I) an acknowledgment-based system for legitimate vehicle entry into the RSU for cluster head selection; (II) an authoritative node behavior mechanism for choosing cluster heads from received notifications; and (III) bridge nodes that maximize the coverage of the established network. A comparative analysis evaluates the framework's performance under uniform and non-uniform vehicle speed scenarios for time-barrier-based emergency message dissemination in vehicular ad hoc networks. The results demonstrate that the proposed model achieves 100% information coverage in uniform highway speed scenarios and 99.55% in non-uniform scenarios. Furthermore, using RSUs accelerates the clustering process by over 50% while decreasing overhead and reducing cluster head election time. The proposed approach outperforms existing methods in the number of cluster heads, cluster head election time, total cluster formation time, and maximum information coverage across varying vehicle densities.
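
As a toy illustration of the RSU-side election step (the acknowledgment protocol and bridge-node logic are simplified away, and the coverage criterion below is an assumption for illustration), an RSU might elect, among registered vehicles, the one whose transmission range covers the most one-hop neighbours:

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    vid: int
    position: float   # metres along the highway
    tx_range: float   # transmission range in metres

def elect_cluster_head(vehicles):
    """RSU-side election: pick the vehicle covering the most neighbours."""
    def coverage(v):
        return sum(1 for u in vehicles
                   if u.vid != v.vid and abs(u.position - v.position) <= v.tx_range)
    return max(vehicles, key=coverage)

fleet = [Vehicle(1, 0.0, 250.0), Vehicle(2, 200.0, 250.0), Vehicle(3, 400.0, 250.0)]
print(elect_cluster_head(fleet).vid)  # -> 2, the most central vehicle
```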

23 pages, 839 KiB  
Article
Introducing a New Genetic Operator Based on Differential Evolution for the Effective Training of Neural Networks
by Ioannis G. Tsoulos, Vasileios Charilogis and Dimitrios Tsalikakis
Computers 2025, 14(4), 125; https://doi.org/10.3390/computers14040125 - 28 Mar 2025
Abstract
Artificial neural networks are widely established models used to solve a variety of real-world problems in the fields of physics, chemistry, etc. These machine learning models contain a series of parameters that must be appropriately tuned by various optimization techniques in order to effectively address the problems that they face. Genetic algorithms have been used in many cases in the recent literature to train artificial neural networks, and various modifications have been made to enhance this procedure. In this article, the incorporation of a novel genetic operator into genetic algorithms is proposed to effectively train artificial neural networks. The new operator is based on the differential evolution technique, and it is periodically applied to randomly selected chromosomes from the genetic population. Furthermore, to determine a promising range of values for the parameters of the artificial neural network, an additional genetic algorithm is executed before the execution of the basic algorithm. The modified genetic algorithm is used to train neural networks on classification and regression datasets, and the results are reported and compared with those of other methods used to train neural networks.
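
The flavour of such an operator can be sketched as a generic DE/rand/1-style mutation over a population of flattened weight vectors; the paper's operator schedule and parameter-range pre-run are simplified away here:

```python
import numpy as np

rng = np.random.default_rng(0)

def de_operator(population, F=0.8):
    """Overwrite one random chromosome with a DE/rand/1-style mutant."""
    i, a, b, c = rng.choice(len(population), size=4, replace=False)
    population[i] = population[a] + F * (population[b] - population[c])
    return population

pop = rng.normal(size=(20, 50))  # 20 chromosomes of 50 neural-network weights
pop = de_operator(pop)           # applied periodically during the genetic run
```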

23 pages, 5670 KiB  
Article
A Conceptual Study of Rapidly Reconfigurable and Scalable Optical Convolutional Neural Networks Based on Free-Space Optics Using a Smart Pixel Light Modulator
by Young-Gu Ju
Computers 2025, 14(3), 111; https://doi.org/10.3390/computers14030111 - 20 Mar 2025
Abstract
The smart-pixel-based optical convolutional neural network was proposed to improve kernel refresh rates in scalable optical convolutional neural networks (CNNs) by replacing the spatial light modulator with a smart pixel light modulator while preserving benefits such as an unlimited input node size, cascadability, and direct kernel representation. The smart pixel light modulator enhances weight update speed, enabling rapid reconfigurability. Its fast updating capability and memory expand the application scope of scalable optical CNNs, supporting operations like convolution with multiple kernel sets and difference mode. Simplifications using electrical fan-out reduce hardware complexity and costs. An evolution of this system, the smart-pixel-based bidirectional optical CNN, employs a bidirectional architecture and single lens-array optics, achieving a computational throughput of 8.3 × 10¹⁴ MAC/s with a smart pixel light modulator resolution of 3840 × 2160. Further advancements led to the two-mirror-like smart-pixel-based bidirectional optical CNN, which emulates 2n layers using only two physical layers, significantly reducing hardware requirements despite increased time delay. This architecture was demonstrated for solving partial differential equations by leveraging local interactions as a sequence of convolutions. These advancements position smart-pixel-based optical CNNs and their derivatives as promising solutions for future CNN applications.
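
As a back-of-the-envelope check (the per-pixel update rate below is an assumption for illustration, not a figure from the paper), the quoted throughput is consistent with every smart pixel contributing one multiply-accumulate per cycle at roughly 100 MHz:

```python
pixels = 3840 * 2160        # smart pixel light modulator resolution
update_rate_hz = 100e6      # assumed 100 MHz per-pixel MAC rate
print(f"{pixels * update_rate_hz:.2e} MAC/s")  # -> 8.29e+14 MAC/s
```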

21 pages, 1040 KiB  
Article
FungiLT: A Deep Learning Approach for Species-Level Taxonomic Classification of Fungal ITS Sequences
by Kai Liu, Hongyuan Zhao, Dongliang Ren, Dongna Ma, Shuangping Liu and Jian Mao
Computers 2025, 14(3), 85; https://doi.org/10.3390/computers14030085 - 28 Feb 2025
Abstract
With the explosive growth of sequencing data, rapidly and accurately classifying and identifying species has become a critical challenge in amplicon analysis research. The internal transcribed spacer (ITS) region is widely used for fungal species classification and identification. However, most existing ITS databases cover limited fungal species diversity, and current classification methods struggle to efficiently handle such large-scale data. This study integrates multiple publicly available databases to construct an ITS sequence database encompassing 93,975 fungal species, making it a resource with broader species diversity for fungal taxonomy. In this study, a fungal classification model named FungiLT is proposed, integrating Transformer and BiLSTM architectures while incorporating a dual-channel feature fusion mechanism. On a dataset where each fungal species is represented by 100 ITS sequences, it achieves a species-level classification accuracy of 98.77%. Compared to BLAST, QIIME2, and the deep learning model CNN_FunBar, FungiLT demonstrates significant advantages in ITS species classification. This study provides a more efficient and accurate solution for large-scale fungal classification tasks and offers new technical support and insights for species annotation in amplicon analysis research.
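
A schematic of the dual-channel idea in PyTorch, with one Transformer channel and one BiLSTM channel over the same embedded ITS sequence and their features fused before classification; all layer sizes are illustrative, and the published FungiLT architecture may differ:

```python
import torch
import torch.nn as nn

class DualChannelClassifier(nn.Module):
    def __init__(self, vocab=6, dim=64, n_classes=93975):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)   # A/C/G/T/N + padding token
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.bilstm = nn.LSTM(dim, dim // 2, bidirectional=True, batch_first=True)
        self.head = nn.Linear(2 * dim, n_classes)

    def forward(self, x):
        e = self.embed(x)
        t = self.transformer(e).mean(dim=1)   # channel 1: global context
        b, _ = self.bilstm(e)
        b = b.mean(dim=1)                     # channel 2: sequential order
        return self.head(torch.cat([t, b], dim=-1))  # dual-channel fusion

logits = DualChannelClassifier()(torch.randint(0, 6, (2, 300)))
```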

19 pages, 2545 KiB  
Article
Distinguishing Human Journalists from Artificial Storytellers Through Stylistic Fingerprints
by Van Hieu Tran, Yakub Sebastian, Asif Karim and Sami Azam
Computers 2024, 13(12), 328; https://doi.org/10.3390/computers13120328 - 5 Dec 2024
Abstract
Background: Artificial intelligence poses a critical challenge to the authenticity of journalistic documents. Objectives: This research proposes a method to automatically identify AI-generated news articles based on various stylistic features. Methods/Approach: We used machine learning algorithms and trained five classifiers to distinguish journalistic news articles from their AI-generated counterparts based on various lexical, syntactic, and readability features. BERTopic was used to extract salient keywords from these articles, which were then used to prompt Google’s Gemini to generate new artificial articles on the same topic. Results: The Random Forest classifier performed the best on the task (accuracy = 98.3%, precision = 0.984, recall = 0.983, and F1-score = 0.983). Random Forest feature importance, Analysis of Variance (ANOVA), Mutual Information, and Recursive Feature Elimination revealed the top five important features: sentence length range, paragraph length coefficient of variation, verb ratio, sentence complex tags, and paragraph length range. Conclusions: This research introduces an innovative approach to prompt engineering using the BERTopic modelling technique and identifies key stylistic features to distinguish AI-generated content from human-generated content. Therefore, it contributes to the ongoing efforts to combat disinformation, enhancing the credibility of content in various industries, such as academic research, education, and journalism.
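
In the same spirit, stylometric features can be extracted and fed to a Random Forest as below; the features shown are simplified proxies for the ones the authors rank as most important, and the labels are toy data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def style_features(text):
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return [
        max(lengths) - min(lengths),                         # sentence length range
        float(np.mean(lengths)),                             # mean sentence length
        float(np.std(lengths)),                              # sentence length variation
        text.count(",") / max(len(lengths), 1),              # clause-density proxy
        len(set(text.split())) / max(len(text.split()), 1),  # lexical diversity
    ]

X = [style_features(t) for t in ["Short one. A rather longer sentence follows here.",
                                 "Uniform text. Uniform text. Uniform text."]]
y = [0, 1]  # 0 = human-written, 1 = AI-generated (toy labels)
clf = RandomForestClassifier(n_estimators=100).fit(X, y)
```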

15 pages, 1050 KiB  
Article
Siamese Network-Based Lightweight Framework for Tomato Leaf Disease Recognition
by Selvarajah Thuseethan, Palanisamy Vigneshwaran, Joseph Charles and Chathrie Wimalasooriya
Computers 2024, 13(12), 323; https://doi.org/10.3390/computers13120323 - 4 Dec 2024
Cited by 1
Abstract
In this paper, a novel Siamese network-based lightweight framework is proposed for automatic tomato leaf disease recognition. This framework achieves the highest accuracy of 96.97% on the tomato subset obtained from the PlantVillage dataset and 95.48% on the Taiwan tomato leaf disease dataset. Experimental results further confirm that the proposed framework is effective with imbalanced and small data. The backbone network integrated with this framework is lightweight, with approximately 2.9629 million trainable parameters, second only to SqueezeNet and significantly lower than other lightweight deep networks. Automatic tomato disease recognition from leaf images is vital to avoiding crop losses by applying control measures on time. Even though recent deep learning-based tomato disease recognition methods with classical training procedures have shown promising recognition results, they demand large amounts of labeled data and involve expensive training. The traditional deep learning models proposed for tomato disease recognition also consume substantial memory and storage because of their large number of parameters. While lightweight networks overcome some of these issues to a certain extent, they still show lower performance and struggle to handle imbalanced data.
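
A minimal sketch of the Siamese training step: a shared lightweight encoder embeds two leaf images, and a contrastive loss pulls same-disease pairs together. The backbone, margin, and image sizes below are assumptions, not the framework's published configuration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

backbone = nn.Sequential(           # stand-in for the lightweight CNN backbone
    nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 64),
)

def contrastive_loss(x1, x2, same, margin=1.0):
    """Pull same-class pairs together, push different-class pairs apart."""
    d = F.pairwise_distance(backbone(x1), backbone(x2))
    return (same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)).mean()

a, b = torch.rand(4, 3, 64, 64), torch.rand(4, 3, 64, 64)
loss = contrastive_loss(a, b, same=torch.tensor([1.0, 0.0, 1.0, 0.0]))
```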

31 pages, 3335 KiB  
Article
Unified Ecosystem for Data Sharing and AI-Driven Predictive Maintenance in Aviation
by Igor Kabashkin and Vitaly Susanin
Computers 2024, 13(12), 318; https://doi.org/10.3390/computers13120318 - 28 Nov 2024
Cited by 2
Abstract
The aviation industry faces considerable challenges in maintenance management due to the complexities of data standardization, data sharing, and predictive maintenance capabilities. This paper introduces a unified ecosystem for data sharing and AI-driven predictive maintenance designed to address these challenges by integrating real-time and historical data from diverse sources, including aircraft sensors, maintenance logs, and operational records. The proposed ecosystem enables predictive analytics and anomaly detection, enhancing decision-making processes for airlines; maintenance, repair, and overhaul (MRO) providers; and regulatory bodies. Key elements of the ecosystem include a modular design with feedback loops, scalable AI models for predictive maintenance, and robust data-sharing frameworks. This paper outlines the architecture of a unified aviation maintenance ecosystem built around multiple data sources, including aircraft sensors, maintenance logs, flight data, weather data, and manufacturer specifications. By integrating various components and stakeholders, the system achieves its full potential through several key use cases: monitoring aircraft component health, predicting component failures, receiving maintenance alerts, performing preventive maintenance, and generating compliance reports. Each use case is described in detail and supported by illustrative dataflow diagrams. The findings underscore the transformative impact of such an ecosystem on aviation maintenance practices, marking a significant step toward safer, more efficient, and sustainable aviation operations.
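
As one concrete illustration of the component-health use case (the detector choice, sensor channels, and readings below are assumptions, not the ecosystem's specification), an anomaly detector can be fitted on nominal sensor data and used to trigger maintenance alerts:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Hypothetical nominal readings: exhaust gas temperature (°C), vibration level
normal = rng.normal(loc=[600, 85], scale=[15, 3], size=(500, 2))
detector = IsolationForest(contamination=0.01, random_state=1).fit(normal)

reading = np.array([[680, 97]])   # hypothetical degraded-component reading
print(detector.predict(reading))  # -> [-1], flagged for a maintenance alert
```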

19 pages, 1382 KiB  
Article
On the Robustness of Compressed Models with Class Imbalance
by Baraa Saeed Ali, Nabil Sarhan and Mohammed Alawad
Computers 2024, 13(11), 297; https://doi.org/10.3390/computers13110297 - 16 Nov 2024
Abstract
Deep learning (DL) models have been deployed in various platforms, including resource-constrained environments such as edge computing, smartphones, and personal devices. Such deployment requires models to have smaller sizes and memory footprints. To this end, many model compression techniques proposed in the literature successfully reduce model sizes and maintain comparable accuracy. However, the robustness of compressed DL models against class imbalance, a natural phenomenon in real-life datasets, is still under-explored. We present a comprehensive experimental study of the performance and robustness of compressed DL models when trained on class-imbalanced datasets. We investigate the robustness of compressed DL models using three popular compression techniques (pruning, quantization, and knowledge distillation) with class-imbalanced variants of the CIFAR-10 dataset and show that compressed DL models are not robust against class imbalance in training datasets. We also show that different compression techniques have varying degrees of impact on the robustness of compressed DL models.
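
One ingredient of such a study, sketched under the assumption of an exponential (long-tailed) imbalance profile, is constructing the class-imbalanced dataset variants; the paper's actual profiles and compression settings are not reproduced here:

```python
def longtail_counts(n_classes=10, n_max=5000, imbalance_ratio=100):
    """Samples kept per class under an exponentially decaying profile."""
    return [int(n_max * imbalance_ratio ** (-c / (n_classes - 1)))
            for c in range(n_classes)]

# For CIFAR-10-sized classes: [5000, 2997, 1796, ..., 50]
print(longtail_counts())
```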

Review


27 pages, 1950 KiB  
Review
Machine Learning and Deep Learning Paradigms: From Techniques to Practical Applications and Research Frontiers
by Kamran Razzaq and Mahmood Shah
Computers 2025, 14(3), 93; https://doi.org/10.3390/computers14030093 - 6 Mar 2025
Cited by 3
Abstract
Machine learning (ML) and deep learning (DL), subsets of artificial intelligence (AI), are core technologies driving significant transformation and innovation across industries through AI-driven solutions. Understanding ML and DL is essential to analysing their applicability and effectiveness in areas such as healthcare, finance, agriculture, manufacturing, and transportation. ML consists of supervised, unsupervised, semi-supervised, and reinforcement learning techniques. DL, a subfield of ML built on neural networks (NNs), can handle complex datasets in the healthcare, autonomous systems, and finance industries. This study presents a holistic view of ML and DL technologies, analysing the algorithms and their capacity to address real-world problems. It investigates the application areas in which ML and DL techniques are implemented and highlights the latest trends and possible future avenues for research and development (R&D), including hybrid models, generative AI, and the integration of ML and DL with emerging technologies. The study aims to provide a comprehensive view of ML and DL technologies that can serve as a reference guide for researchers, industry professionals, practitioners, and policymakers.
