Advanced Machine Learning, Pattern Recognition, and Deep Learning Technologies: Methodologies and Applications, 2nd Edition

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: 15 January 2026

Special Issue Editors


Guest Editor
School of Computer Science, Guangdong University of Technology, Guangzhou 510006, China
Interests: machine learning; biometrics; data mining; image processing

Guest Editor
School of Cyber Science and Technology, Sun Yat-sen University, Shenzhen 518107, China
Interests: anomaly detection; multimedia analysis; object detection; image/video compression; deep learning

Guest Editor
Department of Computer and Information Science, University of Macau, Macau, China
Interests: biometrics; pattern recognition; image processing; medical image analysis

Special Issue Information

Dear Colleagues,

In recent years, machine learning, pattern recognition, and deep learning techniques have been successfully applied to science and engineering research. For example, biometric recognition, i.e., the recognition of palmprints, faces, and irises, has enabled personal security authentication for airports, banks, and online payments. These techniques also allow us to retrieve the information we need from the internet, and image processing technology helps us obtain higher-quality photographs. Deep learning, in particular, has a powerful ability to extract discriminative patterns and make accurate predictions from large-scale databases. However, the performance of machine learning, pattern recognition, and deep learning algorithms relies significantly on model design, mathematical interpretation, and optimization. A good fusion of theories and models is crucial to the success of the applications listed above. The aim of this Special Issue is to highlight recent advances in machine learning, pattern recognition, and deep learning methodologies and theories. Papers with interesting or significant new applications of the abovementioned methods are also welcome. The topics of interest for this Special Issue include, but are not limited to, the following:

  1. Advanced machine intelligence methods and applications;
  2. Advanced pattern analysis methods and applications;
  3. Deep-learning-based methods and applications;
  4. Biometric recognition algorithms and applications;
  5. Multi-view/modal learning and fusion;
  6. Data mining and analysis;
  7. Hashing-based learning methods and applications;
  8. Dimensionality reduction and discriminant representation;
  9. Subspace learning and clustering;
  10. Graph learning-based methods and applications;
  11. Super-resolution/enhancement/restoration of images;
  12. Advanced models within computer vision, such as object tracking and detection;
  13. Sparse representations and their applications.

Dr. Shuping Zhao
Dr. Jie Wen
Dr. Chao Huang
Dr. Bob Zhang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • machine learning
  • pattern recognition
  • deep learning
  • mathematical optimization

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found on the MDPI website.


Published Papers (7 papers)


Research

17 pages, 1720 KiB  
Article
A Hybrid Quantum–Classical Network for Eye-Written Digit Recognition
by Kimsay Pov, Tara Kit, Myeongseong Go, Won-Du Chang and Youngsun Han
Electronics 2025, 14(16), 3220; https://doi.org/10.3390/electronics14163220 - 13 Aug 2025
Abstract
Eye-written digit recognition presents a promising alternative communication method for individuals affected by amyotrophic lateral sclerosis. However, the development of robust models in this field is limited by the availability of datasets, due to the complex and unstable procedure of collecting eye-written samples. Previous work has proposed both conventional techniques and deep neural networks to classify eye-written digits, achieving moderate to high accuracy with variability across runs. In this study, we explore the potential of quantum machine learning by presenting a hybrid quantum–classical model that integrates a variational quantum circuit into a classical deep neural network architecture. While classical models already achieve strong performance, this work examines the potential of quantum-enhanced models to achieve such performance with fewer parameters and greater expressive capacity. To further improve robustness and stability, we employ an ensemble strategy that aggregates predictions from multiple trained instances of the hybrid model. This study serves as a proof-of-concept to evaluate the feasibility of incorporating a compact 4-qubit quantum circuit within a lightweight hybrid model. The proposed model achieves 98.52% accuracy with a standard deviation of 1.99, supporting the potential of combining quantum and classical computing for assistive communication technologies and encouraging further research in quantum biosignal interpretation and human–computer interaction.
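
As a rough illustration of the kind of hybrid architecture the abstract describes, the sketch below embeds a compact 4-qubit variational circuit in a small classical network, assuming PennyLane and PyTorch. The circuit template, layer sizes, and 10-class head are illustrative assumptions, not the authors' published architecture.

```python
# Minimal sketch of a hybrid quantum-classical classifier (assumes PennyLane + PyTorch).
# The circuit template and layer sizes are illustrative, not the authors' architecture.
import pennylane as qml
import torch
from torch import nn

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def circuit(inputs, weights):
    # Encode 4 classical features as rotation angles, then apply an entangling ansatz.
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

weight_shapes = {"weights": (2, n_qubits, 3)}  # 2 variational layers

model = nn.Sequential(
    nn.Linear(64, n_qubits),                   # classical features -> 4 circuit inputs
    nn.Tanh(),                                 # keep encoding angles bounded
    qml.qnn.TorchLayer(circuit, weight_shapes),
    nn.Linear(n_qubits, 10),                   # 10 digit classes
)

logits = model(torch.randn(8, 64))             # batch of 8 feature vectors
print(logits.shape)                            # torch.Size([8, 10])
```

The ensemble strategy mentioned in the abstract would then average the predictions of several independently trained instances of such a model.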

22 pages, 1710 KiB  
Article
Machine Learning Techniques Improving the Box–Cox Transformation in Breast Cancer Prediction
by Sultan S. Alshamrani
Electronics 2025, 14(16), 3173; https://doi.org/10.3390/electronics14163173 - 9 Aug 2025
Abstract
Breast cancer remains a major global health problem, characterized by high incidence and mortality rates. Developing accurate prediction models is essential to improving early detection and treatment outcomes. Machine learning (ML) has become a valuable resource in breast cancer prediction; however, the complexities inherent in medical data, including biases and imbalances, can hinder the effectiveness of these models. This paper explores combining the Box–Cox transformation with ML models to normalize data distributions and stabilize variance, thereby enhancing prediction accuracy. Two datasets were analyzed: a synthetic gamma-distributed dataset that simulates skewed real-world data and the Surveillance, Epidemiology, and End Results (SEER) breast cancer dataset, which displays imbalanced real-world data. Four experimental scenarios were conducted: the ML models were evaluated on the synthetic dataset, on the SEER dataset with the Box–Cox transformation, on the SEER dataset with the logarithmic transformation, and on the SEER dataset with Synthetic Minority Over-sampling Technique (SMOTE) augmentation, in order to assess the impact of the Box–Cox transformation across different lambda values. The results show that the Box–Cox transformation significantly improves the performance of Artificial Intelligence (AI) models, particularly the stacking model, which achieves the highest accuracy (94.53%) and F1-score (94.74%). This study demonstrates the importance of feature transformation in healthcare analytics, offering a scalable framework for improving breast cancer prediction that is potentially applicable to other medical datasets with similar challenges.
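
For readers unfamiliar with the transformation, the sketch below shows a typical Box–Cox preprocessing step in Python, assuming SciPy and scikit-learn; the synthetic gamma data and pipeline details are illustrative, not the paper's exact setup.

```python
# Minimal sketch of Box-Cox preprocessing (assumes SciPy and scikit-learn).
# Box-Cox: y = (x**lam - 1) / lam for lam != 0, and y = log(x) for lam == 0;
# it requires strictly positive inputs.
import numpy as np
from scipy import stats
from sklearn.preprocessing import PowerTransformer

rng = np.random.default_rng(0)
x = rng.gamma(shape=2.0, scale=3.0, size=1000)   # skewed, strictly positive feature

# Option 1: SciPy estimates the lambda that best normalizes the data.
x_bc, lam = stats.boxcox(x)
print(f"estimated lambda: {lam:.3f}")

# Option 2: scikit-learn applies the same transform feature-wise inside a pipeline.
pt = PowerTransformer(method="box-cox", standardize=True)
X_bc = pt.fit_transform(x.reshape(-1, 1))
```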

16 pages, 2283 KiB  
Article
Recognition of Japanese Finger-Spelled Characters Based on Finger Angle Features and Their Continuous Motion Analysis
by Tamon Kondo, Ryota Murai, Zixun He, Duk Shin and Yousun Kang
Electronics 2025, 14(15), 3052; https://doi.org/10.3390/electronics14153052 - 30 Jul 2025
Abstract
To improve the accuracy of Japanese finger-spelled character recognition using an RGB camera, we focused on feature design and refinement of the recognition method. By leveraging angular features extracted via MediaPipe, we proposed a method that effectively captures subtle motion differences while minimizing the influence of background and surrounding individuals. We constructed a large-scale dataset that includes not only the basic 50 Japanese syllables but also those with diacritical marks, such as voiced sounds (e.g., “ga”, “za”, “da”) and semi-voiced sounds (e.g., “pa”, “pi”, “pu”), to enhance the model’s ability to recognize a wide variety of characters. In addition, the application of a change-point detection algorithm enabled accurate segmentation of sign language motion boundaries, improving word-level recognition performance. These efforts laid the foundation for a highly practical recognition system. However, several challenges remain, including the limited size and diversity of the dataset and the need for further improvements in segmentation accuracy. Future work will focus on enhancing the model’s generalizability by collecting more diverse data from a broader range of participants and incorporating segmentation methods that consider contextual information. Ultimately, the outcomes of this research should contribute to the development of educational support tools and sign language interpretation systems aimed at real-world applications.
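
As a minimal sketch of how joint-angle features can be derived from hand landmarks, the snippet below computes angles from a 21-point landmark array of the kind MediaPipe Hands produces; the joint triplets are illustrative examples, not the paper's exact feature set.

```python
# Minimal sketch of joint-angle features from hand landmarks (numpy only).
# MediaPipe Hands returns 21 (x, y, z) landmarks per hand; the joint triplets
# below are illustrative examples, not the paper's exact feature set.
import numpy as np

def joint_angle(landmarks: np.ndarray, a: int, b: int, c: int) -> float:
    """Angle at landmark b formed by segments b->a and b->c, in radians."""
    v1 = landmarks[a] - landmarks[b]
    v2 = landmarks[c] - landmarks[b]
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-8)
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def angle_features(landmarks: np.ndarray) -> np.ndarray:
    # e.g., flexion at the index-finger joints (MCP=5, PIP=6, DIP=7, TIP=8)
    triplets = [(0, 5, 6), (5, 6, 7), (6, 7, 8)]
    return np.array([joint_angle(landmarks, *t) for t in triplets])

lm = np.random.rand(21, 3)        # stand-in for one frame of MediaPipe output
print(angle_features(lm))         # 3 angles for this illustrative triplet set
```

Because angles are invariant to the hand's position in the frame, such features are naturally robust to background clutter and surrounding individuals, which matches the motivation given in the abstract.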

24 pages, 26672 KiB  
Article
Short-Term Electric Load Forecasting Using Deep Learning: A Case Study in Greece with RNN, LSTM, and GRU Networks
by Vasileios Zelios, Paris Mastorocostas, George Kandilogiannakis, Anastasios Kesidis, Panagiota Tselenti and Athanasios Voulodimos
Electronics 2025, 14(14), 2820; https://doi.org/10.3390/electronics14142820 - 14 Jul 2025
Abstract
The increasing volatility in energy markets, particularly in Greece where electricity costs reached a peak of 236 EUR/MWh in 2022, underscores the urgent need for accurate short-term load forecasting models. In this study, the application of deep learning techniques, specifically the Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU), to forecasting hourly electricity demand is investigated. The proposed models were trained on historical load data from the Greek power system spanning the years 2013 to 2016. Various deep learning architectures were implemented, and their forecasting performance was evaluated using statistical metrics such as the Root Mean Squared Error (RMSE) and Mean Absolute Percentage Error (MAPE). The experiments utilized multiple time horizons (1 h, 2 h, 24 h) and input sequence lengths (6 h to 168 h) to assess model accuracy and robustness. The best-performing GRU model achieved an RMSE of 83.2 MWh and a MAPE of 1.17% for 1 h ahead forecasting, outperforming both LSTM and RNN in terms of accuracy and computational efficiency. The predicted values were integrated into a dynamic Power BI dashboard to enable real-time visualization and decision support. These findings demonstrate the potential of deep learning architectures, particularly GRUs, for operational load forecasting and their applicability to intelligent energy systems in a market-strained environment.
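
The sketch below shows a minimal GRU-based forecaster of the kind the abstract evaluates, assuming PyTorch; the hidden size, layer count, and 24-hour input window are illustrative, not the paper's tuned configuration.

```python
# Minimal sketch of a GRU-based short-term load forecaster (PyTorch).
# Hidden size, layers, and the 24-step input window are illustrative.
import torch
from torch import nn

class GRUForecaster(nn.Module):
    def __init__(self, n_features: int = 1, hidden: int = 64, horizon: int = 1):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, horizon)

    def forward(self, x):                  # x: (batch, seq_len, n_features)
        out, _ = self.gru(x)
        return self.head(out[:, -1, :])    # predict from the last hidden state

model = GRUForecaster()
window = torch.randn(32, 24, 1)            # 32 samples, 24 hourly loads each
pred = model(window)                       # (32, 1): next-hour load
```

Longer horizons (e.g., 24 h ahead) follow the same pattern by widening the output head or rolling the prediction forward step by step.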

14 pages, 1835 KiB  
Article
Cybersecurity Applications of Near-Term Large Language Models
by Casimer DeCusatis, Raymond Tomo, Aurn Singh, Emile Khoury and Andrew Masone
Electronics 2025, 14(13), 2704; https://doi.org/10.3390/electronics14132704 - 4 Jul 2025
Abstract
This paper examines near-term generative large language models (GenLLMs) for cybersecurity applications. We experimentally study three common use cases: GenLLM as a digital assistant, as an analyst for threat hunting and incident response, and as an analyst for access management in zero-trust systems. In particular, we establish that one of the most common GenLLMs, ChatGPT, can pass cybersecurity certification exams for security fundamentals, hacking and penetration testing, and mobile device security, as well as perform competitively in cybersecurity ethics assessments. We also identify issues associated with hallucinations in these environments. The ability of ChatGPT to analyze network scans and security logs is also evaluated. Finally, we attempt to jailbreak ChatGPT in order to assess its application to access management systems.
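
As a minimal sketch of the log-analysis use case, the snippet below prompts a GenLLM to triage an authentication log, assuming the OpenAI Python client (version 1.0 or later); the model name, prompt, and log excerpt are illustrative, not the paper's experimental setup.

```python
# Minimal sketch of prompting a GenLLM to triage a security log
# (assumes the OpenAI Python client >= 1.0; model name and prompt are illustrative).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

log_excerpt = """\
Failed password for root from 203.0.113.7 port 51122 ssh2
Failed password for root from 203.0.113.7 port 51124 ssh2
Accepted password for root from 203.0.113.7 port 51131 ssh2
"""

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a SOC analyst. Flag suspicious activity."},
        {"role": "user", "content": f"Review this auth log and summarize any threats:\n{log_excerpt}"},
    ],
)
print(resp.choices[0].message.content)
```

As the abstract cautions, outputs from such a workflow can contain hallucinations and should be verified before being acted on.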

21 pages, 817 KiB  
Article
C3-VULMAP: A Dataset for Privacy-Aware Vulnerability Detection in Healthcare Systems
by Jude Enenche Ameh, Abayomi Otebolaku, Alex Shenfield and Augustine Ikpehai
Electronics 2025, 14(13), 2703; https://doi.org/10.3390/electronics14132703 - 4 Jul 2025
Abstract
The increasing integration of digital technologies in healthcare has expanded the attack surface for privacy violations in critical systems such as electronic health records (EHRs), telehealth platforms, and medical device software. However, current vulnerability detection datasets lack domain-specific privacy annotations essential for compliance with healthcare regulations like HIPAA and GDPR. This study presents C3-VULMAP, a novel and large-scale dataset explicitly designed for privacy-aware vulnerability detection in healthcare software. The dataset comprises over 30,000 vulnerable and 7.8 million non-vulnerable C/C++ functions, annotated with CWE categories and systematically mapped to LINDDUN privacy threat types. The objective is to support the development of automated, privacy-focused detection systems that can identify fine-grained software vulnerabilities in healthcare environments. To achieve this, we developed a hybrid construction methodology combining manual threat modeling, LLM-assisted synthetic generation, and multi-source aggregation. We then conducted comprehensive evaluations using traditional machine learning algorithms (Support Vector Machines, XGBoost), graph neural networks (Devign, Reveal), and transformer-based models (CodeBERT, RoBERTa, CodeT5). The results demonstrate that transformer models, such as RoBERTa, achieve high detection performance (F1 = 0.987), while Reveal leads GNN-based methods (F1 = 0.993), with different models excelling across specific privacy threat categories. These findings validate C3-VULMAP as a powerful benchmarking resource and show its potential to guide the development of privacy-preserving, secure-by-design software in embedded and electronic healthcare systems. The dataset fills a critical gap in privacy threat modeling and vulnerability detection and is positioned to support future research in cybersecurity and intelligent electronic systems for healthcare.
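
As a rough sketch of the transformer-based detection the abstract benchmarks, the snippet below scores a C function with a CodeBERT encoder and a 2-label classification head, assuming Hugging Face Transformers; the head is randomly initialized here and would in practice be fine-tuned on C3-VULMAP-style labels.

```python
# Minimal sketch of scoring a C function with a transformer-based detector
# (assumes Hugging Face Transformers; "microsoft/codebert-base" is the public
# CodeBERT checkpoint, and the 2-label head is untrained/illustrative).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/codebert-base", num_labels=2  # vulnerable vs. non-vulnerable
)

func = "void copy(char *dst, char *src) { strcpy(dst, src); }"
inputs = tok(func, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # meaningful only after fine-tuning on labeled functions
```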

16 pages, 6657 KiB  
Article
Experimental Assessment of YOLO Variants for Coronary Artery Disease Segmentation from Angiograms
by Eduardo Díaz-Gaxiola, Arturo Yee-Rendon, Ines F. Vega-Lopez, Juan Augusto Campos-Leal, Iván García-Aguilar, Ezequiel López-Rubio and Rafael M. Luque-Baena
Electronics 2025, 14(13), 2683; https://doi.org/10.3390/electronics14132683 - 2 Jul 2025
Abstract
Coronary artery disease (CAD) is one of the leading causes of mortality worldwide, highlighting the importance of developing accurate and efficient diagnostic tools. This study presents a comparative evaluation of three recent YOLO architecture versions (YOLOv8, YOLOv9, and YOLOv11) for the tasks of coronary vessel segmentation and stenosis detection using the ARCADE dataset. Two workflows were explored: one with original angiographic images and another incorporating Contrast Limited Adaptive Histogram Equalization (CLAHE) for image enhancement. Models were trained for 100 epochs using the AdamW optimizer and evaluated with precision, recall, and F1-score under a pixel-based segmentation framework. YOLOv9-E achieved the highest performance in vessel segmentation with an F1-score of 0.4524, while YOLOv11-X was most effective for stenosis detection, achieving an F1-score of 0.7826. Although CLAHE improved local contrast, it did not consistently improve segmentation results and occasionally introduced artifacts that negatively affected model performance. Compared to state-of-the-art methods, the YOLO models demonstrated competitive results, especially for large, well-defined coronary segments, but showed limitations in detecting smaller or more complex pathological structures. These findings support the use of YOLO-based architectures for real-time CAD segmentation tasks and highlight opportunities for future improvement through the integration of attention mechanisms or hybrid deep learning strategies.
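
The sketch below illustrates the CLAHE-then-segment workflow the abstract describes, assuming OpenCV and the ultralytics package; the checkpoint name, file path, and CLAHE parameters are illustrative, not the paper's exact configuration.

```python
# Minimal sketch of CLAHE preprocessing followed by YOLO segmentation
# (assumes OpenCV and ultralytics; checkpoint, path, and parameters are illustrative).
import cv2
from ultralytics import YOLO

# CLAHE: local contrast enhancement on the grayscale angiogram.
gray = cv2.imread("angiogram.png", cv2.IMREAD_GRAYSCALE)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = cv2.cvtColor(clahe.apply(gray), cv2.COLOR_GRAY2BGR)

# Segmentation with a pretrained YOLO segmentation checkpoint; in practice
# the model would be fine-tuned on ARCADE-style vessel annotations.
model = YOLO("yolov8n-seg.pt")
results = model.predict(enhanced)
for r in results:
    if r.masks is not None:
        print(r.masks.data.shape)  # (n_instances, H, W) binary masks
```

The paper's observation that CLAHE sometimes introduces artifacts suggests treating the enhancement step as a tunable part of the pipeline rather than a fixed preprocessing stage.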
