Special Issue "Advances in Machine Learning and Applications"

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Mathematics and Computer Science".

Deadline for manuscript submissions: 31 May 2023 | Viewed by 7541

Special Issue Editors

Prof. Dr. Ravil Muhamedyev
1. Department of Computer Technologies and Natural Sciences, ISMA University, Riga, Latvia
2. Department of Software Engineering, Institute of Automation and Information Technologies, Satbayev University, Satpayev str., 22A, Almaty 050013, Kazakhstan
Interests: applications of machine learning; data processing; scientometrics and decision support systems

Prof. Dr. Evgeny Nikulchev
Department of Digital Technologies of Data Processing, MIREA – Russian Technological University, 119454 Moscow, Russia
Interests: symmetry groups; Lie groups; dynamic systems modeling; experimental data processing; artificial intelligence technologies; information systems

Special Issue Information

Dear Colleagues,

Machine learning realizes the potential inherent in the idea of artificial intelligence. Its central promise is flexible, adaptive, “teachable” computational methods and algorithms that give programs and systems new capabilities.

Machine learning is widely used in practical applications; a far from complete list includes medicine, biology, chemistry, agriculture, mining, finance, industry, natural language processing, and astronomy. Alongside these applications, the field is marked by rapid theoretical progress, especially in deep learning. Machine learning methods and algorithms can be divided into classical and new ones. Classical algorithms and methods are described in detail and are widely used in practice. Where researchers have access to large amounts of data, deep learning methods are producing impressive results, and new deep neural network architectures, along with modifications of them for various applications, appear almost daily. At the same time, despite significant differences in algorithms and methods, many practical applications are developed using similar techniques.

The purpose of this Special Issue is to gather a collection of articles reflecting the similarities and differences among the latest applied implementations of machine learning in different areas. This will allow researchers to transfer the machine learning solutions developed here to obtain new results in other application areas. We look forward to receiving your contributions.

Prof. Dr. Ravil Muhamedyev
Prof. Dr. Evgeny Nikulchev
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2100 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning
  • deep learning
  • regression
  • classification
  • unsupervised learning
  • supervised learning
  • semi-supervised learning
  • reinforcement learning
  • transfer learning
  • transformers
  • natural language processing
  • speech processing
  • image processing
  • machine vision
  • convolutional neural networks
  • recurrent neural networks

Published Papers (7 papers)


Research


Article
A Machine Learning Approach for Improving Wafer Acceptance Testing Based on an Analysis of Station and Equipment Combinations
Mathematics 2023, 11(7), 1569; https://doi.org/10.3390/math11071569 - 23 Mar 2023
Viewed by 74
Abstract
Semiconductor manufacturing is a complex and lengthy process. Even with their expertise and experience, engineers often cannot quickly identify anomalies in an extensive database. Most research into equipment combinations has focused on the manufacturing process’s efficiency, quality, and cost issues. There has been little consideration of the relationship between semiconductor station and equipment combinations and throughput. In this study, a machine learning approach that allows for the integration of control charts, clustering, and association rules was developed. This approach was used to identify equipment combinations that may harm production processes by analyzing the effect on Vt parameters of the equipment combinations used in wafer acceptance testing (WAT). The results showed that when the support is between 70% and 80% and the confidence level is 85%, it is possible to quickly select the specific combinations of 13 production stations that significantly impact the Vt values of all 39 production stations. Stations 046000 (EH308), 049200 (DW005), 049050 (DI303), and 060000 (DC393) were found to have the most abnormal equipment combinations. The results of this research will aid the detection of equipment errors during semiconductor manufacturing and assist the optimization of production scheduling. Full article
(This article belongs to the Special Issue Advances in Machine Learning and Applications)
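The thresholds quoted in the abstract map directly onto standard association rule mining. As a minimal sketch (not the authors' code), the following Python snippet uses the mlxtend library on a hypothetical one-hot table of station–equipment combinations, with column names echoing the station codes above, to extract frequent combinations at 70% support and rules at an 85% confidence level.

```python
# A minimal sketch (not the authors' code): mining station–equipment
# combinations with mlxtend. The data frame below is hypothetical: one row
# per wafer lot flagged abnormal by a Vt control chart, one Boolean column
# per station–equipment combination used by that lot.
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

lots = pd.DataFrame(
    [
        {"046000_EH308": True, "049200_DW005": True, "049050_DI303": False},
        {"046000_EH308": True, "049200_DW005": True, "049050_DI303": True},
        {"046000_EH308": True, "049200_DW005": False, "049050_DI303": True},
        {"046000_EH308": True, "049200_DW005": True, "049050_DI303": True},
    ]
)

# Frequent combinations at 70% support, then rules at an 85% confidence
# level, mirroring the thresholds reported in the abstract.
frequent = apriori(lots, min_support=0.70, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.85)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```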

Article
Ensemble Methods in Customer Churn Prediction: A Comparative Analysis of the State-of-the-Art
Mathematics 2023, 11(5), 1137; https://doi.org/10.3390/math11051137 - 24 Feb 2023
Viewed by 376
Abstract
In the past, several single classifiers, homogeneous ensembles, and heterogeneous ensembles have been proposed to detect the customers who are most likely to churn. Despite the popularity and accuracy of heterogeneous ensembles in various domains, they have not yet been taken up in customer churn prediction models. Moreover, there are other developments at the performance evaluation and model comparison level that have not been introduced in a systematic way. Therefore, the aim of this study is to perform a large-scale benchmark study in customer churn prediction implementing these novel methods. To do so, we benchmark 33 classifiers, including 6 single classifiers, 14 homogeneous ensembles, and 13 heterogeneous ensembles, across 11 datasets. Our findings indicate that heterogeneous ensembles are consistently ranked higher than homogeneous ensembles and single classifiers. A heterogeneous ensemble with simulated annealing classifier selection is ranked the highest in terms of AUC and expected maximum profits. For accuracy, F1 measure, and top-decile lift, a heterogeneous ensemble optimized by non-negative binomial likelihood and a stacked heterogeneous ensemble are, respectively, the top-ranked classifiers. Our study contributes to the literature by being the first to include such an extensive set of classifiers, performance metrics, and statistical tests in a benchmark study of customer churn. Full article
(This article belongs to the Special Issue Advances in Machine Learning and Applications)
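As a rough illustration of this benchmark design, the sketch below compares a single classifier, a homogeneous ensemble, and a heterogeneous (stacked) ensemble on cross-validated AUC, using scikit-learn and synthetic class-imbalanced data; the paper's full set of 33 classifiers, profit-based metrics, and statistical tests is well beyond this minimal example.

```python
# A minimal sketch, not the paper's benchmark code: comparing a single
# classifier, a homogeneous ensemble, and a heterogeneous (stacked)
# ensemble on cross-validated AUC over synthetic, class-imbalanced data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a churn dataset (about 10% positives).
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.9], random_state=0)

models = {
    "single (logistic regression)": LogisticRegression(max_iter=1000),
    "homogeneous (random forest)": RandomForestClassifier(n_estimators=200,
                                                          random_state=0),
    "heterogeneous (stacking)": StackingClassifier(
        estimators=[
            ("lr", LogisticRegression(max_iter=1000)),
            ("dt", DecisionTreeClassifier(max_depth=5, random_state=0)),
            ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ],
        final_estimator=LogisticRegression(max_iter=1000),
    ),
}

for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {auc:.3f}")
```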

Article
Transformer-Based Seq2Seq Model for Chord Progression Generation
Mathematics 2023, 11(5), 1111; https://doi.org/10.3390/math11051111 - 23 Feb 2023
Viewed by 468
Abstract
Machine learning is widely used in various practical applications, with deep learning models demonstrating advantages in handling huge data. Treating music as a special language and using deep learning models to accomplish melody recognition, music generation, and music analysis has proven feasible. In some music-related deep learning research, replacing recurrent neural networks with transformers has achieved significant results, whereas traditional recurrent-network approaches limit the length of input sequences. This paper proposes a method to generate chord progressions for melodies using a transformer-based sequence-to-sequence model, which is divided into a pre-trained encoder and a decoder. The pre-trained encoder extracts contextual information from melodies, and the decoder uses this information to produce chords asynchronously, finally outputting chord progressions. The proposed method addresses the length limitation while considering the harmony between chord progressions and melodies, and chord progressions can be generated for melodies in practical music composition applications. Evaluation experiments were conducted using the proposed method and three baseline models: bidirectional long short-term memory (BLSTM), bidirectional encoder representations from transformers (BERT), and the generative pre-trained transformer (GPT-2). The proposed method outperformed the baseline models in hits@k (k = 1) by 25.89%, 1.54%, and 2.13%, respectively. Full article
(This article belongs to the Special Issue Advances in Machine Learning and Applications)
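The encoder–decoder structure described in the abstract can be sketched in a few lines of PyTorch. The vocabulary sizes, model dimensions, and toy inputs below are hypothetical, and the paper's pre-trained encoder and asynchronous decoding are omitted; the sketch only shows a melody-token encoder feeding a causally masked chord-token decoder.

```python
# A minimal sketch (hypothetical vocabulary sizes and shapes) of a
# transformer encoder-decoder mapping melody tokens to chord tokens.
import torch
import torch.nn as nn

MELODY_VOCAB, CHORD_VOCAB, D_MODEL = 128, 64, 256

class MelodyToChords(nn.Module):
    def __init__(self):
        super().__init__()
        self.melody_emb = nn.Embedding(MELODY_VOCAB, D_MODEL)
        self.chord_emb = nn.Embedding(CHORD_VOCAB, D_MODEL)
        self.transformer = nn.Transformer(
            d_model=D_MODEL, nhead=4,
            num_encoder_layers=2, num_decoder_layers=2,
            batch_first=True,
        )
        self.out = nn.Linear(D_MODEL, CHORD_VOCAB)

    def forward(self, melody_tokens, chord_tokens):
        # Causal mask: each chord position attends only to earlier chords.
        tgt_mask = self.transformer.generate_square_subsequent_mask(
            chord_tokens.size(1))
        hidden = self.transformer(
            self.melody_emb(melody_tokens),   # encoder: melody context
            self.chord_emb(chord_tokens),     # decoder: shifted chord targets
            tgt_mask=tgt_mask,
        )
        return self.out(hidden)               # logits over the chord vocabulary

model = MelodyToChords()
melody = torch.randint(0, MELODY_VOCAB, (2, 32))  # batch of 2 melodies, 32 steps
chords = torch.randint(0, CHORD_VOCAB, (2, 16))   # 16 chord steps
print(model(melody, chords).shape)                # torch.Size([2, 16, 64])
```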

Article
Unsupervised Representation Learning with Task-Agnostic Feature Masking for Robust End-to-End Speech Recognition
Mathematics 2023, 11(3), 622; https://doi.org/10.3390/math11030622 - 26 Jan 2023
Viewed by 478
Abstract
Unsupervised learning-based approaches for training speech vector representations (SVR) have recently been widely applied. While pretrained SVR models excel in relatively clean automatic speech recognition (ASR) tasks, such as those recorded in laboratory environments, they are still insufficient for practical applications with various types of noise, intonation, and dialects. To cope with this problem, we present a novel unsupervised SVR learning method for practical end-to-end ASR models. Our approach involves designing a speech feature masking method to stabilize SVR model learning and improve the performance of the ASR model in a downstream task. By introducing a noise masking strategy into diverse combinations of the time and frequency regions of the spectrogram, the SVR model becomes a robust representation extractor for the ASR model in practical scenarios. In pretraining experiments, we train the SVR model using approximately 18,000 h of Korean speech datasets that included diverse speakers and were recorded in environments with various amounts of noise. The weights of the pretrained SVR extractor are then frozen, and the extracted speech representations are used for ASR model training in a downstream task. The experimental results show that the ASR model using our proposed SVR extractor significantly outperforms conventional methods. Full article
(This article belongs to the Special Issue Advances in Machine Learning and Applications)
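A masking strategy over diverse combinations of the time and frequency regions of a spectrogram, as the abstract describes, can be illustrated with a short SpecAugment-style function. This is a minimal sketch with hypothetical mel-bin and frame counts, not the authors' implementation.

```python
# A minimal sketch, not the authors' implementation: zeroing random time
# and frequency regions of a spectrogram for robust representation learning.
import torch

def mask_spectrogram(spec, n_freq_masks=2, n_time_masks=2, max_width=10):
    """spec: (freq_bins, time_steps); masked regions are zeroed in a copy."""
    out = spec.clone()
    freq_bins, time_steps = out.shape
    for _ in range(n_freq_masks):
        w = torch.randint(1, max_width + 1, ()).item()
        f0 = torch.randint(0, max(1, freq_bins - w), ()).item()
        out[f0:f0 + w, :] = 0.0        # mask a horizontal frequency band
    for _ in range(n_time_masks):
        w = torch.randint(1, max_width + 1, ()).item()
        t0 = torch.randint(0, max(1, time_steps - w), ()).item()
        out[:, t0:t0 + w] = 0.0        # mask a vertical time slice
    return out

spec = torch.randn(80, 400)            # 80 mel bins x 400 frames (hypothetical)
masked = mask_spectrogram(spec)
print((masked == 0).float().mean())    # fraction of masked cells
```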

Article
Implementing Magnetic Resonance Imaging Brain Disorder Classification via AlexNet–Quantum Learning
Mathematics 2023, 11(2), 376; https://doi.org/10.3390/math11020376 - 10 Jan 2023
Viewed by 890
Abstract
Classical neural networks have provided remarkable results in diagnosing neurological disorders from neuroimaging data. However, in terms of efficient and accurate classification, some aspects can be improved by utilizing high-speed computing tools. By integrating quantum computing phenomena with deep neural network approaches, this study proposes an AlexNet–quantum transfer learning method to diagnose neurodegenerative diseases using a magnetic resonance imaging (MRI) dataset. The hybrid model is constructed by extracting an informative feature vector from high-dimensional data using a classical pre-trained AlexNet model and feeding this vector to a quantum variational circuit (QVC). The quantum circuit leverages quantum computing phenomena, quantum bits, and quantum gates such as the Hadamard and CNOT gates for transformation. The classical pre-trained model extracts 4096 features from the MRI dataset using the AlexNet architecture and gives this vector as input to the quantum circuit. The QVC generates a 4-dimensional vector, and a fully connected layer at the end transforms it into a 2-dimensional vector to perform the binary classification task for a brain disorder. Furthermore, the classical–quantum model employs a quantum depth of six layers on PennyLane quantum simulators, achieving a classification accuracy of 97% for Parkinson’s disease (PD) and 96% for Alzheimer’s disease (AD) over 25 epochs. In addition, pre-trained classical neural models are implemented for disorder classification, and the performance of the classical transfer learning model is compared with that of the hybrid classical–quantum transfer learning model. This comparison shows that the AlexNet–quantum learning model achieves beneficial results for classifying PD and AD. This work thus leverages high-speed computational power, combining deep network learning with quantum circuit learning, to offer insight into the practical application of quantum computers for speeding up model performance on real-world healthcare data. Full article
(This article belongs to the Special Issue Advances in Machine Learning and Applications)
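The hybrid pipeline described in the abstract (frozen AlexNet features reduced to a few qubit inputs, a depth-six variational circuit, and a small fully connected head) can be sketched with PennyLane's Torch integration. The circuit template, the 4096-to-4 reduction layer, and all hyperparameters below are illustrative assumptions, not the authors' exact design.

```python
# A minimal sketch, assuming PennyLane and torchvision; the circuit template
# and layer sizes are illustrative, not the authors' exact design.
import pennylane as qml
import torch
import torch.nn as nn
from torchvision.models import alexnet

N_QUBITS, Q_DEPTH = 4, 6
dev = qml.device("default.qubit", wires=N_QUBITS)

@qml.qnode(dev, interface="torch")
def qvc(inputs, weights):
    for w in range(N_QUBITS):
        qml.Hadamard(wires=w)                          # uniform superposition
    qml.AngleEmbedding(inputs, wires=range(N_QUBITS), rotation="Y")
    for layer in range(Q_DEPTH):                       # depth-six entangling circuit
        for w in range(N_QUBITS - 1):
            qml.CNOT(wires=[w, w + 1])
        for w in range(N_QUBITS):
            qml.RY(weights[layer, w], wires=w)
    return [qml.expval(qml.PauliZ(w)) for w in range(N_QUBITS)]

class HybridClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = alexnet(weights="IMAGENET1K_V1")    # pre-trained AlexNet
        backbone.classifier = backbone.classifier[:-1] # keep the 4096-d feature vector
        for p in backbone.parameters():
            p.requires_grad = False                    # freeze the extractor
        self.backbone = backbone
        self.reduce = nn.Linear(4096, N_QUBITS)        # 4096 features -> 4 circuit inputs
        self.qlayer = qml.qnn.TorchLayer(qvc, {"weights": (Q_DEPTH, N_QUBITS)})
        self.head = nn.Linear(N_QUBITS, 2)             # 4-d circuit output -> 2 logits

    def forward(self, x):
        feats = torch.tanh(self.reduce(self.backbone(x)))  # bound rotation angles
        return self.head(self.qlayer(feats))

model = HybridClassifier()
print(model(torch.randn(2, 3, 224, 224)).shape)        # torch.Size([2, 2])
```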

Article
XAI for Churn Prediction in B2B Models: A Use Case in an Enterprise Software Company
Mathematics 2022, 10(20), 3896; https://doi.org/10.3390/math10203896 - 20 Oct 2022
Cited by 2 | Viewed by 1232
Abstract
The literature relating Artificial Intelligence (AI) models to customer churn prediction is extensive and rich in Business to Customer (B2C) environments; however, research in Business to Business (B2B) environments is not sufficiently addressed. Customer churn in the business environment, and more so in a B2B context, is critical, as the impact on turnover is generally greater than in B2C environments. On the other hand, the data used in the context of this paper point to the importance of the relationship between customer and brand through the Contact Center. Therefore, the recency, frequency, importance, and duration (RFID) model used to obtain the customer’s assessment from the point of view of their interactions with the Contact Center is a novelty and an additional source of information beyond traditional recency, frequency, and monetary (RFM) models based on purchase transactions. The objective of this work is to design a methodological process that contributes to analyzing the explainability of AI algorithm predictions, known as Explainable Artificial Intelligence (XAI). To this end, we analyze the binary target variable abandonment in a B2B environment, considering the relationships that the partner (customer) has with the Contact Center and focusing on a business software distribution company. The model can be generalized to any environment in which classification or regression algorithms are required. Full article
(This article belongs to the Special Issue Advances in Machine Learning and Applications)
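As an illustration of the XAI step, the minimal sketch below trains a churn classifier on hypothetical RFID-style Contact Center features (recency, frequency, importance, duration) and uses the shap library to attribute each prediction to its input features; it does not reproduce the paper's full methodological process.

```python
# A minimal sketch with hypothetical RFID-style features and a toy churn
# label; it illustrates SHAP-based explainability, not the paper's method.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "recency_days": rng.integers(1, 365, 500),    # days since last contact
    "frequency": rng.integers(0, 50, 500),        # contacts per year
    "importance": rng.random(500),                # weighted issue severity
    "duration_min": rng.random(500) * 60,         # mean call duration
})
y = (X["recency_days"] > 180).astype(int).to_numpy()  # toy churn label

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer gives per-feature SHAP contributions for each prediction;
# averaging absolute values ranks global feature importance.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
print(pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns))
```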

Review


Review
Review of Artificial Intelligence and Machine Learning Technologies: Classification, Restrictions, Opportunities and Challenges
Mathematics 2022, 10(15), 2552; https://doi.org/10.3390/math10152552 - 22 Jul 2022
Cited by 5 | Viewed by 3137
Abstract
Artificial intelligence (AI) is an evolving set of technologies used for solving a wide range of applied issues. The core of AI is machine learning (ML): a complex of algorithms and methods that address the problems of classification, clustering, and forecasting. The practical application of AI&ML holds promising prospects, and research in this area is therefore intensive. However, industrial applications of AI, and its more intensive use in society, are not yet widespread. The challenges of widespread AI application need to be considered from both the AI (internal problems) and the societal (external problems) perspective. This consideration will identify the priority steps for more intensive practical application of AI technologies and for their introduction and involvement in industry and society. The article identifies and discusses the challenges of employing AI technologies in the economy and society of resource-based countries. AI&ML technologies are systematized on the basis of publications in these areas, which allows the organizational, personnel, social, and technological limitations to be specified. The paper then outlines directions for studies in AI and ML that would overcome some of these limitations and expand the scope of AI&ML applications. Full article
(This article belongs to the Special Issue Advances in Machine Learning and Applications)
