Special Issue "Algorithms for Machine Learning and Pattern Recognition Tasks"

A special issue of Algorithms (ISSN 1999-4893).

Deadline for manuscript submissions: closed (1 June 2022) | Viewed by 7667

Special Issue Editors

Prof. Dr. Hui Yu
Guest Editor
Prof. Dr. Mounim A. El Yacoubi
Guest Editor
Telecom SudParis, Institut Polytechnique de Paris, 91120 Palaiseau, France
Interests: machine learning; deep learning; pattern recognition; modeling behavioral and physiological human data; human activity and gesture recognition; handwriting and voice analysis; human mobility analysis; biometrics; human–computer interaction; detection and assessment of neurodegenerative diseases from biometric signals
Prof. Dr. Mehdi Ammi
Guest Editor
Department of Computer Science, University of Paris 8, 93526 Saint-Denis, France
Interests: human activity recognition; modeling physiological functions; emotions recognition; affective and social interaction; human–computer interaction; pervasive and ubiquitous environments; Internet of Things; e-health

Special Issue Information

Dear Colleagues,

Pattern recognition, the automatic detection of regularities in input data in order to solve a given task, is a mature research field with more than 50 years of active research. It has produced successful real-life applications such as speech recognition, handwritten mail sorting, medical imaging and natural language processing.

Pattern recognition is currently witnessing a spectacular development, for several reasons: the breakthrough in deep and representation learning has not only significantly improved performance, but has also opened up new pattern recognition tasks. Beyond the usual dichotomy of supervised learning and classification vs. unsupervised learning and data mining/knowledge discovery in databases, significant advances have been achieved in areas such as self-supervised learning, hybrid deep reinforcement learning, pattern mining and graph neural networks. Moreover, while pattern recognition has been associated mainly with machine learning over the last few decades, symbolic AI and expert systems have also recently attracted increasing attention, especially with the advances in neural-symbolic computing.

This Special Issue aims to gather recent advances in algorithms for pattern recognition, particularly advanced machine/deep learning as well as symbolic AI techniques, investigated in the context of different tasks of classification, prediction or knowledge discovery. In addition, it seeks to bring together academics and industry practitioners to contribute and discuss the latest research and innovations in this field.

Prof. Dr. Hui Yu
Prof. Dr. Mounim A. El Yacoubi
Prof. Dr. Mehdi Ammi
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Pattern recognition;
  • Supervised and unsupervised learning;
  • Self-supervised learning, reinforcement learning;
  • Classification, clustering, prediction;
  • Data mining, knowledge discovery in databases;
  • Artificial intelligence and machine learning;
  • Deep learning, CNN, RNN (LSTM, GRU, etc.), transformer models;
  • Transfer learning;
  • Explainable and attentional models;
  • Adversarial attacks and robust models;
  • Robustness of neural networks;
  • AI fairness;
  • Computer graphics, signal processing, bioinformatics, NLP, information retrieval;
  • Bayesian models;
  • Ensemble learning;
  • Model fusion;
  • Reviews of recent developments and trends.

Published Papers (7 papers)


Research

Article
Short Text Classification with Tolerance-Based Soft Computing Method
Algorithms 2022, 15(8), 267; https://doi.org/10.3390/a15080267 - 30 Jul 2022
Viewed by 793
Abstract
Text classification aims to assign labels to textual units such as documents, sentences and paragraphs. Applications of text classification include sentiment classification and news categorization. In this paper, we present a soft-computing-based algorithm (TSC) to classify the sentiment polarity of tweets as well as news categories from text. The TSC algorithm is a supervised learning method based on tolerance near sets. Near set theory is a recent soft computing methodology inspired by rough sets: instead of the set approximation operators that rough sets use to induce tolerance classes, the tolerance classes are induced directly from the feature vectors using a tolerance level parameter and a distance function. The proposed TSC algorithm takes advantage of recent advances in efficient feature extraction and vector generation from pre-trained bidirectional transformer encoders for creating tolerance classes. Experiments were performed on ten well-researched datasets that include both short and long text, using both pre-trained SBERT and TF-IDF vectors. Results with transformer-based vectors demonstrate that TSC outperforms five well-known machine learning algorithms on four datasets and is comparable on all other datasets in terms of weighted F1, precision and recall scores. The highest AUC-ROC (area under the receiver operating characteristic curve) score was obtained on two datasets and was comparable on six others; the highest AUC-PRC (area under the precision-recall curve) score was obtained on one dataset and was comparable on four others. Additionally, a Wilcoxon signed-rank test on the weighted F1-scores of TSC and the other classifiers showed significant differences in most comparisons.
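The mechanism the abstract describes, inducing tolerance classes directly from feature vectors with a tolerance level parameter and a distance function, can be sketched in a few lines of stdlib-only Python. This is an illustrative sketch of the general tolerance-relation idea, not the paper's TSC implementation; the function names and the choice of Euclidean distance are assumptions.

```python
import math

def euclidean(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def tolerance_classes(vectors, eps, dist=euclidean):
    """For each feature vector, collect the indices of all vectors within
    the tolerance level eps. A tolerance relation is reflexive and
    symmetric but not transitive, so the resulting classes may overlap."""
    classes = []
    for x in vectors:
        cls = [j for j, y in enumerate(vectors) if dist(x, y) <= eps]
        classes.append(cls)
    return classes
```

For example, with vectors `[(0, 0), (0.5, 0), (3, 3)]` and `eps = 1`, the first two vectors fall in each other's tolerance class while the third stands alone. In TSC these vectors would come from a pre-trained encoder such as SBERT rather than being hand-written.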
(This article belongs to the Special Issue Algorithms for Machine Learning and Pattern Recognition Tasks)

Article
Automatic Classification of Foot Thermograms Using Machine Learning Techniques
Algorithms 2022, 15(7), 236; https://doi.org/10.3390/a15070236 - 06 Jul 2022
Cited by 1 | Viewed by 703
Abstract
Diabetic foot is one of the main complications observed in diabetic patients; it is associated with the development of foot ulcers and can lead to amputation. In order to diagnose these complications, specialists have to analyze several factors. To aid their decisions and help prevent mistakes, the use of computer-assisted diagnostic systems based on artificial intelligence techniques is gradually increasing. In this paper, two models for classifying thermograms of the feet of diabetic and healthy individuals are proposed and compared. Both models use machine learning algorithms to detect and classify abnormal changes in plantar temperature. The first model classifies foot thermograms into four classes: healthy and three categories of diabetic. The second model has two stages: in the first stage, the foot is classified as belonging to a diabetic or a healthy individual; in the second stage, the classification is refined, dividing diabetic feet into three classes of progressive severity. The results show that both proposed models are efficient, allowing a foot thermogram to be classified as belonging to a healthy or diabetic individual, with the diabetic ones divided into three classes; however, Model 2 outperforms Model 1 and achieves better classification performance for the healthy category and the first class of diabetic individuals. These results demonstrate that the proposed methodology can be a tool to aid medical diagnosis.

Article
Exploring the Suitability of Rule-Based Classification to Provide Interpretability in Outcome-Based Process Predictive Monitoring
Algorithms 2022, 15(6), 187; https://doi.org/10.3390/a15060187 - 27 May 2022
Viewed by 682
Abstract
The development of models for process outcome prediction using event logs has evolved in the literature with a clear focus on performance improvement. In this paper, we take a different perspective, focusing on obtaining interpretable predictive models for outcome prediction. We propose to use association rule-based classification, which results in inherently interpretable classification models. Although association rule mining has been applied to event logs for process model approximation and anomaly detection in the past, its application to outcome-based predictive models is novel. Moreover, we propose two ways of visualising the rules obtained in order to increase the interpretability of the model. First, the rules composing a model can be visualised globally. Second, given a running case on which a prediction is made, the rules influencing the prediction for that particular case can be visualised locally. Experimental results on real-world event logs show that in most cases the performance of the rule-based classifier (RIPPER) is close to that of traditional machine learning approaches. We also show the application of the global and local visualisation methods to real-world event logs.
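The inherent interpretability described above can be illustrated with a minimal ordered rule list of the kind RIPPER produces: the first rule whose conditions all hold determines the prediction, and that firing rule doubles as the local explanation for the running case, while the full rule list is the global view. A stdlib-only sketch with a hypothetical event-log feature name, not the paper's implementation:

```python
# A rule is (conditions, label); a condition is (feature, op, threshold).
OPS = {"<=": lambda a, b: a <= b, ">": lambda a, b: a > b}

def rule_fires(conditions, case):
    return all(OPS[op](case[feature], threshold)
               for feature, op, threshold in conditions)

def predict_with_explanation(rules, default_label, case):
    """Return (label, firing_rule). The firing rule is the local
    explanation; if no rule fires, the default rule applies."""
    for conditions, label in rules:
        if rule_fires(conditions, case):
            return label, conditions
    return default_label, None
```

A case with `open_activities = 7` matched against the single hypothetical rule `open_activities > 5 -> deviant` is predicted "deviant", and the rule itself is returned as the reason, which is what a local visualisation would render.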

Article
Squeezing Backbone Feature Distributions to the Max for Efficient Few-Shot Learning
Algorithms 2022, 15(5), 147; https://doi.org/10.3390/a15050147 - 26 Apr 2022
Cited by 5 | Viewed by 1140
Abstract
In many real-life problems, it is difficult to acquire or label large amounts of data, resulting in so-called few-shot learning problems. Few-shot classification is challenging because of the uncertainty caused by using only a few labeled samples. In the past few years, many methods have been proposed with the common aim of transferring knowledge acquired on a previously solved task, often by using a pretrained feature extractor; if the initial task contains many labeled samples, this makes it possible to circumvent the limitations of few-shot learning. A shortcoming of existing methods is that they often require priors about the data distribution, such as the balance between the considered classes. In this paper, we propose a novel transfer-based method with a double aim: providing state-of-the-art performance, as reported on standardized datasets in the field of few-shot learning, while not requiring such restrictive priors. Our methodology copes with both the inductive case, where prediction is performed on test samples independently of each other, and the transductive case, where a joint (batch) prediction is performed.
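The transfer-based setting described above starts from features produced by a pretrained extractor. The simplest baseline in that setting, which transfer-based few-shot methods refine, is nearest-class-mean classification in the pretrained feature space. The sketch below illustrates that baseline only, not the paper's method; the feature vectors stand in for backbone embeddings.

```python
def nearest_class_mean(support, query):
    """support: {label: [feature vectors]} — the few labeled shots.
    Classify the query vector by the nearest class centroid."""
    def centroid(vectors):
        return [sum(col) / len(vectors) for col in zip(*vectors)]

    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    centroids = {label: centroid(vs) for label, vs in support.items()}
    return min(centroids, key=lambda label: sqdist(query, centroids[label]))
```

This inductive baseline treats each query independently; transductive methods additionally exploit the whole batch of queries, which is where class-balance priors usually sneak in.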

Article
Machine Learning Algorithms: An Experimental Evaluation for Decision Support Systems
Algorithms 2022, 15(4), 130; https://doi.org/10.3390/a15040130 - 15 Apr 2022
Viewed by 1110
Abstract
Decision support systems with machine learning can help organizations improve operations and lower costs with more precision and efficiency. This work presents a review of state-of-the-art machine learning algorithms for binary classification and compares their metrics on public diabetes and human resources datasets. The two main categories that allow learning without explicit programming are supervised and unsupervised learning. For the experiments, we use Scikit-learn, the free machine learning library for the Python language. The best-performing algorithm was Random Forest for supervised learning, while among the unsupervised clustering techniques, the Balanced Iterative Reducing and Clustering Using Hierarchies and Spectral Clustering algorithms presented the best results. The experimental evaluation shows that applying unsupervised clustering algorithms alone does not yield better results than supervised algorithms. However, applying unsupervised clustering as a preprocessing step for the supervised techniques can boost performance.
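The closing observation, that unsupervised clustering pays off as a preprocessing step for a supervised learner, can be illustrated by appending each sample's cluster assignment as an extra input feature. The paper uses scikit-learn's BIRCH and spectral clustering; the stdlib-only Lloyd's k-means below is just a stand-in to keep the sketch self-contained.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's k-means over tuples of floats."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # assign every point to its nearest centroid
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[c])))
            groups[nearest].append(p)
        # recompute centroids (keep the old one if a group went empty)
        centroids = [tuple(sum(col) / len(g) for col in zip(*g)) if g
                     else centroids[j]
                     for j, g in enumerate(groups)]
    return centroids

def cluster_id(p, centroids):
    return min(range(len(centroids)),
               key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))

def augment(points, centroids):
    """Append the unsupervised cluster id as an extra feature, to be fed
    to a downstream supervised classifier."""
    return [p + (cluster_id(p, centroids),) for p in points]
```

The supervised model then trains on the augmented vectors; the cluster id encodes neighbourhood structure the classifier would otherwise have to rediscover, which is one plausible reading of the boost the authors report.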

Article
Fine-Grained Pests Recognition Based on Truncated Probability Fusion Network via Internet of Things in Forestry and Agricultural Scenes
Algorithms 2021, 14(10), 290; https://doi.org/10.3390/a14100290 - 30 Sep 2021
Viewed by 1080
Abstract
Accurate identification of insect pests is key to improving crop yield and ensuring quality and safety. However, under varying environmental conditions, pests of the same species can look markedly different, while pests of different species can look deceptively similar, which makes fine-grained identification difficult for traditional methods and limits their practical deployment. To address this problem, this paper uses a variety of terminal devices in the agricultural Internet of Things to obtain a large number of pest images and proposes a fine-grained pest identification model based on a probability fusion network, FPNT. The model designs a fine-grained feature extractor based on an optimized CSPNet backbone network, mining local feature expressions at different levels that can distinguish subtle differences. After the integration of a NetVLAD aggregation layer, a gated probability fusion layer exploits the information complementarity and confidence coupling of multi-model fusion. Comparative tests show that the FPNT model achieves an average recognition accuracy of 93.18% across all pest categories, outperforming other deep-learning methods, with the average processing time dropping to 61 ms. It can thus meet the needs of fine-grained pest image recognition in agricultural and forestry Internet of Things practice and provide a technical reference for intelligent early warning and prevention of pests.
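Generic confidence-weighted fusion of per-model class distributions gives the flavor of what a gated probability fusion layer does: each model's vote is scaled by its own confidence before the votes are combined. This is an illustrative stand-in, not the paper's gated fusion layer.

```python
def fuse_probabilities(model_probs):
    """model_probs: one probability distribution over classes per model.
    Weight each distribution by its own max probability (the model's
    confidence), sum, and renormalise."""
    n_classes = len(model_probs[0])
    fused = [0.0] * n_classes
    for probs in model_probs:
        confidence = max(probs)
        for i, p in enumerate(probs):
            fused[i] += confidence * p
    total = sum(fused)
    return [f / total for f in fused]
```

A confident model thus pulls the fused distribution toward its prediction, while an uncertain one contributes less, which is the intuition behind combining complementary backbones for fine-grained classes.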

Article
Knowledge-Driven Network for Object Detection
Algorithms 2021, 14(7), 195; https://doi.org/10.3390/a14070195 - 28 Jun 2021
Viewed by 1038
Abstract
Object detection is a challenging computer vision task with numerous real-world applications. In recent years, the concept of the object relationship model has proved helpful for object detection and has been verified and realized in deep learning. Nonetheless, most approaches to modeling object relations are limited to anchor-based algorithms and cannot be migrated directly to anchor-free frameworks. The reason is that anchor-free algorithms eliminate the complex design of anchors and predict heatmaps representing the locations of keypoints of different object categories, without considering the relationship between keypoints. Therefore, to better fuse the information between heatmap channels, it is important to model the visual relationship between keypoints. In this paper, we present a knowledge-driven network (KDNet), a new architecture that can aggregate and model keypoint relations to augment object features for detection. Specifically, it processes a set of keypoints simultaneously through interactions between their local and geometric features, thereby allowing their relationship to be modeled. Finally, the updated heatmaps are used to obtain the corners of the objects and determine their positions. Experimental results on the RIDER dataset confirm the effectiveness of the proposed KDNet, which significantly outperformed other state-of-the-art object detection methods.
